[e2e] What's wrong with this picture?

Ken Calvert calvert at netlab.uky.edu
Sat Sep 12 19:37:25 PDT 2009


What I think is interesting about this discussion is that 
the original "framers" [:-)] of TCP saw nothing wrong with 
an MSL denominated in minutes, and now delivering a datagram 
after it spends 10 seconds in the network is considered 
harmful.  In between came VJCC (Van Jacobson's congestion 
control) -- but we've had that all 
these years and this is the first time I've heard anyone 
suggest it's a problem that packets can survive in the 
network for one-sixth of a minute.

An MSL is required so TCP (or any *practical* reliable 
transport) can have certain safety properties.  This 
discussion shows that MSL has implications for the CC 
control loop as well.  TCP's correctness would be fine if 
the MSL were 10 seconds, so *if* the consensus is that 
multiple seconds of buffering is broken, why not acknowledge 
that the world has changed and make that an "official" IETF 
policy?
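
(For concreteness, a rough back-of-the-envelope sketch of what 
the MSL bounds -- RFC 793's 2-minute value versus a hypothetical 
10-second one, illustrative arithmetic only, assuming no PAWS:)

    # TIME_WAIT lasts 2*MSL, and without PAWS the 32-bit sequence
    # space must not wrap within one MSL.
    SEQ_SPACE = 2**32                    # bytes of TCP sequence space

    for msl in (120, 10):                # RFC 793's 2 minutes vs. a 10 s MSL
        time_wait = 2 * msl              # seconds a closed connection lingers
        max_bps = SEQ_SPACE / msl * 8    # rate at which seq space wraps in one MSL
        print(f"MSL={msl:>3}s  TIME_WAIT={time_wait}s  "
              f"wrap-safe rate without PAWS ~{max_bps/1e9:.1f} Gbit/s")

A 10-second MSL would shrink TIME_WAIT from 4 minutes to 20 
seconds and raise the wrap-safe sending rate from roughly 0.3 to 
roughly 3.4 Gbit/s.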

Indeed, as has been noted here recently (in a different 
discussion), some TCP implementations already assume a small 
MSL for other performance reasons.  But Reed's experiment 
shows there are real dangers in doing so.

KC

> Greetings David,
>
> 2009/9/12 David P. Reed <dpreed at reed.com>:
>>
>>
>> On 09/11/2009 05:41 PM, Lachlan Andrew wrote:
>>>
>>> No, IP is claimed to run over a "best effort" network.  That means
>>> that the router *may* discard packets, but doesn't mean that it
>>> *must*.  If the delay is less than the IP lifetime (3 minutes?) then
>>> the router is within spec (from the E2E point of view).
>>
>> I disagree with this paragraph. No one ever claimed that IP would run over
>> *any* best efforts network.  One could argue that routers that take pains to
>> deliver packets at *any* cost (including buffering them for 10 seconds when
>> the travel time over the link between points is on the order of 1
>> microsecond, and the signalling rate is > 1 Megabit/sec) are not "best
>> efforts" but "heroic efforts" networks.
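
(Just to make David's numbers concrete -- a back-of-the-envelope 
sketch using the figures he quotes, nothing more:)

    # 1 Mbit/s link, 10 s spent in the buffer, ~1 microsecond of travel time.
    link_rate_bps  = 1_000_000    # 1 Mbit/s
    queueing_delay = 10.0         # seconds sitting in the buffer
    prop_delay     = 1e-6         # ~1 microsecond of propagation

    queued_bytes = link_rate_bps * queueing_delay / 8
    print(f"standing queue: {queued_bytes/1e6:.2f} MB")                    # ~1.25 MB
    print(f"queueing/propagation ratio: {queueing_delay/prop_delay:.0e}")  # ~1e+07
    # The same arithmetic gives the drain time: a full 1.25 MB queue
    # takes ~10 s to empty at 1 Mbit/s, which bounds recovery time.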
>
> You are right that "good" isn't one-dimensional.  I have always found
> it odd that people use "best effort" to mean something less than
> "trying as hard as is possible" (because best is a superlative --
> nothing is better than the best).  It was only in formulating a reply
> that I realised "best" could also mean "most appropriate".
>
> Still, a quick search for "discard" in the "Requirements for Internet
> Hosts" standard doesn't turn up anything saying that they *have* to
> discard packets.
>
> Again, my main motivation for writing a provocative email was that I'm
> frustrated at people saying "We're allowed to specify TCP to behave
> badly on highly-buffered links, but link designers aren't failing if
> they design links that behave badly with highly-aggressive E2E
> protocols".
>
> TCP congestion control was only ever intended as an emergency hack.
> It is remarkable that it has worked as well as it has, but why do we
> have to keep designing networks to support the hack?  As I said in
> reply to Detlef, a good TCP can make link designers lazy.  However, we
> shouldn't let good links make us as TCP / E2E designers lazy.
>
>> In any case, research topics for future networks aside, the current IP
>> network was, is, and has been developed with the goal of minimizing
>> buffering and queueing delay in the network. The congestion control and
>> fairness mechanism developed by Van Jacobson and justified by Kelly (on game
>> theoretic grounds, which actually makes a great deal of sense, because it
>> punishes non-compliance to some extent) is both standardized and dependent
>> on tight control loops, which means no substantial queueing delay.
>
> The IETF may have standardised TCP, but what if the IEEE
> "standardises" a reliable link protocol (like data centre ethernet),
> or the ITU standardises high-reliability ATM (used by DSL modems,
> which also get the blame for TCP's excessive aggressiveness)?  Should
> we change their standards, or ours?  The IETF isn't the only
> standards body, or even the supreme one.  If there are standards that
> don't interact well, we should revisit all standards, starting with
> the ones we can control.
>
>> It's not the buffer capacity that is the problem.  It's the lack of
>> signalling congestion. And the introduction of "persistent traffic jams" in
>> layer 2 elements, since the drainage rate of a queue is crucial to recovery
>> time.
>
> Perhaps the problem is that the IETF protocol isn't paying attention
> to the congestion signals.  As I mentioned, VJ's breakthrough was realising
> that TCP should listen more closely to what the network was telling
> us.  Why should we not keep doing that?  When the link is screaming
> with high delay, why don't we back off?
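
(For what it's worth, the kind of back-off being suggested might 
look roughly like the following delay-based sketch, in the spirit 
of Vegas or LEDBAT -- my own illustration with arbitrary 
parameters, not something Lachlan specified:)

    # Treat queueing delay above a target as a congestion signal,
    # rather than waiting for loss.
    def update_cwnd(cwnd, rtt, base_rtt, target=0.1, beta=0.85, alpha=1.0):
        """Return the new congestion window, in packets.

        rtt      -- latest round-trip-time sample (seconds)
        base_rtt -- minimum RTT seen, estimating the queue-free path delay
        target   -- queueing delay tolerated before backing off (seconds)
        """
        queueing_delay = rtt - base_rtt
        if queueing_delay > target:
            return max(2.0, cwnd * beta)   # back off multiplicatively
        return cwnd + alpha / cwnd         # otherwise grow ~1 packet per RTT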
>
>> One can dream of an entirely different network.  But this is NOT a political
>> problem where there is some weird idea that layer 2 networks offering layer
>> 3 transit should have political rights to just do what they please.  It's
>> merely a matter of what actually *works*.
>
> It is exactly a political problem, between different standards bodies.
>
> But closer to your analogy, who gives TCP the right to send just what
> it pleases?  I'm not talking about "an entirely different network",
> but one which exists and on which you took measurements.  The network
> has problems, caused by poor interaction between an IETF protocol and
> the hardware on which it runs.  One has to change.  Why close our
> eyes to the possibility of changing the protocol?
>
>> Your paragraph sounds like the statements of what my seagoing ancestors
>> called "sea-lawyers": people who make some weird interpretation of a "rule
>> book" that seems to be based on the idea that the design came from "god" or
>> the "king".  Nope - the design came from figuring out what worked best.
>
> At the risk of being repetitive, I see the same thing in reverse:  I'm
> hearing "we can't change TCP to work over the current network, because
> TCP is standardized, and it used to work".  I'm not saying that we
> should have links with excessive buffers.  I'm not even saying that
> they shouldn't unnecessarily drop packets (although it sounds odd).
> I'm just saying that we should *also* be open to changing TCP to work
> over the links that people build.
>
>> Now, I welcome a fully proven research activity that works as well as the
>> Internet does when operators haven't configured their layer 2 components to
>> signal congestion and limit buildup of slow-to-drain queues clogged with
>> packets.
>
> Great.  I agree that my mindset is more IRTF than IETF, and so I'm
> Cc'ing this to ICCRG too.
>
> However, I'm arguing that the layer 2 links *are* signalling
> congestion very strongly, if only we'll listen.
>
> Links with slow-to-drain queues are certainly a problem if there is a
> high load of traffic which doesn't have proper congestion control, but
> that isn't a reason not to design proper congestion control which
> doesn't fill all available buffers.
>
>> You are welcome to develop and convince us to replace the Internet with it,
>> once it *works*.
>
> I'm not talking about replacing the internet, any more than RFC 2581 /
> RFC 5681 replace RFC 793.  I'm only suggesting that we design protocols
> which work on the network that is out there, and that you measured.
> If the link you mention is an isolated case, then we can simply call
> it misconfigured.  However, I don't believe it is an isolated case,
> and we should take responsibility for TCP's poor behaviour on such
> links.
>
> Cheers,
> Lachlan
>

Ken Calvert, Professor                   University of Kentucky
Computer Science                         Lexington, KY USA 40506
Lab for Advanced Networking              Tel: +1.859.257.6745
calvert at netlab.uky.edu                   Fax: +1.859.323.1971
http://protocols.netlab.uky.edu/~calvert/

