[e2e] What's wrong with this picture?
lachlan.andrew at gmail.com
Sun Sep 13 15:57:00 PDT 2009
2009/9/13 Detlef Bosau <detlef.bosau at web.de>:
> Lachlan Andrew wrote:
>> I have always found
>> it odd that people use "best effort" to mean something less than
>> "trying as hard as is possible".
> To my understanding, "best effort" is a term from the business world.
> When I send you a book and ship the item "best effort delivery", this means:
> I will take the item to the parcel service. And I don't care for the rest.
Yep. I find that an odd use of "best" too...
>> Still, a quick search for "discard" in the "Requirements for Internet
>> Hosts" standard doesn't say that they *have* to discard packets.
> When I throw a bottle of milk to the ground and the bottle breaks, there is
> no standard which says that the milk MUST spill out.
> (However, it's best current practice to wipe it away, because the milk will
> become sour otherwise and will smell really nasty.)
>> Again, my main motivation for writing a provocative email was that I'm
>> frustrated at people saying "We're allowed to specify TCP to behave
>> badly on highly-buffered links, but link designers aren't failing if
> they design links that behave badly with highly-aggressive E2E protocols".
> We should keep in mind the reason for buffering. There was a discussion of
> this issue some weeks ago.
> In a private mail, I once was told: The reason for buffering is to cope with
> asynchronous packet arrival.
> That was the most concise statement I've ever heard on this issue.
> And of course, some people refer to the book of Len Kleinrock and the
> drawings found there and quoted by Raj Jain and many others which deal with
> the "power" of a queuing system and optimum throughput and a knee....
> Some weeks ago, I found a wonderful article which recognized a delay larger
> than the "knee" delay as congestion indicator
> This is an appealing idea and I love it. There is only one minor question
> left: What's the delay for the "knee"?
> And how can this be determined without any knowledge about the path and the
> traffic structure?
> So, the major purpose of buffering is to cope with asynchronous arrival. And
> when there is some minor benefit for the achieved throughput from properly
> designed buffers, I don't mind.
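For concreteness: Kleinrock's "power" is throughput divided by delay, and for
an M/M/1 queue the knee sits at half the service rate. A quick sketch (the
M/M/1 model here is my illustrative assumption, not something established in
this thread, and real paths are of course not M/M/1):

```python
# Kleinrock "power" = throughput / delay.  For an M/M/1 queue with
# service rate mu and arrival rate lam, mean delay T = 1/(mu - lam),
# so power P(lam) = lam * (mu - lam), maximised at lam = mu/2: the knee.

def power(lam, mu):
    """Throughput divided by mean delay for an M/M/1 queue (requires lam < mu)."""
    delay = 1.0 / (mu - lam)
    return lam / delay  # equals lam * (mu - lam)

mu = 1.0  # service rate, packets per unit time
loads = [i / 100.0 for i in range(1, 100)]
best = max(loads, key=lambda lam: power(lam, mu))
print(best)  # the knee, at half the service rate
```

The catch Detlef raises remains: computing where the knee is requires knowing
mu and the traffic model, which an endpoint does not have.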
>> TCP congestion control was only ever intended as an emergency hack.
> If every thoroughly designed optimal solution to a problem were even half as
> successful as VJ's "hack", the world would be a better place.
> In my opinion, the congavoid paper is not a "quick hack" but simply a stroke
> of genius.
Absolutely! It was brilliant to realise that loss was telling us
something about the network. He also proposed a very robust response
to it, over a wide range of conditions.
The only reason I call it a hack is to counter the view that it is a
carefully engineered solution, and that networks should be designed to
show a particular undesirable symptom of congestion just because "TCP needs loss to work".
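The robust response I mean is the additive-increase/multiplicative-decrease
rule from the congavoid paper. A minimal sketch, at one-step-per-RTT
granularity (the constants are the classic ones; real stacks add slow start,
timeouts and much else):

```python
# AIMD: grow the congestion window by one segment per RTT,
# halve it when loss signals congestion.

def aimd_step(cwnd, loss_detected, min_cwnd=1.0):
    """One RTT of additive-increase / multiplicative-decrease (window in segments)."""
    if loss_detected:
        return max(min_cwnd, cwnd / 2.0)  # multiplicative decrease
    return cwnd + 1.0                     # additive increase

cwnd = 10.0
cwnd = aimd_step(cwnd, loss_detected=False)  # -> 11.0
cwnd = aimd_step(cwnd, loss_detected=True)   # -> 5.5
```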
>> It is remarkable that it has worked as well as it has, but why do we
>> have to keep designing networks to support the hack?
> First: Because it works.
> Second: Up to now, nothing better is known.
It works, except on links with large buffers (which exist, whether or
not they "should") or for large BDP flows (which exist, and will
become more widespread), or for links with non-congestion losses
(which exist, and will continue to without "heroic" ARQ).
Someone has pointed out that simply the binary backoff of the RTO may
be enough to prevent congestion collapse. Who knows what aspect of
VJ's algorithm is really responsible for making the internet "work",
and how much is simply that we don't see all the details?
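The binary backoff referred to is the doubling of the retransmission timeout
on successive losses of the same segment. A sketch of just that mechanism
(the one-second base and 60-second ceiling are common illustrative values,
not mandated by this discussion):

```python
# Binary exponential backoff of the retransmission timer:
# each consecutive timeout doubles the RTO, up to a ceiling,
# which alone throttles senders hard when the network is congested.

def backed_off_rto(base_rto, retries, max_rto=60.0):
    """RTO in seconds after `retries` consecutive timeouts (doubling, capped)."""
    return min(max_rto, base_rto * (2 ** retries))

print([backed_off_rto(1.0, r) for r in range(7)])
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```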
>> The IETF may have standardised TCP, but what if the IEEE
>> "standardises" a reliable link protocol (like data centre ethernet),
>> or the ITU standardises high-reliability ATM (used by DSL modems,
>> which also get the blame for TCP's excessive aggressiveness)? Should
>> we change their standards, or ours? The IETF isn't the only
>> standards body, or even the supreme one. If there are standards that
>> don't interact well, we should revisit all standards, starting with
>> the ones we can control.
> We should avoid mixing up different goals.
> TCP/IP is a generic protocol suite with hardly any assumptions at all.
Exactly my point. I don't think TCP should assume that routers drop
packets instead of buffering them. We can still use VJ's insight
(that we should look for symptoms of congestion, and then back off)
without that assumption.
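One symptom other than loss is RTT inflation, in the spirit of delay-based
schemes such as TCP Vegas. A hedged sketch (the threshold and the window
rule are purely illustrative, not any standardised algorithm):

```python
# Infer congestion from queueing delay rather than from drops:
# compare the measured RTT against the smallest RTT seen (the
# propagation delay estimate) and back off when the queue builds.

def delay_based_step(cwnd, rtt, base_rtt, threshold=0.1):
    """Back off when the measured RTT exceeds the base RTT by more than
    `threshold` (as a fraction of base RTT); otherwise probe upward."""
    queueing = (rtt - base_rtt) / base_rtt
    if queueing > threshold:
        return cwnd * 0.9  # gentle multiplicative decrease, no loss needed
    return cwnd + 1.0      # additive increase while the path looks clear

print(delay_based_step(10.0, rtt=0.12, base_rtt=0.10))  # backs off to 9.0
```

This keeps VJ's insight (detect a symptom, then back off) while working on
links that buffer rather than drop.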
>> At the risk of being repetitive, I see the same thing in reverse: I'm
>> hearing "we can't change TCP to work over the current network, because
>> TCP is standardized, and it used to work".
> When I review the proposals for TCP changes made in the last decade, I'm not
> convinced that no one is willing to consider changes to TCP.
> However, a recent paper submission of mine was rejected, amongst other
> things, with the remark: "Ouch! You're going to change TCP here!".
> If there are valid reasons to change protocols, we should consider doing so.
Absolutely. Many in the academic research community are (too?)
willing to change TCP. However, it is hard for the academics to make
the changes without the IETF's support.
2009/9/14 Detlef Bosau <detlef.bosau at web.de>:
>> Maximum Segment Lifetime.
> However, the story remains the same. What is the reason to keep a segment in
> the network that long?
The MSL is not that we should try to keep segments in the network that
long, but that protocols should still work if, by mistake, a packet
does survive that long. We don't want a misconfigured router
somewhere to cause ambiguity between two IP fragments, for example.
It was perhaps misleading of me to bring the MSL into the discussion
in the first place... (We want the network to be "safe" under those
conditions, but shouldn't optimise for them.) The point was that a
few seconds of delay is not "wrong", even though it is undesirable.
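The arithmetic behind that safety margin is the familiar 2*MSL TIME_WAIT:
an old segment can survive at most MSL seconds, and so can the ACK it
provokes, so after 2*MSL nothing from the old connection can still be in
flight to be confused with a new one. Using the classic value from RFC 793:

```python
# Why TCP waits 2*MSL before reusing a connection's identity:
# a stray segment lives at most MSL seconds, and its ACK at most
# another MSL, so 2*MSL bounds how long the old incarnation can haunt us.

MSL = 120            # seconds; the value RFC 793 picks (2 minutes)
TIME_WAIT = 2 * MSL  # 240 seconds

print(TIME_WAIT)
```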
Lachlan Andrew Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
Ph +61 3 9214 4837
More information about the end2end-interest mailing list