[e2e] What's wrong with this picture?

Detlef Bosau detlef.bosau at web.de
Sun Sep 13 04:43:31 PDT 2009

Lachlan Andrew wrote:
> You are right that "good" isn't one-dimensional.  I have always found
> it odd that people use "best effort" to mean something less than
> "trying as hard as is possible" (because best is a superlative --
> nothing is better than the best).  It was only in formulating a reply
> that I realised "best" could also mean "most appropriate".

To my understanding, "best effort" is a term from the business world.

When I send you a book and ship the item "best effort delivery", this 
means: I will take the item to the parcel service.

And I don't care about the rest.

Hence, best effort is not a synonym for "taking responsibility" for 
something. Quite the opposite is true: "best effort" is a synonym for 
SNMP: "Sorry, not my problem."

> Still, a quick search for "discard" in the "Requirements for Internet
> Hosts" standard doesn't say that they *have* to discard packets.

When I throw a bottle of milk to the ground and the bottle breaks, 
there is no standard which says that the milk MUST spill out.

(However, it's best current practice to wipe it away, because the milk 
will become sour otherwise and will smell really nasty.)

> Again, my main motivation for writing a provocative email was that I'm
> frustrated at people saying "We're allowed to specify TCP to behave
> badly on highly-buffered links, but link designers aren't failing if
> they design links that behave badly with highly-aggressive E2E
> protocols".

We should keep in mind the reason for buffering. There was a discussion 
of this issue some weeks ago.

In a private mail, I was once told that the reason for buffering is to 
cope with asynchronous packet arrival.

That was the most concise statement I've ever heard on this issue.

And of course, some people refer to Len Kleinrock's book and the 
drawings found there, quoted by Raj Jain and many others, which deal 
with the "power" of a queuing system, optimum throughput, and a knee...

Some weeks ago, I found a wonderful article which proposed a delay 
larger than the "knee" delay as a congestion indicator.

This is an appealing idea and I love it. There is only one minor 
question left: What's the delay for the "knee"?
And how can this be determined without any knowledge about the path and 
the traffic structure?
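Kleinrock's power metric (throughput divided by delay) makes the 
difficulty concrete. As a sketch of my own, assuming an M/M/1 queue 
purely for illustration (not claiming it models any real path): with 
service rate mu and arrival rate lambda, the mean delay is 
T = 1/(mu - lambda), so power P = lambda * (mu - lambda) peaks at 
lambda = mu/2, giving a knee delay of 2/mu.

```python
# Sketch: Kleinrock "power" for an M/M/1 queue. The M/M/1 model and
# the service rate below are illustrative assumptions, nothing more.
# Power = throughput / delay; mean delay T = 1/(mu - lam) for lam < mu.

def power(lam: float, mu: float) -> float:
    """Power of an M/M/1 queue: throughput divided by mean delay."""
    if lam >= mu:
        return 0.0          # unstable queue: delay grows without bound
    delay = 1.0 / (mu - lam)
    return lam / delay      # = lam * (mu - lam)

def knee(mu: float, steps: int = 10000) -> tuple[float, float]:
    """Numerically locate the arrival rate maximizing power, and the
    corresponding "knee" delay."""
    best_lam = max((i * mu / steps for i in range(steps)),
                   key=lambda lam: power(lam, mu))
    return best_lam, 1.0 / (mu - best_lam)

mu = 100.0                          # hypothetical service rate, pkts/s
lam_star, knee_delay = knee(mu)     # analytically: mu/2 and 2/mu
```

Note that both the optimal rate and the knee delay depend directly on 
mu, the path's service rate, which is exactly the knowledge an end 
point typically lacks.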

So, the major purpose of buffering is to cope with asynchronous 
arrival. And if properly designed buffers also yield some minor benefit 
for the achieved throughput, I don't mind.
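As a toy illustration of this buffering point (my own sketch, with 
purely hypothetical parameters): even when the offered load is below 
the link capacity, packets from independent senders can arrive in the 
same slot, and a bufferless link must drop what a small buffer would 
absorb.

```python
import random

def simulate(buf_size: int, slots: int = 10000, seed: int = 1) -> int:
    """Toy discrete-time model: two independent senders each emit a
    packet per slot with probability 0.4; the link serves one packet
    per slot. Returns the number of packets dropped. All parameters
    are illustrative assumptions."""
    rng = random.Random(seed)
    queue = 0
    dropped = 0
    for _ in range(slots):
        # asynchronous arrivals: 0, 1, or 2 packets in this slot
        arrivals = sum(rng.random() < 0.4 for _ in (0, 1))
        for _ in range(arrivals):
            if queue < buf_size:
                queue += 1
            else:
                dropped += 1
        if queue:
            queue -= 1          # link serves one packet per slot
    return dropped
```

With total offered load 0.8 < 1, drops stem only from coinciding 
arrivals; a modest buffer absorbs nearly all of them, and ever larger 
buffers add little beyond that.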

> TCP congestion control was only ever intended as an emergency  hack.

If every thoroughly designed optimal solution to a problem were at 
least half as successful as VJ's "hack", the world would be a better one.

In my opinion, the congavoid paper is not a "quick hack" but simply a 
stroke of genius. I don't know whether this is the common view, but I'm 
convinced that it is certainly one of the most important papers ever.

(With respect to my remark on Lehman Bros.: I wish some of the decision 
makers from that "very weekend" had proposed an emergency hack for the 
problem, like TCP congestion control. That would have spared us many 
problems.
However, we should think positively: the financial market has now been 
delivered from congestion for quite a long period of time...)

> It is remarkable that it has worked as well as it has, but why do we
> have to keep designing networks to support the hack?  As I said in

First: Because it works.
Second: Up to now, nothing better is known.

> reply to Detlef, a good TCP can make link designers lazy.  However, we
> shouldn't let good links make us as TCP / E2E designers lazy.

For my part, I can say I'm not going to get lazy ;-)

The real concern is the proper separation of concerns: who carries the 
burden of reliable delivery? Is this up to the link? Or up to the end 
points? That's the reason why I think that this mailing list is highly 
appropriate for the issue: the proper distribution of the "reliability 
concern" among OSI layers 1 to 4, and among links and nodes 
respectively, is a typical end-to-end issue.

> The IETF may have standardised TCP, but what if the IEEE
> "standardises" a reliable link protocol (like data centre ethernet),
> or the ITU standardises high-reliability ATM (used by DSL modems,
> which also get the blame for TCP's excessive aggressiveness)?  Should
> we change their standards, or ours?  The IETF isn't the only
> standards body, or even the supreme one.  If there are standards that
> don't interact well, we should revisit all standards, starting with
> the ones we can control.

We should avoid mixing up different goals.

TCP/IP is a generic protocol suite with hardly any assumptions at all.

When, e.g. for a data center, some proprietary solution is much more 
appropriate than a generic one, it may of course be reasonable to use it.

> It is exactly a political problem, between different standards bodies.
> But closer to your analogy, who gives TCP the right to send just what
> it pleases?  I'm not talking about "an entirely different network",
> but one which exists and on which you took measurements.  The network
> has problems, caused by poor interaction between an IETF protocol and
> the hardware on which it runs.  One has to change.  Why close our
> eyes to the possibility of changing the protocol?

As I said above: The better we know our network, the more assumptions we 
can make and the better we know our requirements, the more precise and 
useful our design, both of network components and protocols, will be.

TCP/IP is "one size fits all", and this is both, the most important 
reason for its success and its severest limitation as well.

> At the risk of being repetitive, I see the same thing in reverse:  I'm
> hearing "we can't change TCP to work over the current network, because
> TCP is standardized, and it used to work". 

When I review the proposals for TCP changes made in the last decade, I'm 
not convinced that no one is willing to consider changes to TCP.

However, a recent paper submission of mine was rejected, among other 
things, with the remark: "Ouch! You're going to change TCP here!"

When there are valid reasons to change protocols, we should consider 
doing so.

>  I'm not saying that we
> should have links with excessive buffers.  I'm not even saying that
> they shouldn't unnecessarily drop packets (although it sounds odd).
> I'm just saying that we should *also* be open to changing TCP to work
> over the links that people build.

That's what many of us are actually doing.

Detlef Bosau		Galileistraße 30	70565 Stuttgart
phone: +49 711 5208031	mobile: +49 172 6819937	skype: detlef.bosau	
ICQ: 566129673		http://detlef.bosau@web.de			
