[e2e] Reacting to corruption based loss

Cannara cannara at attglobal.net
Sun Jun 26 10:41:43 PDT 2005

As Craig suggests, a "balance point" is needed.  But consider the key
statements: "Please be so kind not to usurp the whole channel. There are other
flows interested on network resources as well."  

Note the phrases: "other flows", "network resources".  Since not all flows are
TCP, and not all TCPs are the same or configured the same way, these phrases
make clear the need for more Internet network-layer flow management.  This is
exactly what happens within metro networks that distribute many flows of many
types through many interfaces.  

Now to the question: "Why couldn't we continue to assume loss free links?"
Because such links don't exist with any long-term certainty, and loss behavior
depends greatly on link technology.  So, something independent of any one
transport (i.e., not just TCP) needs to decide what to do about Internet
losses that aren't due to congestion.  This entity needs to do its work for
all types of transport, even something like ICMP, that is intended to be e2e.

We know clearly how poorly TCP responds to loss when it is ignorant of the
cause.  We know as engineers that this is not a worthy design for a
generally-useful network.  This behavior is one reason TCP is not used in
various networks where loss must be handled more efficiently.  So, we come
back to the core meaning of the e2e principle -- assure the ends know what
each is doing and get data transferred reliably & efficiently.  That isn't:
"managing network congestion".  If managing network congestion were the ends'
task, then the ends would need far more accurate and complete info than simply
a timeout waiting for an ack to a packet that got lost, or a timeout
retransmit when an ack got lost.  
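One way to see how costly it is for TCP to treat every loss as congestion is
the well-known Mathis et al. steady-state throughput model, in which rate
scales with 1/sqrt(p) for loss probability p.  A minimal sketch (the 1% loss
rate, 100 ms RTT, and 1460-byte MSS below are illustrative numbers, not from
this thread):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput (bytes/s) from the
    Mathis et al. model: rate ~ (MSS / RTT) * (C / sqrt(p)),
    with C ~ sqrt(3/2) for a Reno-style sender."""
    C = math.sqrt(1.5)
    return (mss_bytes / rtt_s) * (C / math.sqrt(loss_rate))

# A wireless hop with 1% residual corruption loss, which TCP treats
# exactly as if it were congestion:
rate = mathis_throughput(mss_bytes=1460, rtt_s=0.1, loss_rate=0.01)
print(f"{rate * 8 / 1e6:.2f} Mbit/s")  # -> 1.43 Mbit/s
```

The point of the sketch: throughput is capped near 1.4 Mbit/s by the loss rate
alone, no matter how fast the underlying link is, because the sender cannot
tell corruption from congestion.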

The network layer is where loads of info exists on why and where packets get
lost.  That info needs to be used in the Internet, as it is in other component
nets.  It would also be useful for the next layer up to get some of that info,
so that whatever responses to loss are available can be chosen well.
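A purely hypothetical sketch of what "choosing a response well" could look
like, assuming the network layer could export a per-loss cause hint to the
transport (no such standard signal exists today; the names and the simple
halving policy below are illustrative assumptions, not an existing API):

```python
from enum import Enum, auto

class LossCause(Enum):
    """Hypothetical loss-cause hint from the network layer."""
    CONGESTION = auto()
    CORRUPTION = auto()
    UNKNOWN = auto()

def react_to_loss(cwnd, cause, min_cwnd=1):
    """Sketch: pick a response per loss cause.
    Corruption -> keep the rate and recover via retransmit or FEC;
    congestion or unknown -> conservative multiplicative decrease,
    as a standard TCP sender would do."""
    if cause is LossCause.CORRUPTION:
        return cwnd                      # don't shrink the window
    return max(min_cwnd, cwnd // 2)     # halve on (possible) congestion

print(react_to_loss(64, LossCause.CORRUPTION))   # -> 64
print(react_to_loss(64, LossCause.CONGESTION))   # -> 32
```

Note the deliberate bias: an UNKNOWN cause falls back to the congestion
response, since reacting too gently to real congestion is the dangerous case.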


Craig Partridge wrote:
> In message <42BDDD74.BF9FDB92 at web.de>, Detlef Bosau writes:
> >Basically, we're talking about the old loss differentiation debate. If
> >packet loss is due to corruption, e.g. on a wireless link, there is not
> >necessarily a need for the sender to decrease its rate. Perhaps one
> >could do some FEC or use robust codecs, depending on the application in
> >use. But I do not see a reason for a sender to decrease its rate
> >anyway.
> I believe that's the general wisdom.  Though I'm not sure anyone has
> studied whether burst losses might cause synchronized retransmissions.
> >In my opinion, and I'm willing to receive contradiction on this point,
> >it is a matter of the Internet system model. Why couldn't we continue
> >to assume loss free links? Is this really a violation of the End to End
> >Principle when we introduce link layer recovery? Or is it simply a well
> >done separation of concerns to fix link issues at the link layer and to
> >leave transport issues to the transport layer?
> Take a peek at Reiner Ludwig's (RWTH Aachen) dissertation which says that,
> in the extreme case, link layer recovery and end-to-end recovery don't mix --
> and we know from the E2E principle that some E2E recovery must be
> present.  (I believe there's also work from Uppsala showing that you
> need at least some link layer recovery or TCP performance is awful --
> what this suggests is we're searching for a balance point).
> Craig

More information about the end2end-interest mailing list