[e2e] Reacting to corruption based loss

Ethan Blanton eblanton at cs.ohiou.edu
Thu Jun 30 12:43:58 PDT 2005

(Reformatted so as not to be top-posted, so others can follow)

Cannara spake unto us the following wisdom:
> Ethan Blanton wrote:
> > Cannara spake unto us the following "wisdom":
> > > Good one Clark!  Indeed FDDI and other fiber rings have dual
> > > interfaces & fibers and anything can happen in any part of the
> > > hardware.  The assumptions about the physical layer that have been
> > > made in most TCP discussions simply evidence lack of understanding of
> > > the reality and complexity of underlying layers.  This lack extends to
> > > the length of time the defects last and go undiscovered, while folks
> > > struggle with performance issues.
> > 
> > Let me get this straight, for my benefit and for the benefit of those
> > who may not understand you.  You're suggesting that TCP should have a
> > mechanism by which the hardware layer can communicate that there is a
> > particular sequence of bits which, due to physical imperfections in
> > the transmission medium, cannot be reliably transmitted, and that
> > given this information the TCP stack could choose a different method
> > of communicating those particular bits?
> > 
> Ethan, for someone who quotes Beccaria (a paesano of my family), you clearly
> know how to read.  So, how could you gather from anything I've said that TCP
> should have the knowledge you claim of the physical layer?  

Actually, I never have any idea what you're saying, which is why I was
asking for clarification.  Let us carefully consider the scenario
leading up to my response:

Network conditions:
* FDDI ring, which in fact is capable of detecting corruption loss
* A particular bit pattern which is reliably corrupted and cannot pass
  the network as-is
* TCP backs off exponentially as this corrupted packet is repeatedly
  lost due to non-congestion events
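
The backoff in question is the standard exponential retransmission-timer
behavior (in the spirit of RFC 6298).  A back-of-the-envelope sketch,
with timer values invented purely for illustration, of what the sender
does as this doomed packet is lost again and again:

```python
# Toy sketch of TCP's exponential retransmission backoff.
# INITIAL_RTO and MAX_RTO are illustrative, not from any real stack.

INITIAL_RTO = 1.0   # seconds; a typical initial retransmission timeout
MAX_RTO = 64.0      # a common upper bound on the backed-off timer

def backoff_schedule(retries):
    """Return the timeout used before each successive retransmission."""
    schedule = []
    rto = INITIAL_RTO
    for _ in range(retries):
        schedule.append(rto)
        rto = min(rto * 2, MAX_RTO)  # double on each timeout, capped
    return schedule

print(backoff_schedule(8))
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 64.0]
```

Note the point of this behavior: each failed retransmission halves the
rate at which the sender pesters the network with a packet it cannot
deliver.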

> The problem is that TCP is designed to assume the physical layer is not
> involved in loss.  Thus, it slows down when it shouldn't, because it wrongly
> ascribes all loss to network congestion.  Note that the words "network
> congestion" refer to the network layer.  The kludge done in the '80s, to make
> one transport at the transport layer protect the network layer from meltdowns
> that were often nearly happening, is the issue.

Cannara's response:
* "The assumptions about the physical layer that have been made in
  most TCP discussions simply evidence lack of understanding of the
  reality and complexity of underlying layers."
* The above paragraph
* This response is made in a long and fruitless thread where Cannara
  has repeatedly stated that TCP is worthless, doesn't work, the
  Internet doesn't work, if only we listened to him it would be better
* Furthermore, context indicates that it is his belief that if TCP
  could tell this loss was corruption vs. congestion, TCP could
  somehow make a smarter choice

My question to you, Alex, is exactly *what* do you think TCP could do
better than exponentially backing off a packet which will NEVER GET
THROUGH.  In trying so hard to find examples that prove you right, you
have chosen an example which is the absolute WORST possible case for
your argument.  Let's assume for a moment that TCP was notified that
this packet was lost due to corruption, and it retransmitted
instantaneously...  Now we're flooding the wire as fast as we can send
packets with a packet that will never get through.  This is surely an
improvement.

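To put rough numbers on why "it was corruption, resend immediately" is
pathological here, compare how many copies of the undeliverable packet
each policy puts on the wire in one minute.  All timings below are
hypothetical (a 1-second initial timeout, a 10 ms per-packet send time):

```python
# Rough comparison: retransmissions of a packet that will NEVER get
# through, over a 60-second window.  Timings are hypothetical.

WINDOW = 60.0          # seconds of observation
LINK_SEND_TIME = 0.01  # assumed per-packet serialization time

def attempts_with_backoff(window, initial_rto=1.0, max_rto=64.0):
    # Exponential backoff: transmissions at t = 0, 1, 3, 7, 15, 31, ...
    t, rto, attempts = 0.0, initial_rto, 1
    while t + rto <= window:
        t += rto
        attempts += 1
        rto = min(rto * 2, max_rto)
    return attempts

def attempts_flooding(window, send_time=LINK_SEND_TIME):
    # "Corruption, not congestion -> retransmit instantly" policy:
    # the sender repeats as fast as the link will accept packets.
    return int(round(window / send_time))

print(attempts_with_backoff(WINDOW))  # 6: a handful of probes
print(attempts_flooding(WINDOW))      # 6000: a saturated link
```

Three orders of magnitude more copies of a packet that cannot be
delivered, and not one byte of useful data moved either way.
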
This particular scenario is of course pathological, but one can (easily)
construct many scenarios where the response to corruption (or whatever
network trouble you choose) does not have one clear right answer.
Consider the simple case of a slowly fading signal from an 802.11 WAP; a
packet is lost to corruption, and the link simultaneously falls back to
a slower bitrate to improve its signal state.  Yes, you could create a
signal for both "corruption experienced", and a signal for "bitrate
reduced".  You could also create a signal for "bitrate increased",
"competing traffic reduced", "user wants better latency", and a thousand
other things.  I am aware that there are even transport and network
protocols which support these signals.  I am also of the opinion that it
is likely that TCP can benefit from some knowledge of lower layers.
What I am heartily SICK of hearing, however, is this endless prattle
that TCP is completely broken, IP is worthless, it's all done wrong, it
will never work, it's insecure, blah blah blah blah blah.
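
To make the ambiguity concrete, here is a sketch of a sender reacting
to explicit lower-layer loss signals.  The signal names and the reaction
policy are invented for illustration; the point is that even with
perfect signaling, the right reaction depends on *why* the signal fired:

```python
# Sketch: even with explicit lower-layer signals, the "right" transport
# reaction is ambiguous.  Signal names here are invented for illustration.

def react_to_loss(signal, cwnd, rto):
    """Return (new_cwnd, new_rto) for a hypothetical loss signal."""
    if signal == "congestion":
        # Classic multiplicative decrease: the network is overloaded.
        return max(cwnd // 2, 1), rto
    if signal == "corruption":
        # Resending at full rate is only sane if the corruption is
        # transient; for a reliably-corrupted pattern it floods the link.
        return cwnd, rto
    if signal == "bitrate_reduced":
        # The 802.11 fallback case: capacity genuinely dropped, so
        # treating the loss purely as corruption would overdrive the link.
        return max(cwnd // 2, 1), rto
    # Unknown signal: be conservative, as TCP is today.
    return max(cwnd // 2, 1), rto * 2

print(react_to_loss("corruption", cwnd=20, rto=1.0))       # (20, 1.0)
print(react_to_loss("bitrate_reduced", cwnd=20, rto=1.0))  # (10, 1.0)
```

In the fading-WAP scenario above, both the "corruption" and the
"bitrate_reduced" branches fire for the same lost packet, and they
prescribe opposite reactions.  Which one wins is a policy question, not
something the signal itself answers.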

The fact remains that the Internet exists, it WORKS, it works very WELL,
and the Internet protocol stack is simple enough, and yet powerful
enough, to cope with little or no modification on everything from tiny
embedded systems with a scant few K of RAM and cycles per second to
many-processor iron in server rooms with gigabit links sprawling to
hundreds of IP devices on enterprise networks.  No matter how much that
hurts, it's true.  Yes, we can improve it, and yes, there are mistakes
in the stack.  However, Chicken Little is never going to panic it out of
existence.

So ... have any actual ideas on how to improve things?  (I don't mean
citations of previous ideas which no one can find, either.)


P.S.: Somehow your mail got to me again, and it even traversed the

The laws that forbid the carrying of arms are laws [that have no remedy
for evils].  They disarm only those who are neither inclined nor
determined to commit crimes.
		-- Cesare Beccaria, "On Crimes and Punishments", 1764
