[e2e] Open the floodgate

Cannara cannara at attglobal.net
Thu Apr 22 23:49:19 PDT 2004


To reduce the list traffic, I'm going to combine Rick's & Noel's, and address
the points from both...

Rick:  "seems the mechanisms did sufficiently well" -- perhaps, especially if
part of the task was avoiding impending collapses 20 years ago, but that's my
point -- a kludge may well have been needed right then, but stopping there
doesn't yield a well-designed system for the next decades.  As far as VJ's
algorithms are concerned, there are quite knowledgeable folks who can show
their statistical bases were inadequate.  

Of course, "other" drops weren't accounted for, because the system wasn't
completed, though it could have been, by now.  Alternative ways of dealing
with end-end flows by interior management of links have long existed, and are
used at lower layers in networks.  Apparently, a great bias has existed
against this sort of design, which is actually very successful in other
contexts.  Even a very big old name in Internet Land liked this type of
approach, as expressed, for example, in...

"...reason it [TCP] requires the opposite of backoff is because it doesn't
have the visibility to determine which algorithm to choose differently as it
navigates the network at any point in time.  But if you can do it hop by hop
you can make these rules work in all places and vary the algorithm knowing
you're working on a deterministic small segment instead of the big wide
Internet."

...but politics seems in control.  So, we're left with something called TCP
which attempts network-layer congestion amelioration without any info from
that layer.  Note, for instance, that a TCP sender can't even tell whether
it timed out and retransmitted too quickly, even when it gets two ACKs for
the same data from the receiver -- a loss is still counted.  Yet IP provides
a simple way of doing exactly this.  As I implied before, ECN is a late,
partial admission that a problem has long existed and needs attention.
None of us waits as many decades to get a cavity filled -- at least I hope
none do.
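
To make the gap concrete: one published scheme, Eifel detection, draws
exactly this distinction using the TCP timestamp option (the text above
points at IP; the timestamp version is simply the easiest to sketch).
A minimal sketch, with all state and names invented here rather than
taken from any deployed stack:

    #include <stdint.h>

    /* Hypothetical per-connection state; field names are illustrative. */
    struct tcp_conn {
        uint32_t retrans_tsval;  /* TS value stamped on our retransmission */
        int      rto_pending;    /* retransmitted on timeout, awaiting ACK */
        uint32_t lost_count;     /* "losses" charged against the path */
    };

    /* Called on the first ACK covering the retransmitted data.  ts_ecr is
     * the timestamp value the receiver echoed back.  If it is OLDER than
     * the value stamped on the retransmission, the ACK was triggered by
     * the ORIGINAL segment: the timeout was spurious, nothing was lost. */
    static int rto_was_spurious(struct tcp_conn *c, uint32_t ts_ecr)
    {
        if (!c->rto_pending)
            return 0;
        c->rto_pending = 0;
        if ((int32_t)(ts_ecr - c->retrans_tsval) < 0)
            return 1;            /* don't count a loss, don't back off */
        c->lost_count++;         /* a genuine (or at least unprovable) loss */
        return 0;
    }

One extra word of state per connection, and a sender can tell "the network
dropped my data" from "I was merely impatient" -- exactly the distinction
the standard algorithms never draw.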

And on: "..take something out of the environment..." -- how long do we wait to
do something?  We floundered for years on something as basic as addressing. 
As others have said, the IETF wins the slowness race with other networking
bodies.  And, note that I'm not "denigrating" TCP, I'm raising issues that, if
dealt with, would prolong its overall life and improve its effectiveness. 
Others have raised many of the same issues over the years and been rebuffed. 
Rebuffing is what bureaucracies do, so that's the criticism.  TCP is
inanimate.  Only people can resist changes in it.
-----

Noel:  You say to me: "If you don't want the transport layer to do congestion
control, what layer do you want to do it? There's only one more layer down
there - the internetwork layer. Adding congestion control there would greatly
increase the complexity of the internetwork layer, and all the devices that
implement it - including the routers."

First, routing systems of all sorts do congestion control on a link and even
system basis; it's just that such info can't be understood by TCP/IP.  So, as
I've simply said, the initial congestion-control kludge may have been needed,
but so was continued development of more comprehensive congestion management
that even nodes running, say, UDP could benefit from.  That didn't happen,
and folks who were there know about the great biases oddly set up against
involving the Network layer more effectively.
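
For anyone who hasn't met it, the hop-by-hop alternative is easy to
sketch.  This toy credit scheme shows the flavor of what lower layers
(Fibre Channel's buffer-to-buffer credits, for one) have long done;
every name below is invented for illustration, not any real protocol:

    #include <stdint.h>

    /* Per-link state: frames the downstream neighbor has promised it
     * can still buffer. */
    struct link {
        uint32_t credits;
    };

    /* Transmit side: send only while the neighbor has room.  Congestion
     * is felt immediately on THIS hop, by every flow -- TCP, UDP,
     * anything -- with no guessing from end-to-end packet loss. */
    static int link_can_send(const struct link *l)
    {
        return l->credits > 0;
    }

    static void link_on_frame_sent(struct link *l)
    {
        l->credits--;            /* one less downstream buffer free */
    }

    /* Feedback path: the neighbor returns a credit each time it frees
     * a buffer, so the sending rate tracks the actual drain rate. */
    static void link_on_credit_returned(struct link *l)
    {
        l->credits++;
    }

The point isn't the dozen lines of code; it's that the control loop spans
one deterministic link instead of an arbitrary, invisible path -- which is
just what the quote above calls working on "a deterministic small segment
instead of the big wide Internet."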

And, there's more than "only one more layer down there": a look into the
various Datalink devices and systems around points up further instances of
flow management that, again, IP was never allowed to be made aware of.
Even use of "the internetwork layer" as the only layer below TCP itself
is a throwback to how the IP family was designed -- to talk to other I/O
processors (IMPs...), not to any resident drivers for physical media.  There
was no concept of addressing physical interfaces, LANs, etc.  Only other
Network-layer devices were conceived of, and not many at that, which is why IP
addresses were so small.  An interesting thing is that during the '80s and
'90s, when corporate networks could address 80 bits worth of Network-layer
systems, IP was floundering on its inadequate base and complex
addressing schemes.  Now, of course, the ultimate addressing kludge (NAT) has
even forestalled IPv6 in common use (which may even be a good thing :)

As far as: "And trust me, a network in congestive collapse is so much worse
than the effects of treating error packets as congested packets, that the
latter just didn't seem very important. If you had to live with congestive
collapse you'd appreciate that."  

I know; I was around then too, Noel.  In fact, many folks outside The TCP/IP
in-crowd thought the overall protocol suite somewhat quaint and inadequate for
corporate tasks, especially for large, secure business nets.  Amazing how many
billions we're now spending because that assessment rings true today.

The issue is not how great, or bad, things were 20 years ago, but being active
in trying to do more astute development now, not resting on laurels, real or
imagined.

Alex


Rick Jones wrote:
> 
> I'm sure I'll come to regret this, given I don't have an historical score card
> with which to tell the players and their old scores but...
> 
> ...in the context of TCP VJ congestion control, wasn't one of the stated
> assumptions that networks didn't have much in the way of error rates in them
> other than drops resulting from congestion?
> 
> That being the case, seems the mechanisms did sufficiently well.
> 
> Now it seems there are things changing the preconditions - wireless and I
> suppose really really really high-speed links. (have bitrates increased faster
> than the exponents on the bit error rates?)
> 
> So sure, take something out of the environment in which it was conceived and it
> may not do well and may indeed need some work, but I'm not sure that is cause to
> denigrate it so seemingly completely.
> 
> rick jones
-----

Noel Chiappa wrote:
> 
>     > From: Cannara <cannara at attglobal.net>
> 
>     > TCP was modified to provide network congestion control some years ago
>     > -- something that is clearly a violation of layering responsibility
> 
> You say this very glibly, as if doing something different were trivial and
> obvious. I don't think it's quite so trivial or obvious (especially not in
> hindsight), however.
> 
> If you don't want the transport layer to do congestion control, what layer do you
> want to do it? There's only one more layer down there - the internetwork
> layer. Adding congestion control there would greatly increase the complexity
> of the internetwork layer, and all the devices that implement it - including
> the routers.
> 
> If you say "OK, so there should be a shim layer in the middle", then you run
> into the issue that you've now added another layer, with attendant
> implementation complexity and overhead. The natural response would be to fold
> the functionality into the transport layer, to get rid of the structuring
> overhead of a separate layer.
> 
> Sure, with hindsight, another layer in there might have been good when the
> network got to be really large, and much faster. (We could have
> location/identity binding in there too, perhaps.) However, all sorts of
> things that are obvious in hindsight, and would *now* be good to have now
> that the network is larger, weren't so obvious (or obviously economical in
> the long run) back then.
> 
> I can tell you exactly how the transport layer wound up doing *network*
> congestion control - it's because it was already doing *host* congestion
> control, specifically via the window field. The transport layer did host
> congestion control for a very simple reason: it was the layer that knew how
> much buffering was available - because it managed the buffers where data was
> held until it could be ACK'd. (Surely a transport responsibility, no?) Once it
> was doing one kind of congestion control, adding the other was just the most
> straightforward thing to do.
> 
>     > a protocol that believes it needs to slow down whenever it sees a
>     > packet loss.
> 
> Look, we all have known for some time that i) TCP can't currently distinguish
> between error loss and congestion loss, and ii) slowing down for an error loss
> is a mistake.  (In fact, I'm losing horribly these days because my mail is
> kept on a host which is on a network which is losing packets, so I am
> personally aware of this issue.) We're not cretins. You don't need to keep
> repeating it.
> 
>     > There are various causes of loss, which was a fact even known in the
>     > '80s when TCP was kludged, so why ignore them and incorrectly lump them
>     > with congestion?
> 
> Because when the network fell into congestive collapse - which we *were
> actually seeing* back before TCP had congestion control - losses due to
> congestion far outnumbered the losses due to error, so the latter could
> safely be ignored.
> 
> And trust me, a network in congestive collapse is so much worse than the
> effects of treating error packets as congested packets, that the latter just
> didn't seem very important. If you had to live with congestive collapse you'd
> appreciate that.
> 
>         Noel

