[e2e] Is a non-TCP solution dead?

Cannara cannara at attglobal.net
Mon Mar 31 14:57:08 PST 2003


Spencer, thanks for expanding on this issue.  I understand the reasons below,
but perhaps we need to realize that using the Transport Layer (TCP) to
ameliorate Network-Layer congestion was a mistake.  Of course it was done
following some big scares of Internet collapse in the '80s, but it remains,
nevertheless, a kludge.

It's worthwhile to note other approaches, some used years ago, that
successfully 'violated' the odd ideas underlying the kludge of TCP
algorithmically helping IP traffic.  Slow start, for example, assumes an
extremely weak network path -- not the usual case in corporate nets, somewhat
the case at the horrid Internet peering points at busy hours, and only
modestly so in the overall, complex serialized path across wireless, wired,
and Internet links.  The design of modern routers, gateways between network
types, firewalls, etc. allows for an extreme amount of buffering, partly
because of new, non-TCP services (video...).  So the original kludge of
'informing' TCP about congestion couldn't work long term anyway -- that's a
Network-Layer responsibility.
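
Just to illustrate how pessimistic that ramp-up is, here is a minimal,
hypothetical sketch of classic slow start in Python -- the MSS and ssthresh
values are assumptions chosen only for the example, not taken from any
particular stack:

  # Minimal sketch of TCP slow start's ramp-up (illustrative only;
  # real stacks add congestion avoidance, fast retransmit, SACK, etc.).
  MSS = 1460                 # bytes per segment (assumed)
  ssthresh = 64 * MSS        # slow-start threshold (assumed)

  cwnd = 1 * MSS             # start at one segment, however fat the pipe
  rtt = 0
  while cwnd < ssthresh:
      print("RTT %d: cwnd = %d segments" % (rtt, cwnd // MSS))
      cwnd *= 2              # doubles once per RTT while everything is Acked
      rtt += 1

  # Even on a clean gigabit LAN, the sender spends half a dozen RTTs
  # probing upward from a single segment -- the 'extremely weak path'
  # assumption in action.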

I found it interesting some years ago, when consulting for a large storage
apps company, that their TCP traffic was violating most TCP rules -- no slow
start, window shutdowns ignored...  It was amazing to watch with a Sniffer(r)
the vast data rates they achieved among big Sun servers.  I was ready to list
the violations as problems with their stacks when one of their development
guys explained that they knew enough about how the OS handled IP that they
didn't need to use the kludges added to TCP to 'protect' the net.  Many on
this list have likely used this vendor's systems in one way or another.

I'll just add a couple more examples, such as AIS on UDP, and the old NBP from
3Com -- it assumed the net was in pretty good shape, so when a connection was
opened, it blasted 42 packets at full rate, then waited for an Ack.  If all 42
weren't Acked, it would throttle back; otherwise it would blast onward.
Worked great.
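
As a rough, hypothetical sketch of that burst-and-Ack idea (not 3Com's actual
code -- the 42-packet burst comes from the description above; the packet
framing, timeout, and halving policy are my own assumptions for illustration),
in Python over UDP:

  # Toy burst-and-Ack sender in the spirit of the scheme described above:
  # blast a burst, wait for one cumulative Ack, and throttle back only if
  # the burst wasn't fully acknowledged.
  import socket

  BURST = 42               # packets per burst (per the description above)
  TIMEOUT = 1.0            # seconds to wait for the Ack (assumed)

  def send_burst(sock, dest, chunks):
      # Send the chunks back-to-back, then wait for one Ack carrying the
      # count of packets the receiver actually got (assumed Ack format).
      for chunk in chunks:
          sock.sendto(chunk, dest)
      sock.settimeout(TIMEOUT)
      try:
          ack, _ = sock.recvfrom(64)
          return int(ack.decode())
      except socket.timeout:
          return 0

  def blast(sock, dest, chunks):
      burst, i = BURST, 0
      while i < len(chunks):
          window = chunks[i:i + burst]
          acked = send_burst(sock, dest, window)
          if acked == len(window):
              i += acked                   # all Acked: blast onward at full rate
          else:
              i += acked                   # back up to the first unacked packet
              burst = max(1, burst // 2)   # throttle back (policy assumed)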

My point is just that TCP should be considered "legacy" (hate that term :).

Alex

Spencer Dawkins wrote:
> 
> Hi, Alex,
> 
> There are a couple of interesting points here:
> 
> - For a few minutes during the early days of the PILC working
> group, we were excited to think that we might be able to treat
> ECN as an unambiguous indicator of congestion, so when we were
> using ECN (even "fully-deployed" ECN), we could interpret loss
> in the absence of ECN Congestion Experienced (CE) as an indication of
> transmission error, and not congestion.
> 
> Of course, we couldn't do this, because at a sufficiently high
> congestion level, we lose packets marked with CE, so we would be
> retransmitting at exactly the wrong time.
> 
> One of the reasons we were "dragging our feet" on error
> notification in PILC was because we couldn't come up with a
> reliable way to tell a sender about transmission errors that
> allowed the sender to know that there was no possibility of
> simultaneous loss due to congestion, so no possibility of making
> congestion worse for a sustained period of time.
> 
> - If we use something besides TCP over wireless links, we have
> two choices, (a) use a Performance Enhancing Proxy to translate
> TCP over wireline into something else when you hit
> wireline-to-wireless borders, or (b) use "something besides TCP"
> end-to-end.
> 
> Since one of the big issues with TCP has been slow start, it's
> difficult to see how (a) works without spoofing ACKs to a
> wireline TCP sender. Without ACK spoofing, the sending TCP
> executes slow start and congestion avoidance at end-to-end round
> trip latencies. I do believe some applications could handle
> this, but it's enough of a change in the "transport contract"
> that I don't see how we can just start doing it and hope that
> application protocols are doing their own end-to-end ACKs at the
> application layer.
> 
> If the wireless community continues to move toward "walled
> garden" deployments, so a single carrier can control its
> wireline servers and wireless clients, (b) could happen. But
> this would require a change from the late 1990s, when the goal
> was to allow wireless users to use arbitrary wireline services -
> there were some servers deployed that ran WAP 1.X end-to-end,
> but this deployment just didn't go as far as the industry had
> hoped.
> 
> Maybe we are closer today, with more realistic expectations and
> wider availability of downloaded applet technologies, and I know
> of proprietary implementations that run over UDP from mobile
> devices to a server, both of which are provided by the same
> vendor, so that a carrier really can deploy servers and clients
> in a more straightforward way.
> 
> But it's important to realize that we need something besides less
> foot-dragging in the IETF to solve this problem.
> 
> Spencer Dawkins
> 
> --- Cannara <cannara at attglobal.net> wrote:
> > Injong, excellent points.  There is no reason to continue to
> > believe that TCP
> > has any long-term purpose, especially in the radio-link
> > environment.  Foot
> > dragging in the IETF on ridding TCP of its '80s myopic flow
> > control has left
> > us with a poor transport for errored links.  It will be great
> > if the control
> > radio-link providers have can generate a better thought-out
> > transport, even if
> > it simply rides on IP or UDP.  We are in hope.
> >
> > Alex



