[e2e] TCP in outer space

Cannara cannara at attglobal.net
Sat Apr 14 12:34:55 PDT 2001


David, these are good questions.  And, since I omitted many other examples,
like Amgen's worldwide AppleTalk network, you're correct in asking what folks
could have passed across the "Internet-protocol-bureaucracy barrier".  One
thing to keep in mind is that all end networks (or autonomous systems) have
their own administrations, regardless of how they interconnect via any public
net.  In fact, corporate nets have for decades had to demarcate at telco
service points, where one or more separate admins then hold sway -- this
complicates corporate networking greatly, but, at least with Frame-Relay
interconnects, it allows some admission and flow control to occur at the
appropriate level -- DLC, in Frame's case, at the end net's systems.
Providers that offer backbones around the Internet are also now in the
business of allocating and charging for services on which they can set entry
limits.

The interesting cultural aspect of these discussions is that on one hand we
see the mysteries of UDP Length explained and old kludges exposed, and then
we hear principles like "thou shalt not cross layer bounds".  This despite a
whole train of design and discussion history on router code that assumes
something about a particular transport and deletes frames to trick far-end
transports into backing off, just so the near router can hope to drain its
filling queues -- how many layers are involved there?  :]  And how much
capacity is held back on a guess?
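
For anyone who hasn't stared at that trick lately, the drop-to-signal logic
is roughly the following -- a simplified sketch in Python, with my own
illustrative names and constants, and omitting RED's count-since-last-drop
refinements:

    import random

    MIN_TH, MAX_TH, MAX_P, WQ = 5, 15, 0.1, 0.002   # illustrative constants
    avg_q = 0.0                                      # EWMA of queue depth

    def red_should_drop(queue_len):
        global avg_q
        avg_q = (1 - WQ) * avg_q + WQ * queue_len    # smooth the queue sample
        if avg_q < MIN_TH:
            return False     # plenty of room -- just enqueue
        if avg_q >= MAX_TH:
            return True      # forced drop
        # Between the thresholds, drop with linearly rising probability,
        # counting on the far-end transport to read the loss as "slow down".
        p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p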

As for comments on "conservative design", I'm always in favor of that, yet we
open a vast network to partial control, which on occasion actually generates
control inputs of the wrong polarity, and we assume it's ok because 95% of
Internet traffic just happens to be TCP-based.  I'd say luck, rather than
engineering conservatism, is in force here, especially since externally-
introduced apps like the WWW fortunately chose TCP, and they happen to carry
the dominant traffic burdens (web biz, porno, music/pic files...).  If
anything is conservative, it's the end points, where providers oversubscribe
by huge factors, thus automatically committing themselves to throttle at
their junctures to subscribers and the public net -- a major DSL franchiser
oversubscribes by 20:1.  This external protection is not of the IETF's
design, nor part of the conservative engineering it believes in.

I'd summarized some of this in a draft response to another message, so I'll
try not to repeat anything.  The fact is that network processors are now
available that have (per chip) >500,000,000 RISC cycles available per second
and >1GByte of fast RAM available for processing packets (corresponding to
about 2M packets/sec).  They also implement queue controls like RED in
hardware.  Since basic RFC1812 routing takes <200 such cycles per packet, we
now have designs emerging with well over 100 extra instructions available per
packet to do whatever.  MPLS numbers are similarly good.  All these numbers
more than double next year, and will reach 10Gb/s packet processing per chip
about a year later.  If anyone is interested in processor vendors, I'll be
happy to pass names along off list.
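
The back-of-envelope arithmetic, for anyone who wants to plug in their own
datasheet figures (the round numbers below are just the ones quoted above;
real parts exceed them, which is where the extra margin comes from):

    cycles_per_second  = 500000000   # >500M RISC cycles/s per chip
    packets_per_second = 2000000     # ~2M packets/s through that chip
    forwarding_cost    = 200         # <200 cycles of basic RFC1812 work

    budget   = cycles_per_second / packets_per_second  # cycles per packet
    headroom = budget - forwarding_cost                # cycles left over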

The point is that router/switch code can do far more these days than was
ever imagined when the decision was made, years ago, to offload performance
and capacity decisions from 'gateways' (routers).  The corollary is that none
of this should come as a surprise.  So, for example, rather than simply using
the hardware RED capability now available to drop packets, use it to generate
a more intelligent control statement to the sender.  Source Quench and its
original purposes have been discussed, but consider that intelligent folks
might go even further -- let a little of this processor-cycle wealth be
directed at the network layer without tricking the assumed transport, which
is not the source of all the traffic.  This is, of course, being discussed.
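
To make "a more intelligent control statement" concrete, here's one shape it
could take -- purely my own illustration, written with Scapy field names --
a router answering with classic ICMP Source Quench (type 4, code 0) at the
point where it would otherwise have dropped:

    from scapy.all import IP, ICMP, raw, send

    def quench_instead_of_drop(pkt):
        # pkt is the datagram RED would have discarded.  Tell its sender
        # to slow down, quoting the offending IP header plus 8 bytes per
        # the usual ICMP error-message rule (assumes no IP options).
        quoted = raw(pkt[IP])[:28]
        sq = IP(dst=pkt[IP].src) / ICMP(type=4, code=0) / quoted
        send(sq, verbose=False)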

As an example, and just for kicks, keeping the TCP input so loved, consider
a compatible technique that kicks in, say, before RED drops -- let the router
fabricate a statement to the sender via an Ack that simply reduces the
apparent receive window.  The router code knows all that's needed to do this,
and can use an Ack seq # that causes no response at the sender other than
seeing that it's time to reduce the outstanding payload (segments).  If a
router does this on all connections whose queues are approaching some RED
watermark of choice, dropping may be avoided entirely for those connections,
and performance will improve.  Even more ambitious attempts along these lines
can be thought up, I'm sure.  This is only suggested as an example, but the
point is that it's old news and not "rocket science" by any means.  It does
mean thinking a bit differently and doing more than just running yet another
NS simulation.
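
For the curious, the forged Ack might look something like this, in Scapy
notation -- again my own illustration of the idea, not a worked router
implementation (a real box would build the segment in its fast path, not in
Python):

    from scapy.all import IP, TCP, send

    def forge_window_update(pkt, new_window):
        # pkt is the oldest queued data segment of a flow that's nearing
        # the chosen RED watermark, heading sender -> receiver.  Build a
        # pure Ack back toward the sender that acknowledges nothing new
        # but advertises a smaller receive window, so its only effect is
        # to shrink the payload the sender keeps outstanding.
        return (IP(src=pkt[IP].dst, dst=pkt[IP].src) /
                TCP(sport=pkt[TCP].dport, dport=pkt[TCP].sport,
                    flags="A",
                    seq=pkt[TCP].ack,    # receiver's send position, read
                                         # from the sender's own ack field
                    ack=pkt[TCP].seq,    # duplicates what's already acked
                    window=new_window))

    # e.g.  send(forge_window_update(queued_pkt, 4096), verbose=False)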

Alex


David.Eckhardt at cs.cmu.edu wrote:
> 
> > Indications of parochialism were clear years ago, when Xerox had
> > a worldwide, functioning, XNS network, followed by 3Com's worldwide
> > XNS-like network, paralleled by the Marine Corps worldwide Banyan
> > Vines network, various Netware networks, etc., etc.  These were
> > indeed largely datagram networks.  There has been ample opportunity
> > for cross-fertilization of networking ideas, over many years.
> > Apparently, however, the IETF has been another way of abbreviating
> > NIH.
> 
> Comparing single-administration networks to the Internet seems a
> little apples-oranges.  Lots of things get easier if you can
> administratively control load and mandate which versions of which
> applications get deployed.
> 
> But, setting that aside, what do you think the Internet should
> have learned from Xerox/3Com/USMC about, say, congestion control?
> 
> Or what other specific insights or technologies do you believe
> the IETF wrongly rejected?
> 
> Dave Eckhardt



