[e2e] TCP in outer space

Jon Crowcroft J.Crowcroft at cs.ucl.ac.uk
Fri Apr 13 10:59:28 PDT 2001


well, when i came in (1981) we were running TCP over the connected
ARPANET, SATNET (0.72s round trip time) and the 10Mbps cambridge rings
(the ethernet didn't quite cut it yet then :-)

and we thought about the effect of RTT mean and variance getting bigger
(see the PhD thesis by stephen edge at UCL back then - it even talked
about fragmentation being a bad idea, as i recall)
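as an aside, a quick sketch in python of how the RTT mean and variance
feed the retransmit timer - this is the jacobson/karels estimator as
later written up in RFC 2988, not what we were running back then, and
the variable names are mine:

    # RFC 2988-style retransmission timeout estimator (jacobson/karels).
    # shows how both the mean (SRTT) and the variation (RTTVAR) of the
    # measured RTT inflate the retransmission timeout.
    class RtoEstimator:
        def __init__(self, alpha=1/8, beta=1/4, k=4, min_rto=1.0):
            self.alpha = alpha      # gain on the smoothed RTT
            self.beta = beta        # gain on the RTT variation
            self.k = k              # variation multiplier
            self.min_rto = min_rto  # floor on the timeout, in seconds
            self.srtt = None
            self.rttvar = None

        def sample(self, rtt):
            """feed one RTT measurement (seconds); return the new RTO."""
            if self.srtt is None:          # first measurement
                self.srtt = rtt
                self.rttvar = rtt / 2
            else:                          # subsequent measurements
                self.rttvar = (1 - self.beta) * self.rttvar \
                              + self.beta * abs(self.srtt - rtt)
                self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
            return max(self.min_rto, self.srtt + self.k * self.rttvar)

    # a SATNET-like path: ~0.72s mean RTT with plenty of jitter pushes
    # the timer well past the mean - growing mean *and* variance both hurt.
    est = RtoEstimator()
    for rtt in (0.72, 0.95, 0.70, 1.40, 0.75):
        print(round(est.sample(rtt), 2))

the fatter the RTT tail, the longer you sit on a loss before
retransmitting - which is why the variance mattered as much as the mean.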

i am afraid that we _did_ do rocket science:-)
In message <20010413101839.B62628 at ted.isi.edu>, Ted Faber typed:

 >>
 >>On Thu, Apr 12, 2001 at 09:17:53PM -0400, J. Noel Chiappa wrote:
 >>>     > From: Ted Faber <faber at ISI.EDU>
 >>> 
 >>>     >> since 1978, there has been no mystery that, for instance, TCP's
 >>>     >> retransmission algorithm has been given no means to distinguish among
 >>>     >> even the two most important causes of loss -- physical or queue drop.
 >>>  
 >>>     > Mechanisms based on internal network state were considered and
 >>>     > discarded because they conflicted with the well-defined and
 >>>     > well-prioritized list of design goals for the nascent Internet.
 >>>     > Specifically, depending on such internal information made the net less
 >>>     > robust to gateway failures .. so such mechanisms weren't included.
 >>>  
 >>> Actually, neither of these positions is right. The early Internet *did* try
 >>> and *explicitly* signal drops due to congestion, separately from drops due to
 >>> damage - that's what the ICMP "Source Quench" message was all about.
 >>
 >>I was thinking of more network-specific ways to signal the event.
 >>You're right, SQ was an attempt to signal congestion in a
 >>transport-independent way.  I won't beat the horse about the
 >>differences between SQ and ECN.
 >>
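just to pin down the mechanics for anyone following along: a toy sketch
in python of where each of the three signals lands at the sender - all
the names are hypothetical and the reactions are placeholders, not any
real stack's behaviour:

    # toy sender reacting to the three signals discussed in the thread.
    class ToySender:
        def __init__(self):
            self.cwnd = 10.0        # congestion window, in segments

        def on_timeout(self):
            # bare TCP: a segment was lost, but the cause is unknown
            # (link damage? queue drop?), so react conservatively.
            self.cwnd = 1.0

        def on_source_quench(self):
            # ICMP Source Quench (type 4) arrives straight from a router
            # on the path: a transport-independent "slow down", no drop
            # required to carry the signal.
            self.cwnd = max(1.0, self.cwnd / 2)

        def on_ecn_echo(self):
            # a router set the CE mark in the IP header; the *receiver*
            # echoes it back in the TCP ECE flag, so the signal rides
            # the transport's own ACK stream.
            self.cwnd = max(1.0, self.cwnd / 2)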
 >>> In fact, the thing is that we just didn't understand a lot about the dynamics
 >>> of very large pure datagram networks, back in the late 70's when the initial
 >>> TCP work was being done. (And I'm not talking about only congestion, etc
 >>> here.) To start with, nobody had ever built one (the ARPAnet was *not* a pure
 >>> datagram network, for reasons I'm not going to explain here). With the *very*
 >>> limited amount of brain/programming power available in the early days of the
 >>> project (most of which was focussed on down-to-earth tasks like getting
 >>> packets in and out of interfaces), it's not too surprising that there were
 >>> lots of poorly-examined regions of the design space.
 >>
 >>My point was that the brain power seems to have made some a priori
 >>decisions about how to wade into the design space.  One of those was
 >>to favor robustness over efficiency.  That implies that once the
 >>congestion systems were sufficient to keep the net running, that
 >>prioritization urged brain power to be spent on robustness and
 >>swallowing new network technologies rather than tuning the congestion
 >>control for optimal efficiency.  It also suggests that when we get
 >>around to such tinkering, breaking the robustness of the net is a
 >>pitfall to beware.
 >>
 >>But I only know what I read in the papers; you were there, and if you
 >>tell me that I'm misunderstanding I'm probably misunderstanding.
 >>
 >>> 
 >>> But to get back to the topic, even ECN is at best a probabilistic mechanism
 >>> for letting the source know which drops are due to congestion, and which to
 >>> packet damage. The only "more reliable" (not that *any* low-level mechanism
 >>> in a pure datagram network is wholly reliable) method is SQ - and as I
 >>> mentioned, we just got through a long disquisition about how SQ is not the
 >>> right thing either.
 >>
 >>OK, I lied, I will mutter about the differences.  Unless you give ICMP
 >>packets preferential treatment with respect to drops, I think it's
 >>tough to characterize the relative reliability of the signals.  There
 >>are a lot of apples and oranges variables there - e.g., ECN's
 >>redundancy (multiple packets are likely to be marked) vs. SQ's shorter
 >>path through a (hopefully) less congested area of the net.  Even in the
 >>absence of other shortcomings, I don't think I'd believe an assertion
 >>about the relative reliabilities without a study or three.
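to make that apples-and-oranges point concrete, here is a zeroth cut at
one of those studies - a toy monte carlo in python where every
probability is invented, purely to show which knobs the comparison
turns on:

    # toy monte carlo: how often does the congestion signal reach the
    # sender under the two schemes being compared?
    #   - ECN: several marked data packets carry the signal, and the
    #     receiver echoes it back on the ACK path (built-in redundancy);
    #   - SQ:  a single ICMP Source Quench travels the (hopefully less
    #     congested) reverse path from the router to the sender.
    # every probability below is made up for illustration only.
    import random

    TRIALS = 100_000
    P_MARKED_DELIVERED = 0.9   # a marked data packet reaches the receiver
    P_ACK_DELIVERED    = 0.9   # the echoing ACK reaches the sender
    N_MARKED           = 3     # packets the router marks per congestion event
    P_SQ_DELIVERED     = 0.95  # the lone SQ survives its shorter path

    def ecn_signal_arrives():
        # the sender learns of congestion if at least one marked packet
        # gets through *and* its echo makes it back.
        return any(random.random() < P_MARKED_DELIVERED * P_ACK_DELIVERED
                   for _ in range(N_MARKED))

    def sq_signal_arrives():
        return random.random() < P_SQ_DELIVERED

    ecn = sum(ecn_signal_arrives() for _ in range(TRIALS)) / TRIALS
    sq  = sum(sq_signal_arrives()  for _ in range(TRIALS)) / TRIALS
    print(f"ECN signal delivered: {ecn:.3f}   SQ signal delivered: {sq:.3f}")

which of the two wins depends entirely on numbers nobody in this thread
has measured - hence the study or three.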

 cheers

   jon



