[e2e] TCP in outer space

Ted Faber faber at ISI.EDU
Fri Apr 13 10:18:39 PDT 2001


On Thu, Apr 12, 2001 at 09:17:53PM -0400, J. Noel Chiappa wrote:
>     > From: Ted Faber <faber at ISI.EDU>
> 
>     >> since 1978, there has been no mystery that, for instance, TCP's
>     >> retransmission algorithm has been given no means to distinguish among
>     >> even the two most important causes of loss -- physical or queue drop.
>  
>     > Mechanisms based on internal network state were considered and
>     > discarded because they conflicted with the well-defined and
>     > well-prioritized list of design goals for the nascent Internet.
>     > Specifically, depending on such internal information made the net less
>     > robust to gateway failures .. so such mechanisms weren't included.
>  
> Actually, neither of these positions is right. The early Internet *did* try
> and *explicitly* signal drops due to congestion, separately from drops due to
> damage - that's what the ICMP "Source Quench" message was all about.

I was thinking of more network-specific ways to signal the event.
You're right that SQ was an attempt to signal congestion in a
transport-independent way.  I won't beat the horse about the
differences between SQ and ECN.

> In fact, the thing is that we just didn't understand a lot about the dynamics
> of very large pure datagram networks, back in the late 70's when the initial
> TCP work was being done. (And I'm not talking about only congestion, etc
> here.) To start with, nobody had ever built one (the ARPAnet was *not* a pure
> datagram network, for reasons I'm not going to explain here). With the *very*
> limited amount of brain/programming power available in the early days of the
> project (most of which was focussed on down-to-earth tasks like getting
> packets in and out of interfaces), it's not too surprising that there were
> lots of poorly-examined regions of the design space.

My point was that the brain power seems to have made some a priori
decisions about how to wade into the design space.  One of those was
to favor robustness over efficiency.  That implies that once the
congestion mechanisms were sufficient to keep the net running, that
prioritization urged that brain power be spent on robustness and on
swallowing new network technologies rather than on tuning the
congestion control for optimal efficiency.  It also suggests that when
we get around to such tinkering, breaking the robustness of the net is
a pitfall to beware of.

But I only know what I read in the papers; you were there, and if you
tell me I'm misunderstanding, I probably am.

> 
> But to get back to the topic, even ECN is at best a probabilistic mechanism
> for letting the source know which drops are due to congestion, and which to
> packet damage. The only "more reliable" (not that *any* low-level mechanism
> in a pure datagram network is wholly reliable) method is SQ - and as I
> mentioned, we just got through a long disquisition about how SQ is not the
> right thing either.

OK, I lied; I will mutter about the differences.  Unless you give ICMP
packets preferential treatment with respect to drops, I think it's
tough to characterize the relative reliability of the two signals.
There are a lot of apples-and-oranges variables there - e.g., ECN's
redundancy (multiple packets are likely to be marked) vs. SQ's shorter
path through a (hopefully) less congested area of the net.  Even in
the absence of other shortcomings, I don't think I'd believe an
assertion about the relative reliabilities without a study or three.
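
To make the apples-and-oranges point a bit more concrete, here is a toy
back-of-the-envelope model.  It is purely illustrative: every function,
parameter, and number below is made up for the sketch (nothing here is
measured or taken from the thread), and it assumes independent per-hop
drops, which is itself a big simplification.  It just compares the chance
that at least one of several ECN-marked data packets gets its mark echoed
back to the sender against the chance that a single Source Quench ICMP
packet survives a shorter return path.

# Toy model: ECN's redundancy (several marked data packets, any one of
# which is enough) vs. SQ's single ICMP packet on a shorter, hopefully
# less-congested return path.  All numbers are made-up illustrations.

def survival(per_hop_drop: float, hops: int) -> float:
    """Probability a single packet crosses `hops` hops, each dropping
    it independently with probability `per_hop_drop`."""
    return (1.0 - per_hop_drop) ** hops

def ecn_signal_prob(marked_pkts: int, fwd_drop: float, fwd_hops: int,
                    ack_drop: float, ack_hops: int) -> float:
    """Sender learns of congestion if at least one marked data packet
    reaches the receiver AND the ACK echoing the mark makes it back."""
    one_mark = survival(fwd_drop, fwd_hops) * survival(ack_drop, ack_hops)
    return 1.0 - (1.0 - one_mark) ** marked_pkts

def sq_signal_prob(sq_drop: float, back_hops: int) -> float:
    """Sender learns of congestion if the single Source Quench ICMP
    packet survives the (shorter) path back from the congested gateway."""
    return survival(sq_drop, back_hops)

if __name__ == "__main__":
    # Illustrative numbers only: 5 marked packets over 4 congested forward
    # hops and 4 ACK hops, vs. one SQ over 2 lightly loaded return hops.
    print("ECN:", ecn_signal_prob(marked_pkts=5, fwd_drop=0.05, fwd_hops=4,
                                  ack_drop=0.02, ack_hops=4))
    print("SQ: ", sq_signal_prob(sq_drop=0.02, back_hops=2))

With these made-up numbers the redundancy wins, but nudge the per-hop
drop rates or hop counts and it flips - which is roughly the point:
without pinning down the loss model you can argue it either way.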
