[e2e] TCP in outer space

Ted Faber faber at ISI.EDU
Mon Apr 16 09:26:20 PDT 2001


On Sat, Apr 14, 2001 at 04:18:46PM -0400, J. Noel Chiappa wrote:
>     > From: Ted Faber <faber at ISI.EDU>
> 
>     >> The early Internet *did* try and *explicitly* signal drops due to
>     >> congestion ... that's what the ICMP "Source Quench" message
>  
>     > I was thinking of more network specific ways to signal the event.
> 
> Well, that wouldn't have been workable, if the congestion point was on a
> different network from the source of the traffic, no? (And I hope we're not
> falling into terminology confusion over what you mean by "network specific" -
> I'm assuming it means "specific to a particular link technology").

We're agreeing loudly.  For the record, I thought Alex was suggesting
network-dependent congestion indications (e.g., notifications from a
wireless link that a given packet loss was due to congestion), and I
don't think those are compatible with the Internet design.  SQ is
network layer (link-independent) and philosophically more compatible,
but has significant practical drawbacks, and I had simply forgotten
about it.
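For concreteness, here's a rough sketch of what a Source Quench message looks like on the wire per RFC 792 (ICMP type 4, code 0, followed by the offending datagram's IP header plus its first 8 payload bytes). This is my own illustrative construction, not anything from the thread; it assumes a 20-byte IP header with no options.

```python
import struct

def internet_checksum(data: bytes) -> int:
    # RFC 1071 ones-complement checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_source_quench(original_datagram: bytes) -> bytes:
    # ICMP type 4 (Source Quench), code 0, per RFC 792.
    # Body: 4 unused bytes, then the dropped datagram's IP header
    # plus its first 8 data bytes, so the source can identify the
    # flow being told to slow down.
    body = original_datagram[:28]  # assumes 20-byte IP header, no options
    header = struct.pack("!BBHI", 4, 0, 0, 0)
    csum = internet_checksum(header + body)
    return struct.pack("!BBHI", 4, 0, csum, 0) + body
```

A gateway would send one of these back toward the source for a dropped datagram; one well-known practical drawback is that generating the messages consumes resources at exactly the moment the gateway is overloaded.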

> The moral of the stories is clear: robustness is fine, but you need to have
> counters to see how often you're triggering the robustness mechanisms,
> because if you're doing it too often, that's a problem (for all the obvious
> reasons - including that the recovery is generally less efficient, so you
> shouldn't be doing it too much).

Agreed.  A robust system that you can't monitor is a recipe for
trouble down the line.  I hope it didn't sound like I was against
error reporting in robust systems.  My point was that the Internet is
a system in which it's more important to keep forwarding some packets
than to achieve 100% link utilization.

> 
> 
>     > That implies that once congestion systems were sufficient to keep the
>     > net running, that prioritization urged brain power to be spent on
>     > robustness and swallowing new network technologies rather than tuning
>     > the congestion control for optimal efficiency.
> 
> Well, actually, it was more like "since the original TCP congestion control
> worked OK to begin with, we moved our limited resources toward other things
> like getting applications up, and dealing with some of the initial scaling
> problems, such as needing to deploy A/B/C addresses, subnets, replacing GGP
> with EGP, getting rid of HOSTS.TXT in favor of DNS, etc, etc, etc,
> etc"!

I thought mine sounded nicer, but yours is what I was making sound
pretty. :-)

> 
>     > Unless you give ICMP packets preferential treatment with respect to
>     > drops, I think it's tough to characterize the relative reliability of
>     > the signals. ... Even in the absence of other shortcomings, I don't
>     > think I'd believe an assertion about the relative reliabilities without
>     > a study or three.
> 
> Excellent point. In large complex systems, theory has to take a back seat to
> empirical data. It's almost a given that a large system is going to behave in
> ways that all of Mensa couldn't predict.

Which goes right back to your point about measurability and error
reporting in robust systems.
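To illustrate why I wouldn't trust an a priori claim about relative reliability: here's a toy Monte-Carlo sketch (entirely my own construction, with made-up parameters) comparing an "implicit" loss signal that is carried redundantly (say, several duplicate ACKs) against a single explicit ICMP message, when both cross a path that drops packets independently with the same probability. The conclusion flips depending on how many redundant carriers you assume and whether ICMP gets preferential treatment, which is exactly why you need measurement rather than theory.

```python
import random

def delivered(p_drop: float) -> bool:
    # One packet crossing the path; dropped with probability p_drop.
    return random.random() >= p_drop

def trial(p_drop: float, dupacks: int = 3) -> tuple[bool, bool]:
    # Implicit signal: the sender learns of the loss if ANY of
    # several duplicate ACKs gets through (assumed redundancy).
    implicit = any(delivered(p_drop) for _ in range(dupacks))
    # Explicit signal: a single ICMP message, no preferential drop
    # treatment (an assumption, not a fact about any real router).
    explicit = delivered(p_drop)
    return implicit, explicit

random.seed(1)
n = 100_000
p_drop = 0.2
results = [trial(p_drop) for _ in range(n)]
implicit_rate = sum(r[0] for r in results) / n
explicit_rate = sum(r[1] for r in results) / n
print(f"implicit signal delivered: {implicit_rate:.3f}")
print(f"explicit signal delivered: {explicit_rate:.3f}")
```

Under these particular assumptions the redundant implicit signal wins easily (roughly 1 - p^3 vs. 1 - p), but change any assumption and the ranking can change, which is the point.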

