[e2e] What's wrong with this picture?

Lachlan Andrew lachlan.andrew at gmail.com
Sat Sep 12 13:54:08 PDT 2009


Greetings,

I'm glad my provocative comments stimulated a response.

I'm happy to agree that they're not a sound way to make incremental
changes to the current network, but they reflect a particular hobby
horse of mine: that the "inter" has been forgotten in the internet.

An internet is not a network, but a collection of networks.  There has
been much confusion because people treat Ethernet networks as "links"
instead of as networks.  That has caused people to misinterpret
topology data, among all sorts of other problems.

2009/9/12 Detlef Bosau <detlef.bosau at web.de>:
> Lachlan Andrew wrote:
>>
>> No, IP is claimed to run over a "best effort" network.  That means
>
> To run? Or to hobble? ;-)

Definitely to hobble...

>> that the router *may* discard packets, but doesn't mean that it
>> *must*.
>
> This is not even a theoretical debate.
>
> A router's storage capacity is finite, hence the buffer cannot keep an
> infinite number of packets.

I never mentioned an infinite number of packets.  If links have
token-based congestion control then it is possible to have a lossless
network.  Token-based control was proposed for ATM, and is (correct me
if I'm wrong) being reintroduced for "Data Centre Ethernet".  If we
say "IP will not run over Data Centre Ethernet, because it doesn't
drop packets, therefore Data Centre Ethernet is wrong", then we are in
serious trouble.  *That* sounds like we're taking our IETF rule-book
as god-given.
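
For concreteness, here is a minimal sketch of credit-based (token)
link-level flow control, in Python.  All names are my own invention;
this shows the general idea, not any actual DCE mechanism.  The sender
may only transmit while it holds credits granted by the receiver, so
the receiver's buffer can never overflow and no frame need be dropped:

    from collections import deque

    class Receiver:
        def __init__(self, buffer_slots):
            self.free_slots = buffer_slots
            self.buffer = deque()

        def grant_credits(self):
            # advertise one credit per free buffer slot
            granted, self.free_slots = self.free_slots, 0
            return granted

        def accept(self, frame):
            # guaranteed to fit: the sender spent a credit for this slot
            self.buffer.append(frame)

        def consume(self):
            self.free_slots += 1   # the application drains one frame
            return self.buffer.popleft()

    class Sender:
        def __init__(self):
            self.credits = 0
            self.queue = deque()

        def pump(self, rx):
            self.credits += rx.grant_credits()
            while self.queue and self.credits:
                self.credits -= 1  # one credit spent per frame sent
                rx.accept(self.queue.popleft())
            # Out of credits: the sender *waits*.  Back-pressure
            # propagates upstream instead of becoming a loss.

    rx, tx = Receiver(buffer_slots=4), Sender()
    tx.queue.extend(range(10))
    tx.pump(rx)                    # sends 4 frames, then stalls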

Granted, Data Centre Ethernet isn't being offered as a public
"internet" service, but surely we should be looking at the issue of
TCP-over-reliable-networks.

> However, we're talking about TCP here. And TCP simply does not work properly
> without any kind of congestion control.

RFC 793 does.  The current TCP congestion control mechanism was a very
useful hack to solve an immediate problem.  Why are we so keen to
defend a 20-year-old kludge?  (I know: "because it works".  But what
if there are cases where it *doesn't* work?  Why say "Those cases
shouldn't exist.  Next problem"?)

VJ's radical insight was "the network is telling us something" when it
drops a packet.  That brought about a dramatic improvement.  Why is it
so hard to say "the network is telling us something" when it delays
packets?  Perhaps this debate should be in the IRTF/ICCRG (Cc'd)
instead of the IETF, but in any case:

> Hence, there is a strong need to have a sender informed about a congested
> network.

Doesn't an 8-second delay inform the sender?  Of course, by the time
it has reached 8 seconds it is too late.  However, we can detect 100 ms
of delay after a mere 100 ms...  Delay gives *instant* per-packet
feedback, whereas loss only gives feedback every few packets (or every
few thousand packets on high-BDP links).
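
To put rough (purely illustrative) numbers on that:

    # Back-of-envelope comparison of feedback rates; the figures are
    # illustrative, not measurements.  Loss-based control hears from
    # the network once per loss; delay-based control can get an RTT
    # sample with every ACK.
    rtt = 0.1                # seconds
    rate_pkts = 100_000      # packets/sec on a high-BDP path
    loss_prob = 1e-5         # steady-state loss rate

    pkts_between_losses = 1 / loss_prob                    # 100,000
    secs_between_losses = pkts_between_losses / rate_pkts  # ~1 s
    delay_samples_per_rtt = rate_pkts * rtt                # ~10,000

    print(f"loss signal roughly every {secs_between_losses:.1f} s")
    print(f"up to {delay_samples_per_rtt:.0f} delay samples per RTT")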

Pure delay-based TCP has many problems, since we don't understand all
of the possible causes of delay, but delay should definitely be
considered as information.
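
As a sketch of what "considering delay as information" might look
like, here is a simplified, Vegas/FAST-flavoured estimator.  This is
my own toy, not Steven Low's actual algorithm; the threshold and the
window adjustment are arbitrary illustrative choices:

    # Estimate queueing delay as the excess of the current RTT over
    # the smallest RTT seen (a proxy for propagation delay), and
    # nudge the window down when the path looks congested.
    class DelaySignal:
        def __init__(self, threshold=0.05):
            self.base_rtt = float("inf")
            self.threshold = threshold       # tolerated queueing delay (s)

        def on_ack(self, rtt_sample):
            self.base_rtt = min(self.base_rtt, rtt_sample)
            queueing_delay = rtt_sample - self.base_rtt
            return queueing_delay > self.threshold   # "congested?"

    def adjust_window(cwnd, congested):
        # additive increase while delay is low, gentle multiplicative
        # decrease on a delay signal -- no loss required
        return cwnd * 0.9 if congested else cwnd + 1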

>> If the delay is less than the IP lifetime (3 minutes?) then
>> the router is within spec (from the E2E point of view).
>
> I don't know a router that checks a packet's lifetime against the clock,
> although some stone-age specification proposes a temporal interpretation of
> lifetime ;-) Practically, in IPv4 the lifetime is a maximum hop count. In
> IPv6, this is even true in the specs.

I'm not talking about TTL.  I'm talking about the maximum segment
lifetime (MSL), which, as recently as 2008
(draft-touch-intarea-ipv4-unique-id-00.txt) was "typically interpreted
as two minutes".

>> No, it is not "WRONG" to claim a high bit-rate service is high-speed,
>> even if it has high latency.
>
> Do we talk about rates? Or do we talk about delays? Or do we talk
> about service times?

"speed" = "rate".  A concord is fast, even if I have to book my ticket
a day in advance.

> In packet switching networks, we generally talk about service times and
> nothing else.
> Any kind of "rate" or "throughput" is a derived quantity.

True, but it is the derived quantity which was being advertised.  The
service was not being advertised as a low-delay service, but as a
high-speed service.

I'm not trying to defend a service which has these appalling delays;
just to get us out of the mindset that they're doing something "wrong"
as distinct from "stupid".

>>  High-speed is not equivalent to low-delay.
>
> However, a high speed network will necessarily have small service times.

Yes, but not small latency.  See the Concorde example.
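
The two axes really are independent, as some illustrative (invented)
figures show:

    # Bit-rate and latency are independent: a satellite path can
    # out-"speed" ADSL on bulk transfer while having 20x its delay.
    links = {
        "geostationary satellite": {"rate_bps": 100e6, "rtt_s": 0.600},
        "ADSL":                    {"rate_bps":   8e6, "rtt_s": 0.030},
    }
    for name, props in links.items():
        transfer = 8e9 / props["rate_bps"]   # 1 GB: dominated by rate
        rtt_ms = props["rtt_s"] * 1000       # one request: dominated by RTT
        print(f"{name}: 1 GB in {transfer:.0f} s, "
              f"one round trip in {rtt_ms:.0f} ms")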

>>> The entire congestion control mechanism (and approximate fairness
> From the days when VJ wrote the congavoid paper up to now, TCP has
> worked fine over congested reliable networks ;-)
> (What, if not "reliable", are wirebound links?)

Wirebound IP links are "best effort", not "reliable".  "Reliable"
(when applied to a protocol) means that it guarantees to deliver each
packet exactly once.  That is what is meant by RFC 3366's "TCP
provides a reliable byte-stream transport service, building upon the
best-effort datagram delivery service provided by the Internet
Protocol"  It has nothing to do with whether we can actually rely on
the protocol to get the job done usefully...

> Dave's problem arises from unreliable networks (yes, I intentionally use a
> somewhat strange definition of reliability here ;-)), i.e. from wireless
> ones.

No, it is a "reliable" network running over an unreliable physical
link.  The problem is that it tries too hard to be reliable.
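
A toy model of "trying too hard" (my own simplification): a link
layer that retransmits every frame until it gets through converts a
loss probability into a delay distribution with a long tail.  The
loss never reaches IP, but the delay certainly does:

    import random

    # With per-attempt loss probability p, the number of attempts is
    # geometrically distributed, so a few frames take many link round
    # trips even though none is ever "lost".
    def delivery_delay(p_loss, per_attempt_s=0.05):
        attempts = 1
        while random.random() < p_loss:   # frame lost; the link retries
            attempts += 1
        return attempts * per_attempt_s

    samples = sorted(delivery_delay(0.3) for _ in range(10_000))
    print("median delay:     ", samples[len(samples) // 2])
    print("99.9th percentile:", samples[int(len(samples) * 0.999)])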

>>  There are many methods
>> which use delay as an indicator of congestion, as well as using loss.
>>  (Should I plug Steven Low's FAST here?)  We don't need anything very
>> fine-tuned in a case like this; just something very basic.
>
> delay is one of the worst indicators of network congestion one could
> ever imagine.

Oh?  In that case, why are we assuming that this 8-second delay was
caused by congestion?

I'm not saying we should *only* consider delay, but that the problem
here is that TCP is *ignoring* delay, just as RFC 793 ignored loss.

> there is usually no
> reference delay which is related to a "sane" network.

True, that is a challenge, but we know that these delays are not sane,
and should define appropriate responses.

> The real problem with delay and delay variations is that there are
> several possible causes for them:
> - congestion.
> - MAC latencies. (similar to congestion.)
> - high recovery latencies due to large numbers of retransmissions.
> - route changes.
> - changes in path properties / line coding / channel coding / puncturing
> etc.
>
> Without any particular knowledge of the path, you may not be able to
> determine "the one" (if it is one at all) reason for a delay variation.

Similarly, we don't know that "the one" reason for a packet loss is
congestion.  However, we can use the available information.  Take a
look at Doug Leith's recent work on debunking the myths about why we
can't use delay for congestion estimation.

> So, the consequence is that we should abandon the use of delays as
> congestion indicator.

No.  We shouldn't rely on delay as the *only* source of information,
but I don't see why we should ever decide a priori to ignore
information.

> The original meaning of "congestion" is that a path is full and cannot
> accept more data.
> And by far the most compelling indication of "this path cannot accept more
> data" is that this "more data" is discarded.

True, it is compelling.  However, it isn't the only indication.  I'm
not saying that we should reject loss as an indicator of congestion.
I'm just saying that we shouldn't ignore other indicators.

>> Of course, fixing TCP to work over any IP connection (as it was
>> intended) does not mean that the underlying networks should not be
>> optimised.  As Lloyd said, we already have recommendations.
>
> And we have the end-to-end recommendations which tell us, that underlying
> networks should not attempt to solve all problems on their own.

"Be liberal in what you accept, and conservative in what you send" (RFC1122).

Once again, I'm not saying that the link layer is good.  I'm just
saying that we should be open to improving TCP to make it accept that
there are bad links.

Of course, a valid argument against making TCP robust to bad links is
that it hides the badness of those links and makes link-designers
lazy.  However, I'm not going to argue against Moore's law just
because it makes programmers lazy.

Cheers,
Lachlan

-- 
Lachlan Andrew  Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
<http://caia.swin.edu.au/cv/landrew> <http://netlab.caltech.edu/lachlan>
Ph +61 3 9214 4837


