[e2e] Is the end to end paradigm appropriate for congestion control?
Jon.Crowcroft at cl.cam.ac.uk
Mon Nov 11 23:19:39 PST 2013
don't see much to object to in your post - good question -
the "in flight" or "outstanding packets" or "in the pipe"
or whichever phrase you use for packets that
aren't still (at least mostly) being serialised or de-serialised
in a nic/transceiver at one or other end
of a transmit/receive pair on a link....
yes, i suspect these are a rare case in practice nowadays...
back in the day, when testing VJCC on the internet in 87/88,
one of the "interesting" cases was the
SATNET which used geostationary satellites -
probably one of the very few links for which you could
actually launch quite a few packets into orbit back then (or now, due
to horrendous RTT) - being 35000km up (in geosync orbit)
and the same down, and then back again
even though only 64 kbps,
you could get a few packets out there before the first bit traveling
at the speed of light (so much faster than fiber:)
hit the far end downlink over the (bent stovepipe) satellite
.36 secs later...
...and as satellite transponder speed/capacity went up and
got cheaper over time, this became more true...
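the "few packets" figure checks out with back-of-envelope bandwidth-delay arithmetic; here is a minimal sketch (the 576-byte packet size is an assumption, a common default of the era, not a figure from the post):

```python
# Rough bandwidth-delay arithmetic for a SATNET-style link.
# Assumed numbers: 64 kbit/s link, ~0.36 s one-way delay, 576-byte packets.

def packets_in_flight(rate_bps: float, delay_s: float, pkt_bytes: int) -> float:
    """Packets that fit 'on the wire' during the propagation delay."""
    bits_in_flight = rate_bps * delay_s
    return bits_in_flight / (pkt_bytes * 8)

# 64 kbps over a geostationary hop: about 5 packets aloft before the
# first bit reaches the far-end downlink.
print(packets_in_flight(64_000, 0.36, 576))
```

as transponder capacity rose, the same delay multiplied by a bigger rate put ever more packets into the pipe at once.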
but it's not a common case... most of the "pipe" is not made of packets
"in flight" but more accurately, just in buffers...
and of course, any LAN tech has to start receiving bits before you
finish sending them.... (unless you choose the cambridge ring model
with 16 bit minipackets - a precursor to atm)
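the reason the receiver starts before the sender finishes is that on a short segment the propagation time is tiny compared with the serialization time; a quick sketch (the 10 Mbit/s rate, 1500-byte frame, 200 m cable, and ~2e8 m/s propagation speed are all assumed illustrative numbers):

```python
# Serialization vs propagation on a short LAN segment.
# Assumed: 10 Mbit/s Ethernet, 1500-byte frame, 200 m cable at ~2e8 m/s.
FRAME_BITS = 1500 * 8
RATE_BPS = 10_000_000

serialization_s = FRAME_BITS / RATE_BPS  # 1.2 ms to clock the whole frame out
propagation_s = 200 / 2e8                # ~1 microsecond for the first bit to arrive

# The first bit reaches the far end while almost all of the frame
# is still sitting in the sender's NIC.
print(serialization_s, propagation_s)
```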
on the related topic...
layer 2 flow control in switches is being mucked about with
as we speak for various special case data center net
magics to avoid tcp incast hassles but it's a niche use afaik...
(as discussed before on this list)...
In missive <5280CC5D.70401 at web.de>, Detlef Bosau typed:
>>I know this question is as heterodox as could be.
>>Nevertheless. A TCP packet on its way from source to sink will typically
>>travel quite some packet switching nodes, each of which
>>introduces a potential flow control problem. A switching node typically
>>does "store and forward" - and whenever a queue on the node cannot be
>>drained sufficiently fast - be it due to a throughput shortage on the
>>outgoing link, be it due to a large amount of incoming traffic - the
>>switching node has to throttle down incoming traffic or it must discard packets.
>>Both scenarios are possible with TCP: An unexpected throughput shortage
>>may occur on wireless networks, unexpected traffic may be caused
>>by any competing sender.
>>I'm - after years - still not comfortable with the concept of "storage
>>on the line", which is basically the motivation for our sliding window
>>A line (fibre, copper, air interface) may or may not offer a certain
>>amount of transient storage capacity, depending on quite some factors,
>>one of which is the MAC scheme. E.g. in CSMA/CD nets, there is no more
>>than ONE packet on the line. Hence, the concept of a
>>"Latency-Throughput-Product" must be used with extreme caution.
>>I'm sometimes not quite sure whether particularly VJCC actually works
>>around a "concatenated flow control problem", which in the late 80s
>>really WAS a concatenated flow control problem because the vast majority
>>of a paths "capacity" was located on the switches - and not on the lines.
>>(At least to my understanding.) And because we did not want to touch the
>>switches, in other words we wanted to keep things small and simple, we
>>worked around this flow control problem using an end to end congestion
>>control mechanism.
>>70565 Stuttgart Tel.: +49 711 5208031
>> mobile: +49 172 6819937
>> skype: detlef.bosau
>> ICQ: 566129673
>>detlef.bosau at web.de http://www.detlef-bosau.de