[e2e] A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP

Jon Crowcroft jon.crowcroft at cl.cam.ac.uk
Wed Jun 4 01:59:52 PDT 2014


I don't think there's anything wrong here, but maybe a note on buffer bloat
is in order:

alongside the feedback/AIMD and ack clocking mechanisms for TCP, there was
a long discussion on right-sizing buffers in the net. since AIMD naively
applied leads to the sawtooth rate behaviour in TCP, a back-of-envelope
calculation led to the notion that the bottleneck had to have a buffer to
cope with the peak, which in the worst case is a bandwidth*delay product
worth of packets (basically 3/2 times the mean rate), so that when one more
packet was sent at that rate, one loss would be incurred, triggering the MD
part of AIMD once every ln(W) worth of RTTs... [all this is academic in
reality for lots of reasons, including the various other triggers like
dupacks, and the fact that this case is a corner one - since usually there
are lots of flows multiplexed at the bottleneck(s), and multiple
bottlenecks, the appropriate buffer size could be way smaller - and of
course, anyone running a virtual queue and rate estimator (i.e. AQM a la
CoDel etc.), and especially doing ECN rather than loss-based feedback, can
avoid all this ridiculous provisioning of packet memory all over the net.]
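
for concreteness, here is that back-of-envelope calculation as a small
Python sketch (the link numbers are made up for illustration, and the
BDP/sqrt(N) refinement for many multiplexed flows is the one from the
"sizing router buffers" line of work):

    import math

    # Rule-of-thumb buffer sizing for a single long-lived TCP flow
    # (illustrative numbers only, not a recommendation).
    link_rate_bps = 1e9     # bottleneck capacity: 1 Gbit/s (assumed)
    rtt_s         = 0.1     # round-trip time: 100 ms (assumed)
    mss_bytes     = 1500    # segment size

    # Classic rule: buffer = bandwidth*delay product, so that when the
    # AIMD sawtooth halves cwnd after a loss, the queue drains while
    # the link stays busy.
    bdp_bytes = link_rate_bps / 8 * rtt_s
    print("BDP buffer: %.1f MB = %d packets"
          % (bdp_bytes / 1e6, bdp_bytes / mss_bytes))

    # With N desynchronised flows at the bottleneck, BDP / sqrt(N) is
    # already enough - which is why the single-flow rule of thumb
    # over-provisions so badly.
    for n in (1, 100, 10000):
        print("N = %5d -> %7d packets"
              % (n, bdp_bytes / math.sqrt(n) / mss_bytes))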

but alas, the rule of thumb for a corner case became dogma for a lot of
router vendors, and it took way too long to get it disestablished...

and many of the bottlenecks today are near the edge, and more often than
not probably in the interface between cellular data and backhaul, where,
as you say, the radio link may not exhibit any kind of stationary capacity
at all, etc. etc.


On Tue, Jun 3, 2014 at 1:43 PM, Detlef Bosau <detlef.bosau at web.de> wrote:

>
> I presume that I'm allowed to forward some mail by DPR here to the list
> (if not, DPR may kill me...), however the original mail was sent to the
> Internet History list and therefore actually intended to reach the public.
>
> A quick summary at the beginning: yes, TCP does not maintain a
> retransmission queue with copies of the sent packets; it keeps a queue of
> unacknowledged data and basically does GBN. This seems to be in
> contrast to RFC 793, but that's life.
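>
> A minimal sketch of that behaviour (hypothetical names, not the BSD
> routines; one "segment" per sequence number for brevity):
>
>     class GbnSender:
>         def __init__(self, nseg, window):
>             self.nseg = nseg      # total segments to deliver
>             self.snd_una = 0      # lowest unacknowledged segment
>             self.snd_nxt = 0      # next segment to send
>             self.window = window  # window size in segments
>
>         def send_much(self, tx):
>             # send whatever the current window permits
>             while self.snd_nxt < min(self.snd_una + self.window, self.nseg):
>                 tx(self.snd_nxt)
>                 self.snd_nxt += 1
>
>         def on_ack(self, ackno, tx):
>             # cumulative ack: everything below ackno is acknowledged;
>             # no copies of sent packets are kept, only the data itself
>             self.snd_una = max(self.snd_una, ackno)
>             self.send_much(tx)
>
>         def on_timeout(self, tx):
>             # Go-Back-N: fall back to the lowest outstanding segment
>             # and resend from there
>             self.snd_nxt = self.snd_una
>             self.send_much(tx)
>
>     s = GbnSender(nseg=10, window=4)
>     s.send_much(print)   # transmits 0, 1, 2, 3
>     s.on_ack(2, print)   # window slides: transmits 4, 5
>     s.on_timeout(print)  # goes back: retransmits 2, 3, 4, 5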
>
> A much more important insight into the history of TCP is the "workload
> discussion" as conducted by Raj Jain and Van Jacobson.
> Unfortunately, the two talk completely at cross purposes and pursue
> completely different goals...
>
> Having read the congavoid paper, I noticed that VJ refers to Jain's CUTE
> algorithm in the context of how a flow should reach equilibrium.
>
> Unfortunately, this doesn't really make sense, because slow start and CUTE
> pursue different goals.
>
> - Van Jacobson asks how a flow should reach equilibrium;
> - Raj Jain assumes a flow to be in equilibrium and asks which workload
> makes the flow work with optimum performance.
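>
> Schematically, slow start answers the first question - a hedged sketch
> of the standard doubling behaviour, not VJ's code:
>
>     # slow start: probe upward for the equilibrium window from cold
>     cwnd, ssthresh = 1, 64   # segments (illustrative values)
>     rtt = 0
>     while cwnd < ssthresh:   # one ack per segment assumed
>         cwnd *= 2            # each ack adds 1 segment -> doubles per RTT
>         rtt += 1
>         print("RTT %d: cwnd = %d" % (rtt, cwnd))
>
>     # Jain's question only starts here: given a flow already at
>     # equilibrium, which workload yields optimum performance?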
>
> We often mix up "stationary" and "stable". To my understanding, for a
> queueing system "being stable" means "being stationary", i.e. the
> queueing system is positively recurrent - roughly, in human speech: no
> queue length grows beyond all limits forever; at any time there is a
> probability > 0 that a queue returns to a finite length.
>
> A queueing system is stationary when its arrival rate doesn't permanently
> exceed its service rate - which is actually nothing else than the "self
> clocking mechanism" and the equilibrium VJ is talking about.
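>
> To see the stability condition at work, a toy discrete-time queue
> (arbitrary rates, nothing more than an illustration):
>
>     import random
>
>     def queue_len_after(arrival_rate, service_rate, steps=100000):
>         # per time slot dt: one arrival with prob. arrival_rate*dt,
>         # one departure with prob. service_rate*dt (if queue non-empty);
>         # returns the queue length after `steps` slots
>         rng, dt, q = random.Random(1), 0.001, 0
>         for _ in range(steps):
>             if rng.random() < arrival_rate * dt:
>                 q += 1
>             if q > 0 and rng.random() < service_rate * dt:
>                 q -= 1
>         return q
>
>     # arrival rate < service rate: the queue keeps returning to small
>     # values - positively recurrent, i.e. stable
>     print(queue_len_after(arrival_rate=50, service_rate=100))
>     # arrival rate > service rate: the queue length drifts off towards
>     # infinity - the system is not stationary
>     print(queue_len_after(arrival_rate=150, service_rate=100))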
>
> From RJ's papers I see a focus on the workload and the performance of
> queueing systems. A possible performance metric is the quotient
> p = average throughput / average sojourn time.
>
> If the workload is too small, the operators have idle times; the system
> is not fully loaded (=> sojourn time acceptable, throughput too small).
> If the workload is too large, too many jobs are not being serviced but
> reside in queues (=> throughput fine, sojourn time too large).
>
> From Jain's work we conclude that a queueing system has an optimum
> workload - which can be assessed by probing.
> => Set a workload, assess the system's performance, adjust the workload.
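>
> For an M/M/1 queue that optimum can even be written down in closed form;
> a tiny sketch (this quotient is what Kleinrock called "power", and it
> peaks at half the service rate):
>
>     mu = 100.0  # service rate in jobs/s (arbitrary)
>
>     # throughput = lam (for lam < mu), sojourn time = 1 / (mu - lam),
>     # so power p = lam * (mu - lam), maximised at lam = mu / 2
>     power, lam = max((lam * (mu - lam), lam) for lam in range(1, 100))
>     print("optimal workload: lam = %d = mu/2, power = %d" % (lam, power))
>
>     # at lam = mu/2 the mean number in the system is rho/(1-rho) = 1:
>     # one job in service, no standing queue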
>
> Van Jacobson probes for the equilibrium.
> => Set a workload; if we see drops, the workload is too large.
>
> As a consequence, a system may stay perfectly in equilibrium while
> seeing buffer bloat in the sense of "a packet's queueing time is more than
> half of the packet's sojourn time".
>
> I don't know yet whether buffer bloat will affect a system's performance
> - perhaps someone can comment on this. (My gut feeling is: "Yes, it
> will", because the sojourn time grows inadequately large.)
>
> The other, more important, consequence is that probing for "drop-freeness"
> of a system does not necessarily mean the same as "probing for optimum
> performance".
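>
> That difference is easy to make concrete with a finite-buffer M/M/1/K
> queue (textbook formulas; mu and K are arbitrary): drops only become
> visible when the offered load is close to mu, and at that point the
> queueing delay already dominates the sojourn time - the system is in
> equilibrium and buffer-bloated at once - while power peaks at half the
> load:
>
>     mu, K = 100.0, 100   # service rate, buffer size (assumed)
>
>     def mm1k(lam):
>         rho = lam / mu
>         p_loss = (1 - rho) * rho**K / (1 - rho**(K + 1))
>         n_avg = rho / (1 - rho) - (K + 1) * rho**(K + 1) / (1 - rho**(K + 1))
>         thru = lam * (1 - p_loss)     # effective throughput
>         sojourn = n_avg / thru        # Little's law
>         return p_loss, sojourn, thru / sojourn
>
>     for lam in (50, 90, 99):          # offered workloads to probe
>         p_loss, sojourn, power = mm1k(lam)
>         print("lam=%3d  loss=%.1e  sojourn=%.3fs  power=%.0f"
>               % (lam, p_loss, sojourn, power))
>
>     # lam=50: no visible loss, sojourn 0.02s, power at its maximum
>     # lam=99: loss finally visible (~0.6%), sojourn ~0.4s - of which
>     #         only 0.01s is service: queueing time >> half the sojourn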
>
> Detlef
>
>
>
>
>
>
>
>
>
> On 20.05.2014, 16:49, David P. Reed wrote:
>
> I really appreciate the work being done to reconstruct the diverse set of
> implementations of the end to end TCP flow, congestion, and measurement
> specs.
>
> This work might be a new approach to creating a history of the Internet...
> meaning a new way to do what history of technology does best.
>
> I'd argue that one could award a PhD for that contribution when it reaches
> a stage of completion such that others can use it to study the past. As a
> work of historical impact it needs citation and commentary. Worth thinking
> about how to add citation and commentary to a simulation - something like
> Knuth's literate programming, but for protocol systems.
>
> Far better than a list of who did what when, or a set of battles. It's a
> contribution to history of the ideas...
>
> On May 20, 2014, Detlef Bosau <detlef.bosau at web.de> wrote:
>>
>> On 19.05.2014, 17:02, Craig Partridge wrote:
>>>
>>> Hi Detlef:
>>>
>>> I don't keep the 4.3bsd code around anymore, but here's my recollection
>>> of what the code did.
>>>
>>> 4.3BSD had one round-trip timeout (RTO) counter per TCP connection.
>>
>>
>> That's the way I find it in the NS2.
>>
>>>
>>> On round-trip timeout, send 1 MSS of data starting at the lowest outstanding
>>> sequence number.
>>
>>
>> Which is not yet GBN in its "pure" form - but actually it is, because
>> CWND is increased with every new ack. And when you call "send_much" when
>> a new ack arrives (I had a glance at the BSD code myself some years ago;
>> the routines have the same names there, and as far as I've seen, the ns2
>> code and the BSD code are extremely similar), the behaviour resembles GBN
>> very much.
>>>
>>> Set the RTO counter to the next increment.
>>>
>>> Once an ack is received, update the sequence numbers and begin slow start
>>> again.
>>>
>>> What I don't remember is whether 4.3bsd kept track of multiple outstanding
>>> losses and fixed all of them before slow start or not.
>>
>>
>> OMG. ;-) Who else should remember this, if not Van himself or you?
>>
>> However, first of all I have to thank you all for the answers here.
>>
>> Detlef
>>
>>
> -- Sent from my Android device with K-@ Mail
> <https://play.google.com/store/apps/details?id=com.onegravity.k10.pro2>.
> Please excuse my brevity.
>
>
>
> --
> ------------------------------------------------------------------
> Detlef Bosau
> Galileistraße 30
> 70565 Stuttgart                            Tel.:   +49 711 5208031
>                                            mobile: +49 172 6819937
>                                            skype:     detlef.bosau
>                                            ICQ:       566129673
> detlef.bosau at web.de                     http://www.detlef-bosau.de
>
>

