[e2e] Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP
detlef.bosau at web.de
Wed Aug 20 14:09:55 PDT 2014
On 20.08.2014 at 22:03, dpreed at reed.com wrote:
> [Joe Touch - please pass this on to the e2e list if it is OK with you]
> I'd like to amplify Detlef's reference to my position and approach to
> end-to-end congestion management, which may or may not be the same
> approach he would argue for:
What I have in mind is different in some respects - however, the goals are the same.
> To avoid/minimize end-to-end queueing delay in a shared internetwork,
> we need to change the idea that we need to create substantial queues
> in order to measure the queue length we want to reduce.
That's what I talked about when I argued that we would measure the wrong quantity.
Particularly, when you refer to Raj Jain: Jain measures (in his
mathematical model) a queueing system's power in order to find a
workload which allows the system to work at optimum performance.
What we actually measure is: was the workload too large for the system?
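To make the distinction concrete, here is a small sketch (my own illustration, not from Jain's paper or this thread) of the power metric, power = throughput / response time, for an M/M/1 queue. In that model the mean response time is 1/(mu - lambda), so power equals lambda * (mu - lambda) and peaks at half the service capacity - well below the saturation point that loss-driven probing pushes the system toward.

```python
# Sketch (assumed model, not from the original mail): Jain's "power"
# metric, power = throughput / response_time, evaluated for an M/M/1
# queue with service rate mu and arrival rate lam.  Mean response time
# is 1/(mu - lam), so power = lam * (mu - lam), maximized at lam = mu/2.

def mm1_power(lam: float, mu: float) -> float:
    """Power = throughput / mean response time for an M/M/1 queue."""
    if lam >= mu:
        return 0.0                      # unstable: queue grows without bound
    response_time = 1.0 / (mu - lam)
    return lam / response_time          # == lam * (mu - lam)

mu = 10.0                               # service rate (packets per unit time)
loads = [i * mu / 10 for i in range(1, 10)]   # offered loads 10% .. 90%
powers = [mm1_power(lam, mu) for lam in loads]

best = max(range(len(loads)), key=lambda i: powers[i])
print(f"power peaks at load {loads[best] / mu:.0%}")   # -> 50% of capacity
```

The point of the sketch: the optimum operating point is reached long before the queue is full, so "fill the buffer until loss" necessarily overshoots it.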
> This is possible, because of a simple observation: you can detect and
> measure the probability that two flows sharing a link will delay each
> other before they actually do... call this "pre-congestion avoidance".
In a hop-by-hop flow control scenario, this could be achieved by a
window-based flow control which follows the very simple rule that a
packet must not be sent to the next hop until it can be processed there
without having to wait.
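The rule above can be sketched as a credit scheme (names and structure are my own illustration, not from the thread): a downstream hop exposes one credit per free service slot, and the sender transmits only while a credit is available, so a packet never has to wait at the hop.

```python
# Sketch (assumed names): hop-by-hop window flow control where a packet
# is forwarded to the next hop only when that hop can process it
# immediately.  Credits model free service slots at the downstream hop.

from collections import deque

class Hop:
    """Downstream hop with a fixed number of service slots (credits)."""
    def __init__(self, name: str, slots: int = 1):
        self.name = name
        self.credits = slots        # packets it can take without queueing
        self.backlog = deque()      # stays at most `slots` deep by construction

    def can_accept(self) -> bool:
        # The sender's rule: transmit only while this returns True.
        return self.credits > 0

    def receive(self, packet) -> None:
        assert self.credits > 0, "sender violated the no-wait rule"
        self.credits -= 1
        self.backlog.append(packet)

    def serve(self):
        # Processing a packet frees a slot, i.e. returns a credit upstream.
        self.credits += 1
        return self.backlog.popleft()

hop = Hop("B", slots=1)
assert hop.can_accept()             # slot free: p1 may be sent
hop.receive("p1")
assert not hop.can_accept()         # p1 unprocessed: sender must hold p2 back
assert hop.serve() == "p1"          # processing returns the credit ...
assert hop.can_accept()             # ... and only now may p2 be sent
```

With a window of one credit per hop, queues cannot build up inside the network; any backlog is pushed back to the sender, which is exactly where the retransmission buffer already lives.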
> Rather than leave that as an exercise for the reader (it's only a
> Knuth problem at most, but past suggestions have not been
> followed up, and I am working 16 hours per day in a startup), I will
> explain how: packets need to wait for each other if the time interval
> each spends passing through a particular switch's outbound NIC
> "overlaps". If you remember recent flow histories (as fq_codel does,
> for example), you can record when each flow recently occupied the
> switch's outbound link, and with some robust assumptions of timing
> variability, one can estimate the likelihood that a queue might have
> built up. This information can be reflected to the destination of all
> flows, just as ECN marks are used in Jain's approach. Since
> congestion control is necessarily done by having the receiving end
> control the sending end (because you want to move end-to-end queues
> back to the source entrypoint, where they must exist for
> retransmission in any case), the receiver can use that "near
> collision" detection notification to slow the sender to a
> non-congesting rate that shares all links on the path such that queues
> will develop with arbitrarily low probability.
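The quoted mechanism could be sketched like this (names, intervals, and the jitter threshold are my own assumptions, not fq_codel's actual bookkeeping): per flow, remember the intervals during which it recently occupied the outbound link, then flag flow pairs whose intervals come close enough that ordinary timing variability could make them overlap - i.e. a queue might have built up, even though it has not yet.

```python
# Sketch of the "near collision" detection as I read it (assumed names):
# widen each flow's recent link-occupancy intervals by the assumed timing
# variability and report pairs whose widened intervals overlap.

from itertools import combinations

def near_collision(history_a, history_b, jitter=0.5):
    """True if any occupancy intervals of the two flows overlap once
    each interval is widened by the timing variability `jitter`."""
    for a_start, a_end in history_a:
        for b_start, b_end in history_b:
            if a_start - jitter < b_end and b_start - jitter < a_end:
                return True
    return False

# Recent occupancy intervals (start, end) on the switch's outbound link.
flows = {
    "f1": [(0.0, 1.0), (5.0, 6.0)],
    "f2": [(1.2, 2.2)],     # only 0.2 after f1: within the jitter margin
    "f3": [(10.0, 11.0)],   # far from both: no collision risk
}

risky = [pair for pair in combinations(flows, 2)
         if near_collision(flows[pair[0]], flows[pair[1]])]
print(risky)                # only f1 and f2 risk delaying each other
```

A mark derived from `risky` could then be reflected to the receivers of the affected flows, which slow their senders before an actual standing queue forms.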
Where we might differ (though I have to think about it) is that I tend
to emphasize the asynchronous character of the Internet (where the term
"rate" is not even defined), while I think you have in mind a model of
"optimum rates" which have to be reasonably estimated.
(IIRC you mentioned something in that direction some weeks ago.)
> Call this the "ballistic collision avoidance" model (and cite me if
> you don't want to take all the credit).
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
detlef.bosau at web.de http://www.detlef-bosau.de