[e2e] Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP

Kathleen Nichols nichols at pollere.com
Wed Aug 20 10:13:21 PDT 2014


If by "Coddle" you mean "codel", then you need to read about it more
carefully.
Excessive delay is NOT defined as "a packet resides in the local router
queue for more than 100 ms". As for the value of the constant, you may
wish to read the
most recent I-D or watch the video of Van's talk on this at the IETF.
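
To make that concrete, here is a minimal sketch of CoDel's dequeue-time
decision, simplified from the published algorithm: it keeps only the part
that decides when dropping may begin, and it omits the near-empty-queue
check and the control law that paces successive drops at
interval/sqrt(count).

    # Simplified CoDel dequeue logic (illustrative sketch, not the
    # reference implementation).

    TARGET   = 0.005   # 5 ms: acceptable standing (sojourn) delay
    INTERVAL = 0.100   # 100 ms: how long the sojourn time must stay
                       # above TARGET before CoDel reacts; chosen as a
                       # worst-case RTT, not a per-packet deadline

    class CoDelState:
        def __init__(self):
            # Earliest time at which dropping may begin;
            # 0 means the sojourn time is currently below TARGET.
            self.first_above_time = 0.0

    def ok_to_drop(sojourn_time, now, state):
        """Decide, at dequeue, whether dropping may begin.

        No packet is dropped merely because it sat in the queue for
        100 ms: the sojourn time must exceed TARGET (5 ms) continuously
        for at least INTERVAL (100 ms) - a persistent standing queue -
        before CoDel acts.
        """
        if sojourn_time < TARGET:
            state.first_above_time = 0.0             # good queue: reset
            return False
        if state.first_above_time == 0.0:
            state.first_above_time = now + INTERVAL  # start the clock
            return False
        return now >= state.first_above_time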

Perhaps improving CC will obviate the need for AQM. In the meantime, it
is useful to have solutions available for those who need them.

On 7/29/14 4:02 AM, Detlef Bosau wrote:
> On 04.06.2014 at 02:01, Andrew Mcgregor wrote:
>> Bufferbloat definitely does impair performance: by slowing down
>> feedback it increases the variance of the system workload, which
>> inevitably causes either packet drops, because a finite buffer limit
>> is reached, or delays so large that retransmission timers fire for
>> packets that are still in flight.  In either case, the system is
>> doing excess work.
>>
> 
> I thought quite a bit about that during the last few weeks, and
> sometimes I think we're mixing up cause and effect.
> 
> And we basically rely on assumptions which may be called into question.
> 
> IIRC, David Reed explained some weeks ago what he defines as
> "bufferbloat". Simply put: a packet conveyed through a network spends
> most of its sojourn time in queues.
> 
> Now the problem is that we cannot determine a packet's "queueing time"
> a posteriori. So sometimes we have criteria for buffer bloat, e.g.
> Coddle, where buffer bloat means "a packet resides in the local router
> queue for more than 100 ms". (Whether the 100 ms is derived from
> experience or from divine inspiration is still opaque to me; it will
> certainly become clear when this constant is assigned a name and the
> Turing Award is awarded.)
> 
> According to Dave's definition it is in no way clear whether a flow
> experiences buffer bloat from this single queueing time. If a packet
> experiences 5 seconds of sojourn time - and the 100 ms in the local
> router queue is its only queueing delay - everything is fine. If a
> sojourn time of 0.1000000000000000000000000000000000000000001 seconds
> consists of 100 ms of queueing delay "and a little remainder", that
> would be unsatisfactory.
> 
> In other words: buffer bloat is not a phenomenon of a single flow, but
> a symptom of flawed system design.
> 
> Now, there are quite a few rules of thumb for how buffers should be
> sized so that buffer bloat is avoided - and all of them miss the
> problem.
> 
> The problem is: queues get used. And when congestion and resource
> control are handled by feedback in such a way that a system which
> never drops packets is never fully utilized, the queues WILL BE FULLY
> UTILIZED, because we keep offering load to the system until it
> eventually drops packets. [A toy model after the quoted message makes
> this dynamic concrete.]
> 
> That's basically the same as if you didn't know the size of your
> fridge - and you keep putting in butter and milk and cheese until the
> whole mess comes alive and walks out. And then you throw away the
> whole mess, clean your fridge - and restart the same procedure from
> the beginning. (We call this "probing". In the context of a fridge:
> mouldy-cheese probing.)
> 
> My basic question is: can we avoid probing? Wouldn't it be preferable
> to assign known resources by proper scheduling instead of probing for
> them?
> 
> Isn't probing the very problem? Do CUTE and VJCC basically introduce
> the problem which they claim to solve?
> 
> WRT buffer bloat: if we didn't offer inappropriate load to the network,
> and all packets could be served in reasonable time, there wouldn't be
> any buffer bloat at all.
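
On the point above that, with loss-driven feedback, queues will
inevitably be fully utilized: a toy single-flow AIMD model makes the
dynamic concrete. This is only an illustration; the bandwidth-delay
product and buffer size below are arbitrary assumed values.

    # Toy model: one loss-based AIMD flow at a bottleneck with a FIFO
    # buffer. The sender keeps probing (additive increase) until the
    # buffer overflows, so the standing queue is driven to the buffer
    # limit over and over and, after the first overflow, never drains.

    BDP    = 20    # bandwidth-delay product in packets (assumed)
    BUFFER = 50    # bottleneck buffer in packets (assumed, "bloated")

    def simulate(rtts=200):
        cwnd = 1.0
        for t in range(rtts):
            queue = max(0.0, cwnd - BDP)   # packets standing in the buffer
            yield t, cwnd, queue
            if queue >= BUFFER:
                cwnd = max(1.0, cwnd / 2)  # loss: multiplicative decrease
            else:
                cwnd += 1.0                # no loss: additive increase

    for t, cwnd, queue in simulate():
        if t % 20 == 0:
            print(f"rtt {t:3d}  cwnd {cwnd:5.1f}  queue {queue:4.1f}")

With these values, once the flow leaves start-up the queue oscillates
between roughly 15 and 50 packets and never drains - exactly the kind of
standing queue that an AQM like CoDel is meant to keep small.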


