[e2e] Delivery Times .... was Re: Bandwidth, was: Re: Free Internet & IPv6

Detlef Bosau detlef.bosau at web.de
Fri Dec 28 08:36:17 PST 2012


On 28.12.2012 15:28, dpreed at reed.com wrote:
>
> Bufferbloat arises because of the lower layers' lack of congestion 
> signals (dropped packets) while the lower layers attempt (using large 
> amounts of buffering) to avoid loss (lost packets).
>

O.k., but this does not contradict what I said!

If, e.g. due to increasing noise, the delivery time for packets over 
wireless interfaces increases, buffers may not be freed in time and the 
"path capacity" (quotes intended) drops.

So this can be one cause of bufferbloat; do you agree?
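
To make this concrete, here is a back-of-the-envelope sketch in Python. 
All numbers are made up for illustration; the point is only the 
arithmetic: once link-layer retransmissions stretch the per-packet 
delivery time beyond the inter-arrival time, the buffer is no longer 
freed fast enough and the backlog grows without bound.

    # Illustrative numbers only; nothing here models a real link.
    inter_arrival_s = 0.010        # one packet arrives every 10 ms (assumed)
    base_service_s = 0.008         # clean-channel delivery time (assumed)

    for retries in (0, 1, 3):      # link-layer retransmissions per packet
        service_s = base_service_s * (1 + retries)
        arrivals_per_s = 1.0 / inter_arrival_s
        departures_per_s = 1.0 / service_s
        growth = arrivals_per_s - departures_per_s   # net packets queued per second
        print(f"retries={retries}: delivery {service_s * 1000:.0f} ms, "
              f"queue grows by {max(0.0, growth):.0f} packets/s")

This is exactly the sense in which the "path capacity" drops: the link 
is not lossy, merely slow, and the buffer fills.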

> In other words, by focusing on keeping packets alive at any cost, the 
> Internet loses all of its basic congestion detection.
>

Absolutely agreed!


> For the Internet, it *really does not help* to use buffering to hide 
> natural channel problems - either delay or transmission errors.
>

Agreed.

However, it may help to use buffering to compensate for *varying* service 
rates. The emphasis lies on "varying". In addition: for doing so, it is 
*absolutely crucial* to care for stability.
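
To make the stability point concrete, a small simulation sketch in 
Python (purely illustrative rates, not a model of any real link): a 
queue fed at a constant rate and served at a rate that varies uniformly 
around some mean.

    import random

    random.seed(1)

    def simulate(mean_service, steps=10000, arrival=1.0):
        """Constant arrivals; service rate varies uniformly around its mean."""
        queue = peak = 0.0
        for _ in range(steps):
            queue += arrival                                 # arrivals this step
            queue = max(0.0, queue - random.uniform(0.0, 2.0 * mean_service))
            peak = max(peak, queue)
        return queue, peak

    for mean in (1.2, 1.0, 0.9):   # mean service rate relative to the arrival rate
        final, peak = simulate(mean)
        print(f"mean service {mean}: final queue {final:6.0f}, peak {peak:6.0f}")

As long as the mean service rate exceeds the arrival rate, the buffer 
absorbs the variation and stays small. Once the mean falls below it, the 
queue diverges and no amount of buffering helps. The buffer smooths the 
*variation*; it cannot repair a rate deficit. That is the stability 
condition I mean.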
>
> That's because the consistent rule in the Internet is to build the 
> end-to-end required functionality *at the endpoints* and not in the 
> network.  That applies to wireless as well as wired.
>

David, you might have guessed it: I would like to call this rule into 
question. I think we should provide congestion control not only at the 
endpoints but also at certain places (yet to be defined) within the 
network.


> The solution to congestion in the network is *not to allow the network 
> to stay congested* - which means aggressively dumping queued packets 
> whenever the queue builds up at any link to more than 1 packet.
>

Doing so might waste a lot of performance. However, in some cases it may 
be necessary. Perhaps we have to agree on "it depends".
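
Taken literally, the rule reads like this toy Python sketch. This is the 
policy as you stated it, not any deployed AQM scheme:

    class AggressiveQueue:
        """Toy queue per the rule above: at most one packet may wait."""

        def __init__(self, threshold=1):
            self.threshold = threshold   # sustained backlog the link tolerates
            self.packets = []
            self.drops = 0

        def enqueue(self, pkt):
            if len(self.packets) > self.threshold:
                self.drops += 1          # the drop itself is the congestion signal
                return False
            self.packets.append(pkt)
            return True

        def dequeue(self):
            return self.packets.pop(0) if self.packets else None

On a link whose service rate varies (see the sketch above), such a 
policy drops many packets that a slightly deeper buffer would have 
delivered. That is the performance I am afraid we would waste.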

> Paradoxical it may seem.  But it's the same rule as with "garbage 
> collection" in large Java systems: whatever you do, don't wait until 
> the memory is completely full to garbage collect!  That is the worst 
> case operating point of the system in terms of jitter and gives you 
> worse throughput as well. Garbage collection when the memory is about 
> 1/2 garbage and 1/2 good stuff is much, much better for throughput, 
> and garbage collecting even earlier works better for worst-case 
> latency and jitter.   This is how one sets the "control loop" for 
> garbage collection wisely.
>

David, what is the control loop of congestion control actually 
controlling? (I sometimes think we should give the congavoid paper not 
only a second look; it is worthwhile to give it a ninth and a tenth look 
as well. There is huge confusion about what we are controlling at all!)

> The same thing for network buffers - *never* let an internal queue 
> build up to more than 1 packet on a sustained basis.
>

Not on a sustained basis, agreed. However: on a controlled basis, AND 
WITH CAREFUL KNOWLEDGE OF WHAT WE ARE DOING! A sketch of what 
"controlled" might mean follows below.

Please don't mind my emphasis. I think we agree on the pitfalls. 
However, I don't want to stay away from them entirely; when we do walk 
into one, we must take great care not only to get in but also to find a 
way out again.
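
For completeness, here is a sketch of what I mean by "controlled": 
something in the spirit of delay-based AQM, much simplified, and 
explicitly *not* the real CoDel algorithm. The target and interval 
values are assumed for illustration. Tolerate a transient burst, but 
start dropping once the queueing delay has stayed above a target for a 
whole interval.

    import time
    from collections import deque

    TARGET_S = 0.005      # tolerable standing queueing delay (assumed value)
    INTERVAL_S = 0.100    # how long the target may be exceeded (assumed value)

    class ControlledQueue:
        """Much simplified delay-target queue; not the real CoDel."""

        def __init__(self):
            self.q = deque()           # entries: (enqueue_time, packet)
            self.first_above = None    # when delay first exceeded the target

        def enqueue(self, pkt):
            self.q.append((time.monotonic(), pkt))

        def dequeue(self):
            while self.q:
                enq_t, pkt = self.q.popleft()
                if time.monotonic() - enq_t <= TARGET_S:
                    self.first_above = None        # queue drained: reset state
                    return pkt
                if self.first_above is None:
                    self.first_above = time.monotonic()
                if time.monotonic() - self.first_above < INTERVAL_S:
                    return pkt                     # transient burst: tolerate it
                # standing queue persisted a whole interval: drop, try the next
            return None

Whether this is the right control law is exactly the question; but it 
illustrates queueing on a controlled basis rather than on a sustained 
one.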