[e2e] RES: Why Buffering?
lachlan.andrew at gmail.com
Sat Jun 27 13:44:08 PDT 2009
2009/6/28 Detlef Bosau <detlef.bosau at web.de>:
> Lachlan Andrew wrote:
>> Jon seemed to be saying that the arrival
>> rate at the queue could somehow exceed the departure rate in the long
>> term. That can only happen if (a) the window is increasing or (b) a
>> buffer somewhere else is emptying.
> That's exactly the reason why I'm not comfortable with the term "rate".
> "Rate", as you say yourself, describes a long term behaviour.
> However, what you're talking about in the quoted paragraph is short term
The rate of any (realisation of a) discrete process is well defined
over any interval (not necessarily infinite). When I said that "short
term" rates are not defined, I meant that we can't talk about the rate
"at a point in time". Also, it is not generally helpful to think of
"rate" on a timescale less than the average inter-event time. (The
rate over intervals on this scale is typically either 0 or very large.
It is well defined, just not useful.)
If you think of what I said in terms of "number of events divided by
time interval", I think you'll find it makes sense. If not, feel free
to point out an error.
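A minimal sketch of that "number of events divided by time interval" view (all numbers here are illustrative, not from the thread): on windows much shorter than the mean inter-event time the rate is well defined but jumps between 0 and very large values, while on a long window it settles near the true average.

```python
import random

random.seed(1)

# Hypothetical arrival process: mean inter-arrival time 10 ms (100 pkt/s).
t, arrivals = 0.0, []
for _ in range(10_000):
    t += random.expovariate(1 / 0.010)
    arrivals.append(t)

def rate(events, start, width):
    """Number of events in [start, start + width) divided by width."""
    return sum(1 for e in events if start <= e < start + width) / width

# On a 1 ms window (a tenth of the inter-arrival time) the rate is
# either 0 or at least 1 pkt / 1 ms = 1000 pkt/s -- defined, not useful.
short = [rate(arrivals, s, 0.001) for s in (1.0, 1.001, 1.002, 1.003)]

# On a 60 s window it settles near the true 100 pkt/s.
long_rate = rate(arrivals, 0.0, 60.0)

print(short)
print(round(long_rate, 1))
```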
>> Why? *Long term* rates are meaningful, even if there is short-term
>> fluctuation in the spacing between adjacent pairs of packets.
> Not only in the spacing between adjacent pairs of packets.
> I'm still thinking of WWAN. And in WWAN, even the time to convey a packet
> from a base station to a mobile or vice versa is subject to short-term
In that case, we need to distinguish between the rate of *starting* to
send packets and the rate of *finishing* sending packets. However,
in the "long term" the two will still be roughly equal, where "long
term" means "a time much longer than the time to send an individual
packet". If a packet can take up to 3 seconds to send, then the two
rates will roughly agree on timescales of 30s or more.
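A toy simulation of that point, with assumed numbers (one packet started every 0.5 s, per-packet service times up to 3 s; none of these figures come from the thread):

```python
import random

random.seed(2)

# Hypothetical WWAN link: a packet *starts* every 0.5 s, and each takes a
# random 0-3 s to *finish* (fading, link-layer retransmissions, etc.).
starts = [i * 0.5 for i in range(2_000)]
finishes = [s + random.uniform(0.0, 3.0) for s in starts]

def rate(events, t0, t1):
    """Events in [t0, t1) divided by the interval length."""
    return sum(1 for e in events if t0 <= e < t1) / (t1 - t0)

# Over 1 s (shorter than the 3 s worst-case service time) the two rates
# can disagree badly; over 30 s (10x the service time) they roughly agree.
print(rate(starts, 100.0, 101.0), rate(finishes, 100.0, 101.0))
print(rate(starts, 100.0, 130.0), rate(finishes, 100.0, 130.0))  # both near 2 pkt/s
```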
>>> One problem is that packets don't travel all parts of a path with the
>>> same speed. TCP traffic may be bursty, perhaps links are temporarily
>> True. Buffers get their name from providing protection against (short
>> timescale) fluctuation in rate.
> Is this their main objective?
It was. Buffers in different places have different purposes. I've
said many times that I think the current main objective of buffers on
WAN interfaces of routers is to achieve high TCP throughput. (Saying
it again doesn't make it more or less right, but nothing in this
thread has offered a compelling argument against it.)
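For what it's worth, the usual back-of-envelope for sizing a buffer with that objective is bandwidth times round-trip time; a sketch with assumed numbers (1 Gbit/s link, 200 ms RTT, neither taken from the thread):

```python
# Rule-of-thumb sketch, numbers assumed: a WAN router buffer sized so a
# single TCP flow keeps the link busy across its sawtooth needs roughly
# bandwidth x RTT worth of bytes.
link = 1e9 / 8          # 1 Gbit/s link capacity, in bytes per second
rtt = 0.200             # 200 ms round-trip time
buffer_bytes = link * rtt
print(f"Buffer: {buffer_bytes / 1e6:.0f} MB")   # prints "Buffer: 25 MB"
```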
>>> I once was told that a guy could drastically improve his throughput by
>>> enabling window scaling.....
>>> On a path from the US to Germany.
>>> I'm not quite sure whether the other users of the path were all amused
>>> by the one guy who enabled window scaling ;-)
>> Yes, enabling window scaling does allow TCP to behave as it was
>> intended on large BDP paths. If the others weren't amused, they could
>> also configure their systems correctly.
> However: Cui bono? If the only consequence of window scaling is an end of
> the commercial crisis, at least for DRAM manufacturers, at the cost of
> extremely long round trip times, we should rather avoid it ;-)
But that isn't all it does. On a high BDP link, if you don't use
window scaling a single flow can get much less throughput than the 75%
of capacity which is possible with window scaling and without
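Back-of-envelope with assumed numbers (100 ms RTT and a 100 Mbit/s US-Germany path; neither figure is from the thread): without the window scale option the 16-bit window field caps a single flow far below the bandwidth-delay product.

```python
# Illustrative only: assumed 100 ms RTT and 100 Mbit/s path.
rtt = 0.100                    # round-trip time, seconds
link = 100e6 / 8               # link capacity, bytes per second
bdp = link * rtt               # bytes in flight needed to fill the pipe
print(f"BDP: {bdp / 1e6:.2f} MB")                         # 1.25 MB

# Without window scaling (RFC 1323), the advertised window is capped by
# the 16-bit window field at 65535 bytes, so a single flow's throughput
# is at most window / RTT, no matter how fast the link is.
max_wnd = 65_535
cap = max_wnd / rtt            # bytes per second
print(f"Cap without scaling: {cap * 8 / 1e6:.2f} Mbit/s")  # 5.24 Mbit/s
```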
> The problem is: Buffering shall provide for work conservation, as Jon
> pointed out. As soon as buffers "overcompensate" idle times and don't avoid
> idle times but introduce latency by themselves, the design is not really
True. A buffer which never empties is too big (for that situation).
Lachlan Andrew
Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
Ph +61 3 9214 4837