[e2e] end2end-interest Digest, Vol 63, Issue 11

S. Keshav keshav at uwaterloo.ca
Fri Jun 26 08:17:24 PDT 2009


The whole idea of buffering, as I understand it, is to make sure that  
*transient* increases in arrival rates do not result in packet losses.  
If r(t) is the instantaneous arrival rate (packet size divided by  
inter-packet interval) and s(t) the instantaneous service rate, then a  
buffer of size B averts packet loss as long as the integral from t1 to  
t2 of (r(t) - s(t)) dt stays below B for every interval [t1, t2]. If B  
is 0, then any interval where r(t) exceeds s(t) results in packet loss.
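
To make that condition concrete, here is a minimal discrete-time sketch  
in Python (my own illustration with made-up rates; the function name and  
numbers are not from the discussion):

    # Discrete-time queue: arrivals r[t] and service s[t] in packets per
    # slot, buffer of B packets. Anything that does not fit in B is dropped.
    def packets_lost(r, s, B):
        queue = 0.0
        lost = 0.0
        for rt, st in zip(r, s):
            queue = max(queue + rt - st, 0.0)  # net inflow; queue never negative
            if queue > B:
                lost += queue - B              # overflow is dropped
                queue = B
        return lost

    print(packets_lost([5, 5, 5], [4, 4, 4], B=0))   # 3.0: every slot overflows
    print(packets_lost([5, 5, 5], [4, 4, 4], B=10))  # 0.0: buffer absorbs the excess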

If you have a fluid system, where a source sends packets of  
infinitesimal size, evenly spaced, and if routers do not add  
burstiness, then there is no need for buffering. Indeed, in the  
classical telephone network, where sources are 64 kbps constant bit  
rate sources and switches do not add burstiness, we need only one  
sample's worth of buffering, independent of the bandwidth-delay  
product. A similar approach was proposed by Golestani in 1990 with  
'Stop-and-go' queueing, which also decoupled the amount of buffering  
(one time slot's worth) from the bandwidth-delay product:  
http://portal.acm.org/citation.cfm?id=99523
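
As a back-of-the-envelope check (my own numbers, assuming the usual  
8-bit samples of the classical telephone network):

    # A 64 kbps voice source emits one 8-bit sample every 125 microseconds.
    sample_bits = 8
    rate_bps = 64_000
    sample_interval_s = sample_bits / rate_bps   # 0.000125 s = 125 us
    # If the switch serves each call once per 125 us frame, one sample
    # (8 bits) of buffering per call suffices, no matter how large the
    # link's bandwidth-delay product is.
    print(sample_interval_s)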

As Jon points out, if exogenous elements conspire to make your packet  
rate fluid-like, you get the same effect.

On Jun 25, 2009, at 3:00 PM, Jon Crowcroft wrote:

> so exogeneous effects may mean you dont need BW*RTT at all of
> buffering...

So why the need for a bandwidth-delay product (BDP) buffer rule? The  
BDP is the window size *a single source* needs to fully utilize a  
bottleneck link. If a link is shared, the windows at the sources must  
sum to at least the BDP for the link to be saturated.
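
For concreteness (assumed link speed and RTT, not figures from the  
discussion):

    # The BDP of a bottleneck link, and the window one source needs to fill it.
    def bdp_bytes(bandwidth_bps, rtt_s):
        return bandwidth_bps * rtt_s / 8.0

    link_bps = 100e6    # assume a 100 Mb/s bottleneck
    rtt_s = 0.05        # assume a 50 ms round-trip time
    print(bdp_bytes(link_bps, rtt_s))   # 625000.0 bytes
    # With N sources sharing the link, their windows must sum to at least
    # this value for the link to stay saturated.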

Taking this into account, along with the fact that feedback about the  
link is delayed by one RTT, it is possible for all sources to burst  
their entire windows synchronously, which gives a rough upper bound on  
r(t). With one BDP worth of buffering, there is no packet loss even in  
this situation. So it's a good engineering rule of thumb.
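
The same toy numbers make the worst case explicit (again my own  
illustration, continuing the assumed 100 Mb/s / 50 ms example):

    # Five sources whose windows sum to one BDP all burst at once; a buffer
    # of one BDP absorbs the burst even if nothing drains while it arrives.
    bdp = 625_000                       # bytes, from the sketch above
    windows = [125_000] * 5             # per-source windows summing to one BDP
    worst_case_burst = sum(windows)     # all windows arrive back to back
    buffer = bdp                        # one BDP of buffering at the bottleneck
    print(worst_case_burst <= buffer)   # True: no loss even in this extreme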

In reality: (a) RTTs are not the same for all sources, (b) the sum of  
source window sizes often exceeds the BDP, and (c) the worst-case  
synchronized burst rarely happens. These factors (hopefully) balance  
out, so the BDP rule has seemed reasonable. Of course, we have seen  
considerable work showing that in the network 'core' the regime is  
closer to fluid than bursty, so we can probably get by with far less  
buffering.

Hope this helps,

keshav


