[e2e] end2end-interest Digest, Vol 63, Issue 11

David P. Reed dpreed at reed.com
Fri Jun 26 10:35:12 PDT 2009

If the bottleneck router has too much buffering, and there are at least 
some users who are infinite data sources (read big FTP), then all users 
will suffer congestion at the bottleneck router proportional to the 
buffer size, *even though* the link will be "fully utilized" and 
therefore "economically maximized".
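
The latency cost described here follows from simple queueing arithmetic: a full FIFO buffer of B bytes draining onto a link of capacity C bits/s adds B*8/C seconds of delay for every packet behind it. A minimal sketch (the buffer size and link rate below are illustrative assumptions, not figures from the post):

```python
def queueing_delay_s(buffer_bytes: float, link_bps: float) -> float:
    """Seconds a packet waits behind a full FIFO buffer draining at link_bps."""
    return buffer_bytes * 8 / link_bps

# Illustrative: a 256 KB buffer on a 1 Mbit/s uplink, kept full by a big FTP.
delay = queueing_delay_s(256 * 1024, 1_000_000)
print(f"{delay:.2f} s of added latency")  # roughly 2.10 s
```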

This is the "end to end" list, not the "link maximum utilization" list.  
And a large percentage of end-to-end application requirements depend on 
keeping latency on bottleneck links very low, in order to make endpoint 
apps responsive - in their UIs, in the control loops that respond 
quickly and smoothly to traffic load changes, etc.

Analyses that focus 100% on maximizing static throughput and utilization 
leave out some of the most important things.  It's like designing cars to 
only work well as fuel-injected dragsters that run on Bonneville salt 
flats.  Nice hobby, but commercially irrelevant.

On 06/26/2009 11:17 AM, S. Keshav wrote:
> The whole idea of buffering, as I understand it, is to make sure that 
> *transient* increases in arrival rates do not result in packet losses. 
> If r(t) is the instantaneous arrival rate (packet size divided by 
> inter-packet interval) and s(t) the instantaneous service rate, then a 
> buffer of size B will avert packet loss when the integral from t1 to t2 
> of (r(t) - s(t)) dt is less than B. If B is 0, then any interval where 
> r(t) is greater than s(t) will result in a packet loss.
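
[This loss condition can be checked numerically in discrete time by tracking the backlog slot by slot; loss occurs exactly when the accumulated excess of arrivals over service would push the backlog past B. A toy illustration -- the per-slot rates and buffer sizes below are made up for the example:

```python
def simulate(arrivals, services, B):
    """Discrete-time fluid queue with buffer of size B.

    arrivals[i] and services[i] are the amounts arriving/served in slot i.
    Returns (final backlog, total amount lost to overflow).
    """
    q = 0.0
    lost = 0.0
    for r, s in zip(arrivals, services):
        q += r - s
        if q < 0.0:
            q = 0.0            # server idles; backlog cannot go negative
        if q > B:
            lost += q - B      # excess over the buffer is dropped
            q = B
    return q, lost

# A transient burst (slots 3-4) against a steady service rate of 1.0/slot.
arrivals = [1.0, 1.0, 3.0, 3.0, 1.0, 0.0, 0.0]
services = [1.0] * 7
print(simulate(arrivals, services, B=5.0))  # (2.0, 0.0): burst absorbed
print(simulate(arrivals, services, B=3.0))  # (1.0, 1.0): buffer too small
```
]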
> If you have a fluid system, where a source sends data as packets of 
> infinitesimal size evenly spaced apart, and if routers do not add 
> burstiness, then there is no need for buffering. Indeed, in the 
> classical telephone network, where sources are 64kbps constant bit 
> rate sources and switches do not add burstiness, we need only one 
> sample's worth of buffering, independent of the bandwidth-delay 
> product. A similar approach was proposed by Golestani in 1990 with  
> 'Stop-and-go' queueing, which also decoupled the amount of buffering 
> (equivalent to one 'time slot's worth) from the bandwidth-delay 
> product. http://portal.acm.org/citation.cfm?id=99523
> As Jon points out, if exogenous elements conspire to make your packet 
> rate fluid-like, you get the same effect.
> On Jun 25, 2009, at 3:00 PM, Jon Crowcroft wrote:
>> so exogenous effects may mean you don't need BW*RTT of
>> buffering at all...
> So, why the need for a bandwidth-delay product buffer rule? The BDP is 
> the window size at *a source* to fully utilize a bottleneck link. If a 
> link is shared, then the sum of windows at the sources must add up to 
> at least the BDP for link saturation.
> Taking this into account, and the fact that link status is delayed by 
> one RTT, there is a possibility that all sources maximally burst their 
> window's worth of packets synchronously, which gives a rough upper 
> bound on r(t). With one BDP's worth of buffering, there will be no 
> packet loss 
> even in this situation. So, it's a good engineering rule of thumb.
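
[The rule of thumb is simply bandwidth times round-trip time. For concreteness -- the link parameters below are illustrative, not from the post:

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes needed in flight to saturate the link."""
    return link_bps * rtt_s / 8

# Illustrative: a 1 Gbit/s bottleneck with a 100 ms round-trip time.
print(f"{bdp_bytes(1e9, 0.100) / 1e6:.1f} MB of buffering")  # 12.5 MB
```
]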
> In reality: (a) RTTs are not the same for sources (b) the sum of 
> source window sizes often exceeds the BDP and (c) the worst-case 
> synchronized burstiness rarely happens. These factors (hopefully) 
> balance themselves, so that the BDP rule seemed reasonable. Of course, 
> we have seen considerable work showing that in the network 'core' the 
> regime is closer to fluid than bursty, so that we can probably do with 
> far less.
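
[The "far less" alluded to is the direction later formalized in the well-known buffer-sizing result that a link carrying n long, desynchronized flows needs only about BDP/sqrt(n) of buffering. A sketch of that scaling, with illustrative numbers:

```python
import math

def buffer_bytes(link_bps: float, rtt_s: float, n_flows: int = 1) -> float:
    """BDP-rule buffer, shrunk by sqrt(n) for n desynchronized long flows."""
    bdp = link_bps * rtt_s / 8
    return bdp / math.sqrt(n_flows)

# Illustrative: 10 Gbit/s core link, 100 ms RTT, 10,000 long flows.
full_bdp = buffer_bytes(10e9, 0.100)          # ~125 MB for a single flow
shared = buffer_bytes(10e9, 0.100, 10_000)    # ~1.25 MB with many flows
print(f"{full_bdp / 1e6:.0f} MB vs {shared / 1e6:.2f} MB")
```
]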
> Hope this helps,
> keshav
