[e2e] RES: Why Buffering?

Lachlan Andrew lachlan.andrew at gmail.com
Fri Jun 26 12:16:11 PDT 2009

Greetings Keshav,

2009/6/26 S. Keshav <keshav at uwaterloo.ca>:
> The whole idea of buffering, as I understand it, is to make sure that
> *transient* increases in arrival rates do not result in packet losses.

That was the original motivation, but I believe the current motivation
is TCP throughput.  If we fix TCP, then buffers can go back to their
original role of smoothing transients, and I agree with your

> So, why the need for a bandwidth-delay product buffer rule? The BDP is the
> window size at *a source* to fully utilize a bottleneck link. If a link is
> shared, then the sum of windows at the sources must add up to at least the
> BDP for link saturation.
> Taking this into account, and the fact that link status is delayed by one
> RTT, there is a possibility that all sources maximally burst their window's
> worth of packets synchronously, which is a rough upper bound on s(t). With
> one BDP worth of buffering, there will be no packet loss even in this
> situation. So, it's a good engineering rule of thumb.

No buffer is big enough to stop a long TCP (Reno) flow from losing
packets, because the flow will keep increasing its window until it
does.  The reason for one BDP of buffering is not to prevent loss, but
to ensure that throughput is maintained when loss occurs.
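To make that concrete, here is a back-of-the-envelope sketch of the
classic single-flow argument (the link rate and RTT are illustrative
numbers of my own, not from the discussion above): Reno fills the pipe
plus the buffer, loses a packet, and halves its window; with one BDP of
buffering the halved window still covers the pipe, so the link never
goes idle while the queue drains.

```python
# Sketch of the rule of thumb: buffer = one BDP keeps a single Reno
# flow's throughput up across a loss event.  All numbers illustrative.

link_rate = 10e6 / 8        # 10 Mbit/s bottleneck, in bytes/s
rtt = 0.1                   # 100 ms round-trip time
bdp = link_rate * rtt       # bandwidth-delay product, in bytes

buffer = bdp                # the rule of thumb: one BDP of buffering

# Reno grows cwnd until both the pipe and the buffer are full, then
# loses a packet and halves cwnd.
cwnd_at_loss = bdp + buffer
cwnd_after_halving = cwnd_at_loss / 2

# With buffer == bdp, the halved window still equals the BDP, so the
# link stays busy while the queue drains.
assert cwnd_after_halving >= bdp
print(f"BDP = {bdp:.0f} bytes, cwnd after halving = "
      f"{cwnd_after_halving:.0f} bytes")
```

With a smaller buffer the halved window drops below the BDP and the
link sits idle until the window grows back, which is exactly the
throughput loss the rule guards against.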

Of course, designers have gone way overboard with large buffers
(especially in modems for low rate access links, as David pointed
out).  Still, if it weren't for TCP's loss-probing, then huge buffers
would be harmless.


Lachlan Andrew  Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
<http://caia.swin.edu.au/cv/landrew> <http://netlab.caltech.edu/lachlan>
Ph +61 3 9214 4837
