[e2e] Queue size of routers
avg at kotovnik.com
Fri Jan 17 12:50:18 PST 2003
On Fri, 17 Jan 2003, Alexandre L. Grojsgold wrote:
> > Routers in real backbones have the delay*bw of buffer space.
> My understanding is that the delay*bandwidth product applies to protocol
> engines that implement flow/error control, and need to receive acks before
> they can go on.
There are two different things. One is that you need a transmit window of
at least bw*delay to "fill the pipe" and get full capacity out of the path.
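The pipe-filling requirement is just arithmetic; a quick sketch (link speed
and RTT here are illustrative values, not taken from this thread):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Minimum in-flight data (bytes) needed to keep the link fully used:
    the bandwidth-delay product."""
    return bandwidth_bps * rtt_s / 8

# A 45 Mbit/s link with a 500 ms satellite RTT needs ~2.8 MB in flight.
print(bdp_bytes(45_000_000, 0.500))  # -> 2812500.0
```

Any window smaller than this leaves the sender stalled waiting for ACKs
part of each RTT, so the link cannot run at full rate.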
The second thing (i.e. the amount of buffer space per interface in routers
and switches) is best explained in terms of control theory. The window-based
flow control and congestion control are negative-feedback mechanisms. The
information about growth of queues (or congestion) is not propagated
instantaneously; the packet (or the fact of its absence) must reach the
destination, and the ACK must go back to the sender, and even after the
sender takes corrective action (i.e. delays transmission or collapses the
window) the effect of that action still takes time to reach the congested point.
The sum of signal propagation times is end-to-end RTT.
Now, if you have a negative-feedback system with a delay, you have to have
some inertia to prevent oscillations. That inertia must slow the system's
response down to timescales at least as long as the delay (roughly speaking).
The only place where excess packets can be accumulated (thus providing
"inertia") is the congested point.
I.e. to avoid overreaction of congestion control mechanisms, you want to
accommodate those packets which are still in flight as if no congestion
had happened. Otherwise the network starts to oscillate between congestion
and a half-empty circuit.
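One way to see why the congested hop wants about bw*RTT of buffer (a sketch
of the standard single-flow argument, not a quote from this thread): with a
window-halving sender, the peak window is BDP plus buffer; after a loss the
window drops to half, and the link stays busy only if half the peak window
still covers the BDP.

```python
def link_stays_busy(bdp, buffer):
    """True if a window-halving sender keeps the link full after a loss.
    bdp and buffer are in the same units (e.g. packets)."""
    peak_window = bdp + buffer       # in-flight data plus queued data at loss
    return peak_window / 2 >= bdp    # halved window must still fill the pipe

print(link_stays_busy(bdp=100, buffer=50))    # -> False: link goes idle
print(link_stays_busy(bdp=100, buffer=100))   # -> True: buffer == BDP suffices
```

With less than a BDP of buffer, every window halving drains the queue and
then idles the circuit, which is exactly the oscillation described above.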
The effect is most pronounced when all connections have similar RTTs, as
in case of intercontinental or satellite circuits.
> It holds true for TCP sessions (the bandwidth and delay being the
> throughput and delay seen across the network), and for connection based
> packet switches (like X.25 switches).
With X.25 you have the same story, with the difference that the congestion
control loop has only a one-hop RTT as its delay (not the end-to-end RTT).
However, you still need a bw*end-to-end-RTT window size at the ends.
> I can't see a good reason for IP routers having longer queues when the
> bandwidth increases or when the delay increases. The same routers are used
> at the ends of a terrestrial or satellite link, unmodified. I've read
> papers proposing mods to TCP when used over satellite paths, but I've
> never seen a word saying that routers should have their queues enlarged.
That's the difference between papers and reality. I remember the horrible
performance of ICM trans-atlantic links because older boxes didn't have
enough memory for a single E-1 with a 200ms RTT. That was communicated to
Cisco folks in person, w/o any papers, and the problem was fixed. All
modern boxes have at least 200-300ms worth of buffer space per interface.
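The E-1 anecdote is easy to check by arithmetic (an E-1 runs at 2.048
Mbit/s; the 200 ms trans-atlantic RTT is the figure cited above):

```python
# Buffer needed to hold one RTT's worth of traffic on an E-1.
E1_BPS = 2_048_000     # E-1 line rate in bits per second
rtt_s = 0.200          # trans-atlantic RTT cited in the post

buffer_bytes = E1_BPS * rtt_s / 8
print(buffer_bytes)    # -> 51200.0, i.e. ~50 KB per interface
```

A modest amount by today's standards, but enough to starve a router whose
per-interface buffer pool was sized for LAN-scale delays.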
> On a STM-1 link, with 20ms one way delay, the bandwidth*delay product leads
> to approx. 1500 packets (if we consider 256 octets as the mean packet
> length). This is clearly a queue length that will not be found in actual
Hmmm... even a decade-old AGS/+ had that much packet buffer memory...
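The ~1500-packet figure quoted above checks out (assuming the nominal
155.52 Mbit/s STM-1 rate and the 20 ms one-way delay as given):

```python
# Reproducing the quoted STM-1 back-of-the-envelope calculation.
STM1_BPS = 155_520_000   # nominal STM-1 line rate, bits per second
one_way_delay_s = 0.020  # 20 ms one-way delay, as quoted
mean_packet_bytes = 256  # mean packet length, as quoted

packets = STM1_BPS * one_way_delay_s / 8 / mean_packet_bytes
print(round(packets))    # -> 1519, close to the ~1500 quoted
```

Note the quoted calculation uses the one-way delay; sizing against the full
RTT, as the buffer argument above suggests, would roughly double the figure.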
On average, queue lengths are small, but on congested links they shoot up.