Re: Re: [e2e] Queue size of routers
dovrolis at cc.gatech.edu
Fri Jan 17 16:50:08 PST 2003
You may want to look at:
It describes a tool that estimates the distribution of
RTTs for the TCP connections seen at a link. It assumes
though, that you have a passive monitor at that link,
or at least some "typical" traces. We can send out the
code if anyone wants to play with it.
Our main motivation in that work was the "buffer provisioning"
problem at a router interface: a bulk TCP connection (with
sufficiently large socket buffer sizes) can saturate an empty link
of capacity C only if we have at least C*T bytes of buffers
at that link, where T is the connection's "exogenous" RTT.
By "exogenous RTT", I mean the RTT at the connection's
path *before* the connection starts, i.e., before it causes any
additional queueing delays in the path.
In other words, a link must have at least an RTT worth of buffering,
in terms of transmission time, in order for a TCP connection
with that RTT to be able to saturate the link.
The previous result is related to the "sawtooth" variations of TCP's
congestion window, and it can be roughly explained as follows:
after a (single) packet loss, cwnd is halved. To keep the
link saturated after the loss, we need to maintain a send window
of at least C*T, which means that before the loss cwnd should
have been at least 2*C*T. This will be the case if we have C*T
bytes in the buffer of the bottleneck link and C*T more bytes
on the "wire". (Please send me a note if you would like to
receive a still-incomplete draft that discusses this problem
in more detail.) It turns out that
an important point in these calculations is that the actual RTT, while
the connection is in progress, will not be the "exogenous" parameter T,
but it will vary with the backlog that the connection has
in the bottleneck.
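The arithmetic above can be sketched in a few lines (a toy calculation; the capacity and RTT values are assumptions of mine, not from this message):

```python
# Toy buffer-sizing calculation for the C*T rule (illustrative values only).
C = 100e6 / 8        # assumed bottleneck capacity: 100 Mbit/s, in bytes/sec
T = 0.050            # assumed "exogenous" RTT: 50 ms

buffer_bytes = C * T          # one RTT worth of buffering

# Sawtooth check: just before a loss, cwnd peaks at 2*C*T
# (C*T in flight on the "wire" plus C*T queued in the buffer);
# after halving, cwnd = C*T, still enough to keep the link full.
cwnd_peak = 2 * C * T
cwnd_after_loss = cwnd_peak / 2
assert cwnd_after_loss >= C * T   # the link stays saturated

print(f"buffer needed: {buffer_bytes / 1e3:.0f} kB")  # prints "buffer needed: 625 kB"
```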
On the practical side, there are some hard decisions to
be made when provisioning the buffers of a router interface:
1. First of all, does the network manager want to allow a single
TCP connection to saturate the link? The answer may not always
be yes.
2. If so, which connection's RTT should we consider? Between
Atlanta and Australia, or between Atlanta and Miami? The paper
that I mentioned at the beginning of this message shows that
the RTTs seen at a link vary by 3 orders of magnitude (1ms to 1sec).
3. An RTT worth of queueing delay at a link can seriously harm
the real-time traffic passing through that link, especially
if we have provisioned the link for a large RTT (say 300msec).
If the network provider offers edge-to-edge delay-based SLAs,
the presence of large buffers at each link opens the possibility
for SLA violations when the right mix of bulk TCP transfers
keeps those buffers full.
4. What if the router uses a shared-memory architecture (such
as the Juniper boxes) in which different ports use the
same pool of packet buffers?
I also suggest looking into previous work by Robert Morris:
The focus there was on what happens when a link carries
a large number of TCP connections (possibly mice),
and one of the findings was that in some cases we actually
need buffering that is proportional to the number of
ongoing TCP connections in a link.
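That result suggests a different sizing rule than C*T. A toy comparison of the two (all numbers are illustrative assumptions of mine, including the four-packets-per-flow allowance):

```python
# Compare one-RTT buffering (independent of flow count) with a
# per-flow rule whose total grows with the number of connections.
C = 100e6 / 8            # assumed capacity: 100 Mbit/s, in bytes/sec
T = 0.050                # assumed RTT: 50 ms
MSS = 1460               # bytes

def buffer_rtt(capacity, rtt):
    """One RTT worth of buffering: B = C * T."""
    return capacity * rtt

def buffer_per_flow(n_flows, pkts_per_flow=4, mss=MSS):
    """Per-flow rule: a few packets of buffer per ongoing connection."""
    return n_flows * pkts_per_flow * mss

for n in (10, 1000, 10000):
    print(f"{n:>6} flows: C*T = {buffer_rtt(C, T)/1e3:.0f} kB, "
          f"per-flow = {buffer_per_flow(n)/1e3:.0f} kB")
```

With many small flows, the per-flow total quickly dwarfs the fixed C*T figure, which is the point of that line of work.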
Also relevant is an early paper by Villamizar and Song:
Constantinos Dovrolis | 218 GCATT | 404-385-4205
Assistant Professor | Networking and Telecommunications Group
College of Computing | Georgia Institute of Technology
dovrolis at cc.gatech.edu
On Fri, 17 Jan 2003, Michael Welzl wrote:
> Dear Vladim,
> Thanks a lot for your response! All that seems to make sense.
> > Well, what you have for "delay" is characteristic RTT of the traffic mix,
> > plus some extra to accommodate deviation; not the maximal RTT. I'm not
> > aware of any tool for picking that parameter, it's pretty much "gut
> > feelings" of backbone engineers.
> Hmmm ... about time someone wrote such a tool! :)
> - randomly pick a packet that carries TCP (red alert! a layer
> violation straight ahead! :) )
> - remember a lot of things ... state all over the place:
> source/destination, sequence numbers ... but that would
> only have to be done sporadically! Maybe in a separate piece
> of hardware.
> - monitor subsequent packets from the same flow - examine the
> rate and sequence numbers ... it should be a little tricky,
> but possible.
> Eventually, such a tool could be used to automatically adapt
> the maximum queue length...
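The sampling idea sketched in the quoted message could look roughly like this (a hypothetical sketch, not any real tool: it assumes a monitor near the sender that already yields (time, flow, direction, sequence/ack) tuples, and every name below is made up):

```python
# Minimal sketch of a passive RTT estimator: remember when a data
# segment passes the monitor, and close the sample when the matching
# ACK comes back the other way.
from collections import defaultdict

def estimate_rtts(packets):
    """packets: iterable of (t, flow_id, direction, seq_end, ack).
    direction is 'fwd' (data toward receiver) or 'rev' (ACK back).
    Returns a dict of RTT samples per flow."""
    pending = defaultdict(dict)   # flow_id -> {seq_end: time first seen}
    rtts = defaultdict(list)
    for t, flow, direction, seq_end, ack in packets:
        if direction == 'fwd':
            # remember when this data segment (ending at seq_end) passed by
            pending[flow].setdefault(seq_end, t)
        else:
            # an ACK whose number matches a remembered seq_end closes a sample
            t_sent = pending[flow].pop(ack, None)
            if t_sent is not None:
                rtts[flow].append(t - t_sent)
    return rtts

# tiny synthetic trace: one data segment at t=0.0, its ACK at t=0.048
trace = [(0.000, 'A', 'fwd', 1460, 0),
         (0.048, 'A', 'rev', 0, 1460)]
print(dict(estimate_rtts(trace)))
```

As the quoted message notes, only sporadic sampling would be needed in practice; and if the monitor sits mid-path rather than at the sender, this measures only the monitor-to-receiver round trip, not the full RTT.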