[e2e] Queue size of routers
avg at kotovnik.com
Fri Jan 17 15:36:02 PST 2003
On Sat, 18 Jan 2003, Joerg Micheel wrote:
> My understanding of this configuration is that the maximum RTT any
> packet could incur is 2*minRTT for a given path. This is such that
> TCPs at the host have a way to decide when a packet surely must have
> been lost (by timing it out accordingly).
No... the measured RTT may be a lot more than minRTT, depending on the
number and size of queues along the path. Normally the measured RTT is
significantly less than the maxRTT (which, with properly sized buffers, is
approximately Nhops*minRTT); so some packets may be falsely detected as
lost; this is ok as long as the probability of false positives is low.
To keep the probability of false loss detection low, TCP stacks measure
not only the average path RTT but also the statistical deviation from that
average, and adjust the expected RTT upwards accordingly (i.e. in an
underloaded network queue lengths, and hence latency deviation, are small,
so the loss of a packet may be detected sooner).
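The estimator described above is the one TCP stacks actually use (Jacobson's
algorithm, standardized in RFC 6298): keep an exponentially smoothed RTT plus
its mean deviation, and time out at SRTT + 4*RTTVAR rather than at a fixed
multiple of minRTT. A minimal sketch (class and variable names are mine, not
from any real stack):

```python
ALPHA = 1 / 8   # gain for the smoothed RTT (RFC 6298 value)
BETA = 1 / 4    # gain for the deviation estimate
K = 4           # deviation multiplier in the timeout

class RttEstimator:
    def __init__(self):
        self.srtt = None    # smoothed round-trip time (seconds)
        self.rttvar = None  # mean deviation of the RTT samples

    def update(self, sample):
        """Feed one RTT measurement; return the new retransmission timeout."""
        if self.srtt is None:
            # First sample: RFC 6298 initialization.
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            # Update deviation first, using the old SRTT, then the mean.
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - sample)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * sample
        return self.srtt + K * self.rttvar

est = RttEstimator()
for rtt in (0.100, 0.102, 0.098, 0.101):   # lightly loaded path: low jitter
    rto = est.update(rtt)
```

On a steady, underloaded path the deviation term shrinks toward zero, so the
timeout hugs the smoothed RTT and losses are detected sooner, exactly as the
paragraph above argues.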
> Obviously, realtime UDP-based applications might require a different
> behaviour, but most of the Internet thus far has been optimised to
> carry TCP traffic, which still accounts for the bulk of the data.
Actually, TCP is very finely tuned; any high-volume UDP-based
application would do itself a service by emulating TCP's behavior (though
it may replace window scaling with packet-rate control and/or source
quality control, such as frame rate, resolution, or color depth).
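One way a UDP application could emulate TCP's congestion response, as
suggested above, is to apply additive-increase/multiplicative-decrease
(AIMD) to its packet rate instead of a byte window. This is a hypothetical
sketch; the class, constants, and increments are illustrative, not taken
from any real protocol:

```python
class RateController:
    """AIMD control of a UDP sender's packet rate, mimicking TCP's
    window behavior: probe gently upward, back off hard on loss."""

    def __init__(self, rate=10.0, min_rate=1.0, max_rate=1000.0):
        self.rate = rate          # packets per second
        self.min_rate = min_rate  # floor so the flow never stalls entirely
        self.max_rate = max_rate  # cap from the application's needs

    def on_ack(self):
        # Additive increase: probe for spare bandwidth, 1 pkt/s per feedback.
        self.rate = min(self.max_rate, self.rate + 1.0)

    def on_loss(self):
        # Multiplicative decrease: halve the rate, as TCP halves its window.
        self.rate = max(self.min_rate, self.rate / 2)

ctrl = RateController()
for _ in range(20):
    ctrl.on_ack()   # 20 loss-free feedback intervals: 10 -> 30 pkt/s
ctrl.on_loss()      # one loss event halves it: 30 -> 15 pkt/s
```

A media application would typically map the resulting rate onto source
quality knobs (frame rate, resolution, color depth) rather than simply
dropping packets on the floor.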