[e2e] Open the floodgate - back to 1st principles

Jon Crowcroft Jon.Crowcroft at cl.cam.ac.uk
Mon Apr 26 02:31:48 PDT 2004


In missive <4350000.1082946807 at localhost>, Guy T Almes typed:

 >>  My impression is that, at least during the early 1990s and probably since 
 >>then, there was a rule of thumb that a router should have a delay-bandwidth 
 >>worth of memory per output port.  This was understood to be friendly to TCP 
 >>in that it would allow the buffer to drain while the TCP sender recovered 
 >>itself from a stumble following the bursting of the queue.

yes - so the AIMD rule, if you assume the worst case (a single session at
the bottleneck router, a mad grid app :-), takes the load to twice the
operating rate and then halves it, so the worst-case buffer you need so
that you don't get burst loss (and thereby trigger slow start instead of
rate halving) is the same as the pipe size
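a back-of-envelope rendering of that worst case, for a single AIMD flow
at the bottleneck - the function name and the 1 Gbit/s / 100 ms numbers
below are illustrative, not from this post:

```python
# Sketch of the "buffer = pipe size" rule for one AIMD flow.

def pipe_size_packets(link_bps, rtt_s, pkt_bytes=1500):
    """Delay-bandwidth product of the path, expressed in packets."""
    return link_bps * rtt_s / (8 * pkt_bytes)

bdp = pipe_size_packets(1e9, 0.1)   # ~8333 packets for 1 Gbit/s, 100 ms

# Worst case: the sender overshoots to twice the operating rate
# (cwnd = 2 * BDP), then halves.  The excess over what the pipe itself
# holds - up to one BDP - has to sit in the queue, so a buffer of one
# pipe size is enough to avoid burst loss (and hence avoid slow start
# instead of rate halving).
buffer_needed = 2 * bdp - bdp
assert buffer_needed == bdp
```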

of course, in most routers this is unnecessary:
i) they are in the core (e.g. Sprint, AT&T etc.) where the bottleneck
ISN'T, so the router rarely has more than a few packets per linecard
even in the worst case (or so I am reliably informed by Sprint tech reports)
ii) there are a lot of roughly uncorrelated flows, so a rough estimate
of the buffer occupancy can be got by assuming that the flows are
divided between elephants and mice in some ratio, e, out of
some total number of flows, F,
and that mice run at a rate of 1 packet per RTT (irrelevant really, since
they are effectively open loop, so we can always renormalise to the
actual number)
so at the bottleneck rate C,
if we are not congested (and we won't be unless
(1-e)F > C)
we have eF elephant flows - assuming a uniform random distribution of
phase over time, they are uniformly distributed over
a set of rates between
C/eF and 2C/eF - there are eF flows, so roughly one at each rate...
the rest is an exercise for the reader, but I guess you can see
the buffer needed is still the same :)
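a toy rendering of the quantities in (ii), following the post's framing;
F, e and C are just the symbols above with made-up values plugged in, and
the equal-average-share assumption for elephants is mine:

```python
# Illustrative numbers for the elephants/mice flow-mix estimate.

def elephant_rates(F, e, C):
    """Number of elephants, and the AIMD rate band each one sweeps,
    assuming the eF elephants share the bottleneck rate C equally on
    average and each sawtooths between its share and twice that
    (i.e. between C/eF and 2C/eF)."""
    n = round(e * F)          # elephants; the (1-e)F mice are ~open loop
    fair = C / n              # long-run share of the bottleneck per elephant
    return n, (fair, 2 * fair)

n, (lo, hi) = elephant_rates(F=1000, e=0.1, C=1e9)
print(n, lo, hi)   # 100 elephants, each sweeping 10..20 Mbit/s
```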

iii) of course, with ECN, one assumes one runs some sort of virtual
buffer and rate estimator, in which case the system runs with
practically no packets in router memory, thus saving us all a
lot of money and allowing all the mice to perceive Really Low
Latency - although if the bottlenecks are at the edges, then it's not
clear one sees a whole lot of latency due to buffers anyhow :) just
the cost of paying router vendors for unoccupied memory
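the virtual-buffer idea in (iii) can be sketched roughly, in the spirit
of virtual-queue AQMs; the class, the headroom factor and the marking
threshold below are all made-up illustrations, not any real router's
algorithm:

```python
# Sketch of a virtual queue: a fictitious queue drained at slightly less
# than link capacity.  ECN-marking on the *virtual* backlog makes sources
# back off before the real queue ever builds, so real buffers stay ~empty.

class VirtualQueue:
    def __init__(self, capacity_bps, headroom=0.95):
        self.vcap = headroom * capacity_bps  # virtual link is a bit slower
        self.vq_bits = 0.0                   # virtual backlog, in bits
        self.last_t = 0.0

    def on_packet(self, t, size_bits, mark_thresh_bits=50_000):
        """Returns True if this packet should be ECN-marked."""
        # Drain the virtual queue at the virtual capacity since the last
        # arrival, then add this packet to it.
        self.vq_bits = max(0.0, self.vq_bits - self.vcap * (t - self.last_t))
        self.last_t = t
        self.vq_bits += size_bits
        return self.vq_bits > mark_thresh_bits
```

feeding it arrivals a shade above the virtual capacity makes marks appear
while the real queue would still be nearly empty; arrivals below it are
never marked.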

(hey, here's
a thought - why don't we use all that unused core router memory to
store other stuff, like the current top 100 chart entries, or the
DNS root, or whatever... :)

meanwhile, the GGF GHPN group has been discussing a lot of this stuff
- although it is very, very early, there's a draft doc about various
things that they worry about in networks - see
http://www.cl.cam.ac.uk/~jac22/pub/draft-ggf-ghpn-netissues-3.doc
and
draft-ggf-ghpn-netissues-3.pdf
or
draft-ggf-ghpn-netissues-3.txt
for cached copies - please don't discuss this at great length on this
list - that document is a very early draft and is meant as a
descriptive doc/collection of input,
not as a specification or prescription!

 - feedback to the authors is
MOST welcome, and feedback on the GHPN list (see the GGF website for info)
is welcome too - it also references grid requirement/gap-analysis
documents, which have some data points people here may find useful

cheers
jon

