[e2e] TCP un-friendly congestion control

David P. Reed dpreed at reed.com
Fri Jun 6 14:27:35 PDT 2003


Congestion loss rates are hardly small!!!

Basic queueing theory (Little's theorem and its corollaries) shows that
congestion loss rates can and do grow large very rapidly as the arrival
rate approaches the service rate.
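
As a back-of-the-envelope illustration (my own sketch, not anything from
the FAST papers): for a finite-buffer M/M/1/K queue the loss probability
is

    P_loss = (1 - rho) * rho^K / (1 - rho^(K+1)),   rho = arrival/service

and it stays near zero until utilization gets close to 1, then climbs
steeply:

    # Illustrative sketch: loss probability of an M/M/1/K queue
    # (K = buffer size in packets, rho = arrival rate / service rate).
    def mm1k_loss(rho, K):
        if rho == 1.0:
            return 1.0 / (K + 1)    # limiting value at rho = 1
        return (1 - rho) * rho**K / (1 - rho**(K + 1))

    for rho in (0.5, 0.9, 0.99, 1.0, 1.1):
        print(f"rho={rho:4}: loss={mm1k_loss(rho, 50):.3g}")
    # loss ~ 4e-16 at rho=0.5, 5e-4 at 0.9, 0.015 at 0.99, 0.092 at 1.1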

What you may wish to say is that the variation in congestion loss rate is
too small in the control region around the desirable operating point,
where congestion loss = 0 and throughput = maximum.

But ultimately the problem with this research remains: trying to optimize
the system to operate at 100% capacity on the bottleneck link(s) is the
wrong way to run a network.

Instead of running a network that way, increase the capacity of the
bottleneck links to 10x the load!  It's cheaper, and the customers are
happier.

At 04:25 PM 6/6/2003 -0400, J. Noel Chiappa wrote:
>     > From: "dave o'leary" <doleary at juniper.net>
>
>     > For more information:
>     > http://netlab.caltech.edu/FAST/
>     > http://netlab.caltech.edu/pub/papers/fast-030401.pdf
>
>Thanks for the pointers; in looking over the material there, I came across a
>point which I'm wondering about, and I'm hoping someone here can enlighten me.
>
>
>I've always heard that at extremely high speeds, packet-loss-based
>congestion feedback (Reno) has problems in practice: because you can't
>differentiate between i) packet losses caused by congestion, and ii)
>losses caused by errors, a roughly constant BER means that as the link
>speed goes up you start getting a faster rate of 'false' (i.e.
>error-caused, not congestion-caused) loss signals, which cause unneeded
>(and undesirable) reductions in sending speed. I assumed that
>queueing-delay-based congestion feedback (Vegas) would avoid this issue.
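
A rough back-of-the-envelope sketch of that effect, using the well-known
Reno throughput approximation rate ~= (MSS/RTT) * 1.22/sqrt(p), with
purely illustrative numbers (my assumptions, not measurements from the
papers):

    # Sketch: the equilibrium loss probability Reno needs to sustain a
    # given rate, vs. the loss floor imposed by a constant bit-error
    # rate.  All numbers are illustrative assumptions.
    MSS_BITS = 12000     # 1500-byte packets
    RTT = 0.1            # 100 ms round-trip time
    RATE = 10e9          # 10 Gbit/s target throughput
    BER = 1e-10          # optimistic fiber bit-error rate

    # Invert rate = (MSS/RTT) * 1.22 / sqrt(p):
    p_congestion = (1.22 * MSS_BITS / (RTT * RATE)) ** 2  # ~2.1e-10

    # Probability a packet is corrupted at the given BER:
    p_error = 1 - (1 - BER) ** MSS_BITS                   # ~1.2e-06

    print(f"Reno needs p ~ {p_congestion:.1e}; BER floor is {p_error:.1e}")

With these (assumed) numbers the error-loss floor sits about four orders
of magnitude above the congestion signal Reno needs, so error losses
alone would throttle the connection far below the link rate.
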
>
>This point is not made in detail in the papers, although there is a brief
>comment which alludes to this whole area - "Reno .. must maintain an
>exceedingly small loss probability in equilibrium that is difficult to
>reliably use for control".
>
>I am informed that this comment also refers to the difficulty of using as a
>control signal something (the congestion loss rate) which is small and
>difficult to estimate reliably; not only that, but as the paper points out,
>the quantization of that signal is large compared to the signal amplitude,
>making for even more trouble.
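
To put an (illustrative, assumed) number on that quantization point: a
single Reno flow filling a 10 Gbit/s, 100 ms path has a window of roughly
10e9 * 0.1 / 12000 ~= 83,000 packets, so the finest loss-rate measurement
available over one RTT comes in steps of about 1/83,000 ~= 1.2e-5, while
the equilibrium loss probability being sensed is on the order of 2e-10
(see the sketch above).  The measurement quantum is nearly five orders of
magnitude larger than the signal itself.
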
>
>
>So now I finally get to my question!
>
>I would suspect that in practice, the first aspect I mentioned (two different
>kinds of loss signal getting mixed up) is probably an even bigger problem
>than the other one (the difficulty of using such a small signal for
>feedback).
>
>Does anyone have any information on this, either from real-life measurements,
>or simulations?
>
>         Noel



