[e2e] TCP un-friendly congestion control

J. Noel Chiappa jnc at ginger.lcs.mit.edu
Fri Jun 6 13:25:57 PDT 2003


    > From: "dave o'leary" <doleary at juniper.net>

    > For more information:
    > http://netlab.caltech.edu/FAST/
    > http://netlab.caltech.edu/pub/papers/fast-030401.pdf

Thanks for the pointers; in looking over the material there, I came across a
point which I'm wondering about, and I'm hoping someone here can enlighten me.


I've always heard that at extremely high speeds, packet-loss-based congestion
feedback (Reno) has problems in practice: because you can't differentiate
between i) packet losses caused by congestion and ii) losses caused by errors,
a roughly constant BER means that as the link speed goes up, you start getting
a faster rate of 'false' (i.e. error-caused, not congestion-caused) loss
signals, which cause unneeded (and undesirable) reductions in sending speed.
I assumed that queueing-delay-based congestion feedback (Vegas) would avoid
this issue.
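
(To put a rough number on that scaling, here's a back-of-envelope sketch in
Python; the BER, packet size, and link speeds are just illustrative values I
picked, not figures from the FAST material.)

    # With a roughly constant bit-error rate, the rate of error-caused packet
    # losses grows linearly with link speed, so the 'false' congestion signals
    # arrive more and more often as the link gets faster.
    BER = 1e-12            # assumed bit-error rate (illustrative)
    PKT_BITS = 1500 * 8    # assumed packet size in bits (illustrative)

    for gbps in (0.1, 1.0, 10.0):
        pkts_per_sec = gbps * 1e9 / PKT_BITS
        p_err = 1 - (1 - BER) ** PKT_BITS      # per-packet error-loss probability
        err_losses_per_sec = pkts_per_sec * p_err
        print("%5.1f Gb/s: %.1e error losses/sec (one every ~%.0f s)"
              % (gbps, err_losses_per_sec, 1 / err_losses_per_sec))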

This point is not made in detail in the papers, although there is a brief
comment that alludes to this whole area - "Reno .. must maintain an
exceedingly small loss probability in equilibrium that is difficult to
reliably use for control".

I am informed that this comment also refers to the difficulty of using, as a
control signal, a quantity (the congestion loss rate) which is both small and
hard to estimate reliably; not only that, but as the paper points out, the
quantization of that signal is large compared to the signal's amplitude,
making for even more trouble.
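
(To put a rough number on 'exceedingly small': here's a quick sketch in
Python using the standard back-of-envelope Reno model, rate ~= 1.22 * MSS /
(RTT * sqrt(p)). The 10 Gb/s, 100 ms RTT, and 1500-byte packet figures are
illustrative values I picked, not numbers from the papers.)

    # Required equilibrium loss probability for Reno to sustain a given rate,
    # per the usual 'rate ~= 1.22 * MSS / (RTT * sqrt(p))' approximation.
    MSS_BITS = 1500 * 8            # assumed packet size in bits
    RTT = 0.1                      # assumed round-trip time, seconds
    rate_bps = 10e9                # assumed target equilibrium rate

    window_pkts = rate_bps * RTT / MSS_BITS      # packets in flight (~83,000)
    p = (1.22 / window_pkts) ** 2                # required loss probability
    pkts_per_loss = 1 / p
    secs_per_loss = pkts_per_loss / (rate_bps / MSS_BITS)
    print("required p ~ %.1e (one loss per %.1e packets, ~%.0f minutes apart)"
          % (p, pkts_per_loss, secs_per_loss / 60))

That works out to p on the order of 1e-10, i.e. one loss every hour and a half
or so - and since each window either sees a loss or it doesn't, the feedback
is quantized far more coarsely than that, which I take to be the quantization
problem the paper is pointing at.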


So now I finally get to my question!

I would suspect that in practice, the first aspect I mentioned (the two
different kinds of loss signal getting mixed up) is an even bigger problem
than the other one (the difficulty of using such a small signal for
feedback).

Does anyone have any information on this, either from real-life measurements
or from simulations?

	Noel



