[e2e] Why do we need congestion control?

RAMAKRISHNAN, KADANGODE (K. K.) kkrama at research.att.com
Wed Mar 6 10:12:37 PST 2013


I want to second what Jon and Keshav say with regard to the assistance provided by coding, but the limitations that arise in an environment without effective congestion control.

We'd explored the benefit of coding (admittedly simple R-S codes) at the end-end transport layer to complement TCP, so as to help sustain losses on wireless links, in our work on LT-TCP.
We did see the benefit of coding, to extend the dynamic range of transport protocols to tolerate higher loss rates, but only up to a point. Beyond that, you see the same results as you would see in an uncontrolled environment where losses (and the resulting wasted work) begin to dominate the utilization of the resources in the network. That is without paying attention to the delays that result from excessive losses that cause the receiver to wait to reconstruct a block.  There is still the need for reasonable congestion control mechanisms to keep from causing excessive losses. And Keshav's point of the unfairness across flows in the short term and the eventual result of everyone losing out is certainly important to keep in mind.

Finally, I heartily agree with Jon's last point regarding ECN...

--
K. K. Ramakrishnan                  Email: kkrama at research.att.com
AT&T Labs-Research, Rm. A161        Tel: (973)360-8764
180 Park Ave, Florham Park, NJ 07932    Fax: (973) 360-8871
      URL: http://www.research.att.com/people/Ramakrishnan_Kadangode_K/index.html


-----Original Message-----
From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of Jon Crowcroft
Sent: Wednesday, March 06, 2013 10:03 AM
To: shun cai
Cc: Jon.Crowcroft at cl.cam.ac.uk; end2end-interest at postel.org
Subject: Re: [e2e] Why do we need congestion control?

ok - i see your point - this is true if your sources have a peak rate they can send at 

this could be the line rate of their uplink  - 
that would be embarrasingly bad
(see keshav's followup on escalating costs of coding)
or the rate they can get data off disk (which could be as bad, but might be lower)
or an application specific rate (e.g. streamed video) for which you're suggestion is
quite reasonable....

but for data sources which are greedy 
(TCP with arbitrarily large files)
you need a way to tell sources a non wasteful way of sending - 

and what is more
there isn't just one set of sources in one location 
and a set of sinks in one other location
so the system of senders sending at 
unconstrainted rates on a finite speed net with high speed edges,
would create multiple bottlenecks,
which would exponentiate the problem

coding isn't magic - its info theory - if you lose info
you must add redundency - coding does it pre-emptively
rather than post-hoc the way ARQ/Retransmit does,
which saves you time, but in the end, can't defer the inevitable

if you look at digital fountin systems for video
they pick a likely loss rate, pick a tolerable picture degradation rate
and use those to derive/choose a code 

the assumption is that the losses are capped because most other systems 
are backing off just like TCP - if you break that assumption,
you'll break the coding parameter choice

anyhow, roll out ECN - much betterer technology:)
congestion avoidance without keeping queues filled everywhere...








More information about the end2end-interest mailing list