[e2e] Congestion control as a hot topic in IETF

Barath Raghavan barath at ICSI.Berkeley.EDU
Thu Mar 7 06:28:50 PST 2013

Just to weigh in on our work on Decongestion Control -- Jon is right that it wasn't about making a congestion-free Internet, but rather about making the case that it's possible for an Internet-like network to achieve high goodput despite extreme packet loss.

We did significant follow-up work beyond our 2006 HotNets paper, but it was never published.  A few of the things we found:

1. The notion of dead packets, per Patterns of Congestion Collapse (Kelly, Floyd, and Shenker), is key when you have a protocol that induces heavy loss.

2. A refined notion of dead packets, which we called zombie packets, reflects the true loss of capacity when using Decongestion Control.  The idea is that if an eventually-dropped packet wasted network resources that could instead have been used by packets contributing to overall goodput, then that packet is a zombie packet.
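As a toy illustration of the zombie-packet idea (my own simplification, not a model from the paper; the two-hop setup and the saturation test are assumptions), consider a packet that crosses a first link only to be dropped at a second: it wasted link-1 capacity, but it only counts as a zombie if link 1 was itself saturated, i.e. it displaced traffic that could have become goodput:

```python
def count_zombies(offered, c1, c2):
    """Two-hop toy model for one time slot: 'offered' packets enter
    link 1 (capacity c1); survivors enter link 2 (capacity c2).
    A packet dropped at link 2 wasted link-1 capacity, but it is a
    zombie only if link 1 was saturated -- otherwise its trip across
    link 1 displaced nothing.  Returns (delivered, zombies)."""
    through_link1 = min(offered, c1)        # link 1 forwards at most c1
    delivered = min(through_link1, c2)      # link 2 delivers at most c2
    dropped_at_link2 = through_link1 - delivered
    link1_saturated = offered > c1
    zombies = dropped_at_link2 if link1_saturated else 0
    return delivered, zombies
```

With both links saturated, every packet dropped downstream is a zombie; with spare upstream capacity, the same drops waste nothing that another flow wanted.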

3. For networks structured like those of backbone ISPs (circa 2008), firehose Decongestion Control, in which senders send as fast as possible, would result in poor network-wide goodput due to high zombie packet rates.  However, another approach, ratio Decongestion Control, in which senders send at a fixed fraction above their goodput, achieved near-optimal goodput (we used 20% -- i.e., if a flow is getting 10Mbps goodput, send at 12Mbps).  This latter approach is aligned with a sender's incentives (there's no reason to send faster than that), and yet also yields good performance from the network's perspective.
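A minimal sketch of the ratio idea (a toy single-flow model of my own, not the paper's simulator): the sender repeatedly measures its goodput and sends at a fixed fraction above it, so the send rate settles at the bottleneck capacity plus the chosen headroom:

```python
def ratio_decongestion(capacity_mbps=10.0, headroom=0.20, steps=20):
    """Toy single-flow model of ratio Decongestion Control: each step,
    measure goodput (capped by the bottleneck) and then send that much
    plus a fixed headroom (20% in the post's example).
    Returns the final (send_rate, goodput) in Mbps."""
    rate = 1.0      # arbitrary starting send rate, Mbps
    goodput = 0.0
    for _ in range(steps):
        goodput = min(rate, capacity_mbps)  # bottleneck delivers at most capacity
        rate = goodput * (1.0 + headroom)   # e.g. 10 Mbps goodput -> send at 12
    return rate, goodput
```

With the post's numbers this converges to 10Mbps goodput at a 12Mbps send rate; the 2Mbps of excess is the bounded waste the scheme accepts in exchange for continuously probing for spare capacity.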

The bottom line for me was that it's possible to run a network like this.  If done right (as we found in our implementation), you don't even have to use rateless codes to mask most losses, provided you have a decent SACK-like mechanism; the coding cost then drops to somewhere between minimal and zero, depending on the type of flow.
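To make the SACK-like point concrete, here is a hypothetical sketch (the function and framing are mine, not a description of our actual implementation): the receiver reports exactly which sequence numbers are missing, so the sender retransmits only those, and coding is needed, at most, for the stragglers:

```python
def sack_gaps(received_seqs, highest_sent):
    """Minimal SACK-style feedback: given the sequence numbers that
    arrived and the highest sequence number sent so far, return the
    missing ones, so the sender can retransmit just those instead of
    redundantly coding every packet."""
    got = set(received_seqs)
    return [s for s in range(highest_sent + 1) if s not in got]
```

Even under heavy loss, this keeps recovery proportional to what was actually lost rather than to what was sent.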


On Mar 6, 2013, at 3:29 PM, Jon Crowcroft wrote:

> well I beg to differ about this work being about
> a "congestion control free" internet
> the decongestion control idea involves (as well as xor coding)
> a) fairness support at the edges of the net (under specified)
> and 
> b) active queue management in the core (a decent implementation
> would need the same sort of virtual queue mechanisms that a lot of
> decent ECN implementations have looked at)
> so while the work is cool (for sure) and I am definitely in favour
> of exploring the design space, it doesn't amount really to 
> just "everyone send as fast as they can" at all - that doesn't seem
> a fair way to describe it
> so i'd claim it is still congestion control, just with a different
> partitioning of the functionality than a purist end2end (which is
> just fine - i am no purist:)
> there is also the long term argument that the ratio of core to edge capacity
> flips over every now and then -- see the figure in our paper
> http://www.cl.cam.ac.uk/~jac22/out/ripqos-rant.pdf
> and this makes a lot of things break badly (go to places where the
> core nets are massively under provisioned and you'll see what
> damage "just a little bit of packet loss" can do...)
> In missive <51377F9F.1080206 at isae.fr>, Emmanuel Lochin typed:
>>> Hi all,
>>> We have successfully implemented a Decongestion Control 
>>> Transport Protocol following the A. Snoeren and T. Bonald Infocom'09 paper : 
>>> "Is the 'Law of the Jungle' Sustainable for the Internet?". We defined an 
>>> "Anarchical Networks" scenario and tested our proposal, named DCTP, with 
>>> Achoo (proposed by A. Snoeren) over a simulated ISP-like topology.
>>> Preliminary results tend to confirm that both Snoeren and Bonald are 
>>> right and that such an architecture is sustainable.
>>> You'll find our first experiments in the slides available here: 
>>> http://www.lochin.net/tetrysjungle.pdf
>>> Emmanuel
>>> On 05/03/2013 13:07, Scharf, Michael (Michael) wrote:
>>>>> Am 04.03.2013 23:07, schrieb Scharf, Michael (Michael):
>>>>>> There has been some interesting research on whether a
>>>>> transport protocol could work without any congestion control.
>>>>> One reference is: B. Raghavan and A. Snoeren, "Decongestion
>>>>> Control", ACM SIGCOMM Workshop on Hot Topics in Networks, 2006.
>>>>> I remember that you, some years ago, asked whether networking
>>>>> can be done without flow control.
>>>> My comment is about network designs that typically assume erasure codes and flow-based queueing/scheduling in all network nodes. Actually, it took me a while to fully understand why this is not an alternative to the way Internet congestion control works today. But, for what it is worth, I found the overall idea interesting.
>>>> Michael
>>> -- 
>>> Emmanuel Lochin
>>> Professeur ISAE - OSSI
>>> Institut Supérieur de l'Aéronautique et de l'Espace (ISAE)
>>> Issu du rapprochement SUPAERO et ENSICA
>>> 10 avenue Edouard Belin - BP 54032 - 31055 Toulouse cedex 4
>>> Tel : 05 61 33 91 85 - Fax : 05 61 33 91 88
>>> Web : http://personnel.isae.fr/emmanuel-lochin/
> cheers
>  jon
