[e2e] Congestion control as a hot topic in IETF

Barath Raghavan barath at ICSI.Berkeley.EDU
Fri Mar 8 08:49:25 PST 2013


On Mar 8, 2013, at 1:09 AM, Emmanuel Lochin wrote:

> On 07/03/2013 15:28, Barath Raghavan wrote:
>> Just to weigh in on our work on Decongestion Control -- Jon is right that it wasn't about making a congestion-free Internet, but rather about making the case that it's possible for an Internet-like network to achieve high goodput despite extreme packet loss.
>> 
>> We did significant followup work beyond our 2006 HotNets paper, but it was never published.  A few of the things we found:
>> 
>> 1. The notion of dead packets, per Patterns of Congestion Collapse (Kelly, Floyd, and Shenker), is key when you have a protocol that induces heavy loss.
>> 
>> 2. A refined notion of dead packets, which we called zombie packets, reflects the true loss of capacity when using Decongestion Control.  The idea is that if an eventually-dropped packet consumed network resources that could instead have carried packets contributing to overall goodput, then that packet is a zombie packet.
>> 
>> 3. For networks structured like those of backbone ISPs (circa 2008), firehose Decongestion Control, in which senders send as fast as possible, would result in poor network-wide goodput due to high zombie packet rates.  However, another approach, ratio Decongestion Control, in which senders send at a fixed fraction above their goodput, achieved near optimal goodput (we used 20% -- i.e., if a flow is getting 10Mbps goodput, send at 12Mbps).  This latter approach is aligned with a sender's incentives (i.e., there's no reason for them to send faster than their goodput), and yet also yields good performance from the network's perspective.
> 
> Hi Barath,
> 
> Did you use Achoo for your experiments, or did you move on to another mechanism? We've developed our own ns-2 prototype of Achoo to compare against our Tetrys proposal, and noticed some limitations of Achoo as the RTT increases (perhaps because Achoo remains a block erasure coding scheme). If you have pseudocode or any prototype of Achoo, we would be interested in it so that we can make a fair comparison.
> 
> Emmanuel


We used a few different implementations over the lifetime of the project, but this was years ago, so unfortunately I'm not sure I have anything to share.  Our work had two distinct focuses: a) the impact on the network of large-scale use of Decongestion Control, and b) the design and implementation of the protocol itself.  For the former, John McCullough built a flow-based simulator (to try out larger topologies than ns-2 could handle).  For the latter, I built a Linux user-level implementation (which used library interposition to make TCP-based apps use Decongestion), and later we built a simplified ns-2 implementation.
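
(For anyone curious what the interposition trick looks like in general: below is a minimal, hypothetical C sketch of an LD_PRELOAD shim -- my illustration of the technique, not the code we actually used.  The policy shown, rewriting TCP stream sockets into UDP sockets so a user-level transport can sit underneath an unmodified app, is just an example.)

    /*
     * Minimal, hypothetical LD_PRELOAD shim illustrating library
     * interposition -- not the actual Decongestion shim.
     * Build:  gcc -shared -fPIC -o shim.so shim.c -ldl
     * Run:    LD_PRELOAD=./shim.so ./some_tcp_app
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <sys/socket.h>

    static int (*real_socket)(int, int, int);   /* the real libc socket() */

    int socket(int domain, int type, int protocol)
    {
        if (!real_socket)
            real_socket = (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");

        /* Example policy: turn plain TCP stream sockets into UDP sockets,
         * so an unmodified app can be driven by a user-level transport
         * layered on datagrams.  A complete shim would also wrap connect,
         * send, recv, listen, and accept. */
        if (domain == AF_INET && type == SOCK_STREAM) {
            type = SOCK_DGRAM;
            protocol = 0;
        }
        return real_socket(domain, type, protocol);
    }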

You're right that there is a potential issue with the naive scheme as the RTT increases -- this can be largely, though not completely, mitigated by a) using the ratio send approach rather than firehose, b) using SACKs (ours were a bit vector of blocks received), and c) overlapping the sending of coding windows, scaling one down in rate while scaling the next up (we only partially explored this).
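
To make the ratio idea concrete, here's a toy C sketch (illustrative only -- the parameter names and structure are mine, not from our implementation) of a sender pacing at a fixed fraction above its measured goodput, plus a SACK-style bit vector over coded blocks:

    /* Toy sketch of ratio-based decongestion pacing and a block SACK
     * bitmap; values and layout are illustrative only. */
    #include <stdint.h>

    #define RATIO        1.20     /* send 20% above measured goodput     */
    #define SACK_BLOCKS  1024     /* blocks covered by one SACK vector   */

    struct sack_vector {
        uint32_t base_block;               /* first block the bitmap covers */
        uint8_t  bits[SACK_BLOCKS / 8];    /* 1 = block received            */
    };

    /* Receiver side: mark a block as received. */
    void sack_mark(struct sack_vector *s, uint32_t block)
    {
        uint32_t off = block - s->base_block;
        if (off < SACK_BLOCKS)
            s->bits[off / 8] |= (uint8_t)(1u << (off % 8));
    }

    /* Sender side: given the goodput measured over the last interval,
     * compute the next pacing rate.  Firehose mode would instead always
     * return the line rate. */
    double next_send_rate(double goodput_bps)
    {
        return goodput_bps * RATIO;   /* e.g. 10 Mbps goodput -> 12 Mbps */
    }

In a real sender the goodput estimate would presumably come from the SACK vectors themselves, smoothed over at least an RTT.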

-Barath

