[e2e] [Iccrg] Re: Reasons not to deploy TCP BIC/Cubic

Daniel Havey dhavey at yahoo.com
Wed Feb 1 14:20:13 PST 2012

Hehe ;^)  I agree with Detlef.  We have to be more careful with our terminology, and we have to be specific about the network conditions that we are talking about.

For instance, we were talking about delay.  Lachlan said something to the effect that Cubic causes large delays.  I don't think it does.

We are both right.  Lachlan sees Cubic's delay as larger than Reno.  Therefore it is large.  I see the delay as insignificant compared to the 30 second playout buffer that I am working with.

Soooo, I have an issue with the term "quickly".  I see TCP as a control system with a delayed feedback loop.  The feedback is an ACK or a dupAck.  How quickly a TCP reacts is completely dependent on RTT (how long it takes for the feedback signal to return to the sender).  If I send a packet and it is lost, I will not know it until the ACK from the next packet reaches me.  If the packet is received, I should know about it slightly sooner.  This is how quickly a TCP can react to changes in the network.
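To make the "how quickly" point concrete, here is a toy sketch (my own illustration, not any real TCP stack; the function names and timings are assumptions): the earliest a sender can learn about a loss is roughly one RTT after the packet *following* the lost one was sent, since that packet's dupACK is the feedback signal.

```python
# Toy timeline arithmetic for TCP feedback delay.
# Assumption: feedback (ACK or dupACK) arrives one RTT after the
# packet that triggered it was sent; queueing is ignored.

def earliest_ack_feedback(send_time, rtt):
    """A normal ACK returns about one RTT after its packet was sent."""
    return send_time + rtt

def earliest_loss_feedback(next_send_time, rtt):
    """A dupACK for a lost packet is triggered by the *next* packet,
    so it returns about one RTT after that next packet was sent."""
    return next_send_time + rtt

# With a 100 ms RTT and back-to-back sends 1 ms apart:
rtt = 0.100
print(earliest_ack_feedback(0.0, rtt))        # ~0.100 s
print(earliest_loss_feedback(0.001, rtt))     # ~0.101 s
```

Either way, the reaction time is dominated by the RTT, not by the congestion control algorithm.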

I think what Mirja is talking about is the aggressiveness of the response to a feedback signal.  This is determined by the congestion control algorithm.  Cubic tends to be more aggressive than Reno.

PS...It's a lot of work to define the scenario and the terminology.  However, once we have done that, we might actually solve a problem ;^)

--- On Wed, 2/1/12, Detlef Bosau <detlef.bosau at web.de> wrote:

> From: Detlef Bosau <detlef.bosau at web.de>
> Subject: Re: [e2e] [Iccrg] Re: Reasons not to deploy TCP BIC/Cubic
> To: end2end-interest at postel.org
> Date: Wednesday, February 1, 2012, 1:24 PM
> On 01/31/2012 05:22 PM, Mirja
> Kuehlewind wrote:
> > Hi everybody,
> > 
> > I know I'm quite late to enter this discussion, but I
> would like to raise
> > another question. I guess there was already a lot
> explanation on the e2e list
> > saying (more or less) that Cubic is more aggressive but
> it is also able to
> > react more quickly on network changes (mainly sudden
> increases in bandwidth).
> First of all, I would really like to avoid the term
> "bandwidth" in our context.
> To make a long story short and come to the point: bandwidth
> is given in Hertz, throughput is given in bits per second.
> At least in the papers I've read so far, this "sloppiness" has
> led to much confusion and in some cases to simply wrong
> results.
> We should acknowledge the fact that there are e.g. wireless
> networks around where this difference is absolutely
> crucial.
> > With e.g. TCP Reno the time between two loss events is
> increasing with the
> > network capacity. So it might take a long time to
> actually detect that there
> > is more bandwidth available in a network with large
> capacity. If we want to
> It is not bandwidth that we detect, but network
> capacity. And a TCP flow may share this capacity with
> others. So at least one issue is the coexistence of
> different TCP schemes.
> > speed up this probing process we would need more
> frequent loss events causing
> > a higher loss rate...? I guess one solution to this
> problem would be ECN.
> > That means having a high probing rate but no losses.
> I'm wondering if there
> > is any other solutions?
> > 
> To my understanding, even an ECN signal is respected only once
> per round. Please correct me if I'm wrong.
> The more I think about BIC and CUBIC, the more I find myself
> asking what the appropriate problem for this nice solution is.
> I'm not quite sure whether there exists a suitable problem
> for just more aggressive probing.
> And perhaps we should clearly focus the problem - afterwards
> we may propose and discuss suitable solutions.
> --
> ------------------------------------------------------------------
> Detlef Bosau
> Galileistraße 30
> 70565 Stuttgart
> Tel.:   +49 711 5208031
> mobile: +49 172 6819937
> skype:  detlef.bosau
> ICQ:    566129673
> detlef.bosau at web.de 
>    http://www.detlef-bosau.de
> ------------------------------------------------------------------
