[e2e] [Iccrg] Re: Reasons not to deploy TCP BIC/Cubic

Detlef Bosau detlef.bosau at web.de
Fri Feb 3 04:12:42 PST 2012


On 02/02/2012 11:45 PM, Lachlan Andrew wrote:
>
> That post was ages ago, before Mirja reopened the thread.  Yes, it was
> saying that CUBIC induces excessive queueing delays.
>

Perfect ;-) So I can ask the author himself: Why?
>>>   I see TCP as a control system with a delayed feedback loop.
>> This is an always interesting analogy - however, at least to me, not a very
>> helpful one.
> It isn't an analogy.  Congestion control *is* a control system.  It is
> just not a simple one, and all tractable analyses have to model it as
> something much simpler than it is.
>

O.k., "not a simple one" is the correct view. Obviously, a simple view 
into the congavoid paper will help us here:

Look at the footnote on page 2:
> A conservative flow means that for any given time, the integral of the
> packet density around the sender–receiver–sender loop is a constant.
> Since packets have to ‘diffuse’ around this loop, the integral is
> sufficiently continuous to be a Lyapunov function for the system. A
> constant function trivially meets the conditions for Lyapunov
> stability so the system is stable and any superposition of such
> systems is stable. (See [3], chap. 11–12 or [21], chap. 9 for
> excellent introductions to system stability theory.)

I would never dare to talk about Lyapunov stability, particularly not 
in a job interview.

These are things where you either have been awarded the Nobel Prize in 
physics - or you simply stay quiet.

However, it clearly shows how much mathematical effort is needed to 
apply control theory to our problem. And perhaps the message of this 
footnote is nothing other than "with sufficient effort, it can be shown 
that at least VJCC and well-known control theory do not contradict each 
other."
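
For what it's worth, here is my reading of that footnote in symbols (a 
sketch in my own notation, not taken from the paper), in LaTeX:

% Let \rho(x,t) be the packet density along the sender-receiver-sender
% loop; take the loop integral as the Lyapunov candidate.
\[
  V(t) \;=\; \oint \rho(x,t)\,\mathrm{d}x \;=\; W \ \text{(constant)}
  \quad\Longrightarrow\quad
  \dot V(t) \;=\; 0 \;\le\; 0 ,
\]
% so V trivially satisfies the Lyapunov condition, the flow is stable,
% and since a sum of constants is constant, any superposition of
% conservative flows is stable as well.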

And from what I've read so far in papers that try to derive 
differential equations and the like for TCP, I'm even more convinced 
that an analytical approach to TCP stability can hardly be achieved 
with reasonable effort.


>
>> Exactly. And if you introduce queuing delay into the network, you will slow
>> down the reaction significantly.
> No.  With loss-based TCP, the feedback delay is of the order of the
> interval between packet losses, which is very much larger than the
> RTT.  (To see why that is the feedback delay, consider how the network
> signals "increase your rate"; it has to withhold the next expected
> packet loss.)  Although increasing queueing delays makes the RTT much
> higher, algorithms like CUBIC actually make the feedback delay much
> less, by causing more frequent losses.

More frequent losses cause more frequent retransmissions.

And regarding the queuing delay: A larger buffer will drop incoming 
packets later than a smaller one. Hence, it obviously defers a packet loss.
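
To put rough numbers on Lachlan's point about loss intervals, here is a 
back-of-the-envelope sketch in Python (my own assumptions: a fixed pipe 
of w_max packets, the commonly cited CUBIC constants C = 0.4 and a 
multiplicative-decrease factor of 0.7):

def reno_epoch(w_max, rtt):
    # Reno halves the window after a loss and adds one packet per RTT,
    # so it needs w_max/2 round trips to fill the pipe again.
    return (w_max / 2) * rtt

def cubic_epoch(w_max, beta=0.7, c=0.4):
    # CUBIC drops to beta*w_max and grows as W(t) = c*(t - k)**3 + w_max,
    # reaching w_max again after k seconds, independent of the RTT.
    return (w_max * (1 - beta) / c) ** (1 / 3)

w_max = 10000                    # packets in flight on a fat pipe
for rtt in (0.01, 0.1, 0.3):     # 10 ms, 100 ms, 300 ms
    print(f"RTT {rtt*1000:5.0f} ms: "
          f"Reno epoch {reno_epoch(w_max, rtt):7.1f} s, "
          f"CUBIC epoch {cubic_epoch(w_max):5.1f} s")

On a 100 ms path this gives roughly 500 s between self-induced losses 
for Reno but only about 20 s for CUBIC, whatever the RTT - which is 
exactly the "more frequent losses" above.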


> A better solution is provided by Compound TCP, which doesn't wait for
> losses.  By monitoring the delay, it  does  get feedback on the
> fastest possible timescale (an RTT).

Loss detection by delay observation is a topic I dealt with myself 
about a decade ago.

A very good paper on this one is:

> @article{martin,
>   author  = "Jim Martin and Arne Nilsson and Injong Rhee",
>   title   = "Delay-Based Congestion Avoidance for TCP",
>   journal = "IEEE/ACM Transactions on Networking",
>   volume  = "11",
>   number  = "3",
>   month   = "June",
>   year    = "2003",
> }

The major point of this paper is that delay changes and load changes 
take place on incomparable time scales.

What eventually made me leave this approach was the simple fact that it 
does what scientists call "ratio ex post". We observe a phenomenon 
which may have several causes, e.g. increasing delay, which may be 
caused by increasing load _OR_ by increasing service time, for instance 
on a wireless link, and then we get out the crystal ball to guess the 
cause that applies here.
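
To make the ambiguity concrete, here is a minimal sketch of such a 
detector in Python (my own naming and numbers, roughly in the style of 
Vegas/Compound delay estimation):

class DelayEstimator:
    def __init__(self, gamma=3.0):
        self.base_rtt = float("inf")  # smallest RTT seen: "empty queue"
        self.gamma = gamma            # packets of backlog we tolerate

    def on_rtt_sample(self, rtt, cwnd):
        self.base_rtt = min(self.base_rtt, rtt)
        expected = cwnd / self.base_rtt   # rate with an empty queue
        actual = cwnd / rtt               # rate actually achieved
        backlog = (expected - actual) * self.base_rtt
        return backlog > self.gamma       # True => "congestion"

est = DelayEstimator()
# The same RTT trace (100 ms rising to 150 ms, cwnd = 20) may come from
# a queue building up at the bottleneck _OR_ from a wireless link whose
# service time grows; the estimator flags both identically.
for rtt in (0.100, 0.120, 0.150):
    print(est.on_rtt_sample(rtt, cwnd=20))

Nothing in the RTT samples tells the two causes apart; the "congestion" 
verdict is exactly the crystal ball at work.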

A more hilarious illustration may be the following:

(Taken from http://yoursuccessfulchild.com/?p=95)
> I am reminded of a story I heard years ago which underscores the need 
> to rebuild trust by letting go of our assumptions. There was a woman 
> who was walking in the busy streets of New York City and she noticed 
> something that she thought was very peculiar. A man standing on a 
> street corner was clapping his hands every 3 seconds. As the woman 
> watched in amazement at this man’s consistent, yet odd behavior she 
> suddenly had the need to approach the man. As he continued to clap his 
> hands every 3 seconds she asked him why he was committed to this 
> unusual behavior. While continuing to clap, he explained that this 
> ritual kept the elephants away. “But there are no elephants in New 
> York City” she retorted. “Exactly, that’s why I continue to do this!”

And in that very sense, congestion detection by delay observation is a 
mixture of clairvoyance and hand clapping against elephants.

Unfortunately, "ratio ex post" is one of the most common mistakes made 
in science, and I made this several times in my life.

Congestion may be signaled by ECN, which can get lost, or by loss, which 
cannot get lost. It's as simple as that.


>   It doesn't need to increase the
> loss rate in order to allow responses to congestion.

Correct. You may send ECN to slow down the sender.

>   However, it does
> still take a long time to converge to a fair rate, because the
> underlying loss-based CWND still only responds on the same timescale
> as Reno does.

I did not spend that much time on the original papers on BIC and CUBIC, 
but at least it was not obvious to me how these approaches achieve 
fairness, whereas in VJCC this becomes clear quite easily.
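
For reference, the standard Chiu/Jain picture of why AIMD converges to 
fairness for two flows sharing one bottleneck (my sketch, not from the 
BIC/CUBIC papers), in LaTeX:

\[
  \text{increase: } (w_1, w_2) \mapsto (w_1 + \alpha,\; w_2 + \alpha),
  \qquad
  \text{decrease: } (w_1, w_2) \mapsto (\beta w_1,\; \beta w_2) .
\]
% Every additive increase leaves |w_1 - w_2| unchanged, while every
% multiplicative decrease shrinks it by the factor beta < 1; hence
% |w_1 - w_2| -> 0 over repeated cycles and both flows converge to
% equal shares.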

>    In my understanding, Compound is a strict improvement
> over Reno when there is one big flow (long-lasting and high-BDP)
> sharing with occasional cross-traffic, but reverts to Reno when there
> are just two or more big flows, if the buffers are large.  In
> contrast, in all cases, CUBIC has some benefits (mostly for the flow
> deploying CUBIC) and some drawbacks (mostly for the cross traffic)
> over Reno.

Do you happen to have a paper on Compound? Up to now, I only know 
Compound as "M$ rocket science" - and hardly anyone knows the details.

However, in summary we try to tackle large BDP, don't we?

And perhaps we should try to focus on the problem here - and not on 
what feels like the ten-thousandth botch to "improve" TCP/Reno etc.

Not to be misunderstood: TCP/Reno and the like _IS_ outstanding work. 
(I don't know whether VJ has already been awarded the ACM Turing Award; 
his groundbreaking work deserves it.) (We should also remember both the 
time passed since the congavoid paper, which has seen improvements but 
no fundamental changes since then, and the growth of the Internet from 
about 50 nodes back then to about 500 million nodes today, i.e. 7 
orders of magnitude, with the principles of the congavoid paper still 
holding. Both should be kept in mind to give this work adequate 
appreciation.)

However, it is in some sense a "one size fits all" work (where the 
range of sizes is outstanding, indeed). There may exist better 
solutions for particular cases, and perhaps it is worthwhile to have a 
look in that direction.



-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30	
70565 Stuttgart                            Tel.:   +49 711 5208031
                                            mobile: +49 172 6819937
                                            skype:     detlef.bosau
                                            ICQ:          566129673
detlef.bosau at web.de                     http://www.detlef-bosau.de
------------------------------------------------------------------



