[e2e] TCP Cubic

Lachlan Andrew lachlan.andrew at gmail.com
Sun Dec 11 15:05:00 PST 2011


Greetings,

On 11 December 2011 18:22, Yan Cai <ycai at ecs.umass.edu> wrote:
> A few thoughts on comparison between TCP Cubic and Newreno.
>
> 1. NewReno-like TCPs use additive increase and multiplicative decrease (AIMD),
> while Cubic is one of the MIMD TCPs.

No, Cubic is not MIMD.  (STCP was the only well-known MIMD proposal,
to my knowledge.)

>
> 2. To me, the most significant reason for adopting an MIMD TCP is the
> inability of AIMD TCPs to grab bandwidth on a large bandwidth-delay
> link: an AIMD TCP takes a very long time to make the pipe "congested."

That depends on what is added.  It is easy to use AIMD and add an
amount per RTT that is proportional to the RTT.  (That is still AI
because it is independent of the current window.)  Using bandwidth
estimation techniques, it is also possible to use AI that adds an
amount roughly proportional to the capacity of the lowest-capacity
link on the path.  On a high-BDP path, this typically gives
convergence time roughly independent of the BDP.

Of course, adding large amounts per RTT causes other big problems; my
point is just that "AIMD" is much more flexible than "Reno", and
needn't suffer Reno's limitations.
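
To make that concrete, here is a toy Python sketch of three per-RTT
increase rules that are all "additive" in the sense that the increment
is independent of the current window.  The constants are made up for
illustration; apart from the first, which is Reno's rule, none of
these is any real proposal's update.

# Toy per-RTT congestion-avoidance updates, all "additive increase"
# because the increment does not depend on the current window.
# Only the first is Reno's rule; the constants in the other two are
# illustrative, not taken from any particular proposal.

MSS = 1448  # bytes, assumed segment size

def ai_reno(cwnd, rtt, est_capacity_bps):
    # Classic Reno: one MSS per RTT, independent of RTT and capacity.
    return cwnd + MSS

def ai_rtt_proportional(cwnd, rtt, est_capacity_bps, k=100_000):
    # Add an amount proportional to the RTT (here k bytes per second
    # of RTT), so the growth rate in bytes/second is RTT-independent.
    return cwnd + k * rtt

def ai_capacity_proportional(cwnd, rtt, est_capacity_bps, gamma=0.001):
    # Add an amount proportional to the estimated bottleneck capacity
    # (here gamma seconds' worth of it per RTT), so the time to reach
    # the BDP does not blow up as capacity grows.
    return cwnd + gamma * est_capacity_bps / 8

# Example: 100 ms RTT, 1 Gbit/s estimated bottleneck, 100 kB window.
for rule in (ai_reno, ai_rtt_proportional, ai_capacity_proportional):
    print(rule.__name__, rule(100_000, 0.1, 1e9))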

> 3. Goodput/throughput vs link utilization.
> Goodput/throughput can be used to characterize both a TCP session and a
> bottleneck link while link utilization is mainly for the bottleneck link.
> However, both of them are heavily impacted by the buffer size at the
> bottleneck link. In the case of TCP NewReno, the rule of thumb, that is,
> buffer size = RTT * bandwidth, has been shown to provide 100% link
> utilization for the bottleneck link while achieving the smallest average RTT
> for the TCP session. A bigger buffer leads to larger RTTs (over-buffering) while
> a smaller one makes the link utilization lower than 100% (under-buffering).

Yes, Daniel's comment that buffering isn't a big contributor to the
increase in delay ignores this rule of thumb.  It is well known that
Cubic causes high delay compared to Reno, because it keeps the buffers
almost full.  This is devastating on typical DSL links which are
massively over-buffered.
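
To put numbers on why that is devastating, here is a quick
back-of-the-envelope sketch; the link rate and buffer size are
illustrative of a DSL uplink, not measurements from anyone's
experiment.

# Back-of-the-envelope queueing delay when a loss-based TCP keeps an
# over-sized buffer nearly full.  Link rate and buffer size are
# illustrative of a DSL uplink, not measured values.

link_rate_bps = 1e6      # 1 Mbit/s uplink
base_rtt_s    = 0.05     # 50 ms round-trip time

bdp_bytes = link_rate_bps * base_rtt_s / 8
print("rule-of-thumb (BDP) buffer: %.0f bytes" % bdp_bytes)   # ~6250

buffer_bytes = 256 * 1024   # a not-unusual 256 kB modem buffer
standing_delay_s = buffer_bytes * 8 / link_rate_bps
print("delay with that buffer kept full: %.1f s" % standing_delay_s)  # ~2.1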

Note that the ubiquitous "buffer=BDP" rule of thumb has been widely
challenged.  Look for Nick McKeown's and Damon Wischik's
contributions.  Srikant also had a good INFOCOM paper studying how
the transmission completion time (a much more useful measure than
link utilization) varies with buffer size.

> It seems that (MIMD) TCP performance is not only determined by the
> congestion avoidance algorithm, but also affected by the network topology
> and parameters. A detailed description of the experimental setup would help
> readers understand the experimental results better.

Not really.  The results are pretty standard.  The main driver for
"high speed" TCP variants was to provide higher throughput in the
presence of occasional packet loss, and so it is natural that they
almost all have higher self-induced loss than Reno does.  (That
applies to HS-TCP and H-TCP as well.  It doesn't apply to delay-based
algorithms such as FAST, which have no intrinsic self-induced loss but
have their own set of problems.)
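
To put rough numbers on that, one can invert the published response
functions.  The sketch below uses the usual Reno approximation
(rate ~ 1.22*MSS/(RTT*sqrt(p))) and the HighSpeed TCP response
function (w ~ 0.12/p^0.835, RFC 3649); the path parameters are purely
illustrative.

# Loss rate each algorithm must sustain to be held at a given average
# window, from the usual Reno approximation and the HighSpeed TCP
# response function (RFC 3649).  Path parameters are illustrative.

MSS_bytes = 1500
rtt_s     = 0.1      # 100 ms path
rate_bps  = 1e9      # 1 Gbit/s bottleneck to fill

w_pkts = rate_bps * rtt_s / (8 * MSS_bytes)   # window that fills the pipe

p_reno  = (1.22 / w_pkts) ** 2                # Reno:   w ~ 1.22/sqrt(p)
p_hstcp = (0.12 / w_pkts) ** (1 / 0.835)      # HS-TCP: w ~ 0.12/p^0.835

print("window to fill the pipe : %.0f packets" % w_pkts)
print("Reno equilibrium loss   : %.1e" % p_reno)
print("HS-TCP equilibrium loss : %.1e" % p_hstcp)
# HS-TCP sits at a far higher loss rate: the price of being resilient
# to external loss is inducing more loss to stay at the same rate when
# the only loss is self-induced.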

In essence, each loss-based TCP induces a fixed amount of loss for a
given transmission rate.  In the absence of external loss (such as
caused by bursts of cross traffic), all will fill a pipe (explaining
Saverio's observation 3), and so those designed to be more resilient
to external loss will have to induce a higher level of loss to keep
themselves down to this rate (observation 2).  Cubic responds to loss
by reducing the window but racing back as quickly as possible to the
window size that caused the loss, and so it keeps the buffers perpetually
almost full (observation 1).  This design decision was the cause of
much heated debate about Cubic vs H-TCP (which responds to loss with a
much slower initial window growth, and hence actually has a smaller
mean delay than Reno).  There was a paper on this recently in LCN by
Lawrence Stewart and others.
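
To see concretely why Cubic hovers near the window that last filled
the buffer, here is a small sketch of the window curve from the CUBIC
paper (Ha, Rhee and Xu): W(t) = C*(t-K)^3 + W_max, with
K = cbrt(W_max*beta/C).  C and beta are the paper's suggested
constants; W_max is just an illustrative value.

# CUBIC's window after a loss: W(t) = C*(t - K)^3 + W_max, with
# K = cbrt(W_max*beta/C).  C and beta are the paper's defaults;
# W_max (the window at the last loss) is an illustrative value.

C     = 0.4       # scaling constant
beta  = 0.2       # window drops to (1 - beta) * W_max on loss
W_max = 1000.0    # packets

K = (W_max * beta / C) ** (1.0 / 3.0)   # ~7.9 s to climb back to W_max

for t in range(0, 14, 2):               # seconds since the loss
    w = C * (t - K) ** 3 + W_max
    print("t = %2d s   cwnd = %7.1f packets" % (t, w))

# The curve shoots up from 0.8*W_max, flattens as it nears W_max
# around t = K, and only then probes beyond it, so the window (and the
# bottleneck queue behind it) spends most of its time near the level
# that caused the previous loss.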

For anyone who is interested, this may be better discussed on the
ICCRG mailing list.

Cheers,
Lachlan

-- 
Lachlan Andrew  Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
<http://caia.swin.edu.au/cv/landrew>
Ph +61 3 9214 4837


