[e2e] TCP Cubic

Daniel Havey dhavey at yahoo.com
Sun Dec 4 13:22:19 PST 2011


This is interesting.  Could you please elaborate on why you think your experiments show these results?

1. a larger RTT?
Why?  Because of queuing?  It doesn't seem like queuing delay would add very much to the RTT.  A full 1000-segment queue will be drained by a 1 Gbps router in (1500 bytes * 8 bits/byte * 1000 packets) / 1 Gbps = 0.012 seconds.

What kind of RTTs are we talking about here?  If the average RTT is 200 ms then 12 ms is not a big deal.  If the average RTT is 10 ms then it is a big deal.
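As a quick sanity check, here is a minimal sketch of that arithmetic in Python.  The 1000-segment queue, 1500-byte segments, and 1 Gbps link are the numbers from above; the two base RTTs are the 200 ms and 10 ms cases just mentioned.

# Worst-case queuing delay added by a full drop-tail queue, using the
# numbers from above: 1000 segments of 1500 bytes behind a 1 Gbps link.

QUEUE_SEGMENTS = 1000
SEGMENT_BYTES = 1500
LINK_BPS = 1e9

drain_s = QUEUE_SEGMENTS * SEGMENT_BYTES * 8 / LINK_BPS
print(f"queue drain time: {drain_s * 1e3:.1f} ms")  # -> 12.0 ms

for base_rtt_ms in (200, 10):
    print(f"base RTT {base_rtt_ms:3d} ms: worst-case RTT inflation "
          f"{drain_s * 1e3 / base_rtt_ms:.0%}")
# -> 6% on a 200 ms path, but 120% on a 10 ms path.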

I think this result may be caused by something other than queuing delay.


2. A larger number of retransmissions?
Why did this happen?  How many more retransmissions: a lot more, like an order of magnitude, or just a little?  I think I have a theory as to how this could happen.

All TCPs act in the same general manner: bring the bottleneck queue to 100% utilization, lose a few packets, back off.  Repeat ad infinitum.

TCP Cubic in particular differs from NewReno because its window follows a cubic curve: a concave increase up to the previous CWND (Wmax), then a slow, nearly linear (TCP-friendly) region around Wmax, and finally a convex increase beyond it.  I think you called it "probing".  Good term ;^)
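To make that concrete, here is a minimal sketch of the curve, using the window function from the Cubic paper (Ha, Rhee, and Xu): W(t) = C*(t - K)^3 + Wmax.  The constants C = 0.4 and beta = 0.7 are the commonly cited defaults; Wmax = 100 segments is just an illustrative value.

# Cubic window curve after a loss event:
#   W(t) = C * (t - K)^3 + Wmax,  with  K = cbrt(Wmax * (1 - beta) / C)
# C = 0.4 and beta = 0.7 are the usual defaults; Wmax = 100 segments is
# an arbitrary illustrative value.

C = 0.4        # scaling constant
BETA = 0.7     # multiplicative decrease: cwnd drops to BETA * Wmax on loss
W_MAX = 100.0  # window (in segments) at the last loss event

K = (W_MAX * (1 - BETA) / C) ** (1 / 3)  # seconds to climb back to Wmax

def w_cubic(t):
    """Congestion window t seconds after the loss event."""
    return C * (t - K) ** 3 + W_MAX

print(f"K = {K:.2f} s")
for t in (0.0, K / 2, K, K + 1.0, K + 2.0):
    print(f"t = {t:5.2f} s   cwnd = {w_cubic(t):6.1f} segments")
# Concave climb from BETA * Wmax toward Wmax, a plateau around t = K,
# then convex growth beyond Wmax (the "probing" phase).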

So the only possible source of additional retransmissions is after we have reached 100% queue utilization, while we are losing a "few" packets before the congestion event causes the sender to back off the CWND.

It seems unlikely that this would occur during the concave phase of the Cubic algorithm, because a congestion event has just occurred and the queue at the bottleneck router has just been drained.  If it occurs during the TCP-friendly region then the behavior is the same as NewReno's.  If it occurs during the final convex (probing) phase then your Cubic TCP may be out of tune.

The congestion event is supposed to occur within the TCP-friendly region of the Cubic algorithm, near Wmax.  If the congestion event occurs during the probing phase of Cubic, this indicates that the previous CWND (Wmax) was much lower than the available space in the bottleneck router queue.
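Here is a small sketch of that argument: solve C*(t - K)^3 + Wmax = capacity for the time of the next loss and see which side of K it lands on.  C and beta are the defaults as before; the Wmax and capacity values are made up for illustration.

# Which phase does the next loss land in?  Solve
#   C * (t - K)^3 + Wmax = capacity
# for t and compare it to K: t <= K means the loss hits the concave /
# plateau region, t > K means it hits the convex probing phase, i.e.
# Wmax underestimated the available capacity (BDP plus queue space).
# C and BETA are the usual defaults; Wmax and capacity (in segments)
# are made-up illustrative values.

C, BETA = 0.4, 0.7

def loss_time(w_max, capacity):
    k = (w_max * (1 - BETA) / C) ** (1 / 3)
    delta = (capacity - w_max) / C
    # signed cube root, in case capacity dropped below Wmax
    t = k + abs(delta) ** (1 / 3) * (1 if delta >= 0 else -1)
    return t, k

for w_max, cap in ((100, 100), (40, 100)):
    t, k = loss_time(w_max, cap)
    phase = "probing (convex)" if t > k else "concave/plateau"
    print(f"Wmax = {w_max:3d}, capacity = {cap}: loss at t = {t:.2f} s "
          f"(K = {k:.2f} s) -> {phase}")
# Wmax equal to capacity: the loss lands exactly at the plateau (t = K).
# Wmax well below capacity: the loss lands deep in the probing phase.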

This is very scenario-dependent.  I have a graph of the queuing behavior of a Cubic TCP operating over a 10 Mbps link with an RTT of ~200 ms.

In my experiments the Cubic TCP does congest the queue during the probing phase; however, I still get plenty of goodput (~8-10 Mbps).

This is because I have a large RTT (TCP Cubic is probably not tuned for 200 ms).  The high goodput is probably because I have a low-speed link (10 Mbps).
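For reference, the bandwidth-delay product of that path (the 10 Mbps rate and 200 ms RTT are my scenario; the 1500-byte segment size is an assumption):

# Bandwidth-delay product for the scenario above: 10 Mbps link, ~200 ms
# RTT.  The 1500-byte segment size is an assumption.

LINK_BPS = 10e6
RTT_S = 0.200
SEGMENT_BYTES = 1500

bdp_bytes = LINK_BPS * RTT_S / 8
print(f"BDP = {bdp_bytes / 1e3:.0f} kB "
      f"= {bdp_bytes / SEGMENT_BYTES:.0f} segments")
# -> BDP = 250 kB = 167 segments: the pipe alone holds ~167 packets
#    before the bottleneck queue even starts to fill.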

...Daniel



> Dear all,
>
> We know that TCP BIC/Cubic is the default in Linux and, as a
> consequence, 50% of servers employ TCP BIC/Cubic.
>
> Our measurements say that there could be reasons not to deploy TCP
> BIC/Cubic. These reasons are, in our opinion, rooted in its more
> aggressive probing phase. In particular, in common network conditions,
> TCP BIC/CUBIC exhibits: 1. a larger average RTT with respect to TCP
> NewReno or TCP Westwood+; 2. a larger number of retransmissions with
> respect to TCP NewReno or TCP Westwood+; 3. larger throughput but the
> same goodput with respect to TCP NewReno or Westwood+.
