[e2e] benchmarking new tcp congestion control algorithms

francesco at net.infocom.uniroma1.it
Sun Feb 11 19:55:48 PST 2007


Hi,

I read the paper and I think it is a good starting point; however, other
metrics should be included.
It seems that the focus is only on the capability of the algorithms to achieve
full link utilization (while maintaining fairness) and not on the congestion
control capabilities of the proposals, i.e. their ability to avoid congestion
and not to further stress an already stressed network.
The performance analysis should focus on the impact of new algorithms on the
network and on other flows.

**** BEGIN ADVERTISING ****
For instance, in our paper presented at PFLDnet07
(http://wil.cs.caltech.edu/pfldnet2007/paper/YeAH_TCP.pdf), we defined the
"Fair-to-Reno" factor as the ratio of the aggregated goodput of n Reno flows
competing against one Reno flow to the aggregated goodput of n Reno flows
competing against the selected algorithm. This metric indicates the effect
that new algorithms have on standard Reno congestion control (persistent
connections). We show that some algorithms steal a lot of bandwidth from Reno
connections even in scenarios where Reno performs well... Is this what we
expect from a new algorithm?
**** END ADVERTISING ****
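
As a rough sketch of how the Fair-to-Reno factor above could be computed from
simulation output (the goodput values below are hypothetical, purely for
illustration; this is not code from the paper):

    def fair_to_reno(goodput_reno_vs_reno, goodput_reno_vs_new):
        # Aggregated goodput of the n Reno flows when competing against one
        # extra Reno flow, divided by their aggregated goodput when competing
        # against the algorithm under test. Values well above 1 mean the new
        # algorithm takes bandwidth away from standard Reno.
        return sum(goodput_reno_vs_reno) / sum(goodput_reno_vs_new)

    # Hypothetical goodput measurements (Mbit/s) for n = 4 Reno flows:
    baseline    = [22.0, 24.5, 23.1, 25.0]  # vs. one extra Reno flow
    against_new = [ 9.8, 11.2, 10.4, 10.1]  # vs. the new algorithm
    print("Fair-to-Reno factor: %.2f" % fair_to_reno(baseline, against_new))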

But I think that this is not enough. We should also look at the effect of the
new proposals on Web traffic (and not only vice versa). E.g., high congestion
loss probabilities can block new web connections during the three-way
handshake and cause problems for small-window flows.
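
To make the handshake point concrete, here is a back-of-the-envelope sketch of
the extra connection setup delay caused by SYN losses, assuming each SYN is
dropped independently with probability p and the client backs off
exponentially from a 3 s initial RTO (a common default in older stacks; all
numbers are illustrative assumptions):

    def expected_syn_delay(p, initial_rto=3.0, max_attempts=6):
        # Attempt k (0-based) succeeds with probability (1 - p) * p**k and
        # has already waited through the k earlier timeouts, i.e.
        # initial_rto * (2**k - 1) seconds under exponential backoff.
        expected = 0.0
        for k in range(max_attempts):
            expected += (p ** k) * (1 - p) * initial_rto * (2 ** k - 1)
        return expected

    for p in (0.01, 0.05, 0.10):
        print("loss rate %.0f%%: ~%.2f s extra setup delay"
              % (100 * p, expected_syn_delay(p)))

Even a few percent of loss adds a noticeable fraction of a second on average,
and the unlucky connections wait for multiples of 3 s.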
In this sense, I think it is very important to look at the packet loss
probability induced by the congestion control algorithm (both the packet loss
probability and the number of losses per congestion event) and at the average
and standard deviation of the queue occupancy.
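
A minimal sketch of how these statistics might be extracted from a simulation
trace (the sample layout and values are assumptions for illustration, not the
format used by the benchmark scripts):

    import statistics

    # Hypothetical queue-monitor samples: (time_s, queue_len_pkts, drops)
    samples = [(0.1, 40, 0), (0.2, 85, 0), (0.3, 120, 2), (0.4, 60, 0)]

    arrivals  = 4000                          # assumed packets enqueued
    losses    = sum(d for _, _, d in samples)
    occupancy = [q for _, q, _ in samples]

    print("packet loss probability: %.4f" % (losses / arrivals))
    print("mean queue occupancy   : %.1f pkts" % statistics.mean(occupancy))
    print("stdev of occupancy     : %.1f pkts" % statistics.stdev(occupancy))

Counting losses per congestion event additionally requires grouping drops that
fall within one round-trip time of each other into a single event.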

As a second issue, to obtain comparable results we should also converge on a
set of basic TCP parameter values (I have not yet looked at the ns-2 scripts):
- initial slow start threshold (I suggest using limited slow start);
- TCP internal buffer sizes (I suggest 2*(bandwidth-delay product + bottleneck
buffer size); see the sketch after this list);
- etc.
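
For example, with a hypothetical 250 Mbit/s bottleneck, a 100 ms round-trip
time, 1500-byte packets and a 250-packet bottleneck queue (all numbers
illustrative), the suggested rule gives:

    # buffer = 2 * (bandwidth-delay product + bottleneck buffer size)
    PKT_BITS      = 1500 * 8    # assumed packet size
    link_rate_bps = 250e6       # hypothetical bottleneck rate
    rtt_s         = 0.100       # hypothetical round-trip time
    queue_pkts    = 250         # hypothetical bottleneck buffer

    bdp_pkts    = link_rate_bps * rtt_s / PKT_BITS
    buffer_pkts = 2 * (bdp_pkts + queue_pkts)
    print("BDP = %.0f pkts, TCP buffer = %.0f pkts" % (bdp_pkts, buffer_pkts))

which comes out to roughly 2083 packets of BDP and a TCP buffer of about 4667
packets, large enough that the sender is never window-limited by its own
buffers.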

As many of you have experienced, on high-speed links every single parameter
can have a strong impact on the overall result.

Francesco


Quoting Douglas Leith <doug.leith at nuim.ie>:

> I've put together a set of short ns scripts to carry out the tcp  
> benchmark tests described in
> 
> Experimental evaluation of high-speed congestion control protocols,
> Li, Y.-T., Leith, D., Shorten, R., Transactions on Networking, 2007
> (see http://www.hamilton.ie/net/eval/ToNfinal.pdf).
> 
> The ns scripts are at http://www.hamilton.ie/net/eval/tcptesting.zip.
> 
> It's now very easy to rerun these tests against proposed new
> congestion control algorithms.  Baseline tests for standard tcp,
> high-speed tcp, scalable tcp, bic tcp, fast tcp and htcp are reported
> in http://www.hamilton.ie/net/eval/ToNfinal.pdf, and these
> experimental measurements can be directly compared against the
> simulation results generated by the scripts.
> 
> Let me know if you have any comments.
> 
> Doug
> 



