[e2e] Why do we need congestion control?

Daniel Havey dhavey at yahoo.com
Sat Mar 16 20:25:25 PDT 2013


Hi Fred,

 
> We now have at least two AQM algorithms on the table that
> auto-tune.
> 
> As such, while the general statement of 2309 is a good one -
> that we need to implement AQM procedures - the specific one
> it recommends turned out to not be operationally useful. 
> 
> As to the specific statement that Lloyd seeks, and notes
> that the TCP community argues for one specific answer, I'll
> note that operationally-deployed TCP has more than one
> answer. 
Well, perhaps not all of the TCP community argues for one specific answer.

Let's think about Joe's thesis.

FEC and ARQ are complementary:

    FEC (including erasure codes) always completes in finite time,
    but has a nonzero probability it will fail (i.e., that the
    data won't get through at all)

    ARQ always gets data through with 100% probability when
    it completes, but has a nonzero chance of taking an
    arbitrarily long time to do so

This tells us that if we have a horrible (long) RTT, then TCP retransmission will be a drag.  However, if we have a nice (short) RTT, then TCP retransmission is exactly what we want.
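
To make that tradeoff concrete, here is a toy back-of-the-envelope sketch in Python (my own illustration, not anything from Joe's or Fred's mail): it compares the expected completion time of simple stop-and-wait ARQ, which grows with RTT and loss rate, against the residual failure probability of an (n, k) erasure code, which does not depend on RTT at all.  The loss model, the path numbers, and the (14, 10) code are all assumed for illustration.

from math import comb

def arq_expected_time(rtt_s, loss):
    """Expected completion time of stop-and-wait ARQ under i.i.d. loss:
    the sender repeats until one copy gets through, so the expected
    number of attempts is 1/(1 - loss), each costing roughly one RTT."""
    return rtt_s / (1.0 - loss)

def fec_failure_prob(k, n, loss):
    """Probability that an (n, k) erasure code fails outright: the block
    is unrecoverable if fewer than k of the n packets arrive."""
    p_ok = 1.0 - loss
    p_success = sum(comb(n, i) * p_ok**i * loss**(n - i)
                    for i in range(k, n + 1))
    return 1.0 - p_success

for rtt in (0.010, 0.600):      # ~10 ms LAN path vs. ~600 ms satellite path
    for loss in (0.01, 0.10):
        print("RTT=%4.0fms loss=%3.0f%%: ARQ expects ~%6.1f ms, "
              "FEC(14,10) fails with p=%.1e"
              % (rtt * 1e3, loss * 1e2,
                 arq_expected_time(rtt, loss) * 1e3,
                 fec_failure_prob(10, 14, loss)))

With these toy numbers, 10% loss on the 600 ms path pushes ARQ's expected completion time toward ~670 ms per packet, while the erasure code's residual failure probability is the same at either RTT.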

One solution does not fit all.

...Daniel

> There are Microsoft's congestion control algorithm,
> NewReno in FreeBSD, and CUBIC in Linux. There are other
> algorithms such as CAIA CDG that also fill the bottleneck in
> a path but manage to do so without challenging the cliff,
> which at least in my opinion is a superior model.
> 
> Similarly, it is difficult to argue that everyone has to
> implement the same AQM algorithm. What is reasonable without
> doubt is that whatever algorithm is implemented, the
> requirement is that it manage queue depths to a
> statistically shallow queue.
> 
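
To give that "statistically shallow queue" requirement a concrete shape, here is a minimal RED-style sketch in Python (purely illustrative; the thresholds, the EWMA weight, and the linear drop ramp are my assumptions, not a prescription for any particular AQM): the drop/mark probability rises as a smoothed estimate of queue length crosses a low threshold, so the queue stays statistically shallow while still absorbing short bursts.

import random

MIN_TH, MAX_TH = 5, 15   # packets: start marking early, never let the average exceed this
MAX_P, W = 0.10, 0.002   # peak drop probability and EWMA weight (illustrative values)

avg_q = 0.0              # smoothed (EWMA) queue-length estimate

def on_enqueue(instant_qlen):
    """Return True if the arriving packet should be dropped or ECN-marked."""
    global avg_q
    avg_q = (1.0 - W) * avg_q + W * instant_qlen
    if avg_q < MIN_TH:
        return False     # queue is shallow: accept everything
    if avg_q >= MAX_TH:
        return True      # queue is persistently deep: push back hard
    # Between the thresholds, drop with probability rising linearly to MAX_P.
    p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

Whether the thresholds are fixed as above or auto-tuned (as in the newer algorithms Fred mentions), the effect is the same: the queue is kept statistically shallow rather than allowed to sit full.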



