[e2e] Why do we need congestion control?
vitacaishun at gmail.com
Wed Mar 6 04:29:22 PST 2013
As discussed in chapter 1 of your PhD thesis, when the network is congested,
retransmissions dominate the traffic and effective throughput diminishes
rapidly, leading to a deteriorating situation. This is illustrated in the
well-known figure with two turning points, the Knee and the Cliff.

I wonder, however, whether the situation is the same if a rateless erasure
code (say, a fountain code) is used. With an erasure code, no ACKs or
retransmissions are needed except once the whole file has been received. So
even when heavily loaded, the network is still busy carrying useful data
packets, right? Although queueing delay will increase, I believe that the
throughput will not plunge the way it does in an un-coded network.
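To make my intuition concrete, here is a toy sketch (my own model with made-up numbers, not anything from the thesis): even if an ideal rateless code makes every delivered packet useful, goodput is still capped at link capacity, and offered load beyond that only builds queue and delay.

```python
# Toy fluid model of a single bottleneck link (hypothetical numbers).
# With an ideal rateless code there are no retransmissions, so every
# delivered packet is useful -- but goodput still cannot exceed the
# link capacity; the excess offered load only grows the queue.

CAPACITY = 100.0  # link service rate, packets per second (assumed)

def coded_goodput(offered):
    """Goodput of an ideal rateless-coded flow: capped at capacity."""
    return min(offered, CAPACITY)

def queue_growth(offered):
    """Rate at which the bottleneck queue grows, packets per second."""
    return max(0.0, offered - CAPACITY)

def mm1_delay(offered):
    """Mean M/M/1 sojourn time; unbounded as load nears capacity."""
    return 1.0 / (CAPACITY - offered) if offered < CAPACITY else float('inf')

for offered in (50.0, 99.0, 150.0):
    print(offered, coded_goodput(offered), queue_growth(offered), mm1_delay(offered))
```

So coding seems to remove the retransmission spiral, but not the overload itself: persistent load above capacity still has to be shed somewhere.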
2013/3/5 Srinivasan Keshav <keshav at uwaterloo.ca>
> To answer this question, I put together some slides for a presentation at
> the IRTF ICCRG Workshop in 2007 [1]. In a nutshell, to save costs, we
> always size a shared resource (such as a link or a router) smaller than the
> sum of peak demands. This can result in transient or persistent overloads,
> reducing user-perceived performance. Transient overloads are easily
> relieved by a buffer, but persistent overload requires reductions of source
> loads, which is the role of congestion control. Lacking congestion control,
> or worse, with an inappropriate response to a performance problem (such as
> by increasing the load), shared network resources are always overloaded
> leading to delays, losses, and eventually collapse, where every packet that
> is sent is a retransmission and no source makes progress. A more detailed
> description can also be found in chapter 1 of my PhD thesis [2].
> Incidentally, the distributed optimization approach that Jon mentioned is
> described beautifully in [3].
> hope this helps,
> [1] S. Keshav, Congestion and Congestion Control, presentation at the IRTF
> ICCRG Workshop, PFLDnet 2007, Los Angeles, California, USA, February 2007.
> [2] S. Keshav, Congestion Control in Computer Networks, PhD thesis,
> published as UC Berkeley TR-654, September 1991.
> [3] Palomar, Daniel P., and Mung Chiang. "A tutorial on decomposition
> methods for network utility maximization." IEEE Journal on Selected Areas
> in Communications 24.8 (2006): 1439-1451.
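As a toy illustration of the collapse described above (my own sketch with assumed parameters, not a model taken from the thesis): treat the bottleneck as an M/M/1/B queue and let sources blindly retransmit every lost packet after a timeout, so the wire load satisfies lam = offered / (1 - loss(lam)). Below capacity this fixed point converges and goodput equals the offered load; above it, retransmissions feed on themselves and the load diverges, the Cliff.

```python
import math

# Toy congestion-collapse model (assumed parameters, illustrative only):
# bottleneck = M/M/1/B queue; sources retransmit every lost packet, so
# total wire load is a fixed point of lam = offered / (1 - loss(lam)).

C = 100.0  # service rate, packets per second (assumed)
B = 10     # bottleneck buffer, packets (assumed)

def loss_prob(lam):
    """M/M/1/B blocking probability at arrival rate lam."""
    rho = lam / C
    if abs(rho - 1.0) < 1e-9:
        return 1.0 / (B + 1)
    inv = 1.0 / rho  # this form avoids float overflow for large rho
    return (inv - 1.0) / (inv ** (B + 1) - 1.0)

def total_load(offered, iters=1000):
    """Iterate lam = offered / (1 - loss_prob(lam)); inf if it diverges."""
    lam = offered
    for _ in range(iters):
        p = loss_prob(lam)
        if p >= 1.0 or lam > 1e6 * C:  # load blew up: collapse
            return math.inf
        lam = offered / (1.0 - p)
    return lam

for offered in (50.0, 90.0, 110.0):
    lam = total_load(offered)
    if math.isinf(lam):
        print(offered, "-> collapse: retransmissions dominate the wire")
    else:
        print(offered, "-> wire load", round(lam, 1),
              "goodput", round(lam * (1.0 - loss_prob(lam)), 1))
```

Under capacity the retransmissions are a bounded overhead; past it, every reduction in per-packet success only increases the load, which is why relief has to come from the sources cutting back rather than from retrying harder.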