[e2e] Why do we need TCP flow control (rwnd)?

Paddy Ganti pganti at gmail.com
Mon Jun 30 17:56:30 PDT 2008


Nice to learn from your write-up as well. Just one more point where rwnd
helps:

  Imagine a middlebox that acts as a TCP relay/forwarder or a PEP
(performance-enhancing proxy). In this case rwnd control provides adequate
buffering on both ends (the read and the write socket), thus increasing the
total throughput. Without rwnd, the optimal throughput of each individual
connection is never reached.
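
A rough sketch of such a relay, just to make the point concrete (a blocking
forwarder in Python; nothing here is taken from a real PEP implementation):

    import socket

    def relay(upstream: socket.socket, downstream: socket.socket,
              chunk: int = 65536) -> None:
        # Copy bytes from the upstream connection to the downstream one.
        # When the downstream peer is slow, sendall() blocks once the
        # downstream send buffer fills; we then stop reading from upstream,
        # its receive buffer fills, the advertised rwnd shrinks, and the
        # original sender slows down without any packet loss.
        while True:
            data = upstream.recv(chunk)
            if not data:
                break
            downstream.sendall(data)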

-Paddy Ganti

On Fri, Jun 27, 2008 at 9:13 PM, Xiaoliang David Wei <weixl at caltech.edu>
wrote:

>     I really enjoyed reading this thread -- lots of wisdom and history from
> what all the gurus have said. :)
>
>     I would add my two cents from my experience of playing with rwnd:
>
> 1. rwnd is not critical for the correctness of TCP. So, yes, we can remove
> rwnd without breaking TCP's correctness.
>
>     The TCP algorithm is robust enough to guarantee reliability and avoid
> congestion on its own. If we remove rwnd, the receive buffer is simply
> viewed as part of (the last hop of) the network in the congestion control
> model, and the receiver dropping a packet (for lack of buffer) becomes a
> congestion signal telling the sender to slow down. This will work, though
> the sender now has to "guess" the receiver's buffer under the *same*
> assumption it makes about network congestion, and the guessing function
> will be the same congestion control algorithm -- AIMD or whatever
> loss-based algorithm is in use (not necessarily a sawtooth if the
> algorithm is not AIMD). So removing rwnd control is OK (if perhaps less
> efficient), and works well when the receiving application is not the
> bottleneck, or when its processing pattern is similar to the network's.
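>
> A rough sketch of the sender-side difference, just to make this concrete
> (window bookkeeping in Python; cwnd and rwnd are in bytes, and none of
> this is taken from a real stack):
>
>     def usable_window(cwnd: int, rwnd: int, bytes_in_flight: int,
>                       ignore_rwnd: bool = False) -> int:
>         # Normal TCP: the sender is limited by BOTH windows.
>         # Without rwnd control, only cwnd limits the sender, and a full
>         # receive buffer shows up later as packet loss.
>         limit = cwnd if ignore_rwnd else min(cwnd, rwnd)
>         return max(0, limit - bytes_in_flight)
>
>     def on_loss(cwnd: int, mss: int = 1460) -> int:
>         # With rwnd removed, the same AIMD decrease that "guesses"
>         # network capacity now also has to guess the receiver's buffer.
>         return max(mss, cwnd // 2)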
>
>
> 2. Why would we want to remove rwnd control (John's question)?  rwnd has
> its own strengths and weaknesses:
>
>     Pro: rwnd control is very good at avoiding buffer overflow -- no loss
> will ever happen for lack of receive buffer (unless the OS reneges on
> buffer space it has already advertised).
>
>     Con: However, rwnd is not very good at using the buffer efficiently,
> especially when the buffer is small. With rwnd control, we have to
> allocate a full bandwidth-delay product (BDP) worth of buffer at the
> receiver to fully utilize the network capacity. Yet this BDP worth of
> buffer is not always necessary at all -- think of an extreme case in
> which the receiving application has much larger processing capacity, and
> each packet arriving at the receiver can be consumed immediately by the
> application: we only need one packet worth of buffer to hold the packet
> being received. But with rwnd control, the sender will send at most rwnd
> worth of packets each RTT, even if nothing ever queues up at the
> receiver! (As David Reed pointed out, rwnd should indicate the receiving
> app's processing capacity, but unfortunately the current way of
> indicating it is through available buffer size, which is not always
> accurate.)
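>
> (A back-of-the-envelope sketch of the numbers involved -- the link
> figures below are made up purely for illustration:)
>
>     # Receive buffer needed to keep a path fully utilized under rwnd
>     # control: one bandwidth-delay product (BDP).
>     bandwidth_bps = 100e6      # assume a 100 Mbit/s path
>     rtt = 0.080                # assume an 80 ms round-trip time
>     bdp_bytes = bandwidth_bps / 8 * rtt
>     print(f"BDP = {bdp_bytes / 1e6:.1f} MB of buffer")   # ~1.0 MB
>     # ...even when the application drains every packet immediately and a
>     # single MSS of buffer would have sufficed.
>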
>     This trouble was particularly obvious with the majority of OS
> implementations of the last generation. As much research (e.g. Web100)
> pointed out a few years ago, most TCP connections are bounded by the very
> small default buffers of Windows and also Linux. While it is easy to
> change the server's send buffer, the clients' receive buffers (sitting in
> millions of customers' Windows boxes) are hard to change. So, if we could
> remove rwnd control (e.g. have the sender ignore rwnd and rely only on
> congestion control), we might improve connection speed without even
> incurring extra loss, as long as the receivers can process all the
> packets quickly. I remember some of the network enhancement units on the
> market actually offer such a feature (along with other features to reduce
> the negative effects of ignoring rwnd). This argument, however, will
> probably weaken as Vista and Linux 2.6 both ship with buffer auto-tuning.
>
> 3. rwnd is very important for the responsiveness and adaptability of TCP.
> So, no, please don't remove rwnd until you have a good solution for all
> TCP usages. :)
>
>     TCP is used almost universally for reliable traffic. Bulk traffic,
> where the network is the bottleneck, usually satisfies the condition
> above that the receiver is not the bottleneck. However, there are also
> many cases where the receiver is slow, or where the receiver's processing
> pattern is completely different from a network router's (and hence the
> congestion control algorithm's estimate goes completely off).
>
>     Take a networked printer as an example. When a networked printer runs
> out of paper, its data-processing capability quickly drops to zero and
> stays there for minutes; then, once paper is fed in, its capacity quickly
> jumps back to normal. This on-off pattern is very different from most
> network congestion, and I don't see how TCP congestion control algorithms
> could handle such a case responsively. Here rwnd control has its own
> advantage: great responsiveness (through preventive control, explicit
> notification when the buffer opens up, etc.).
>
>     Note that to achieve such responsiveness, rwnd control is designed to
> be very conservative and preventive -- the sender (at this moment) can
> send at most as much data as the receiver (half an RTT ago) said it could
> receive. This conservativeness guarantees that no packet will be dropped
> even if the application completely shuts down its processing right after
> announcing the rwnd. ECN and other explicit congestion control schemes
> provide no such guarantee and cannot achieve the same responsiveness to a
> sudden shutdown of capacity.
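>
> (In the usual textbook accounting, that conservativeness amounts to the
> following one-liner -- a sketch, not tied to any particular stack:)
>
>     def advertised_rwnd(rcv_buffer: int, last_byte_rcvd: int,
>                         last_byte_read: int) -> int:
>         # The receiver advertises only space that is already free, so
>         # even if the application stops reading right after this
>         # advertisement, everything the sender is still allowed to put
>         # in flight fits in the buffer.
>         return rcv_buffer - (last_byte_rcvd - last_byte_read)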
>
>     I think there are a lot of other applications with very different
> processing patterns, and it is very hard for one algorithm to predict all
> of these patterns efficiently.
>
>     So, my understanding here is:
>     A. If the receiver is very fast, we don't need rwnd control at all;
>     B. If the receiver's processing pattern is similar to network
> congestion, and TCP congestion control does a good job, we don't need
> rwnd either;
>     C. The two "if"s in A and B may hold in some cases, but not in all
> usage cases. I don't expect TCP to work as universally well as it
> currently does if we don't have rwnd control.
>
>
> -David
>
>
> On Thu, Jun 26, 2008 at 12:38 AM, Michael Scharf <
> michael.scharf at ikr.uni-stuttgart.de> wrote:
>
>> Hi,
>>
>> maybe this is a stupid question: Is there really a need for the TCP
>> flow control, i.e., for signaling the receiver window back to the
>> sender?
>>
>> It is well known that TCP realizes both congestion control and flow
>> control, and that a TCP sender therefore maintains two different
>> windows (cwnd and rwnd). Obviously, the congestion control protects
>> the path from overload, while the flow control protects the receiver
>> from overload.
>>
>> However, I have some difficulty understanding why the flow control
>> part and the receiver advertised window are actually needed.
>>
>> Instead of reducing rwnd, an overloaded receiver running out of buffer
>> space could simply drop (or mark) newly arriving packets, or just
>> refrain from sending acknowledgements. In reaction to this, the sender
>> would probably time out, and TCP congestion control would significantly
>> reduce the sending rate, which reduces the load on the receiver, too.
>>
>> To my understanding, a fine-granular receiver advertised window is
>> much more efficient if the buffer is only on the order of a few
>> packets. But I guess that most of today's Internet hosts have larger
>> buffers, and therefore they hardly need such fine-granular flow
>> control.
>>
>> Are there reasons why TCP can't just use its congestion control to
>> handle slow receivers?  Am I overlooking some aspect?  Any hint or
>> reference would be welcome.
>>
>> Michael
>>
>>
>
>
> --
> Xiaoliang "David" Wei
> http://davidwei.org
> ***********************************************
>