[e2e] 10% packet loss stops TCP flow

Reiner Ludwig Reiner.Ludwig at ericsson.com
Tue Mar 1 21:47:18 PST 2005

I appreciate the e2e argument very much but ...

At 14:02 28.02.05, David P. Reed wrote:
>Interesting theoretical discussion, but remember that TCP was designed 
>to run on networks with low packet loss rates, or networks that use 
>link-level retransmission or FEC to get packet loss rates to less than 
>1%.  This is what was meant by "best efforts".

... this rule with the "magic number 1%" simply cannot hold. In the future we will see wireless links running at speeds > 1 Gb/s. Given that the RTT is lower bounded by the laws of physics, we end up with huge bandwidth x delay products. And we know that TCP does not perform well with a loss rate of 1% in such a regime [FAST-TCP, HS-TCP, Scalable-TCP, etc.]. No single magic number will work for all regimes.
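To make the point concrete, the well-known Mathis et al. approximation for steady-state TCP Reno throughput (MSS/RTT * sqrt(3/2)/sqrt(p)) shows how far 1% loss falls short at even a modest RTT; the numbers below (1460-byte MSS, 100 ms RTT) are illustrative, not from the post:

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP Reno throughput in bytes/s,
    per the Mathis et al. model: (MSS/RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# 1% loss, 100 ms RTT, 1460-byte MSS (illustrative values)
bps = mathis_throughput(1460, 0.100, 0.01) * 8
print(f"{bps / 1e6:.1f} Mb/s")  # roughly 1.4 Mb/s -- nowhere near 1 Gb/s
```

At 1% loss the model caps a single flow around 1.4 Mb/s here; sustaining 1 Gb/s at that RTT would require a loss rate orders of magnitude lower, which is why no single magic number covers all regimes.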

One rule of thumb that works, though, is to make the link-level retransmission completely persistent (don't give up until the link is declared DOWN). That way any errors or variations on the link are translated into congestion, and that is something that at least the rate-adaptive end-points understand.

So, maybe "best effort" should mean that we try as hard as possible to successfully transmit packets, but if congestion builds up we will at some point need to drop (mark) packets.


More information about the end2end-interest mailing list