[e2e] 10% packet loss stops TCP flow

Jonathan Stone jonathan at dsg.stanford.edu
Fri Feb 25 13:58:25 PST 2005


In message <20050225204022.68BB386AE1 at mercury.lcs.mit.edu>,
Noel Chiappa writes:

>    > From: Roy Xu <kx2 at njit.edu>
>
>    > it seems to be a common understanding that if a TCP flow experiences
>    > 10% or more packet loss, the flow stops (i.e., attains 0 or meaningless
>    > throughput)
>    > ...
>    > my question is: what is the theoretical or analytical explanation for
>    > this observation?
>
>Exercise for the reader: if a TCP sender will time out and abort the
>connection when it receives no ACK to T retransmissions of a segment,
>given that it has N total segments to transmit, derive the function f
>which gives Pc = f(Pp, T, N), where Pc is the probability of successful
>completion of the transmission, and Pp is the probability of loss of any
>individual packet.

Noel,

I'm puzzled. Are you deliberately tempting your readers to fall into
the fallacy of assuming packet loss is statistically independent, and
thus of assuming Poisson behaviour of packets in the Internet?
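
Under that independence assumption, at least, the exercise has a tidy
closed form. A given segment causes an abort only if its original send
and all T retransmissions are lost, which happens with probability
Pp^(T+1); the transfer completes only if none of the N segments aborts,
so

    Pc = (1 - Pp^(T+1))^N

A minimal sketch of that arithmetic in Python (my framing, assuming
losses are independent, that an abort takes the original send plus all
T retransmissions to be lost, and that ACK losses are folded into Pp):

    def completion_probability(pp, t, n):
        # P(transfer completes): each of the n segments aborts exactly
        # when its original send and all t retransmissions are lost,
        # every loss being an independent event of probability pp.
        p_abort = pp ** (t + 1)
        return (1.0 - p_abort) ** n

    # e.g. Pp = 10% loss, T = 5 retransmissions, N = 1000 segments:
    print(completion_probability(0.10, 5, 1000))   # ~0.999

Ten percent loss looks nearly harmless on that model, which is rather
the point.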

If so, sure, it's an educational exercise for readers who haven't
gotten that far.
But even so... for shame, sir :-/.
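
To make the objection concrete, compare the same average loss rate
generated by a bursty channel. The sketch below uses a two-state
Gilbert-Elliott loss model; the parameters are invented purely for
illustration, and the timing of TCP's backed-off retransmissions is
compressed to one channel step per send:

    import random

    def abort_prob_gilbert(p_gb, p_bg, loss_good, loss_bad, t,
                           trials=100_000):
        # Monte Carlo estimate of P(a segment aborts), i.e. its
        # original send and all t retransmissions are lost, on a
        # two-state Gilbert-Elliott channel with bursty losses.
        aborts = 0
        for _ in range(trials):
            # start the channel in its stationary distribution
            frac_bad = p_gb / (p_gb + p_bg)
            state = 'bad' if random.random() < frac_bad else 'good'
            lost_all = True
            for _ in range(t + 1):
                p_loss = loss_bad if state == 'bad' else loss_good
                if random.random() >= p_loss:
                    lost_all = False      # this send got through
                    break
                # the channel state evolves between sends
                if state == 'good' and random.random() < p_gb:
                    state = 'bad'
                elif state == 'bad' and random.random() < p_bg:
                    state = 'good'
            if lost_all:
                aborts += 1
        return aborts / trials

    # Both channels average ~10% loss; T = 5 retransmissions.
    print(0.10 ** 6)                      # independent: 1e-06
    print(abort_prob_gilbert(p_gb=0.05, p_bg=0.45,
                             loss_good=0.0, loss_bad=1.0, t=5))
                                          # bursty: about 5e-03

Same Pp, three-plus orders of magnitude more aborts. Pp alone does not
determine Pc; the loss process does.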

