[e2e] Why do we need TCP flow control (rwnd)?
fred at cisco.com
Tue Jul 1 16:56:01 PDT 2008
On Jul 1, 2008, at 3:52 PM, slblake at petri-meat.com wrote:
> Quoting John Day <day at std.com>:
>> No kidding. There are some textbook authors who have helped with this
>> too. I was shocked to see Stallings say in one of his books,
>> to the effect of 'In the early 90s we discovered network traffic was
>> Poisson.' (!) We had known *that* since the 70s!!! I remember prior
>> to 1976 scouring the literature looking for work that was closer to
>> real, i.e. didn't assume Poisson, or would give us the tools to get
>> closer to real.
> The following paper may be of interest to those following this thread:
> T. Karagiannis, M. Molle, M. Faloutsos, and A. Broido,
> "A Nonstationary Poisson View of Internet Traffic",
> IEEE Infocom 2004.
As the paper notes, the Poisson model tends to be a limiting case -
its results will be similar to but more conservative than one would
expect in reality. I use the equations too, because they are simple,
but with that caveat very explicitly in place.
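The "simple equations" in question are presumably the classic M/M/1 textbook results. A minimal sketch, with purely illustrative rates (not measurements), of the numbers one pulls out of them:

```python
# Classic M/M/1 results -- the kind of simple textbook formulae the
# caveat above applies to. lam = arrival rate, mu = service rate,
# both in packets/second. Numbers below are illustrative assumptions.

def mm1_stats(lam, mu):
    assert lam < mu, "queue is unstable when lam >= mu"
    rho = lam / mu          # utilization
    L = rho / (1 - rho)     # mean number in system (queue + in service)
    W = 1 / (mu - lam)      # mean time in system (Little's law: L = lam * W)
    return rho, L, W

rho, L, W = mm1_stats(lam=800.0, mu=1000.0)
print(f"utilization={rho:.2f}  mean occupancy={L:.1f}  mean delay={W*1e3:.1f} ms")
# -> utilization=0.80  mean occupancy=4.0  mean delay=5.0 ms
```

The attraction is obvious: two parameters in, closed-form answers out. The caveat is that real traffic violates the Poisson arrival assumption, so these answers are conservative bounds rather than predictions.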
The thing that I find a little hard to grasp is why folks might have
thought the network was Poisson in the first place.
A model such as M/M/1 says that traffic arrives completely randomly,
with one transmission having no effect on another, and departs equally
randomly. By implication, packet size is random, whether uniformly or
Gaussian distributed. Now think about the applications
you know of. In TCP traffic, I once read that 40% of all traffic was
pure ACKs and therefore 40 bytes long, and MSS-sized packets were
another perhaps 35%, with the remainder being randomly distributed
between the two. And TCP has three linkages between transmissions: the
arrival of data generally triggers the transmission of an ACK, the
arrival of an ACK generally triggers the transmission of more data,
and data, when transmitted, is usually sent in bursts of 2-3 packets.
It's not random, plain and simple. It is ack-clocked, and in many
cases bursty.
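The packet-size mix described above is strongly bimodal, which no uniform or Gaussian size assumption reproduces. A sketch of that mix (the rough percentages from the text, with an assumed 1460-byte MSS; none of this is a measurement):

```python
import random

# Sketch of the bimodal TCP packet-size mix: roughly 40% pure ACKs
# (40 bytes), perhaps 35% MSS-sized segments (1460 bytes assumed here),
# and the remainder spread between the two. Percentages are the rough
# figures quoted in the text, not measured data.
random.seed(1)

def tcp_packet_size(mss=1460):
    r = random.random()
    if r < 0.40:
        return 40                           # pure ACK
    elif r < 0.75:
        return mss                          # full-sized data segment
    else:
        return random.randint(41, mss - 1)  # everything in between

sizes = [tcp_packet_size() for _ in range(100_000)]
acks = sum(1 for s in sizes if s == 40) / len(sizes)
full = sum(1 for s in sizes if s == 1460) / len(sizes)
print(f"ACK-sized: {acks:.1%}  MSS-sized: {full:.1%}  other: {1 - acks - full:.1%}")
```

A histogram of these sizes shows two sharp spikes at 40 and 1460 bytes, not the smooth unimodal curve the M/M/1 service assumption implies.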
Further, if there were a random process generating data, it would be at
one interface. After the data went through the queuing structures of
that interface, it would go to another interface, and another. Traffic
that goes through multiple queuing systems, even if it was Poisson in
the beginning, has a different distribution by the time it emerges.
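The effect of a queue on a Poisson stream can be seen in a small simulation: Poisson arrivals fed through one FIFO server. Deterministic service is assumed here to make the effect obvious, and the rates are illustrative:

```python
import random

# Poisson arrivals through a single FIFO queue with a fixed
# (deterministic) service time. The departure process is no longer
# Poisson: whenever the server is busy, departures leave spaced exactly
# one service time apart, an atom an exponential gap distribution lacks.
# Rates and service time are illustrative assumptions.
random.seed(2)

lam, service = 0.8, 1.0        # arrival rate; deterministic service time
t, depart_prev, busy_until = 0.0, None, 0.0
gaps = []                      # inter-departure times
for _ in range(100_000):
    t += random.expovariate(lam)        # Poisson arrival
    start = max(t, busy_until)          # wait if server is busy
    busy_until = start + service        # this packet's departure time
    if depart_prev is not None:
        gaps.append(busy_until - depart_prev)
    depart_prev = busy_until

back_to_back = sum(1 for g in gaps if abs(g - service) < 1e-9) / len(gaps)
print(f"departures spaced exactly one service time apart: {back_to_back:.0%}")
```

At this load, most departure gaps equal the service time exactly; a Poisson process would put zero probability on any exact gap. One queue is enough to reshape the traffic, and real paths traverse many.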
I think the thing that makes Gaussian and Poisson models attractive is
the relative simplicity of their math. With Gaussian models we can
discuss standard deviations, and with Poisson models we can pull
formulae out of textbooks. But in both cases they are more
conservative than what we observe, and have predictive value only to the extent
that they are used as conceptual limits.