[e2e] link between Kelly's control and TCP's AIMD

Sireen Habib Malik s.malik at tuhh.de
Fri Feb 25 09:11:08 PST 2005


Lisong, thanks for the paper. Very nice to know that one of the two 
things I raised in my previous email is already taken care of :-)

Dear Mr. Reed, honestly, I will not pretend that I have fully grasped 
the depth of your message, so please bear with me.

Internet pipes are less than 15-20% utilized, and 90% of the traffic is 
TCP. You are also right that utilization is not the best metric to look 
at when the links are so lightly loaded. It also makes sense to keep 
them below 50%: all queuing models tell us that, plus there must be 
spare capacity for protection/restoration, and if I recall correctly 
from one of Mr. Odlyzko's more recent papers, Internet traffic still 
grows about 50% per year, so networks should have extra Mbps for the 
future.
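
To make the queuing argument concrete, here is a minimal Python sketch 
(the service rate is an invented, illustrative number) of the M/M/1 
mean time in system, W = 1/(mu - lambda), as utilization rho = 
lambda/mu grows:

    # M/M/1 mean time in system, W = 1/(mu - lambda), as a function
    # of utilization rho = lambda/mu. The service rate is illustrative.
    mu = 100.0  # service rate, packets per ms (invented for the example)

    for rho in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
        lam = rho * mu              # arrival rate, packets per ms
        w = 1.0 / (mu - lam)        # mean time in system, ms
        print(f"rho={rho:4.2f}  W={w:7.4f} ms")

At rho = 0.5 the delay is only twice the bare service time of 1/mu; 
past 0.9 it blows up, which is the quantitative reason for staying 
below 50%.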

Agreed, TCP is designed for the whole network. However, as I understand 
it, if the maximum congestion window allows TCP to keep discovering 
bandwidth, at some point it will hit a bottleneck. It is a bottleneck 
precisely because it is saturated! That "link" will then ultimately 
determine the connection's performance. Am I right? If not, then I do 
not understand why TCP should not discover and "share" all the 
available bandwidth that it "can". I have put "can" in quotes because 
it co-operates with other TCPs to share the bandwidth.
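
For what it is worth, here is a toy Python sketch of that discovery 
process, i.e., additive-increase/multiplicative-decrease (AIMD); the 
bottleneck capacity and buffer size are invented numbers:

    # Toy AIMD loop: the window grows by one segment per RTT (additive
    # increase) until it exceeds what the bottleneck path can hold
    # (bandwidth-delay product plus buffer), at which point a loss is
    # assumed and the window is halved (multiplicative decrease).
    capacity_bdp = 40    # bottleneck bandwidth-delay product, segments
    buffer_size = 20     # bottleneck buffer, segments
    cwnd = 1.0           # congestion window, segments

    for rtt in range(60):
        if cwnd > capacity_bdp + buffer_size:  # queue overflow -> loss
            cwnd /= 2.0                        # multiplicative decrease
        else:
            cwnd += 1.0                        # additive increase
        print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f}")

The window saw-tooths around the saturated link's capacity, which is 
what I mean by the bottleneck ultimately determining the performance.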

I do not understand the point about optimization. I agree that one very 
common objective function is to "minimize" the maximum and/or average 
utilization of the links in the network; I have never seen a paper that 
takes "maximizing" utilization as an objective function. Smarter (not 
the smartest) optimization models bring delay in as a constraint. Does 
this not bound the losses (Erlang loss formula) and delays (M/M/1, 
M/D/1, M/G/1 processor sharing, Semi-Markov/M/1, etc.), and achieve 
exactly the same objective of separating the cars by more than one inch?
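
To be concrete about the loss side, here is a small Python sketch of 
the Erlang B blocking probability, computed with the standard recursion 
B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)); the load and blocking 
target are invented for illustration:

    # Erlang B blocking probability via the standard recursion:
    #   B(0, a) = 1,  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a))
    # a is the offered load in Erlangs, n the number of circuits.
    def erlang_b(a: float, n: int) -> float:
        b = 1.0
        for k in range(1, n + 1):
            b = a * b / (k + a * b)
        return b

    # Illustrative dimensioning question: how many circuits keep
    # blocking under 1% at an offered load of 20 Erlangs?
    a, n = 20.0, 1
    while erlang_b(a, n) > 0.01:
        n += 1
    print(f"{n} circuits give blocking {erlang_b(a, n):.4f}")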

I agree with your take on the assumptions of stationarity and 
independence: self-similar behaviour extends from scales greater than 
one RTT up to minutes, hours, and perhaps days and months. If I put 
your comment in the context of the three variables I discussed in my 
last email, then Capacity and Buffer Size are constants, while the 
Number of Flows (N) is a random variable. Even if it is a heavy-tailed 
variable, it is fair to assume that the first moment, the average, 
exists. So the analysis still has relevance if we take N to be the 
average number of active flows. If it is a non-stationary process, then 
at least the assumption of steady behaviour over a few seconds, or 
minutes, keeps the analysis valid at those scales.
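
As a quick sanity check of that point, here is a Python sketch (shape 
and scale are invented) showing that a Pareto-distributed N is 
heavy-tailed and yet, for shape alpha > 1, its sample mean still 
settles near the theoretical first moment:

    import random

    # Pareto-distributed number of active flows: heavy-tailed, but for
    # shape alpha > 1 the mean exists, x_min * alpha / (alpha - 1).
    random.seed(1)
    alpha, x_min = 1.5, 10.0   # invented, illustrative parameters
    samples = [x_min * random.paretovariate(alpha) for _ in range(100_000)]
    print(f"sample mean N    = {sum(samples) / len(samples):.1f}")
    print(f"theoretical mean = {x_min * alpha / (alpha - 1):.1f}")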


regards,
Sireen Malik

David P. Reed wrote:

> Just two quick high-level observations, since framing the issue 
> properly is critical:
>
> 1) TCP is a protocol that is designed for networks, not links.   That 
> means that many diverse users share it, with diverse and unpredictable 
> traffic (not gaussian, not time-invariant, not stationary, not 
> independent - i.e. theoretical simplification is nice, but optimizing 
> a theoretical model is only relevant to the limited extent that the 
> real world is like the model).
>
> 2) TCP is designed to be robust over a wide variety of network 
> implementation choices, and should not depend on assumptions like 
> "routes are stable and unchanging".
>
> 3) TCP, and the Internet itself, are not designed to maximize the 
> utilization of the wires.   The mathematicians looking for objective 
> functions to optimize, and the elders who remember when wires were 
> expensive, but haven't paid attention for 50 years, tend to focus on 
> this objective function.   They remind me of people who would decide 
> that a highway system was optimally operating when cars are going 1 
> mph with 1 inch between them in all directions.   Yes, the wires would 
> be full, and the throughput (in cars per minute) would be pretty high, 
> but the latency would suck.
>
> A reasonable objective function of the network is the probability that 
> a user walking up to the network to send a message or get an MP3 file 
> or a web page is getting a service that costs less than what the value 
> of the service is to him/her.  This has precious little to do with 
> utilization of the wires, it turns out, in any practical network.   
> I'd fire any graduate student or post-doc who thought that utilization 
> was "obviously" the best metric for  network performance.
>
> Peer reviewers in network protocol performance work who rate purely 
> mathematical exercises highly because they solve irrelevant problems 
> have accepted far too many papers that focus on "utilization" as a 
> metric.   Read Andrew Odlyzko's many papers to get a better idea of 
> what metrics actually matter in practice - and understand why 10% peak 
> utilization is a typical, desirable corporate network operating point, 
> and 50% is a sign of bad management.
>