[e2e] Is a control theoretic approach sound?

Panos Gevros Panos.Gevros at cl.cam.ac.uk
Wed Jul 30 05:29:41 PDT 2003


my assumption is that for any adaptive transport protocol the issue is to 
discover the (subjective) "target" operating point (as quickly as possible) 
and stay there (as close as possible), tracking possible changes over time.

so one optimisation plane is 
(smoothness, responsiveness)
and another optimisation plane is 
(capacity, traffic conditions)
and I think it is a fair assumption that no single scheme operates better 
than the rest over the entire space. (if there are claims to the contrary, 
any pointers would be greatly appreciated).
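As a toy illustration of the (smoothness, responsiveness) trade-off, here is a minimal tracking loop; the controller, its gain, and the target values are all hypothetical, just to show the tension between discovering the operating point quickly and moving smoothly:

```python
# Illustrative sketch only, not a transport protocol: a sending "rate"
# chases a (possibly moving) target operating point with a proportional
# step. A large gain k is responsive but jumpy; a small k is smooth but
# slow to discover the target.

def track(targets, rate=0.0, k=0.5):
    """Step the rate toward each successive target; return the trajectory."""
    path = []
    for target in targets:
        rate += k * (target - rate)  # proportional correction toward target
        path.append(rate)
    return path

# target sits at 10, then drops to 4 (e.g. traffic conditions change):
print(track([10, 10, 10, 4, 4], k=0.5))
# [5.0, 7.5, 8.75, 6.375, 5.1875] -- converges toward 10, then re-tracks 4
```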

the question is whether, at the boundaries of the (capacity, traffic) space, 
particularly at the (high, low) end of this space, 
*simple* (and appropriate) modifications to the existing TCP control mechanisms
(i.e. no RTT measurements and retransmission schemes, a more aggressive slow 
start and/or more aggressive AI in congestion avoidance)
could have the same effect on link utilisation and connection throughput.
I believe that this is possible, but the problem with this approach is that it 
is "TCP hostile".

Also, my guess is that most of the complexity in "new" TCPs is there because 
implementors attempt to be "better" (by some measure) while remaining 
"friendly" to the standard.

I have seen a TCP implementation which, when the remote endpoint is on the 
same network, allows a very high initial cwnd value at slow start, 
solving all performance problems (in the absence of "social" considerations, 
of course)
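A sketch of that kind of policy, assuming a simple same-subnet test (the names, the /24 prefix, and the segment counts are made up for illustration, not taken from the implementation mentioned above):

```python
import ipaddress

def initial_cwnd(local_addr, peer_addr, prefix_len=24,
                 default_segments=2, local_segments=64):
    """Pick a large initial cwnd only when the peer shares our subnet."""
    net = ipaddress.ip_network(f"{local_addr}/{prefix_len}", strict=False)
    if ipaddress.ip_address(peer_addr) in net:
        return local_segments   # same network: skip most of slow start
    return default_segments     # otherwise stay conservative

print(initial_cwnd("10.0.0.5", "10.0.0.9"))   # 64 -- peer on local subnet
print(initial_cwnd("10.0.0.5", "192.0.2.7"))  # 2  -- remote peer
```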

Wouldn't this be a much simpler answer to the problems of the "demanding 
scientists who want to transfer huge data files across the world" (quoting from 
the article in The Economist)?
..in their case they know pretty much that the links they are using are in the 
gigabit range and that not many others are using these links at the same time.


Panos


Steven Low typed :

 |A few comments:
 |Fairness can indeed be an issue.  Error
 |in baseRTT estimation, however, does not
 |seem to upset stability.  Its effect is
 |to distort the underlying utility function
 |that FAST (or Vegas) is implicitly optimizing;
 |see:
 |http://netlab.caltech.edu/FAST/papers/vegas.pdf
 |Shivkumar Kalyanaraman has a clever proposal to
 |deal with this problem which makes use of a
 |higher-priority queue in routers.
 |
 |Finally, even though the fairness problem
 |can in principle be severe, in our limited
 |experience with FAST, baseRTT estimation
 |has not been a major issue, consistent with
 |Larry Peterson's experience with Vegas.
 |
 |Steven
 |



