[e2e] Open the floodgate - back to 1st principles

Ibrahim Matta matta at cs.bu.edu
Sun Apr 25 15:17:52 PDT 2004



> From: Guy T Almes [mailto:almes at internet2.edu] 
> Sent: Sunday, April 25, 2004 3:03 PM
> To: Jon Crowcroft; end2end-interest at postel.org
>
> During the last few years, it's become common to use multiple parallel
> Reno/AIMD TCP flows to move a single large file.  The idea is that, while
> one TCP stumbles, the others can proceed and make good use of the otherwise
> underutilized capacity.  This does not work as well as one would like,
> largely because these multiple flows tend to synchronize with each other,
> resulting in really really massive congestion/queueing periods and then
> (unless you use several dozen parallel flows) surprising periods of
> underutilization.
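
To make the synchronization effect concrete, here is a toy, purely
illustrative sketch (all parameters are made up, and the fluid model is a
gross simplification) of N AIMD flows sharing a drop-tail bottleneck. Once
the buffer overflows, every flow sees loss in the same RTT and they all
halve together, so the aggregate sawtooths and the benefit of adding more
flows largely disappears:

    # Toy fluid-level sketch: N AIMD flows at a shared drop-tail bottleneck.
    # All parameters are illustrative only.
    CAPACITY = 100.0   # bottleneck capacity (packets per RTT)
    BUFFER = 20.0      # drop-tail buffer (packets)
    N = 4              # number of parallel flows

    def simulate(rtts=200):
        cwnd = [5.0 + i for i in range(N)]   # start slightly desynchronized
        queue = 0.0
        for t in range(rtts):
            offered = sum(cwnd)
            queue = max(0.0, queue + offered - CAPACITY)
            if queue > BUFFER:
                # Drop-tail overflow: all flows lose in the same RTT and
                # halve together (global synchronization).
                cwnd = [w / 2.0 for w in cwnd]
                queue = BUFFER
            else:
                cwnd = [w + 1.0 for w in cwnd]   # additive increase, 1 pkt/RTT
            util = min(offered, CAPACITY) / CAPACITY
            print("rtt=%3d offered=%6.1f queue=%5.1f util=%4.2f"
                  % (t, offered, queue, util))

    if __name__ == "__main__":
        simulate()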

Let me add here, however, that some work exists to "coordinate" the
transmission of these constituent TCP flows (cf. Congestion Manager) and
to "adapt" the number of constituent TCP flows to reach a certain target
bandwidth (cf. elastic TCP-tunneling)...
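
For the second idea, here is a minimal sketch of one adaptation step. It
shows only the general flavor of adapting the flow count toward a target
rate, not the actual elastic-tunneling algorithm; the target rate, the
10% dead band, and the flow cap are all hypothetical:

    TARGET_MBPS = 400.0   # hypothetical target aggregate rate
    MAX_FLOWS = 16

    def adapt_flow_count(n_flows, measured_mbps):
        """One control step: nudge the number of constituent TCP
        connections toward TARGET_MBPS, with a +/-10% dead band."""
        if measured_mbps < 0.9 * TARGET_MBPS and n_flows < MAX_FLOWS:
            return n_flows + 1    # open one more constituent connection
        if measured_mbps > 1.1 * TARGET_MBPS and n_flows > 1:
            return n_flows - 1    # tear one down to avoid overshooting
        return n_flows            # within the band: leave it alone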

> The new generation of TCP congestion control algorithms, including the
> cited work at North Carolina State and also interesting work at Caltech,
> Rice, Cambridge, and other places, are good attempts to give us a much
> better alternative to the multiple-stream-Reno/AIMD approach.
[text deleted]
> And the really hard problems are not
> when the high-speed link is dedicated (the problem then, of course, is
> technically easier), but when the big file transfers are combined with a
> moderate, but dynamic, number of other users, perhaps including several
> 1-gigE flows at a time over wide area.

I agree, which supports Jon's comments. And, imho, it feels like we are
going down the same path of yet more TCP versions without carefully
looking at such compatibility issues...

> I'd caution against stereotyping this user community. 
> The requirement space is more textured than you might imagine.  And it's
> not best to relegate this entire space to the dedicated lambda school.

Has anyone done a *quantitative* analysis of such requirements? When I
talked to some physicists, they found it hard to describe their
requirements beyond the usual need for "huge" data and some
"interactivity"!


Cheers, ibrahim

--
Ibrahim Matta, Associate Professor
Computer Science, Boston University
111 Cummington Street, Boston, MA 02215
http://www.cs.bu.edu/faculty/matta 

