[e2e] Open the floodgate

Guy T Almes almes at internet2.edu
Wed Apr 21 06:53:56 PDT 2004


Mark,
  I agree.

  Within the Internet2 community, there are several user communities that 
depend on gigabit and soon multi-gigabit per second file transfers etc. 
It's pretty clear that classic AIMD NewReno will not handle these needs, 
and at least the research university / lab community will suffer unless 
improvements are made (a back-of-envelope sketch of the problem follows the 
list below).  There are several possible lines of advance:
[] using TCP and improving the control algorithms.  The BIC work at North 
Carolina State, the FAST work at Caltech, Floyd's HighSpeed TCP work, and 
the HSTCP-LP work at Rice are among the efforts described in the URL below 
(a much-simplified sketch of the BIC idea also appears after this list).
[] replacing TCP but retaining IP.  The XCP work at MIT is very interesting 
in this regard, though it posits changes in router cooperation.
[] replacing IP packet switching.  In the very high-end wide-area context 
there is a renewal of interest in circuit switching under the heading of 
lambda switching.
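
To make the scale concrete, here is a rough back-of-envelope sketch.  The 
numbers (10 Gb/s, 100 ms RTT, 1500-byte segments) are illustrative 
assumptions, not drawn from any particular experiment, but they show why 
plain additive increase is painful at these speeds:

  # Back-of-envelope: standard NewReno AIMD on a fat, long pipe.
  # All figures below are assumed for illustration.

  link_bps  = 10e9     # 10 Gb/s path
  rtt       = 0.100    # 100 ms round-trip time
  seg_bytes = 1500     # segment size

  # Window (in segments) needed to keep the pipe full.
  bdp_segments = link_bps * rtt / (seg_bytes * 8)

  # After one loss, NewReno halves the window and then adds roughly
  # one segment per RTT, so recovery takes about W/2 round trips.
  recovery_rtts = bdp_segments / 2
  recovery_secs = recovery_rtts * rtt

  print(f"window to fill the pipe : {bdp_segments:,.0f} segments")
  print(f"RTTs to recover one loss: {recovery_rtts:,.0f}")
  print(f"time to recover one loss: {recovery_secs / 60:.0f} minutes")

Run as written, this works out to a window of roughly 83,000 segments and 
on the order of an hour of loss-free transmission to regain full rate after 
a single loss.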
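
For contrast, here is a much-simplified sketch of the binary-search window 
growth at the heart of BIC.  This is a paraphrase of the published idea, 
not the actual algorithm: it omits BIC's max probing, fast convergence, and 
low-window behavior, and the constants are illustrative rather than the 
paper's defaults.

  # Simplified BIC-style increase: binary search between the window
  # after backoff and the window where the last loss occurred.
  # Constants are illustrative, not taken from the BIC paper.

  S_MAX = 32.0    # cap on the per-RTT increase (segments)
  S_MIN = 0.01    # once this close to the old peak, jump straight to it
  BETA  = 0.125   # multiplicative decrease on loss

  def on_loss(cwnd):
      """Remember the window where loss occurred, then back off."""
      w_max = cwnd
      return cwnd * (1.0 - BETA), w_max

  def on_rtt(cwnd, w_max):
      """Per-RTT increase: binary search toward the last loss point."""
      if cwnd < w_max:
          target = (cwnd + w_max) / 2.0   # midpoint of the search range
          step = target - cwnd
          if step > S_MAX:
              step = S_MAX                # far away: plain additive increase
          elif step < S_MIN:
              step = w_max - cwnd         # close enough: go to the old peak
      else:
          step = 1.0                      # past the old peak (real BIC probes here)
      return cwnd + step

  # Tiny demo: recovery after a loss at a window of 80,000 segments.
  cwnd, w_max = on_loss(80_000.0)
  rtts = 0
  while w_max - cwnd >= 1.0:              # until within a segment of the peak
      cwnd = on_rtt(cwnd, w_max)
      rtts += 1
  print(f"back near the pre-loss window after ~{rtts} RTTs")

The search covers most of the distance back to the previous loss point in 
capped steps, then slows as it closes in, so the demo regains its window in 
a few hundred RTTs (tens of seconds at a 100 ms RTT) rather than the better 
part of an hour.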

Which of these efforts succeed will have implications for the 
advanced research university / lab community, but will also likely have 
implications for the broader Internet community.

Vigorous innovation at the transport level, along with careful scrutiny and 
testing of the innovations, will be very important, both for meeting the 
needs of the research community *and* for guiding the longer-term evolution 
of the net.

Thus, I'd urge patience with lame press releases and focus on the strengths 
and weaknesses of the ideas in this very important research area.

Regards,
        -- Guy

--On Tuesday, April 20, 2004 11:18:07 +0100 Mark Handley 
<M.Handley at cs.ucl.ac.uk> wrote:

>
>> yes, I think UCL should explain themselves - I've cc:d their member of
>> the IAB :-)
>>
>> >> FYI, "TCP is so 80s it may be obsolete today."
>> >>
>> >> http://story.news.yahoo.com/news?tmpl=story&u=/nf/23720
>
>
> [Disclaimer: My own opinion, not necessarily that of the rest of the IAB]
>
> We all know that TCP's AIMD algorithm with the standard constants
> cannot sustain very high delay-bandwidth products.  For the most part
> today, the only people affected by this are the "big science"
> community, because they're the only people who can justify wide-area
> multi-gigabit pipes used by only a handful of flows.  So
> this isn't an issue today for almost everyone else.  But it will
> become an issue before too long.
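
For scale, under the usual assumptions of 1500-byte segments, a 100 ms RTT 
and a 10 Gb/s path, the standard TCP response function gives a window of 
about sqrt(1.5/p) segments at loss rate p, so sustaining the ~83,000 
segment window from the sketch earlier needs p below roughly 2e-10.  A 
rough figuring (assumed numbers again, nothing measured):

  # How rare must loss be for standard TCP to sustain ~10 Gb/s?
  # Assumed figures: 10 Gb/s, 100 ms RTT, 1500-byte segments.

  link_bps, rtt, seg_bytes = 10e9, 0.100, 1500
  window = link_bps * rtt / (seg_bytes * 8)      # ~83,000 segments

  # Standard TCP steady state: window ~ sqrt(1.5 / p)  =>  p ~ 1.5 / window^2
  p = 1.5 / window ** 2
  pkts_per_sec = window / rtt
  hours_between_losses = 1.0 / (p * pkts_per_sec) / 3600

  print(f"max tolerable loss rate : {p:.1e}")
  print(f"i.e. about one loss per : {hours_between_losses:.1f} hours of transfer")

That is roughly one loss in several billion packets, which is asking a lot 
of any real wide-area path.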
>
> There are a lot of proposals on the table for how to improve on TCP in
> these environments.  Probably the best quick summary (although not an
> exhaustive list) is to look at the programme for PFLDnet 2003:
>
>   http://datatag.web.cern.ch/datatag/pfldnet2003/program.html
>
> Eventually we'll need consensus as to how to move forward, because not
> all these proposals can co-exist gracefully.  I don't think we're
> quite at the point where we need this consensus yet.
>
> What the high-energy physicists do on their own private network links
> is up to them - they have a real problem, and need solutions today.
> These solutions may evolve into solutions used by the rest of us, but
> they may turn out to not scale to the low-end, or to flakey wireless
> links, or whatever.  The Internet is very heterogeneous, and
> experiments need to take this into account.
>
> As for TCP-BIC, time and peer-review will tell.  Poorly written press
> articles don't exactly help make its case though.
>
> Cheers,
> 	Mark
>
>



