[e2e] web100

David P. Reed dpreed at reed.com
Fri Mar 23 13:38:01 PST 2001


At 01:08 PM 3/23/01 -0600, Richard Carlson wrote:
>David;
>
>Can you elaborate on your question?  Are you asking if TCP stacks are 
>really a performance bottleneck, if bandwidth is a scarce resource, or if 
>we have any proof of this?

It was genuinely a question to clarify a press release and website that are 
quite puzzling.  Fixing a performance bottleneck is a good thing to do; I 
just don't understand what the big hoopla is about, or why it takes $3MM.

So, none of the above.  I include the press release here - I've also looked 
at the website.  Reading the press release and the website, I get the idea 
that there is an answer that is already being disseminated in the form of 
software (middleware?), and it has to do with TCP-MIBs and autotuning.

So, with claims of a first distribution of a "solution" implied in the press 
release, it would be interesting to know whether the researchers in the TCP 
field have actually validated that this is the source of the 
problem.  Crappy application programs and TCP implementations could be the 
problem as well, one might think.  Or maybe the APIs (Berkeley sockets?) 
and file-system buffering don't work very well.
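
To be concrete about what "tuning" means here: today it is mostly a human 
picking socket buffer sizes by hand, along the lines of the sketch below. 
This is my illustration, with an arbitrary 4 MB figure, not anything taken 
from the Web100 release:

    /* Hand-tuning TCP buffers -- the expert chore that Web100-style
     * autotuning is meant to eliminate.  The buffers must cover the
     * bandwidth * RTT product; 4 MB here is an arbitrary example. */
    #include <sys/socket.h>

    int tune_socket(int sock)
    {
        int bufsize = 4 * 1024 * 1024;

        /* Ask for large send/receive buffers; the kernel may clamp
         * these to its configured maximums.  Do this before connect()
         * so the window scale option is negotiated to match. */
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                       &bufsize, sizeof(bufsize)) < 0)
            return -1;
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                       &bufsize, sizeof(bufsize)) < 0)
            return -1;
        return 0;
    }

Autotuning, as I read the claim, would simply move that guess into the kernel.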

And then there is the mystery of why the project is called "WEB" 100.  We know that web 
protocols have too much handshaking and parsing to be good bulk transfer 
vehicles.  And what do supercomputer users have to do with the Web?

But what most puzzles me is that this is an NSF research project, not a 
software development project, yet the press release talks about it as the 
latter.

I'm probably just confused.  Maybe this is how science is done these days, 
but I'd think that one grad student could have figured out where the 
bottleneck is with just a few measurements, then passed the info off to the 
community of developers to fix it.  Since the project is "open source" 
according to the website (but I, at least, can't look at the source because 
I don't have a password), one might think that the fix would simply be 
posted, at low cost.
------------------------------------------------------------------------------
FOR IMMEDIATE RELEASE
Mar. 19, 2001
Web100 Takes First Step Towards Improving Network Performance
PITTSBURGH -- The Web100 Project has distributed the initial version
of software that aims to bring data-transmission rates of 100
megabits per second to users of high-speed networks. Select
researchers at universities and government laboratories are getting a
sneak peek at the Web100 software to do real-world testing and
provide feedback to developers.
"Today's release of the Web100 software promises improved network
performance at a time when bandwidth is increasingly precious," said
Tom Greene, the Senior Program Director for Infrastructure in the
National Science Foundation's Division of Advanced Networking
Infrastructure and Research. "This type of middleware can help us
use existing resources more efficiently."
While most home users still connect to the Internet with a 56K modem,
universities, research centers and some businesses today have
connections capable of transmitting data at 100 megabits per second
(Mbps) or higher. Research has shown, however, that users rarely see
performance greater than three Mbps. Web100 researchers traced the
problem to software that governs the Transmission Control Protocol
(TCP) -- a "language" that computers use to communicate across
networks. Networking experts are able to overcome this limit by fine
tuning connections with adjustments to TCP.
The Web100 software will eventually allow users to take full
advantage of available network bandwidth without the help of a
networking expert. Web100 programmers are refining TCP software in
the Linux operating system to automatically achieve the highest
possible transfer rate. "Our goal is to make it easier for everyone
to move data across networks at 100 megabits per second or higher,"
said Matt Mathis, Pittsburgh Supercomputing Center network research
coordinator and one of the principal investigators of Web100.
Twenty-one researchers at ten institutions -- including Stanford
Linear Accelerator Center, Oak Ridge National Laboratory, Lawrence
Berkeley Laboratory and Argonne National Laboratory -- will test the
initial release of Web100 software.
At the University of Michigan, for example, Brian Athey will test the
Web100 software for use with the Visible Human Project. Athey is
working with Art Wetzel at PSC to develop applications that allow
students to view large Visible Human data-sets over high-speed
networks. "In situations of marginal bandwidth availability," said
Athey, "tuning could make the difference between a choppy and
unusable 500 Kbps to 1 Mbps stream and a perfectly useful 2 Mbps to 5 
Mbps stream."
The Web100 Project is a collaboration of Pittsburgh Supercomputing
Center, the National Center for Atmospheric Research and the National
Center for Supercomputing Applications. More information can be found
at: http://www.web100.org/

# # #
CONTACT:
Sean Fulton
sfulton at psc.edu
Pittsburgh Supercomputing Center
412-268-4960
-----------------------------------------------------------------------------
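
A quick sanity check on the release's "rarely see performance greater than 
three Mbps" claim -- my numbers, not Web100's: TCP can move at most one 
window of data per round trip, so a stock 32 KB socket buffer (a common 
default) over a 70 ms cross-country RTT caps out at

    throughput <= window / RTT
               =  (32 * 1024 * 8) bits / 0.070 s
               ~= 3.7 Mbps

no matter how fast the underlying pipe is.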


> From the DOE perspective getting access to high bandwidth pipes is not 
> the major problem scientific applications are running into.  There is 
> 'easy' access to OC-3 to OC-48 links both within North America and around 
> the globe.  (Take a look at the number of OC-3/12 links coming into the 
> US from Europe.)  The problem is getting effective e2e throughput 
> (goodput) between two nodes (e.g., moving a GB of data from a 
> storage system at SLAC to a user's desktop at UTK).  The BW*delay product 
> requires large windows on both end nodes and almost no loss over SLAC's 
> campus network, ESnet, Abilene, and UTK's campus network.
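
Running that BW*delay arithmetic forward (the 70 ms RTT is my guess for a 
SLAC-to-UTK path): filling a 100 Mbps channel needs

    window >= bandwidth * RTT = 100 Mbps * 0.070 s ~= 875 KB

on both ends, far beyond typical default socket buffers.  And by the loss 
model of Mathis et al. (throughput ~ MSS/(RTT*sqrt(p))), sustaining that 
rate with 1460-byte segments tolerates a loss rate p of only a few packets 
per million -- hence "almost no loss".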
>
>The major problem DOE scientists have is determining why the goodput is so 
>low (i.e., 5 Mbps e2e over a 100 Mbps channel).  The Web100 activities are 
>designed to answer the question 'is the biggest problem in the local host, 
>the remote host, or the network'.  Getting an authoritative answer to this 
>simple question would be of immense value to the DOE scientific community 
>and well worth the investment NSF is making in funding the Web100 activities.
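
That triage question is where per-connection instrumentation earns its 
keep.  For flavor, here is a sketch of the kind of readout involved, using 
Linux's TCP_INFO socket option (assuming a kernel recent enough to support 
it) rather than Web100's own interface, which I can't see without a 
password; the fields are from the standard struct tcp_info, and the 
diagnostic gloss in the comments is mine:

    /* Read the kernel's per-connection TCP state.  Web100's TCP-MIB
     * exposes far more than this, but the idea is the same: let the
     * endpoint itself report what limited the transfer. */
    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void report(int sock)
    {
        struct tcp_info ti;
        socklen_t len = sizeof(ti);

        if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0)
            return;

        /* Rough triage: many retransmits -> lossy path; cwnd pinned
         * at the buffer limit -> an under-provisioned end host. */
        printf("rtt = %u us, cwnd = %u segments, retransmits = %u\n",
               (unsigned)ti.tcpi_rtt, (unsigned)ti.tcpi_snd_cwnd,
               (unsigned)ti.tcpi_total_retrans);
    }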
>
>Rich
>
>At 12:20 PM 3/23/01 -0500, David P. Reed wrote:
>>So, I got a press release on web100.org and its TCP improvement software.
>>
>>The press will probably get this completely wrong (the slant in the press 
>>release is that TCP is *the big problem* and that scarce bandwidth is the 
>>reason we can't use 100 Mbps pipes).
>>
>>Has anyone done any studies that would reasonably support the huge 
>>investment here?
>>
>>- David
>>--------------------------------------------
>>WWW Page: http://www.reed.com/dpr.html
>
>------------------------------------
>
>Richard A. Carlson                              e-mail: RACarlson at anl.gov
>Network Research Section                        phone:  (630) 252-7289
>Argonne National Laboratory                     fax:    (630) 252-4021
>9700 S. Cass Ave.
>Argonne,  IL 60439

- David
--------------------------------------------
WWW Page: http://www.reed.com/dpr.html
