[e2e] tcp in high rate network

David Borman dab at BSDI.COM
Mon Jun 17 09:12:16 PDT 2002


> From: Mark Boolootian <booloo at cats.ucsc.edu>
> To: "Jonathan M. Smith" <jms at central.cis.upenn.edu>
> Subject: Re: [e2e] tcp in high rate network
> Date: Mon, 17 Jun 2002 07:58:05 -0700
>
>
> In 1992, Dave Borman did a bunch of work at Cray with TCP over HIPPI and 
> was able to achieve rates approaching 800 Mbits/sec.  Similar stuff was
> done with SGIs.  
>
> Here's a note from the cell-relay newsgroup mentioning Borman's work:
>
>   http://cell-relay.indiana.edu/mhonarc/cell-relay/1992-Sep/msg00106.html

The simplistic answer is that there is nothing in the TCP protocol
that limits the speed at which it can work.  One of the biggest issues
in implementing high speed TCP is the delay bandwidth product.  The
sender needs buffering for at least the delay*bandwidth in order to
be ready to retransmit the missing data.  And if you want to be able
to handle 1 dropped packet within a window, and keep the pipe full,
you need buffering for at least twice the delay*bandwidth.  The more
errors you want to handle per RTT, the more buffering you need.  And
the receiver has to advertise a window of at least the same size.
The original 16-bit TCP window wouldn't allow a large enough window
to be advertised to handle a large delay*bandwidth; the TCP Window
Scale option addresses that problem.  If there is no packet loss, the
receiver doesn't need to buffer large amounts of data.  But if
something gets lost, it'll need at least a delay*bandwidth of buffering
to handle one lost packet without slowing down.
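As a rough sketch of that arithmetic (the 800 Mbit/s rate is from the
note quoted above; the 50 ms RTT is an assumed example value, not from
the original post):

```python
# Back-of-the-envelope delay*bandwidth arithmetic.
# The 800 Mbit/s figure is from the HIPPI work mentioned above;
# the 50 ms RTT is an assumed example value.

import math

bandwidth_bps = 800_000_000   # 800 Mbit/s
rtt_s = 0.050                 # assumed 50 ms round-trip time

# One delay*bandwidth of data: what is "in flight" in a full pipe.
bdp_bytes = int(bandwidth_bps * rtt_s / 8)
print(f"delay*bandwidth product: {bdp_bytes} bytes")   # 5,000,000 bytes

# To keep the pipe full through one dropped packet per window,
# the sender needs roughly twice that much buffering.
print(f"sender buffer for 1 loss/window: {2 * bdp_bytes} bytes")

# The 16-bit window field caps the advertised window at 65535 bytes;
# the Window Scale option shifts the advertised value left by up to
# 14 bits.  Smallest shift that covers one delay*bandwidth:
scale = max(0, math.ceil(math.log2(bdp_bytes / 65535)))
print(f"window scale shift needed: {scale}")   # 7 (65535 << 7 > 5 MB)
```

With these numbers the unscaled 65535-byte window covers barely 1% of
the pipe, which is the problem the Window Scale option was created for.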

Beyond that, dealing with interrupts, moving/checksumming data,
per-packet overhead, and managing data queues all have more impact on
overall performance than the actual TCP processing, especially when
techniques like TCP Header Prediction have been implemented.  On any
system that I've profiled, the time spent in actual TCP processing
is always dwarfed by all the other work that has to happen.
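To illustrate why checksumming shows up in that list: TCP uses the
16-bit ones'-complement Internet checksum (RFC 1071), and every payload
byte passes through the sum, which is why fast stacks fold it into the
data copy or offload it to hardware rather than making a separate pass.
A minimal sketch (not any particular stack's implementation):

```python
# Minimal sketch of the 16-bit ones'-complement Internet checksum
# (RFC 1071) used by TCP.  The cost scales with every byte of payload,
# so high-speed implementations combine it with the data copy or
# offload it to the NIC instead of touching the data twice.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF            # ones'-complement of the final sum

# A receiver recomputing the checksum over data that already contains
# a correct checksum field gets 0, which is the verification rule.
```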

		-David Borman, david.borman at windriver.com




More information about the end2end-interest mailing list