[e2e] Is a non-TCP solution dead?

Jonathan Stone jonathan at DSG.Stanford.EDU
Wed Apr 2 10:02:39 PST 2003


I'm trying to distill out any substantive issues with end-to-end
protocols or TCP from Alex Cannara's messages. The quotes below are
from the exchange between Alex and Rick Jones, edited to remove top-quoting and
non-technical statements.

In message <3E89ED03.9DCC459A at attglobal.net>, Cannara writes:

>> Cannara wrote:
>> > Some of the 'fixes' to TCP have been especially naive, like the "every
>> > other pkt Ack" trick.  Imagine the delays additive in a large file
>> > transfer when blocking (e.g., SMB) results in odd pkts.  How someone
>> > could imagine saving every other Ack was worth, say, 150mS penalty
>> > every 30kB, is unfathomable, unless they just never really thought out
>> > the implications.  [....]


The SMB protocol is a stop-and-wait RPC layered on top of TCP (or
other) transports. If the RTT between SMB client and server is 150
milliseconds, typical SMB implementations will never achieve more than
6 2/3 RPCs per second.  The maximum RPC payload is around 64 kbytes, so
in this scenario, SMB will never deliver more than roughly 425 kbytes/sec[*].
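
(The same arithmetic as a small Python sketch; the 150 ms RTT and the
64-kbyte RPC payload are just the figures from the scenario above,
taking "64k" as 64,000 bytes.)

    # Upper bound for a stop-and-wait RPC protocol such as SMB:
    # only one RPC is outstanding at a time, so each RPC costs at
    # least one round trip.
    rtt = 0.150               # seconds; the WAN path in the example
    max_payload = 64 * 1000   # bytes per RPC

    rpcs_per_sec = 1.0 / rtt                   # ~6 2/3 RPCs/sec
    bytes_per_sec = rpcs_per_sec * max_payload
    print("at most %.1f RPCs/sec, %.0f bytes/sec"
          % (rpcs_per_sec, bytes_per_sec))
    # -> at most 6.7 RPCs/sec, 426667 bytes/sec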

That is a limitation of SMB. Decrying that limitation as a fault of
TCP or TCP ACKing policy is specious. It detracts from any legitimate
points.

[*] There are scenarios where Microsoft SMB clients will send more
than one RPC at a time on a given SMB connection, but they rarely
arise in typical use of SMB platforms.


>Rick,
>
>Take a Sniffer and watch what happens on a standard, Windows/NT client-server
>interaction, like reading a 2MB file over a 100mS WAN path.  Why would odd pkt
>counts not be expected?  How is the admin or user supposed to understand how
>to change the default TCP settings in the stack it got from Sun, HP,
>Microsoft, etc?  These are the questions real TCP engineering would have
>addressed.  The intentional omission of an Ack for imagined optimization was
>one of the sillier decisions made in later TCPs.

Isn't that a fault of the vendors not providing adequate knobs for
tuning protocol parameters (or "constants")?  That's a vendor
packaging (and support) issue, not a protocol issue.  I personally
have had to rebuild TCP for end-hosts from source, to avoid tripping
over the three-way handshake timing out when running TCP over very slow,
very oversubscribed intercontinental links.  When running a protocol at
the edge of, or outside, its design envelope, it's not unreasonable to get
significant gains from such tweaks.

On the specific issue of knobs to control protocol behaviour: just how
does TCP differ in any significant way from a putative replacement?
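
As a concrete illustration of the kind of knob the existing sockets
API already exposes, here is a minimal Python sketch.  The host name
and port are made up; disabling Nagle with TCP_NODELAY is the usual
per-connection control for request/response traffic that would
otherwise sit waiting on the peer's delayed ACK, and the generous
connect timeout is only an application-level stand-in for the
handshake-timeout change described above, which really lives in the
kernel.

    import socket

    # A long connect timeout tolerates a slow three-way handshake at
    # the application layer (the kernel's own SYN retry limits still
    # apply underneath).
    sock = socket.create_connection(("server.example.com", 445),
                                    timeout=75)

    # Disable the Nagle algorithm for this connection: small
    # request/response writes go out immediately instead of waiting
    # for the peer's delayed ACK.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)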


>Now, add in a 0.4% loss on the path, which engages the backoff algorithms in
>TCP incorrectly, and the "missing odd ack" problem becomes aggravated beyond
>reason.  What an 'unreasonable' expectation of the NOS designer that even or
>odd pkt requests should get equivalent support from the transport!  What an
>'unrealistic' expectation that x% pkt loss on a path should not generate 30x
>slowdown.

<shrug> A well-designed application can run over TCP at somewhat
higher loss rates with much lower slowdown. I personally have several
packet traces of streaming compressed audio with a 1%-3% loss rate,
showing more than an order of magnitude less slowdown than you claim.
Once again, it sounds like you've chosen a poorly-engineered
application (a "NOS": PC-speak for a distributed file system) layered
on top of TCP.  The poorly-engineered application behaves poorly under
low loss.  Instead of blaming the poorly-engineered application, you blame TCP.
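
For a rough sense of what a 0.4% loss rate by itself does to bulk TCP
throughput, one common back-of-envelope model is the Mathis et al.
estimate, throughput <= MSS / (RTT * sqrt(loss)).  That model is my
own illustration, not something from the exchange above, and it
ignores timeouts, delayed ACKs and window limits.  A small Python
sketch with the 100 ms RTT and 0.4% loss figures from the quote:

    import math

    mss = 1460     # bytes; a typical Ethernet-derived segment size
    rtt = 0.100    # seconds; the 100 ms WAN path from the quote
    loss = 0.004   # the 0.4% loss rate from the quote

    # Mathis-style bound on steady-state TCP throughput.
    bound = mss / (rtt * math.sqrt(loss))
    print("loss-limited throughput <= %.0f bytes/sec" % bound)
    # -> roughly 230,000 bytes/sec, versus ~640,000 bytes/sec for a
    #    64-kbyte window on the same path: a few-fold reduction, not
    #    a 30x collapse.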


[....]



