[e2e] TCP improved closing strategies?

John Day day at std.com
Mon Aug 17 15:35:44 PDT 2009

At 16:30 -0400 2009/08/17, David P. Reed wrote:
>You need 2MSL to reject delayed dups.  However, one does not need "fully live"

Correct.  Bill's question was on how soon port-ids could be re-cycled.

>individual connections to deal with delayed dups.  You can reject 
>delayed dups by saying "port unreachable" without a problem in most 
>cases.  2MSL provides no semantic guarantees whatever.

Nor should it.  Nor should anyone even try to construe that it might.
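
As a concrete illustration of how soon a port-id can be re-bound without waiting out 2MSL (a Python sketch of my own, not anything from the thread): SO_REUSEADDR lets a new socket take over a local port while old connection state for it is still timing out, and a stray delayed duplicate for the dead connection simply finds no matching state and is rejected, as David describes.

```python
import socket

# Illustrative sketch (not from the thread): SO_REUSEADDR allows a
# local port to be recycled before the 2MSL quiet time has expired.
# Delayed duplicates for the old connection are still rejected,
# because no matching connection state exists for them any longer.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s1.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = s1.getsockname()[1]
s1.close()

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s2.bind(("127.0.0.1", port))       # immediate re-use of the same port
s2.close()
```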

>On 08/17/2009 04:14 PM, John Day wrote:
>>Re: [e2e] TCP improved closing strategies?
>>At 11:54 -0400 2009/08/17, David P. Reed wrote:
>>>The function of the TCP close protocol has two parts:
>>>1) a "semantic" property that indicates to the *applications* on 
>>>each end that there will be no more data and that all data sent 
>>>has been delivered.  (this has the usual problem that "exactly 
>>>once" semantics cannot be achieved, and TCP provides "at most 
>>>once" data delivery semantics on the data octet just prior to the 
>>>close.)  Of course, *most* apps follow the end-to-end principle and 
>>>use the TCP close only as an "optimization" because they use their 
>>>data to provide all the necessary semantics for their needs.
>>Correct.  Blowing off even more dust, yes, this result was well 
>>understood by at least 1982.  And translates into Ted's solution 
>>that explicit establishment and release of an "application 
>>connection" is necessary.  Again see Watson's paper and Lamport's 
>>Byzantine General's paper.  Using the release of the lower level 
>>connection to signal the end of the higher level 
>>connection is overloading and always leads to problems.
>>You still need 2MSL.
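
A sketch of that distinction (Python, with a one-byte record framing that is my own invention, not Watson's or anything from the thread): the applications exchange an explicit release record and its acknowledgement, so the transport-level close carries no application semantics at all.

```python
import socket
import struct
import threading

# Hypothetical application-level release protocol (my framing, for
# illustration only): END asks to finish the conversation, END_ACK
# confirms that all application data was delivered.  Only after this
# exchange does either side close the transport connection.
END, END_ACK = 1, 2

def send_rec(s, tag):
    s.sendall(struct.pack("!B", tag))

def recv_rec(s):
    b = s.recv(1)
    return b[0] if b else None

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def peer():
    c, _ = srv.accept()
    if recv_rec(c) == END:      # explicit application-level release
        send_rec(c, END_ACK)    # explicit application-level confirm
    c.close()                   # transport close is now mere cleanup

t = threading.Thread(target=peer)
t.start()
s = socket.create_connection(srv.getsockname())
send_rec(s, END)
released = (recv_rec(s) == END_ACK)
s.close()
t.join()
srv.close()
```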
>>>2) a "housekeeping" property related to keeping the 
>>>TCP-layer-state minimal.  This is what seems to be of concern here.
>>Agreed here as well.  Taking Dave's point that the value of MSL has 
>>gotten completely out of hand. As Dave says the RFC suggests 30 
>>seconds, 1 or 2 minutes! for MSL.  Going through 2**32 port-ids in 
>>4 minutes with one host is unlikely but not *that* unlikely.  And 
>>of course because of the well-known port kludge you are restricted 
>>to the client's port-id space and address.  If you had good ole 
>>ICP, you wouldn't have 2**64 (there is other stuff going on), but 
>>it would be a significant part of that.
>>But the TCP MSL may be adding insult to injury; I have heard rumors 
>>that the IP TTL is usually set to 255, which seems absurdly high as 
>>well.  Even so, surely hitting 255 hops must take well under 4 
>>minutes!  So can we guess that TCP is sitting around waiting even 
>>though all of the packets are long gone from the network?
>>2MSL should probably be smaller, but it still has to be there.
>>Take care,
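
A back-of-the-envelope check of the figures above (the arithmetic is mine, not from the thread): with the well-known-port kludge, successive connections to the same service differ only in the client's 16-bit port, so the usable slice of the 2**32 (port, port) pair space is small, and exhausting it inside one 2MSL window takes only a few hundred connections per second.

```python
# My arithmetic on the MSL figures quoted above, for illustration.
MSL = 120                  # seconds; RFC 793's suggested 2 minutes
TIME_WAIT = 2 * MSL        # 240 s, i.e. the "4 minutes" in the text
client_ports = 2 ** 16     # usable port-ids per (client, server) pair
max_rate = client_ports / TIME_WAIT
print(round(max_rate))     # ~273 new connections/s before re-use
```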
>>>Avoiding (2) for many parts of TCP is the reason behind "Trickles" 
>>>(q.v.), a version of TCP that moves state to the client side.
>>>If we had a "trickles" version of TCP (which could be done on top 
>>>of UDP) we could get all the functions of TCP with regard to (2) 
>>>without server side overloading, other than that necessary for the 
>>>app itself.
>>>Of course, "trickles" is also faithful to all of TCP's end-to-end 
>>>congestion management and flow control, etc.  None of which is 
>>>needed for the DNS application - in fact, that stuff (slowstart, 
>>>CUBIC, ...) is really ridiculous to think about in the DNS 
>>>requirements space (as it is also in the HTML page serving space, 
>>>given RTT and bitrates we observe today, but I can't stop the 
>>>academic hotrodders from their addiction to tuning terabyte FTPs 
>>>from unloaded servers for 5% improvements over 10% lossy links).
>>>You all should know that a very practical fix to both close-wait 
>>>and syn-wait problems is to recognize that 500 *milli*seconds is a 
>>>much better choice for lost-packet timeouts these days - 250 would 
>>>be pretty good.  Instead, we have a default designed so that a 
>>>human drinking coffee with one hand can drive a manual connection 
>>>setup one packet at a time using DDT on an ASR33 TTY while having 
>>>a chat with a co-worker.  And the result is that we have DDoS 
>>>vulnerabilities.  I understand the legacy problems, but really.  
>>>If we still designed modern HDTV signals so that a 1950 DuMont 
>>>console TV could show a Blu-Ray movie, we'd not have advanced far.
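
Nothing forces an application to inherit those coffee-break-era defaults; a per-socket timeout bounds how long we wait on a lost SYN or a lost reply. A minimal sketch (Python; the function name and 500 ms value are mine, chosen to match the figure above):

```python
import socket

# Illustrative sketch of the 500 ms point above: impose a short
# lost-packet/connect timeout at the socket layer instead of relying
# on the multi-second legacy defaults.
def connect_with_timeout(host, port, timeout=0.5):
    """Try a TCP connect, giving up after `timeout` seconds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return s
    except (socket.timeout, OSError):
        s.close()
        return None
```

A successful connect returns a live socket; a refused or timed-out one returns None within half a second rather than tying the caller up for the default retry schedule.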
>>>On 08/17/2009 10:16 AM, Joe Touch wrote:
>>>>William Allen Simpson wrote:
>>>>>>As I recall Andy Tanenbaum used to point out that TP4 had an abrupt close
>>>>>>and it worked.  It does require somewhat more application coordination;
>>>>>>perhaps we can fake that by, say, retransmitting the last segment and
>>>>>>the FIN a few times to try to ensure that all data is received by the
>>>>>>client???
>>>>>Cannot depend on the DNS client's OS to be that smart.  Has to be a
>>>>>server-only solution.  Or based on a new TCP option that tells us both
>>>>>ends are smart.  (I've an option in mind.)
>>>>There are two different problems here:
>>>>1) server maintaining state clogging the server
>>>>2) server TIME-WAIT slowing connections to a single address
>>>>Both go away if the client closes the connection. If you are going to
>>>>modify both ends, then that's a much simpler place to start than a TCP
>>>>option (which will need to be negotiated during the SYN, and might be
>>>>removed/dropped by firewalls or NATs, etc.).
>>>>FWIW, persistent connections help only #2. If it's the number of
>>>>different clients connecting to a server that is locking up too much
>>>>server memory, then persistent connections will make the problem worse, 
>>>>not better.
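
A loopback sketch of the "client closes" fix (Python, illustrative only): whichever side sends the first FIN is the side left in TIME-WAIT, so a client that reads the whole reply and closes first keeps both problems (1) and (2) off the server.

```python
import socket
import threading

# Illustrative demo: the client closes first, so the client absorbs
# the TIME-WAIT state and the server keeps no lingering per-connection
# state once its own close completes.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    conn.sendall(b"reply")
    conn.recv(1)              # returns b"" once the client's FIN arrives
    conn.close()              # server closes second: no TIME-WAIT here

t = threading.Thread(target=serve)
t.start()
cli = socket.create_connection(srv.getsockname())
data = cli.recv(5)
cli.close()                   # client closes first, absorbing TIME-WAIT
t.join()
srv.close()
```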
