[e2e] tcp connection timeout

Jim Gettys jg at freedesktop.org
Thu Mar 2 11:51:54 PST 2006


> I've seen hundreds of app-level protocols.  Nearly all of them get
> keepalives wrong. Besides, I do not want a protocol which can easily
> convert a transient fault when I'm not sending any data into long delays
> when I want to send - even after the fault went away - and that is exactly
> what you get when you send keepalives over a reliable transport protocol.
> Loss of a keepalive packet shouldn't collapse the window or increase timeouts.
> Neither should keepalives be retransmitted.

And you expect the operating system to have better knowledge than I do,
when I'm the one who is familiar with my application?

> 
> Now, how exactly do you propose to implement that on top of TCP?
> 
> Besides, there are a lot of situations where the *network* layer should be able
> to tell a keepalive from a payload packet - dial-on-demand or cellular 
> links, for example.  Move keepalives to application land, and you have a 
> choice between wasting your battery or having the modem know about all 
> kinds of applications.
> 
> How about an app-level keepalive message being stuck behind a large piece
> of data (in a high-delay, low-bandwidth situation)?  It is kind of hard to
> get that situation sorted out right when you have a serialized stream.

If things are stuck behind a large piece of data, as in your example, the
TCP connection will fail, and I get all the signaling I need, just when
I need it; if not, the application will continue to make progress.  The
keepalive isn't needed to detect either failure or continued progress in
this case.
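
(To make the point concrete: a minimal sketch of "errors show up at the
point of use", assuming a blocking TCP socket.  SO_SNDTIMEO merely bounds
how long a stalled write may block before reporting; whether and how a
given stack honours it varies, so treat this as an illustration rather
than a recipe.)

  /* Sketch: let failures surface where the data is actually sent.
   * With a bounded send timeout, a write that cannot make progress
   * returns an error instead of hanging indefinitely; a connection
   * TCP itself has given up on reports ETIMEDOUT/ECONNRESET here. */
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/time.h>

  static int send_or_fail(int fd, const void *buf, size_t len)
  {
      struct timeval tv = { 30, 0 };   /* bound a stalled send to 30 s */

      setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));

      ssize_t n = send(fd, buf, len, 0);  /* a real sender loops on short writes */
      if (n < 0) {
          /* EAGAIN/EWOULDBLOCK: the timeout fired with nothing sent;
           * ETIMEDOUT/ECONNRESET: the transport has declared the peer dead. */
          fprintf(stderr, "send failed: %s\n", strerror(errno));
          return -1;
      }
      return 0;
  }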

> 
> Finally, the TCP stack has RTT estimates.  The application doesn't.

Depends on the application.  Some do keep RTT estimates, and I
certainly would like the OS to let me get the information that TCP keeps
on my behalf (I'm not sure what the current state of the sockets API is
in this area).
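
(For what it's worth, Linux at least does export this today: a small
sketch, assuming the Linux-specific TCP_INFO socket option, which hands
back the stack's smoothed RTT and its variance in microseconds; other
stacks either lack it or spell it differently.)

  /* Sketch: read back the TCP stack's own RTT estimate (Linux-specific).
   * struct tcp_info's tcpi_rtt/tcpi_rttvar are the smoothed RTT and its
   * variance, in microseconds; tcpi_snd_cwnd is the congestion window. */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  static void print_tcp_rtt(int fd)          /* fd: a connected TCP socket */
  {
      struct tcp_info ti;
      socklen_t len = sizeof(ti);

      if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
          printf("srtt %u us, rttvar %u us, cwnd %u segments\n",
                 ti.tcpi_rtt, ti.tcpi_rttvar, ti.tcpi_snd_cwnd);
      else
          perror("getsockopt(TCP_INFO)");
  }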

> So it has no
> way to know if its loss detector is sane.
> 
> > > Keepalives, arguably, are the crudest method for detection of loss of
> > > connectivity, and they load the network with extra traffic and do not
> > > provide for the fast detection of the loss.  But, because of their
> > > crudity, they always work.
> >
> > They NEVER work.  (except if you define working as doing something, even 
> > if it is not what you want it to do).
> 
> Sure. That's because all you see is broken implementations of keepalives. 
> All I'm arguing about is that we need only one implementation - but which 
> is designed by someone who has a clue, so all apps can then simply use it.

If I care to detect transient failures, my application can/should do
this on its own.

> 
> > > A good idea would be to fix the standard socket API and demand that all TCP
> > > stacks allow useful minimal keepalive times (down to seconds), rather than
> > > have application-level protocol designers implement workarounds at the
> > > application level.
> >
> > There you go - you think that you can demand that protocol engineers 
> > know more than application designers about "usefulness".
> 
> The "loss of some percentage of probe packets" (in TCP land 20% loss seems
> to be a threshold of useability, and not in any way application specific)  
> "observed during a period not exceeding given timeout"  (which *is*
> applicaion-specific) is generic enough.
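
(For reference, a per-socket knob along these lines already exists on
some stacks.  A rough sketch, assuming Linux, whose TCP_KEEPIDLE,
TCP_KEEPINTVL and TCP_KEEPCNT options take the idle time and probe
interval in seconds plus a probe count; the names, and whether they
exist at all, differ elsewhere.)

  /* Sketch: enable TCP keepalives with short per-socket timers.
   * SO_KEEPALIVE is portable; the three TCP_KEEP* options are Linux-specific. */
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  static void enable_short_keepalive(int fd)
  {
      int on    = 1;
      int idle  = 30;   /* seconds of idleness before the first probe  */
      int intvl = 5;    /* seconds between unanswered probes           */
      int count = 3;    /* probes lost before the connection is killed */

      setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof(on));
      setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
      setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
      setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count, sizeof(count));
  }
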
> 
> There is *no* way to monitor network conditions with probes in such a way as
> not to load the network significantly with the probes while getting any more
> specific at the same time.  So no application I'm aware of even tries.  (Hint: the
> measurement precision is proportional to measurement duration divided by
> probe frequency;  measurement makes no sense if measured system changes
> significantly at a rate faster than the measurement period - so unless
> you're loading the network with probes, the precision is misleading, as
> measured conditions shoot way outside of the stationary-case error margins
> while you measure).

The only time I care to have an error reported is when I'm actually
trying to use the connection.  Signaling errors when I don't care just
means I get failures that I wouldn't otherwise experience.  This is a
feature?

> 
> It follows that most actively probing network monitoring tools produce 
> junk numbers.
> 
> > That's why my cellphone says "call is lost" 20 times a day, requiring me 
> > to redial over and over, instead of keeping the call alive until I press 
> > hangup.  Some network engineer decided that a momentary outage should 
> > be  treated as a permanent one. Foo.
> 
> You're complaining about a non-configurable threshold.  Well, I said that
> there should be an API for the application to say what the timeout is (or
> if it wants to let the user drop the circuit manually).  A manual timeout
> works only if you have a human in the loop - and fails because of the human,
> too.  In the case of a cell phone, _not_ dropping a call because of bad
> conditions and leaving it to the human to worry about can easily cause a
> situation where the link drops for a few minutes (common when driving in
> hilly countryside) - by the time it comes back one may have forgotten about
> it, and miss subsequent incoming calls, while keeping the other party
> wondering how long he's supposed to wait with the receiver glued to his ear.

If there are no costs for listening, I see no problem here.

If there are, design your application server to be sane (as HTTP servers
do, after much pulling of teeth).

People seem to figure out pretty well when to hang up the phone: it
isn't all that hard.  And it isn't as though you only have a single
line (or ear), so the analogy with a telephone is broken to begin with.

> 
> > > And, yes, provide the TCP stack with a way to probe the application to
> > > check if it is alive and not deadlocked (that being another reason to do
> > > app-level keepalives).
> > >   
> > Oh yeah.  Put TCP in charge of the application.   Sure.   And put the 
> > fox in charge of the chicken coop.
> 
> It is exactly the same sense in which receiving a block of data from TCP puts
> it "in charge" of the application.  It is totally irrelevant if you get an
> "are you still there?" query in the middle of TCP data and have to reply to
> it, or if you get an "are you still there?" exception and have to reply to it.
> The only difference is that in the second case you don't have to obfuscate
> your application-level protocol to allow for these queries in different
> states and in the middle of half-completed ops.  Better yet, you may have
> the OS do some useful guessing by default if you don't care to implement
> watchdog code in the application.

The OS will *always* guess wrong, except by chance.

> 
> (Digressing... frankly, I'm rather amused by the fact that no "modern" OS
> provides an exception on reaching a limit to the number of operations between
> wait states; I guess one knows of the usefulness of such things only if he
> has some experience with real-life fault-tolerant systems - as I had, at
> my father's design bureau; a critical fault in the kind of systems he
> designed more often than not entailed some obituaries).
> 
> Of course, one may argue that properly designed applications never get 
> stuck... well, good luck, and pray for the miraculous salvation when they 
> do.
> 
> > The purpose of the network is to support applications, not the 
> > applications to support the network's needs.   Perhaps this is because 
> > network protocol designers want to feel powerful?
> 
> Precisely, and that's why the network shouldn't force application developers
> to make their code any more complicated than it should be.  Setting a
> timeout once is an awful lot simpler than keeping asynchronous timers
> needed for application-level keepalives.  Oh, and there's a small issue of
> significantly more complicated app. protocol FSMs which have to deal with
> asynchronous timing.  Add timeout negotiation (the shorter period of both
> ends should be used) - know any application-level protocols which do that?
> 
> And, again, most application developers cannot implement keepalives
> properly - because they are not network protocol designers.  A lot of app. 
> developers out there don't bother with keepalives at all - a five-minute 
> talk to a sysadmin in a large company yields heaps of yarns about
> stuck backups which didn't time out for a couple of hours and so the whole
> night's run was ruined, and so on.

Huh? People who build application protocols are usually at least
slightly familiar with the underlying transport, though there have been
some notable counter-examples ;-).

> 
> The transport layer has timers.  Applications, in most cases, shouldn't. 
> The majority of application-level protocols are synchronous anyway; why force 
> app developers to do asynchronous coding?

Huh? Every X application that has a blinking cursor (e.g. any text
widget) has to have timers.  All GUI application toolkits, on any
platform I am aware of, have timers of some form (which may or may not
be in active use, if the application is truly trivial). 

So any application that talks to people has this facility available
trivially and, by definition, is an event-driven program for which
dealing with timers is easy.  Only batch sorts of applications written
without any library support might not have timer facilities, and I'd
then ask why you are writing at that low a level?  Just about any
serious network interface library will have such facilities; you won't
be coding them yourself unless you are a flat-rock programmer.
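
(For the skeptical: a rough sketch of what the timer side amounts to in
an ordinary poll() loop.  send_app_ping() and handle_input() are
placeholders for whatever the application protocol actually does - they
are named here purely for illustration.)

  /* Sketch: an application-level liveness probe driven by a poll() loop.
   * Probe only after PING_INTERVAL seconds of silence; give up if the
   * probe goes unanswered for PING_DEADLINE seconds. */
  #include <poll.h>
  #include <time.h>

  #define PING_INTERVAL 30   /* seconds of silence before probing      */
  #define PING_DEADLINE 10   /* seconds to wait for the probe's answer */

  int  handle_input(int fd);   /* application-defined: read and process */
  void send_app_ping(int fd);  /* application-defined: send a probe     */

  int watch_connection(int fd)
  {
      time_t last_heard = time(NULL);
      time_t ping_sent  = 0;

      for (;;) {
          struct pollfd p = { fd, POLLIN, 0 };
          int ready = poll(&p, 1, 1000);           /* tick about once a second */
          time_t now = time(NULL);

          if (ready > 0 && (p.revents & POLLIN)) {
              if (handle_input(fd) < 0)            /* EOF or error from peer */
                  return -1;
              last_heard = now;                    /* any traffic counts     */
              ping_sent  = 0;
          }
          if (ping_sent && now - ping_sent > PING_DEADLINE)
              return -1;                           /* peer or path is gone   */
          if (!ping_sent && now - last_heard > PING_INTERVAL) {
              send_app_ping(fd);
              ping_sent = now;
          }
      }
  }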

As I said, they should be called "keep killing" rather than "keep
alives".

				Regards,
				- Jim






