From saikat at cs.cornell.edu Wed Mar 1 04:27:41 2006 From: saikat at cs.cornell.edu (Saikat Guha) Date: Wed, 01 Mar 2006 07:27:41 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: <4404E2750200003A00009D60@TMIA1.AUBURN.EDU> References: <4404E2750200003A00009D60@TMIA1.AUBURN.EDU> Message-ID: <1141216062.28946.48.camel@localhost.localdomain> On Tue, 2006-02-28 at 23:53 -0600, Brahim Ben wrote: > Some implementations of TCP implement a timer called Keepalive timer, The default operation of the keepalive timer depends on the OS. Linux and Windows don't send keepalives by default, as far as I recall; when enabled, the recommended default value is around 2 hours. Also, some NAT/firewalls will time out idle connections long before that (usually 1h, sometimes as low as 15m, or as high as 5d). Depending on the host OS, an idle connection abandoned by a NAT/firewall can show up as a timeout error. cheers, -- Saikat From braden at ISI.EDU Wed Mar 1 12:29:13 2006 From: braden at ISI.EDU (Bob Braden) Date: Wed, 1 Mar 2006 12:29:13 -0800 (PST) Subject: [e2e] tcp connection timeout Message-ID: <200603012029.MAA09641@gra.isi.edu> *> *> Hi, *> *> I recently got a TCP connection timeout error. *> *> If both side don't send packets for a long time while connection is open, is *> there a default time out to abort the connection? *> *> Thanks a lot! *> *> - Kun *> Sigh. This is periodic question #13, which recurs about every 2.3 years. Those who have been around the Internet more than 2.3 years can stop reading now. The inventors of TCP, and in particular Jon Postel, were religiously opposed to a TCP idle timer ("keep alive").
The knowledge of when to time out an idle connection belongs logically to the application layer; TCP itself does not know. But the OS guys (beginning with the Berkeley Unix folks) disagreed on pragmatic grounds, so BSD TCP had a keep-alive. When the Host Requirements Working Group came to this issue, there was nearly unanimous agreement with Postel that TCP should not have a keep alive. The dissenter was the programmer responsible for the BSD code, which probably powered the majority of hosts at the time! So, the Host Requirements group set requirements on Keep-Alives that were designed to come as close to outlawing them as possible without actually outlawing them. Thus, keep-alives must be optional, they must default to "off", the default timeout interval must be very large, etc. Bob Braden From jg at freedesktop.org Wed Mar 1 12:54:02 2006 From: jg at freedesktop.org (Jim Gettys) Date: Wed, 01 Mar 2006 15:54:02 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: <200603012029.MAA09641@gra.isi.edu> References: <200603012029.MAA09641@gra.isi.edu> Message-ID: <1141246442.28240.230.camel@localhost.localdomain> And being an application protocol designer who lived through the era when Berkeley UNIX had keep alives on by default: all they did was cause my applications to fail during transient failures, causing many more failures than would otherwise occur (most of my applications are idle most of the time). Keep alives should be renamed "Keep killing". From my point of view, good riddance to keep-alives... I always considered it one of Berkeley's poorest ideas. Jon was right: it really is an application problem, and not something I want the system to second guess me on. - Jim Gettys On Wed, 2006-03-01 at 12:29 -0800, Bob Braden wrote: > > *> > *> Hi, > *> > *> I recently got a TCP connection timeout error. > *> > *> If both side don't send packets for a long time while connection is open, is > *> there a default time out to abort the connection?
> *> > *> Thanks a lot! > *> > *> - Kun > *> > > Sigh. This is periodic question #13, which recurs about every 2.3 > years. Those who have been around the Internet more than 2.3 years can > stop reading now. > > The inventors of TCP, and in particular Jon Postel, were religiously > opposed to a TCP idle timer ("keep alive"). The knowledge of when to > time out an idle connection belongs logically to the application layer; > TCP itself does not know. But the OS guys (beginning with the Berkeley > Unix folks) disagreed on pragmatic grounds, so BSD TCP had a > keep-alive. > > When the Host Requirements Working Group came to this issue, there was > nearly unanimous agreement with Postel, that TCP should not have a keep > alive. The dissenter was the programmer responsible for the BSD code, > which probably powered the majority of hosts at the time! So, the Host > Requirements group set requirements on Keep-Alives that were > designed to come as close to outlawing them as possible without > actually outlawing them. Thus, keep-alives must be optional, they must > default to "off", the default timeout interval must be very large, > etc. > > Bob Braden > From dpreed at reed.com Wed Mar 1 17:45:48 2006 From: dpreed at reed.com (David P. Reed) Date: Wed, 01 Mar 2006 20:45:48 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: <1141246442.28240.230.camel@localhost.localdomain> References: <200603012029.MAA09641@gra.isi.edu> <1141246442.28240.230.camel@localhost.localdomain> Message-ID: <44064E4C.6060103@reed.com> Bob Braden may remember the following points from the extended discussions of the original TCP days. Actually, there is no reason why a TCP connection should EVER time out merely because no one sends a packet over it. It's not the "keepalive" that's the problem, because keepalives were invented only to fix the problem of timing out, which need not have ever been a problem.
What is wrong with a connection that takes no resources whatsoever unless someone is trying to send data over it? Sounds good to me... and the cost on each endpoint to maintain a potentially useful relationship is a few bytes of table space. (microcents in today's dollars). The idea of timing out a connection came because OS folks confused TCP connections with "dialup modem calls" and thought that such connections ought to be treated like expensive hardware resources such as modems. (or perhaps they didn't think much at all - much of the reasoning by analogy in those days came from mindless mapping of network abstractions to things like TTY channels). We successfully removed "out of band signalling" from TCP, for example, rather than embedding every operating system "interrupt character" in the protocol. Why include operating system "hangup" functionality, when only timesharing systems really had such things, and Telnet was an application protocol that could do its own keepalives? And as far as "NAT timeouts" - one of the many objections to NATs was that they depend on accidental and irrelevant properties of the end-to-end layer (like the idea that a connection can't be up forever without sending data). NATs were invented to be pragmatic stopgaps until the endpoint address space could be enlarged (by IPv6, perhaps) - they are not well designed, and are full of serious edge cases. From avg at kotovnik.com Wed Mar 1 19:18:46 2006 From: avg at kotovnik.com (Vadim Antonov) Date: Wed, 1 Mar 2006 19:18:46 -0800 (PST) Subject: [e2e] tcp connection timeout In-Reply-To: <44064E4C.6060103@reed.com> Message-ID: On Wed, 1 Mar 2006, David P. Reed wrote: > Actually, there is no reason why a TCP connection should EVER time out > merely because no one sends a packet over it. The knowledge that connectivity is lost (i.e. that there's no *ability* to send information before the need arises) is valuable.
A preemptive action can then be taken to either alert the user, or to switch to an alternative. Just an example (with somewhat militarized slant): it does make a lot of difference if you know that you don't know your enemy's position, or if you falsely think that you know where they are, and meanwhile they moved and you simply didn't hear about it because some wire is cut. There's also an issue of dead end-point detection and releasing the resources allocated to such a dead point (which may never come back). There is no way to discriminate between a dead end-point and an end-point which merely keeps quiet other than using connection loss detection. So, in practice, all useful applications end up with some kind of timeouts (and keepalives!) - embedded in a zillion protocols, mostly designed improperly, or left to the user's manual intervention. It makes absolutely no sense - in a good design shared functions must be located at the level below, so there's no replication of functionality. What is needed is an API which lets applications specify the maximal duration of loss of connectivity which is acceptable to them. This part is broken-as-designed in all BSD-ish stacks, so few people use it. Keepalives are arguably the crudest method for detection of loss of connectivity, and they load the network with extra traffic and do not provide for the fast detection of the loss. But, because of their crudity, they always work. A good idea would be to fix the standard socket API and demand that all TCP stacks allow useful minimal keepalive times (down to seconds), rather than having application-level protocol designers implement workarounds at the application level. And, yes, provide the TCP stack with a way to probe the application to check if it is alive and not deadlocked (that being another reason to do app-level keepalives).
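[For reference: the per-socket control argued for here does exist on some stacks. A minimal sketch of what it looks like through the BSD socket API - SO_KEEPALIVE itself is portable, but TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific option names, so treat this as illustrative rather than portable:]

```python
import socket

def enable_fast_keepalive(sock, idle=30, interval=5, count=4):
    """Turn on TCP keepalive with much shorter timings than the usual
    2-hour default.  The three TCP-level options are Linux-specific."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # seconds of idleness before the first probe is sent
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    # seconds between unanswered probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    # unanswered probes before the stack declares the connection dead
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_fast_keepalive(s)
# a dead peer is now detected after roughly idle + interval*count seconds
```

Note that with settings this aggressive the probes themselves keep NAT/firewall state alive on an otherwise idle connection - which is exactly the trade-off being debated in this thread.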
Some tap to the routing system which, in turn, obtains link status from the underlying hardware with its quick detection of loss of carrier would be the best, but it also is complicated. A limited form of it (i.e. shutting down TCP sessions when a directly attached interface carrying them goes down for longer than their timeouts) could be useful. The same kind of mentality (i.e. if a lower level is broken, just dump the problem into application developers' laps) gave us the IPv4->IPv6 application portability issues (by not hiding domain name to transport address conversions) and many other bogosities, so any sizeable software project nowadays includes mandatory work on an "OS abstraction" layer which attempts to hide the ugliness - and you still have to produce new binaries every time something changes underneath. There's a rule of thumb: you cannot get rid of the complexity, but you can move it around. If it stays in one place, it is manageable. If you move it so that it is replicated in many places, it always comes to bite you in the back. Keep keepalives in one place, puh-leeease. --vadim From christos at ISI.EDU Wed Mar 1 19:34:06 2006 From: christos at ISI.EDU (Christos Papadopoulos) Date: Wed, 1 Mar 2006 19:34:06 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <44064E4C.6060103@reed.com> References: <200603012029.MAA09641@gra.isi.edu> <1141246442.28240.230.camel@localhost.localdomain> <44064E4C.6060103@reed.com> Message-ID: <20060302033406.GA27982@boreas.isi.edu> On Wed, Mar 01, 2006 at 08:45:48PM -0500, David P. Reed wrote: > ... > What is wrong with a connection that takes no resources whatsoever > unless someone is trying to send data over it? Sounds good to me... > and the cost on each endpoint to maintain a potentially useful > relationship is a few bytes of table space. (microcents in today's > dollars). Wouldn't this be a good opportunity for a DDoS attack? Christos. From dpreed at reed.com Wed Mar 1 19:57:59 2006 From: dpreed at reed.com (David P.
Reed) Date: Wed, 01 Mar 2006 22:57:59 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <44066D47.5080109@reed.com> I was describing a historical perspective. TCP is what it is, and no one (not me) is suggesting it should be changed. Ideally it should be thrown away someday, because it continues to acquire barnacles. However, some comments regarding my points, below: Vadim Antonov wrote: > On Wed, 1 Mar 2006, David P. Reed wrote: > > >> Actually, there is no reason why a TCP connection should EVER time out >> merely because no one sends a packet over it. >> > > The knowledge that connectivity is lost (i.e. that there's no *ability* to > send information before the need arises) is valuable. This cannot be well-defined. This is Jim Gettys' point above. "Connectivity" is NOT lost. The end-to-end probability of packet delivery just varies. It depends on the application's point of view whether connectivity is lost. Does the refrigerator light remain on when the door is closed? Why do you care? And more importantly, can you describe precisely when you would care? This is a classic example of the "end to end argument" - you can't define this function "connectivity being lost" at the network layer, because connectivity isn't lost, only packets are lost. > A preemptive action > can then be taken to either alert the user, or to switch to an alternative. > Just an example (with somewhat militarized slant): it does make a lot of > difference if you know that you don't know your enemy's position, or if > you falsely think that you know where they are, and meanwhile they moved > and you simply didn't hear about it because some wire is cut. > You never know your enemies' position unless you are God. You only know *where they were* not where they are now. You can waste all your time sending radar pulses every microsecond, and you still won't know where they are, and you'll never know where they will be when you decide to act.
At best, your information can be narrowed based on how much energy you put into that. Better at some point to fire the missile based on your best guess and see if it hits. > There's also an issue of dead end-point detection and releasing the > resources allocated to such a dead point (which may never come back). There > is no way to discriminate between a dead end-point and an end-point which > merely keeps quiet other than using connection loss detection. > > So, in practice, all useful applications end up with some kind of timeouts > (and keepalives!) - embedded in a zillion protocols, mostly designed > improperly, or left to the user's manual intervention. It makes > absolutely no sense - in a good design shared functions must be located at > the level below, so there's no replication of functionality. > > What is needed is an API which lets applications specify the maximal > duration of loss of connectivity which is acceptable to them. This part > is broken-as-designed in all BSD-ish stacks, so few people use it. > It's so easy to do this at the application level, and you can do it exactly as you wish - so why implement a slightly general and always-wrong model in the lower layer, especially since most users don't even need it, and some, like Jim Gettys, end up having to patch around its false alarms! > Keepalives are arguably the crudest method for detection of loss of > connectivity, and they load the network with extra traffic and do not > provide for the fast detection of the loss. But, because of their > crudity, they always work. > They NEVER work. (except if you define working as doing something, even if it is not what you want it to do). > A good idea would be to fix the standard socket API and demand that all TCP > stacks allow useful minimal keepalive times (down to seconds), rather than > having application-level protocol designers implement workarounds at the > application level.
There you go - you think that you can demand that protocol engineers know more than application designers about "usefulness". That's why my cellphone says "call is lost" 20 times a day, requiring me to redial over and over, instead of keeping the call alive until I press hangup. Some network engineer decided that a momentary outage should be treated as a permanent one. Foo. > And, yes, provide the TCP stack with a way to probe the application to > check if it is alive and not deadlocked (that being another reason to do > app-level keepalives). Oh yeah. Put TCP in charge of the application. Sure. And put the fox in charge of the chicken coop. The purpose of the network is to support applications, not the applications to support the network's needs. Perhaps this is because network protocol designers want to feel powerful? And I suppose the purpose of applications and users is to pay for Bill Gates's house (or Linus's mortgage?) From emmanuel.lochin at gmail.com Wed Mar 1 21:02:58 2006 From: emmanuel.lochin at gmail.com (Emmanuel Lochin) Date: Thu, 2 Mar 2006 16:02:58 +1100 Subject: [e2e] application layer -SCTP!!!! In-Reply-To: References: Message-ID: <138118820603012102v38163b0er@mail.gmail.com> Hi guys, You should look at the following papers, which propose to add partial reliability to TFRC. Moreover, Guillaume Jourjon, who is currently a PhD student at the University of New South Wales, has implemented and tested a real implementation of a TFRC/SACK mechanism. This can also be done in DCCP/CCID3. - Ernesto Exposito, Patrick Senac, and Michel Diaz: UML-SDL modelling of the FPTP QoS oriented transport protocol. In: Proc. of International Multimedia Modelling Conference (MMM). (2004) - Ernesto Exposito: Specification and Implementation of a QoS Oriented Transport Protocol for Multimedia Applications.
PhD Thesis, LAAS-CNRS/ENSICA (2003) Emmanuel On 01/03/06, Ian McDonald wrote: > > On 2/28/06, Srinivas Krishnan wrote: > > SCTP does seem to provide on the outside a very nice interface > > borrowing congestion control from TCP and the concept of various > > streams. However, the experiences we had with SCTP are not too good, > > especially with the PR-SCTP. We at UNC-CH wanted a protocol that lets > > choose on the application layer conditions whether we wanted a fully > > reliable protocol or just a best-effort protocol. Unfortunately the 2 > > implementations of SCTP: SCTPLIB (a user level package) or the kernel > > level implementation in Linux do not fully support partial reliability > > or a way of making per packet/object decision of whether we wanted > > full reliability, partial etc. > > > > In the end we rolled out our own protocol based on UDP using TFRC as a > > congestion control algorithm (based it on the sender side). The > > protocol lets us choose whether we wanted to provide full reliability, > > partial reliability based on the application. We even have a module > > which does a form of congestion control in the application itself. > > This is for a video adaptation and hence based on latency we always do > > not need to send the next packet as the time for display on the client > > might have passed. > > I am doing a similar thing but basing it in the transport layer. It > will check whether the packet is past expiry time before sending and > will also weight control packets higher than audio with video being > the lowest. > > I am working with using DCCP CCID3 on Linux at present which is > similar in performance I would assume to UDP using TFRC as CCID3 is > TFRC based. Have you looked at DCCP? You might be able to choose > whether you want reliable or unreliable using a mixture of switching > between TCP and DCCP or were you wanting to switch midstream? > > > > I will describe some more of our work on video adaptation on a separate > post. 
> > Will be interested in seeing. > > Ian > -- > Ian McDonald > Web: http://wand.net.nz/~iam4 > Blog: http://imcdnzl.blogspot.com > WAND Network Research Group > Department of Computer Science > University of Waikato > New Zealand > -- Emmanuel Lochin http://lochin.new.fr/ emmanuel.lochin at nicta.com.au Networks and Pervasive Computing Research Program National ICT Australia Ltd Locked Bag 9013, Alexandria, NSW 1435 Australia From perfgeek at mac.com Wed Mar 1 21:38:29 2006 From: perfgeek at mac.com (rick jones) Date: Wed, 1 Mar 2006 21:38:29 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <20060302033406.GA27982@boreas.isi.edu> References: <200603012029.MAA09641@gra.isi.edu> <1141246442.28240.230.camel@localhost.localdomain> <44064E4C.6060103@reed.com> <20060302033406.GA27982@boreas.isi.edu> Message-ID: <6902a31d021eda6bd8a2332a89f25e13@mac.com> >> What is wrong with a connection that takes no resources whatsoever >> unless someone is trying to send data over it? Sounds good to me... >> and the cost on each endpoint to maintain a potentially useful >> relationship is a few bytes of table space. (microcents in today's >> dollars). > > Wouldn't this be a good opportunity for a DDoS attack?
Or just plain TCP connections staying in FIN_WAIT_2 because the other side either did an abortive close and the RST was lost, or less likely the other side's FIN never got to us, and the FIN_WAIT_2 state staying there until something was seen from the remote. You don't need a DDoS, just non-robust application programmers or a bit of bad luck. rick jones Wisdom teeth are impacted, people are affected by the effects of events From touch at ISI.EDU Wed Mar 1 22:09:26 2006 From: touch at ISI.EDU (Joe Touch) Date: Wed, 01 Mar 2006 22:09:26 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <44068C16.8050905@isi.edu> Vadim Antonov wrote: > On Wed, 1 Mar 2006, David P. Reed wrote: > >> Actually, there is no reason why a TCP connection should EVER time out >> merely because no one sends a packet over it. > > The knowledge that connectivity is lost (i.e. that there's no *ability* to > send information before the need arises) is valuable. Perhaps, but that's not what TCP provides. TCP sends data reliably. If you aren't sending anything, there's nothing to complain about. As to "releasing resources", TCP preserves only the resources that affect sending data reliably. There's no utility to that end in cleaning up old connections; they're reset only when a new connection collides, which is in keeping with the concept that state needs to be adjusted only when it affects the reliable transmission of data. Joe From avg at kotovnik.com Thu Mar 2 00:19:48 2006 From: avg at kotovnik.com (Vadim Antonov) Date: Thu, 2 Mar 2006 00:19:48 -0800 (PST) Subject: [e2e] tcp connection timeout In-Reply-To: <44066D47.5080109@reed.com> Message-ID: On Wed, 1 Mar 2006, David P. Reed wrote: > I was describing a historical perspective. TCP is what it is, and no > one (not me) is suggesting it should be changed. Ideally it should be > thrown away someday, because it continues to acquire barnacles.
> However, some comments regarding my points, below: The TCP spec is fine with regard to keepalives. The implementations (which follow the misguided recommendation to impose a ridiculously long minimum timeout) aren't. > > The knowledge that connectivity is lost (i.e. that there's no *ability* to > > send information before the need arises) is valuable. > This cannot be well-defined. This is Jim Gettys' point above. > "Connectivity" is NOT lost. The end-to-end probability of packet > delivery just varies. If it is not well-defined, there's no point in arguing about where exactly the cut-off point should be. In 99% of cases, if packets go through, there's adequate connectivity. Besides, TCP does not provide any guarantees of latency or bandwidth. So the objection that the "connectivity lost" condition does not match any application-specified threshold of network quality is irrelevant. > This is a classic example > of the "end to end argument" - you can't define this function > "connectivity being lost" at the network layer, because connectivity > isn't lost, only packets are lost. Sorry, but we're talking not about the network, but about the transport layer. I'm not arguing about the design of packet transport, I'm arguing about the design of the OS kernel and applications. > > Just an example (with somewhat militarized slant): it does make a lot of > > difference if you know that you don't know your enemy's position, or if > > you falsely think that you know where they are, and meanwhile they moved > > and you simply didn't hear about it because some wire is cut. > You never know your enemies' position unless you are God. You only know > *where they were* not where they are now. You can waste all your time > sending radar pulses every microsecond, and you still won't know where > they are,
If I know the max speed and max turn acceleration of the target I have an envelope of the possible target positions. It is enough to launch a missile, which will refine its bearing as it gets closer. That's how things work with real-life missiles (the guidance and targeting systems happen to be my military specialty). And in real life, there's a lot of sources which get fixes unreliably, fail, etc. One of the points of failure is the comm system. So all adequately designed comm systems provide fault indication. > > What is needed is an API which lets applications specify the maximal > > duration of loss of connectivity which is acceptable to them. This part > > is broken-as-designed in all BSD-ish stacks, so few people use it. > > It's so easy to do this at the application level, and you can do it > exactly as you wish - so why implement a slightly general and > always-wrong model in the lower layer, especially since most users don't > even need it, and some, like Jim Gettys, end up having to patch around > its false alarms! I've seen hundreds of app-level protocols. Nearly all of them get keepalives wrong. Besides, I do not want a protocol which can easily convert a transient fault when I'm not sending any data into long delays when I want to send - even after the fault went away - and that is exactly what you get when you send keepalives over a reliable transport protocol. Loss of a keepalive packet shouldn't collapse the window or increase timeouts. Neither should keepalives be retransmitted. Now, how exactly do you propose to implement that on top of TCP? Besides, there's a lot of situations when the *network* layer should be able to tell a keepalive from a payload packet - dial-on-demand or cellular links, for example. Move keepalives to application land, and you have a choice between wasting your battery or having the modem know about all kinds of applications.
How about an app-level keepalive message being stuck behind a large piece of data (in a high-delay, low-bandwidth situation)? It is kind of hard to get that situation sorted out right when you have a serialized stream. Finally, the TCP stack has RTT estimates. The application doesn't. So it has no way to know if its loss detector is sane. > > Keepalives are arguably the crudest method for detection of loss of > > connectivity, and they load the network with extra traffic and do not > > provide for the fast detection of the loss. But, because of their > > crudity, they always work. > > They NEVER work. (except if you define working as doing something, even > if it is not what you want it to do). Sure. That's because all you see are broken implementations of keepalives. All I'm arguing about is that we need only one implementation - but which is designed by someone who has a clue, so all apps can then simply use it. > > A good idea would be to fix the standard socket API and demand that all TCP > > stacks allow useful minimal keepalive times (down to seconds), rather than > > having application-level protocol designers implement workarounds at the > > application level. > > There you go - you think that you can demand that protocol engineers > know more than application designers about "usefulness". The "loss of some percentage of probe packets" (in TCP land 20% loss seems to be a threshold of usability, and not in any way application specific) "observed during a period not exceeding a given timeout" (which *is* application-specific) is generic enough. There is *no* way to monitor network conditions with probes in such a way so as not to load it significantly by the probes and get any more specific at the same time. So no application I'm aware of even tries.
(Hint: the measurement precision is proportional to measurement duration divided by probe frequency; measurement makes no sense if the measured system changes significantly at a rate faster than the measurement period - so unless you're loading the network with probes, the precision is misleading, as measured conditions shoot way outside of the stationary-case error margins while you measure). It follows that most actively probing network monitoring tools produce junk numbers. > That's why my cellphone says "call is lost" 20 times a day, requiring me > to redial over and over, instead of keeping the call alive until I press > hangup. Some network engineer decided that a momentary outage should > be treated as a permanent one. Foo. You're complaining about a non-configurable threshold. Well, I said that there should be an API for the application to say what the timeout is (or that it wants to let the user drop the circuit manually). Manual timeout works only if you have a human in the loop - and fails because of the human, too. In the case of a cell phone, _not_ dropping a call because of bad conditions and leaving it to the human to worry about can easily cause a situation like this: the link drops for a few minutes (common when driving in a hilly countryside) - by the time it comes back one may forget about it, and miss subsequent incoming calls, while keeping the other party wondering how long he's supposed to wait with the receiver glued to his ear. > > And, yes, provide the TCP stack with a way to probe the application to > > check if it is alive and not deadlocked (that being another reason to do > > app-level keepalives). > > > Oh yeah. Put TCP in charge of the application. Sure. And put the > fox in charge of the chicken coop. It is exactly the same sense in which receiving a block of data from TCP puts it "in charge" of the application. It is totally irrelevant if you get an "are you still there?" query in the middle of TCP data and have to reply to it, or if you get "are you still there?"
exception and have to reply to it. The only difference is that in the second case you don't have to obfuscate your application-level protocol to allow for these queries in different states and in the middle of half-completed ops. Better yet, you may have the OS do some useful guessing by default if you don't care to implement watchdog code in the application. (Digressing... frankly, I'm rather amused by the fact that no "modern" OS provides an exception on reaching a limit on the number of operations between wait states; I guess one knows of the usefulness of such things only if he has some experience with real-life fault-tolerant systems - as I had, at my father's design bureau; a critical fault in the kind of systems he designed more often than not entailed some obituaries). Of course, one may argue that properly designed applications never get stuck... well, good luck, and pray for the miraculous salvation when they do. > The purpose of the network is to support applications, not the > applications to support the network's needs. Perhaps this is because > network protocol designers want to feel powerful? Precisely, and that's why the network shouldn't force application developers to make their code any more complicated than it should be. Setting a timeout once is an awful lot simpler than keeping asynchronous timers needed for application-level keepalives. Oh, and there's a small issue of significantly more complicated app. protocol FSMs which have to deal with asynchronous timing. Add timeout negotiation (the shorter period of both ends should be used) - know any application-level protocols which do that? And, again, most application developers cannot implement keepalives properly - because they are not network protocol designers. A lot of app.
developers out there don't bother with keepalives at all - a five-minute talk to a sysadmin in a large company yields heaps of yarns about stuck backups which didn't time out for a couple of hours and so the whole night's run was ruined, and so on. Transport layer has timers. Applications, in most cases, shouldn't. Majority of application-level protocols are synchronous anyway; why force app developers to do asynchronous coding? --vadim From avg at kotovnik.com Thu Mar 2 00:32:39 2006 From: avg at kotovnik.com (Vadim Antonov) Date: Thu, 2 Mar 2006 00:32:39 -0800 (PST) Subject: [e2e] tcp connection timeout In-Reply-To: <44068C16.8050905@isi.edu> Message-ID: On Wed, 1 Mar 2006, Joe Touch wrote: > Vadim Antonov wrote: > > On Wed, 1 Mar 2006, David P. Reed wrote: > > > > The knowledge that connectivity is lost (i.e. that there's no *ability* to > > send information before the need arises) is valuable. > > Perhaps, but that's not what TCP provides. TCP sends data reliably. If > you aren't sending anything, there's nothing to complain about. There is no such thing as just sending data reliably. All retransmission protocols do is trade off maximal latency for probability of delivery. > As to "releasing resources", TCP preserves only the resources that > affect sending data reliably. There's no utility to that end in cleaning > up old connections; they're reset only when a new connection collides, > which is in keeping with the concept that state needs to be adjusted > only when it affects the reliable transmission of data. Oh, I see. Did you ever consider that there are application servers which have to carry, say, 20 megabytes of in-core state per a client connection? I did see such beauties. Rete algorithm eats memory like hell, if you have enough facts to use. Cleaning up stale or dead connections is something any half-respectable app server does. 
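The configurable per-socket keepalive timer the thread keeps asking for does exist on some stacks. A minimal sketch (an editorial illustration, not from the original mail), assuming Linux's TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT socket options - these are Linux-specific extensions, not part of the portable sockets API, and the values shown are arbitrary examples, not recommendations:

```python
# Sketch: per-connection TCP keepalive timers via Linux-specific socket
# options.  TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are not portable;
# the values used here are arbitrary examples.
import socket

def enable_keepalive(sock, idle=60, interval=10, probes=5):
    """Start probing after `idle` s of silence, probe every `interval` s,
    and declare the peer dead after `probes` unanswered probes."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s, idle=30, interval=5, probes=4)
# With these settings, an idle-but-dead connection is torn down after
# roughly idle + probes * interval = 30 + 4*5 = 50 seconds.
```

This is the "set a timeout once" interface argued for above: the application states the outage duration it will tolerate, and the stack does the probing.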
--vadim From 3dfx232 at sohu.com Thu Mar 2 02:15:50 2006 From: 3dfx232 at sohu.com (shaohe) Date: Thu, 2 Mar 2006 18:15:50 +0800 (CST) Subject: [e2e] A simple question about handling the dump files Message-ID: <10632504.1141294550495.JavaMail.postfix@mx57.mail.sohu.com>

Could someone please give me some advice about handling tcpdump files? I'm working on an analysis of network traffic; however, under Windows I cannot find any useful tool to visualize or handle the dump files conveniently.

Tcptrace, as far as I know, is a common tool for analyzing network traffic that takes dump files as input. Unfortunately, it seems that what tcptrace does is very different from what I want.

Could somebody help me? Information on the following topics would be valuable to me:

First, how can I display the dump file in an understandable style, or transform the binary format of the original dump file into a friendlier format, such as plain text? (Note: under Windows.)

Second, the dump file format still confuses me. Do all records in the file have the same size in bytes? If so, how many bytes?

In addition, I want to read one record at a time, but how do I find the end of a record if the lengths of records of different protocols (e.g. TCP, UDP) are variable?
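The record-length question has a concrete answer for libpcap-format files (the format tcpdump writes): records are not fixed-size. The file begins with a 24-byte global header, and each packet record is preceded by a 16-byte record header whose incl_len field says how many captured bytes follow - that is how a reader finds the end of each record regardless of protocol. A minimal sketch of a reader (an editorial illustration, not from the original mail; it assumes the classic little-endian, microsecond-resolution variant with magic 0xa1b2c3d4):

```python
# Sketch: walk the records of a classic libpcap ("tcpdump -w") file.
# Assumes the little-endian, microsecond-resolution format (magic
# 0xa1b2c3d4); a real reader would also handle the byte-swapped and
# nanosecond variants.
import struct

def read_pcap_records(data: bytes):
    """Yield (timestamp_seconds, captured_bytes) for each record."""
    magic, = struct.unpack_from("<I", data, 0)
    assert magic == 0xA1B2C3D4, "unhandled pcap variant"
    offset = 24                      # global file header is 24 bytes
    while offset < len(data):
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack_from("<IIII", data, offset)
        offset += 16                 # per-record header is 16 bytes
        yield ts_sec + ts_usec / 1e6, data[offset:offset + incl_len]
        offset += incl_len           # incl_len marks the end of the record

# Tiny synthetic example: a file containing one 4-byte "packet".
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
record = struct.pack("<IIII", 1141294550, 0, 4, 4) + b"\xde\xad\xbe\xef"
for ts, pkt in read_pcap_records(header + record):
    print(ts, len(pkt))
```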

Thanks very much !!

Shaohe lv

Mar. 02 2006 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060302/da0c1485/attachment.html From sampad_m at rediffmail.com Thu Mar 2 05:19:00 2006 From: sampad_m at rediffmail.com (sampad mishra) Date: 2 Mar 2006 13:19:00 -0000 Subject: [e2e] A simple question about handling the dump files Message-ID: <20060302131900.8036.qmail@webmail9.rediffmail.com> On Thu, 02 Mar 2006 shaohe wrote : >

Could some one please give me some advice about handling the tcp dump files? I'm working on an analysis of the network traffic. However, under the Windows environment, I can not find any useful tool to visualize or handle the dump files conveniently.

Have you tried Ethereal (a multi-platform protocol analyzer) for Windows?

regards,
sampad mishra.

Mar. 02 2006 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060302/c0e9fe9b/attachment-0001.html From touch at ISI.EDU Thu Mar 2 07:09:57 2006 From: touch at ISI.EDU (Joe Touch) Date: Thu, 02 Mar 2006 07:09:57 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <44070AC5.2060608@isi.edu> Vadim Antonov wrote: > On Wed, 1 Mar 2006, Joe Touch wrote: > >> Vadim Antonov wrote: >>> On Wed, 1 Mar 2006, David P. Reed wrote: >>> >>> The knowledge that connectivity is lost (i.e. that there's no *ability* to >>> send information before the need arises) is valuable. >> Perhaps, but that's not what TCP provides. TCP sends data reliably. If >> you aren't sending anything, there's nothing to complain about. > > There is no such thing as just sending data reliably. All retransmission > protocols do is trade off maximal latency for probability of delivery. Sure - the checksum is just a statistical statement (it could be right but the data could be wrong). And the ACK could have been munged in transit in a way that preserves the checksum. The point is that TCP is about saying something (even if not 100%) about data transfer - it is NOT about saying something about the connection. That's for things like PPP. >> As to "releasing resources", TCP preserves only the resources that >> affect sending data reliably. There's no utility to that end in cleaning >> up old connections; they're reset only when a new connection collides, >> which is in keeping with the concept that state needs to be adjusted >> only when it affects the reliable transmission of data. > > Oh, I see. Did you ever consider that there are application servers which > have to carry, say, 20 megabytes of in-core state per a client connection? > I did see such beauties. Rete algorithm eats memory like hell, if you > have enough facts to use. 
> > Cleaning up stale or dead connections is something any half-respectable app server does.

A half-respectable app server wouldn't allocate 20MB to the receive buffers of a connection unnecessarily. Run with smaller receive windows - which is fine unless your sustained rate is over 1.6 Gbps on a land line.

The other problem, though, is that you're assuming that once TCP goes away, both sides should just 'cleanup'. If the connection has no outstanding data, it's possible that some ACK'd data has been received but not delivered to the application - 20MB worth. It's not TCP's job to decide to give up on that. It's the app's. The app can issue a close if it doesn't want to stick around after an idle period. But it should - to tell the other end that the data has been received. That's the point of the (statistical) notion of reliable transfer.

Joe

From zhani_med_faten at yahoo.fr Thu Mar 2 07:00:55 2006 From: zhani_med_faten at yahoo.fr (Zhani Mohamed Faten) Date: Thu, 2 Mar 2006 16:00:55 +0100 (CET) Subject: [e2e] A simple question about handling the dump files In-Reply-To: <20060302131900.8036.qmail@webmail9.rediffmail.com> Message-ID: <20060302150056.64677.qmail@web25715.mail.ukl.yahoo.com>

Hi,

Here you have traces captured with tcpdump, plus some software you can use to get a friendlier format and to look at the traffic graphically:

http://ita.ee.lbl.gov/index.html
http://ita.ee.lbl.gov/html/traces.html
http://ita.ee.lbl.gov/html/software.html

As for other tools: Ethereal is nice, but its processing time, especially when analyzing a huge quantity of traffic, is very long. tcptrace is a very good tool; read more about its existing modules and I think you'll find the analysis you need.

Between Windows and Linux, it's better to use Linux, since the original versions of these tools were developed for it; but you can still install Cygwin under Windows and use the Linux programs there, even with a graphical interface.

What are you interested in, exactly?
Any questions are welcome,

Regards,
Zhani Mohamed Faten

From lynne at telemuse.net Thu Mar 2 10:46:47 2006 From: lynne at telemuse.net (Lynne Jolitz) Date: Thu, 2 Mar 2006 10:46:47 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <200603012029.MAA09641@gra.isi.edu> Message-ID: <001a01c63e29$a9007ac0$6e8944c6@telemuse.net>

Not all Berkeley people were for keepalive. ;-)

I've found over the years that timers are often used by engineers and programmers when they don't have a good feel for the systemic effects, so they place a timer into the design to keep the door (or signal) open as basically a get-ahead move. It's an easy thing to do. While even a good designer might use such a timer as a crutch at the beginning of a project, it is the sign of an ignorant designer who clings to it without good cause once the design is well-vetted.

The problem is "How do you know when a socket is active?" Say data travels on a session every few hours. If you have a keepalive poking it to check whether it is active or not, you now have to determine if it is real data traveling on it, or just the keepalive. The fault-tolerant solution to this dilemma (e.g. Tandem, ...) was to place the session reconnect in the application layer for the specific applications that required it. Thus no keepalive was required (e.g. fail fast and retry). From the design standpoint this was viewed as a subcase of application restart, and not as a Layer-4 mechanism.

The fact that Postel and almost the entire Host Requirements Working Group were against it after explanation by "the programmer responsible" should, in retrospect, have been an obvious warning sign.
But money talks, and a "Very Big Company" (VBC) funding Berkeley at the time needed a solution for their telnet drops. Of course, that VBC isn't around anymore (nor does anyone really care). But keepalive lingers on... Lynne Jolitz. ---- We use SpamQuiz. If your ISP didn't make the grade try http://lynne.telemuse.net > -----Original Message----- > From: end2end-interest-bounces at postel.org > [mailto:end2end-interest-bounces at postel.org]On Behalf Of Bob Braden > Sent: Wednesday, March 01, 2006 12:29 PM > To: end2end-interest at postel.org; kunhuang at uga.edu > Subject: Re: [e2e] tcp connection timeout > > > > > *> > *> Hi, > *> > *> I recently got a TCP connection timeout error. > *> > *> If both side don't send packets for a long time while > connection is open, is > *> there a default time out to abort the connection? > *> > *> Thanks a lot! > *> > *> - Kun > *> > > Sigh. This is periodic question #13, which recurs about every 2.3 > years. Those who have been around the Internet more than 2.3 years can > stop reading now. > > The inventors of TCP, and in particular Jon Postel, were religiously > opposed to a TCP idle timer ("keep alive"). The knowledge of when to > time out an idle connection belongs logically to the application layer; > TCP itself does not know. But the OS guys (beginning with the Berkeley > Unix folks) disagreed on pragmatic grounds, so BSD TCP had a > keep-alive. > > When the Host Requirements Working Group came to this issue, there was > nearly unanimous agreement with Postel, that TCP should not have a keep > alive. The dissenter was the programmer responsible for the BSD code, > which probably powered the majority of hosts at the time! So, the Host > Requirements group set requirements on Keep-Alives that were > designed to come as close to outlawing them as possible without > actually outlawing them. Thus, keep-alives must be optional, they must > default to "off", the default timeout interval must be very large, > etc. 
> > Bob Braden > > From jg at freedesktop.org Thu Mar 2 11:51:54 2006 From: jg at freedesktop.org (Jim Gettys) Date: Thu, 02 Mar 2006 14:51:54 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <1141329115.28240.315.camel@localhost.localdomain> > I've seen hundreds of app-level protocols. Nearly all of them get > keepalives wrong. Besides, I do not want a protocol which can easily > convert a transient fault when I'm not sending any data into long delays > when I want to send - even after the fault went away - and that is exactly > what you get when you send keepalives over reliable transport protocol. > Loss of a keepalive packet shouldn't collapse window or increase timeouts. > Neither should keepalives be retransmitted. And you expect the operating system to have better knowledge than I do, who is familiar with my application? > > Now, how exactly do you propose to implement that on top of TCP? > > Besides, there's a lot of situations when *network* layer should be able > to tell a keepalive from a payload packet - dial-on-demand or cellular > links, for example. Move keepalives to application land, and you have a > choice between wasting your battery or having the modem to know about all > kinds of applications. > > How about an app-level keepalive message being stuck behind a large piece > of data (in high delay - low bandwidth situation)? It is kind of hard to > get that situation sorted out right when you have a serialized stream. If things are stuck behind a large piece of data, in your example, the TCP connection will fail, and I get all the signaling I need, just when I need it; if not, the application will continue to make progress. The keep alive isn't needed to detect a failure or continued progress in this case. > > Finally, TCP stack has RTT estimates. Application doesn't. Depends on the applications. 
Some do keep RTT estimates, and I certainly would like the OS to let me get the information that TCP keeps on my behalf (I'm not sure what the current state of the sockets API is in this area). > So it has no > way to know if its loss detector is sane. > > > > Keepalives, arguably, is the crudest method for detection of loss of > > > connectivity, and they load the network with extra traffic and do not > > > provide for the fast detection of the loss. But, because of their > > > crudity, they always work. > > > > They NEVER work. (except if you define working as doing something, even > > if it is not what you want it to do). > > Sure. That's because all you see is broken implementations of keepalives. > All I'm arguing about is that we need only one implementation - but which > is designed by someone who has a clue, so all apps can then simply use it. If I care to detect transient failures, my application can/should do this on its own. > > > > A good idea would be to fix standard socket API and demand that all TCP > > > stacks allow useful minimal keepalive times (down to seconds), rather than > > > have application-level protocol designers to implement workarounds at the > > > application level. > > > > There you go - you think that you can demand that protocol engineers > > know more than application designers about "usefulness". > > The "loss of some percentage of probe packets" (in TCP land 20% loss seems > to be a threshold of useability, and not in any way application specific) > "observed during a period not exceeding given timeout" (which *is* > applicaion-specific) is generic enough. > > There is *no* way to monitor network conditions with probes in such a way > so as not to load it significantly by the probes and get any more specific > at the same time. So no application I'm aware of even tries. 
(Hint: the > measurement precision is proportional to measurement duration divided by > probe frequency; measurement makes no sense if measured system changes > significantly at a rate faster than the measurement period - so unless > you're loading the network with probes, the precision is misleading, as > measured conditions shoot way outside of the stationary-case error margins > while you measure). The only time I care to have an error reported is when I'm actually trying to use the connection. Signaling errors when I don't care just means I get failures that I wouldn't otherwise experience. This is a feature? > > It follows that most actively probing network monitoring tools produce > junk numbers. > > > That's why my cellphone says "call is lost" 20 times a day, requiring me > > to redial over and over, instead of keeping the call alive until I press > > hangup. Some network engineer decided that a momentary outage should > > be treated as a permanent one. Foo. > > You're complaining about non-configurable threshold. Well, I said that > there should be API for application to say what the timeout is (or if it > wants to let user to drop circuit manually). Manual timeout works only if > you have a human in the loop - and fails because of human, too. In case of > cell phone _not_ dropping a call because of bad conditions and leaving it > to the human to worry about can easily cause a situation like that link > droppes for a few minutes (common when driving in a hilly countryside) - > by the time it comes back one may forget about it, and miss subsequent > incoming calls. While keeping the other party wondering how long he's > supposed to wait with the receiver glued to his ear. If there are no costs for listening, I see no problem here. If there are, design your application server to be sane (as HTTP servers do, after much pulling of teeth). People seem to figure out pretty well when to hang up the phone: it isn't all that hard. 
And it isn't as though you only have a single line, (or ear), so the analogy with a telephone is broken to begin with. > > > > And, yes, provide the TCP stack with a way to probe the application to > > > check if it is alive and not deadlocked (that being another reason to do > > > app-level keepalives). > > > > > Oh yeah. Put TCP in charge of the application. Sure. And put the > > fox in charge of the chicken coop. > > It exactly the same sense as receiving a block of data from TCP puts it > "in charge" of application. It is totally irrelevant if you get an "are > you still there?" query in the middle of TCP data and have to reply to it, > or if you get "are you still there?" exception and have to reply to it. > The only difference is that in the second case you don't have to obfuscate > your application-level protocol to allow for these queries in different > states and in the middle of half-completed ops. Better yet, you may have > OS to do some useful guessing by default if you don't care to implement > watchdog code in the application. The OS will *always* guess wrong, except by chance. > > (Digressing... frankly, I'm rather amused by the fact that no "modern" OS > provides an exception on reaching a limit to number of operations between > wait states; I guess one knows of usefulness of such things only if he > has some experience with real-life fault-tolerant systems - as I had, at > my father's design bureau; a critical fault in the kind of systems he > designed more often than not entailed some obituaries). > > Of course, one may argue that properly designed applications never get > stuck... well, good luck, and pray for the miraculous salvation when they > do. > > > The purpose of the network is to support applications, not the > > applications to support the network's needs. Perhaps this is because > > network protocol designers want to feel powerful? 
> > Precisely, and that's why network shouldn't force application developers > to make their code any more complicated than it should be. Setting a > timeout once is an awful lot simpler than keeping asynchronous timers > needed for application-level keepalives. Oh, and there's a small issue of > significantly more complicated app. protocol FSMs which have to deal with > asynchronous timing. Add timeout negotiation (the shorter period of both > ends should be used) - know any application-level protocols which do that? > > And, again, most application developers can not implement the keepalives > properly - because they are not network protocol designers. A lot of app. > developers out there don't bother with keepalives at all - a five-minute > talk to a sysadmin in a large company yields heaps of yarns about > stuck backups which didn't time out for a couple of hours and so the whole > night's run was ruined, and so on. Huh? People who build applications protocols are usually at least slightly familiar with underlying transport, though there have been some noticeable counter-examples ;-). > > Transport layer has timers. Applications, in most cases, shouldn't. > Majority of application-level protocols are synchronous anyway; why force > app developers to do asynchronous coding? Huh? Every X application that has blinking cursors (e.g. any text widget), has to have timers. All GUI application toolkits, on any platform I am aware of, have timers of some form (which may or may not be in active use, if the application is truly trivial). So any application that talks to people has this facility available trivially, and by definition, is an event driven program for which dealing with timers is easy. Only batch sorts of applications written without any library support might not have timer facilities, and I'd then ask why you are writing at that low a level? 
Just about any serious network interface library will have such facilities; you won't be coding them yourself unless you are a flat-rock programmer.

As I said, they should be called "keep killing" rather than "keep alives".

Regards,
- Jim

From avg at kotovnik.com Thu Mar 2 15:54:26 2006 From: avg at kotovnik.com (Vadim Antonov) Date: Thu, 2 Mar 2006 15:54:26 -0800 (PST) Subject: [e2e] tcp connection timeout In-Reply-To: <1141329115.28240.315.camel@localhost.localdomain> Message-ID: On Thu, 2 Mar 2006, Jim Gettys wrote:

> And you expect the operating system to have better knowledge than I do, who is familiar with my application?

Yes. The only information the application has which the OS doesn't is the duration of loss of connectivity it is willing to tolerate. You cannot be any more specific than that, for the reasons I already explained. Conversely, the network stack has access to internal state not available to applications (or which requires some non-portable and ugly contortions to obtain, such as the current state of interfaces). OS internals really do have better knowledge than your application.

> If things are stuck behind a large piece of data, in your example, the TCP connection will fail, and I get all the signaling I need, just when I need it; if not, the application will continue to make progress. The keep alive isn't needed to detect a failure or continued progress in this case.

He-he. It may be that the application on the other end is slow in sending that data, not that the network is crappy. How do you tell? Repeat, slowly: there is no way to tell a slow application from a stalled network if the only tool you have is a serialized stream terminated by that application.

> > Finally, TCP stack has RTT estimates. Application doesn't.
>
> Depends on the applications. Some do keep RTT estimates, and I certainly would like the OS to let me get the information that TCP keeps on my behalf (I'm not sure what the current state of the sockets API is in this area).
The current state is "no such API". In any case, this is one piece of many, and it needs interpretation.

> If I care to detect transient failures, my application can/should do this on its own.

There's no boundary between "transient" and "hard" failures... one can always hope that the end-system on the other end may be fixed, even if it takes a year. Having established that, we're back to the point that the only relevant parameter the application can supply is the threshold duration of outage separating a transient fault from a hard one. And, no, applications cannot do that on their own, because they cannot send probe packets out of order and without influencing the TCP stream state within the same connection. So all application-level keepalives-over-TCP are broken by definition - even if their implementors think they're OK.

> The only time I care to have an error reported is when I'm actually trying to use the connection. Signaling errors when I don't care just means I get failures that I wouldn't otherwise experience. This is a feature?

Oh, I see. You didn't understand what I said about the value of knowing that the communication line is down.

> If there are no costs for listening, I see no problem here.

There are always costs. They may be small (like keeping a TCB), they may be large (like keeping 20MB of state in RAM), they may be intolerable - but there are always some, and you have no idea of the costs imposed on the remote end. If the only cost you have is memory for a TCB and keeping a port, then a timeout of 5-6 hours is probably OK (if you're not running a high-volume server - and then you're down to minutes). If you're running real-time telemetry, then you need to know within a second whether you can still read that sensor, or you may blow half of your plant up.

> If there are, design your application server to be sane (as HTTP servers do, after much pulling of teeth).

Sure, heh.
HTTP still doesn't let me know if the server is slow (and so I'd better wait) or if my network croaked (and so I'd better go call my ISP). And so the first thing ISP tech support asks is to make sure the site I can't reach is OK... and most users have no idea what "ping" means... so the tech-support guy wastes his life explaining how to do that to the hapless user, and the user wastes his life reading from the screen and trying to understand what the tech-support guy says.

The lack of proper diagnostics also has nontrivial costs. In fact, one of my clients (a large app vendor) commissioned a quick diagnostic tool from my company specifically to discriminate network faults from application faults: it costs them about two mil a year to field tech support calls which end up being resolved as "client's network problem".

> People seem to figure out pretty well when to hang up the phone:

Tell that to my friend who always wants to recall the entire story of her life every time I talk to her on the phone :)

> the analogy with a telephone is broken to begin with.

Not my analogy, sorry :)

> The OS will *always* guess wrong, except by chance.

Sure. That's why we let OSes run all our applications, on the slim chance that they may guess right what page to load into memory or what block to write on the disk.

> Huh? People who build applications protocols are usually at least slightly familiar with underlying transport, though there have been some noticeable counter-examples ;-).

Like nearly every application-level protocol to date. Oh, HTTP 1.0 seems to be the leader of the "I don't have a clue how the network works" pack.

> > Transport layer has timers. Applications, in most cases, shouldn't. Majority of application-level protocols are synchronous anyway; why force app developers to do asynchronous coding?
>
> Huh? Every X application that has blinking cursors (e.g. any text widget), has to have timers.
> All GUI application toolkits, on any platform I am aware of, have timers of some form (which may or may not be in active use, if the application is truly trivial).

Most network applications out there aren't GUIs. In fact, most real-world network apps simply use a browser in lieu of a GUI. And there are three orders of magnitude more back-ends out there than front-end UI apps.

> So any application that talks to people has this facility available trivially, and by definition, is an event driven program for which dealing with timers is easy. Only batch sorts of applications written without any library support might not have timer facilities, and I'd then ask why you are writing at that low a level? Just about any serious network interface library will have such facilities; you won't be coding them yourself unless you are a flat-rock programmer.

Show me a web browser that does not have glaringly obvious timing bugs, and I may convert to your point of view. As is, I'm sick and tired of nifty widget-encrusted thingies which crash-dump when I try to resize a window without pausing to make sure they're done with whatever they do, etc.

Async programming *is* hard. It produces programs which cannot be regression-tested. For any non-trivial state machine there's an exponential number of combinations of timing conditions. There was an article (by Rob Pike et al, if I'm not mistaken) about how hard it is to get even a 10-line piece of reentrant code right (on the design of spinlocks in Plan 9). The result is that most GUI software and async servers are buggy as hell, simply because they cannot be tested with anything but token coverage of the various timing conditions. And because it is hard to test, most of it is not tested at all.

There's the same problem with OS kernels - they're not tested properly, but kernels have hundreds of millions of users, so there's a fair chance bugs will be discovered relatively quickly.
Besides, kernels have years of history behind them (and the older they are, the more stable their operation is... I wouldn't use Linux instead of BSD for anything critical, because a BSD kernel has a decade more of bug-fixing behind it, etc).

Now if I build a typical application which may have, say, a few thousand users and want to make sure it has adequate quality - I cannot rely on the million monkeys doing testing for me. So I have to build my own regression tests - and those cannot test asynchronous operation. Thus, if I care to deliver something which won't have customers calling me to report spontaneous faults and strange lock-ups for the next twenty years, I'd better do a purely synchronous design. Fortunately, in nearly all cases that's all I need.

(The fact that GUI toolkits have to use timers to blink cursors and do cutesy animations instead of telling the display server to do it merely says that X-windows and MS Windows are crappy designs... even the old alphanumeric display designers had more sense than to send cursor blinks over the wire. Note that with web-based GUIs the actual application code is nearly always purely synchronous, which explains why it is much easier to build a working website than a working desktop GUI, even with all the idiotic browser wars and the resulting compatibility issues. Unfortunately, intellectual "property" considerations often sink good designs (like NeWS) and let the crap flourish.)

But, then, I guess, nobody cares to write quality software anymore. Hack it together, seems to work, ship. Blame any crashes and lock-ups on Microsoft or el-cheapo PCs.

> As I said, they should be called "keep killing" rather than "keep alives".

They should be properly called "2x4 clue bars".
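On the RTT-estimate exchange above ("no such API"): the portable sockets API indeed exposes nothing, but Linux has a stack-specific escape hatch, getsockopt(TCP_INFO). A Linux-only sketch (an editorial illustration, not from the original mail; the option number and the byte offsets for tcpi_state and tcpi_rtt are assumptions taken from Linux's struct tcp_info layout and are not portable):

```python
# Linux-only sketch: read the kernel's smoothed RTT estimate for a TCP
# socket via getsockopt(IPPROTO_TCP, TCP_INFO).  Field offsets below are
# assumptions from Linux's struct tcp_info; this is not a portable API.
import socket
import struct

TCP_INFO = 11  # Linux-specific socket option number

def tcp_rtt_microseconds(sock):
    """Return (state, rtt_us) for a connected TCP socket (Linux only)."""
    buf = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    state = buf[0]                               # tcpi_state (1 = ESTABLISHED)
    (rtt_us,) = struct.unpack_from("I", buf, 68) # tcpi_rtt, smoothed RTT in us
    return state, rtt_us

if __name__ == "__main__":
    # Loopback demo connection.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket()
    cli.connect(srv.getsockname())
    peer, _ = srv.accept()
    print(tcp_rtt_microseconds(cli))
    cli.close(); peer.close(); srv.close()
```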
--vadim From seth.johnson at RealMeasures.dyndns.org Thu Mar 2 16:10:15 2006 From: seth.johnson at RealMeasures.dyndns.org (Seth Johnson) Date: Thu, 02 Mar 2006 19:10:15 -0500 Subject: [e2e] Important Statement to Review for Signing Message-ID: <44078967.2B43001D@RealMeasures.dyndns.org> (With apologies for this not-strictly-technical post. But it seems to me that people concerned with the transport are the ones who would truly appreciate what's on the line with this treaty. Please consider the following and also send it on to appropriate interested parties. If you are a blogger or know clueful bloggers, please try to have it posted in a highly visible forum. -- Seth Johnson) Hello folks, Please review the important joint statement below, related to the WIPO Broadcaster's Treaty, and consider adding your signature if you are an American citizen. Also make sure those you know who should sign are also given the opportunity. Andy Oram has written a good letter to the US Delegation to WIPO on the subject: > http://www.oreillynet.com/pub/a/etel/2006/01/13/the-problem-with-webcasting.html?page=2 CPTech Links on the Treaty: > http://www.cptech.org/ip/wipo/bt/index.html#Coments Electronic Frontier Foundation Links: > http://www.eff.org/IP/WIPO/broadcasting_treaty/ IP Justice Links: > http://www.ipjustice.org/WIPO/broadcasters.shtml Union for the Public Domain Links: > http://www.public-domain.org/?q=node/47 The Latest Draft of the Treaty: > http://www.cptech.org/ip/wipo/sccr12.2rev2.doc A survey of relevant links: > http://www.hyperorg.com/blogger/mtarchive/wipo_and_the_war_against_the_i.html If you choose to sign, please send your name along with an affiliation or appropriate short phrase to attach to your name for identification purposes, to mailto:seth.p.johnson at gmail.com. If your organization endorses the statement, please indicate that separately, so your organization will be listed under that header. Thank you for consideration. 
Seth Johnson
Corresponding Secretary
New Yorkers for Fair Use

Joint Statement to Congress:

Dear (Relevant Congressional Committees) (cc the WIPO Delegation):

Negotiations are currently underway at the World Intellectual Property Organization (WIPO) to develop a treaty giving broadcasters power to suppress currently lawful communications. The United States delegation is also advocating similar rights for "webcasters" through which the authors of new works communicate them to the public. Some provisions of the proposed "Treaty on the Protection of Broadcasting Organizations" would merely update and standardize existing legal norms, but several proposals would require Congress to enact sweeping new laws that give private parties control over information, communication, and even copyrighted works of others, whenever they have broadcast or "webcast" the work.

The novel policy areas addressed by this treaty go beyond ordinary treaty-making that seeks worldwide adherence to U.S. policy. Instead, this initiative invades Congress's prerogative to develop and establish national policy. Indeed, even as Congress is debating how best to protect network neutrality, treaty negotiators are debating how to eliminate it. The threat to personal liberties presented by this treaty is too grave to allow these new policy initiatives to be handed over to an unelected delegation to negotiate with foreign countries, leaving Congress with the sole option whether to acquiesce. When dealing with policies that are related to copyright and communications, Congress's assigned powers and responsibility under Article I, Section 8 of the Constitution become particularly important.

We urge two important steps. First, the new proposed regulations should be published in the Federal Register, with an invitation to the public to comment.
Second, the appropriate House and Senate committees should hold hearings to more fully explore the impact of these novel legal restrictions on commerce, freedom of speech, copyright holders, network neutrality, and communications policy.

Americans currently enjoy substantial freedoms with respect to broadcast and webcast communications. Under the proposed treaty, the existing options available to commercial enterprises and entrepreneurs as well as the general public to communicate news, information and entertainment would be limited by a new private gatekeeper who adds nothing of value to the content. Communications policies currently under discussion at the FCC would be impacted. Individuals and small businesses would be limited in their freedom of speech. Copyright owners would find their freedom to license their works limited by whether the work had been broadcast or webcast. The principle of network neutrality, already the subject of congressional hearings, would be all but destroyed.

As able as the staff of the United States Patent and Trademark Office and the Library of Congress may be, it was never intended that they alone should stake out the United States national policy to be promoted before an unelected international body in entirely new areas abridging civil liberties. Congress should be the first to establish America's national policies in this new area so that our WIPO delegation will have sufficient guidance to achieve legitimate objectives without impairing Constitutional principles such as freedom of speech and assembly, without impairing the value of copyrights, and without granting to private parties arbitrary power to suppress existing freedoms or burden new technologies.
We cannot afford for Congress to wait until the Senate is presented with a fully formed treaty calling for the enactment of domestic law at odds with fundamental American liberties, foreign to American and international legal norms, and that would bring to a close many of the benefits of widespread personal computing and the end-to-end connectivity brought by the Internet. We ask Congress to use its authority now to shape these important communications policies impacting constitutionally based copyright laws and First Amendment liberties.

Signed,

(Affiliations for individual signers are for identification only. Endorsing organizations are listed separately.)

William Abernathy, Independent Technical Editor Scottie D. Arnett, President, Info-Ed, Inc. Jonathan Askin, Pulver.com John Bachir, Ibiblio.org Tom Barger, DMusic.com Fred Benenson, FreeCulture.org Daniel Berninger, VON Coalition Eric Blossom, GNU Radio Joshua Breitbart, Media Tank Dave Burstein, Editor, DSL Prime Michael Calabrese, Vice President, New America Foundation Dave A. Chakrabarti, Community Technologist, CTCNet Chicago Steven Cherry, Senior Associate Editor, IEEE Spectrum Steven Clift, Publicus.Net Roland J. Cole, J.D., Ph.D., Executive Director, Software Patent Institute Gordon Cook, Editor, Publisher and Owner since 1992 of the COOK Report on Internet Protocol Walt Crawford, Editor/Publisher, Cites & Insights Cynthia H. de Lorenzi, Washington Bureau for ISP Advocacy Cory Doctorow, Author, journalist, Fulbright Chair, EFF Fellow Marshall Eubanks, CEO, AmericaFree.tv Harold Feld, Senior Vice President, Media Access Project Miles R. Fidelman, President, The Center for Civic Networking Richard Forno (bio: http://www.infowarrior.org/rick.html) Laura N. Gasaway, Professor of Law, University of North Carolina Paul Gherman, University Librarian, Vanderbilt University Shubha Ghosh, Professor of Law, Southern Methodist University Paul Ginsparg, Cornell University Fred R.
Goldstein, Ionary Consulting Robin Gross, IP Justice Michael Gurstein, New Jersey Institute for Technology Jon Hall, President, Linux International Chuck Hamaker, Atkins Library, University of North Carolina - Charlotte Charles M. Hannum, consultant, founder of The NetBSD Project Dewayne Hendricks, CEO, Dandin Group David R Hughes, CEO, Old Colorado City Communications, 1993 EFF Pioneer Award Paul Hyland, Computer Professionals for Social Responsibility David S. Isenberg, Ph.D., Founder & CEO, isen.com, LLC Seth Johnson, New Yorkers for Fair Use Paul Jones, School of Information and Library Science, University of North Carolina - Chapel Hill Peter D. Junger, Professor of Law Emeritus, Case Western Reserve University Brewster Kahle, Internet Archive Jerry Kang, Professor of Law, UCLA School of Law Dennis S. Karjala, Jack E. Brown Professor of Law, Arizona State University Dan Krimm, Independent Musician Michael J. Kurtz, Astronomer and Computer Scientist, Harvard-Smithsonian Center for Astrophysics Michael Maranda, President, Association For Community Networking Kevin Marks, mediAgora Anthony McCann, www.beyondthecommons.com Sascha Meinrath, Champaign-Urbana Community Wireless Network, Free Press Edmund Mierzwinski, Consumer Program Director, U.S. Public Interest Research Group Lee N. Miller, Ph.D., Editor Emeritus, Ecological Society of America John Mitchell, InteractionLaw Tom Moritz, Chief, Knowledge Management, Getty Research Institute Andrew Odlyzko, University of Minnesota Ken Olthoff, Advisory Board, EFF Austin Andy Oram, Editor, O'Reilly Media Bruce Perens (bio at http://perens.com/Bio.html) Ian Peter, Senior Partner, Ian Peter and Associates Pty Ltd Malla Pollack, Law Professor, American Justice School of Law Jeff Pulver, Pulver.com Tom Raftery, PodLeaders.com David P. Reed, contributor to original Internet Protocol design Jerome H. Reichman, Bunyan S.
Womble Professor of Law Lawrence Rosen, Rosenlaw & Einschlag; Stanford University Lecturer in Law Bruce Schneier, security technologist and CTO, Counterpane David J. Smith, Specialist of Distributed Content Distribution and Protocols, Michigan State University Michael E. Smith, LXNY Richard Stallman, President, Free Software Foundation Fred Stutzman, Ph.D. Student, UNC Chapel Hill Peter Suber, Open Access Project Director, Public Knowledge Jay Sulzberger, New Yorkers for Fair Use Aaron Swartz, infogami Stephen H. Unger, Professor, Computer Science Department, Columbia University Eric F. Van de Velde, Ph.D., Director, Library Information Technology, California Institute of Technology Tom Vogt, independent computer security researcher David Weinberger, Harvard Berkman Center Frannie Wellings, Free Press Adam Werbach, President, Ironweed Films Stephen Wolff, igewolff.net Brett Wynkoop, Wynn Data Ltd. John Young, Cryptome.org Endorsing Organizations: Association For Community Networking (AFCN) The Center for Civic Networking Computer Professionals for Social Responsibility Contact Communications The COOK Report on Internet Protocol Cryptome.org Champaign-Urbana Community Wireless Network Dandin Group FreeCulture.org Free Press Free Software Foundation Illinois Community Technology Coalition Internet Archive Ionary Consulting IP Justice isen.com, LLC mediAgora New Yorkers for Fair Use Old Colorado City Communications Podleaders.com Pulver.com Rosenlaw & Einschlag U.S. 
Public Interest Research Group
Washington Bureau for ISP Advocacy
Wyoming.com

--

[separate one-page attachment]

WHY PUBLIC SCRUTINY OF THE PROPOSED BROADCASTER TREATY IS NEEDED

If Congress were to hold public hearings, or if the US delegation to WIPO were to publish the current proposal for public review and comment, myriad voices from various segments of society could come forward to show that the proposed Broadcaster's Treaty:

* Is written to look like existing copyright treaties, but it is not based on the constitutional requirements for copyright protection, such as originality, and in fact is antagonistic to copyrights

* Is promoted as a way of standardizing existing signal protection, but in fact extends well beyond signal protection by giving broadcasters and webcasters a monopoly, for 50 years, over the content created by others the moment it is broadcast or transmitted over the Internet

* Gives broadcasters greater rights than producers of original works

* Accords exclusive rights to non-authors in direct violation of fundamental rights guaranteed by the Constitution

* Attacks the principle of network neutrality which serves as the basis by which the Internet has fostered a profound expansion in human capacities and innovation

* Grants privileges that extend beyond broadcast signals to actually give broadcasters control over works conveyed within a broadcast -- including copyrighted and public domain works

* Blocks fair use and other copyright provisions that enable the public to make use of and benefit from published information

* Chills freedom of expression by extending unwarranted controls over broadcast publication

* Benefits broadcasters at the expense of the web, the public and future innovation

* Creates a de facto tax on copyrights, freedom of speech, communications and technological progress, all for the benefit of broadcasters and webcasters who have added nothing to deserve such a windfall.
From 3dfx232 at sohu.com Thu Mar 2 17:12:44 2006 From: 3dfx232 at sohu.com (shaohe) Date: Fri, 3 Mar 2006 09:12:44 +0800 (CST) Subject: [e2e] A simple question about handling the dump files Message-ID: <18071830.1141348364186.JavaMail.postfix@mx57.mail.sohu.com>

Thanks for Zhani Mohamed Faten's reply. Actually, what I intend to do is quite simple: I wish to examine the prediction of network traffic at very short time scales, e.g. multi-second intervals.

Several recent works have examined the predictability of network traffic. I consider the prediction problem based on measurement, which may be inaccurate but needs less overhead. Further, the measurement results describe the past, while the prediction value concerns the future. In a word, my concern is whether I can obtain a relatively accurate prediction of the future based on coarse measurements of the past.

So I need the actual time-averaged bandwidth to validate our prediction algorithm. Unfortunately, handling the dump file has become a practical problem.
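The time-averaged bandwidth described here is straightforward to compute once a trace has been reduced to (timestamp, size) records. A minimal sketch of binning into fixed intervals (the helper name and record format are illustrative, not taken from any of the tools discussed on the list):

```python
def bandwidth_per_interval(packets, interval=1.0):
    """Average bandwidth (bytes/sec) in fixed time bins.

    packets: iterable of (timestamp_seconds, size_bytes) pairs, as
    parsed from a trace file; interval: bin width in seconds.
    """
    pkts = list(packets)
    if not pkts:
        return {}
    t0 = min(ts for ts, _ in pkts)          # align bins to first packet
    bins = {}
    for ts, size in pkts:
        b = int((ts - t0) // interval)      # index of this packet's bin
        bins[b] = bins.get(b, 0) + size
    # byte count per bin -> average rate over the bin
    return {b: total / interval for b, total in sorted(bins.items())}
```

Feeding it, say, one-second bins of a parsed trace yields the per-interval averages a prediction algorithm can be validated against.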

In addition, the format of the dump file is still unknown to me. The NS trace file is in text format, with one line per record, which is very convenient.

Is there some material about the conversion from a dump file to a text file?
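On the conversion question: tcpdump-style dump files are usually in the classic libpcap format, which is simple enough to read directly - a 24-byte global header followed by a 16-byte record header per packet. A sketch of turning such a file into one text line per record (assuming the common 0xa1b2c3d4 magic; tools like Ethereal handle many more variants):

```python
import struct

def pcap_to_text(path):
    """Yield one 'timestamp wire_length' text line per packet record."""
    with open(path, "rb") as f:
        hdr = f.read(24)                      # pcap global header
        magic = struct.unpack("<I", hdr[:4])[0]
        endian = "<" if magic == 0xA1B2C3D4 else ">"
        while True:
            rec = f.read(16)                  # per-packet record header
            if len(rec) < 16:
                break
            ts_sec, ts_usec, caplen, wirelen = struct.unpack(
                endian + "IIII", rec)
            f.seek(caplen, 1)                 # skip the captured bytes
            yield "%d.%06d %d" % (ts_sec, ts_usec, wirelen)
```

Each yielded line carries the packet's timestamp and on-the-wire length, which is all a bandwidth computation needs.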

Thanks!

Shaohe Lv -------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060303/18161004/attachment-0001.html From 3dfx232 at sohu.com Thu Mar 2 17:17:41 2006 From: 3dfx232 at sohu.com (shaohe) Date: Fri, 3 Mar 2006 09:17:41 +0800 (CST) Subject: [e2e] A simple question about handling the dump files Message-ID: <31936489.1141348661111.JavaMail.postfix@mx57.mail.sohu.com>

Thanks, sampad mishra. I saw your reply just now and will pay attention to Ethereal.

Shaohe lv

Mar. 3 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060303/34ee1c5f/attachment.html From touch at ISI.EDU Fri Mar 3 07:54:13 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 03 Mar 2006 07:54:13 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <440866A5.9090006@isi.edu>

Vadim Antonov wrote:
...
>> If there are no costs for listening, I see no problem here.
>
> There are always costs. They may be small (like keeping a TCB), they
> may be large (like keeping 20Mb of state in RAM), they may be
> intolerable - but there are always some, and you do not have any idea
> of the costs imposed on the remote end.

Please explain the 20MB case. AFAICT, that only happens when the receiver window is 20MB, AND:

a) the received data has a hole in it, notably at least at the beginning, and 20MB of other stuff backed up waiting for retransmission

   in that case, TCP *WILL* time out due to a number of failed ACK retransmissions

b) the received data does NOT have a hole in it, but the application has not yet retrieved the data

   in this case, if TCP were to 'timeout', the application would be interrupted unnecessarily

Based on the above, *IF* you have 20MB associated with a connection, TCP should not be timing out on that connection. I.e., the only case where it *might* be appropriate to time out due to idleness is the case where the receive buffer is empty. It seems feasible to have TCP release those buffers in that case, at which point the only space an idle TCP connection should hold is the TCB, which isn't that large. If a system cannot hold a large number of TCBs, then applications should be cleaning them up themselves. Or a background OS process can go around doing this. But this isn't TCP's job.
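The application-level cleanup suggested here - the application, not TCP, deciding when a connection is dead - can be as simple as a last-activity table swept on the application's own schedule. A hypothetical sketch (the IdleReaper helper is invented for illustration):

```python
import time

class IdleReaper:
    """Track per-connection activity; report which connections to close."""

    def __init__(self, max_idle_seconds):
        self.max_idle = max_idle_seconds
        self.last_seen = {}                 # connection id -> last activity

    def touch(self, conn, now=None):
        """Record activity; call on every send/recv."""
        self.last_seen[conn] = time.monotonic() if now is None else now

    def reap(self, now=None):
        """Return connections idle too long; caller CLOSEs/ABORTs them."""
        now = time.monotonic() if now is None else now
        dead = [c for c, t in self.last_seen.items()
                if now - t > self.max_idle]
        for c in dead:
            del self.last_seen[c]
        return dead
```

A server would call touch() on every send or receive and CLOSE or ABORT whatever a periodic reap() sweep returns, with a policy of its own choosing rather than TCP's.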
Joe

From zhani_med_faten at yahoo.fr Thu Mar 2 18:24:51 2006 From: zhani_med_faten at yahoo.fr (Zhani Mohamed Faten) Date: Fri, 3 Mar 2006 03:24:51 +0100 (CET) Subject: [e2e] A simple question about handling the dump files In-Reply-To: <18071830.1141348364186.JavaMail.postfix@mx57.mail.sohu.com> Message-ID: <20060303022451.3097.qmail@web25706.mail.ukl.yahoo.com>

Hi,

Here is the tool I use to process trace files: http://research.wand.net.nz/software/libtrace.php. It runs under Linux, but it's the best ;) You can use the tracerstats tool to get the bandwidth at the granularity you specify. If you need processed files (such as bandwidth over actual time), I have a lot of them, especially for real data. For ns, you can use an awk script to process the resulting simulation file and calculate the throughput of data entering a node.

I'm working on traffic prediction too; don't hesitate to contact me with any questions. I tried to send my first article directly to your mail, but your email server rejected it; you can give me another address if you have one. (I don't know if it's allowed to attach files on the mailing list.)

Regards,
Zhani Mohamed Faten

shaohe <3dfx232 at sohu.com> wrote:

Thanks for Zhani Mohamed Faten's reply. Actually, what I intend to do is quite simple: I wish to examine the prediction of network traffic at very short time scales, e.g. multi-second intervals. Several recent works have examined the predictability of network traffic. I consider the prediction problem based on measurement, which may be inaccurate but needs less overhead. Further, the measurement results describe the past, while the prediction value concerns the future. In a word, my concern is whether I can obtain a relatively accurate prediction of the future based on coarse measurements of the past. So I need the actual time-averaged bandwidth to validate our prediction algorithm. Unfortunately, handling the dump file has become a practical problem. In addition, the format of the dump file is still unknown to me.
The NS trace file is in text format and one line in the file corresponds to a record, which is very convenient. Is there some material about the conversion from a dump file to a text file?

Thanks!

Shaohe Lv

-------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060303/dc4e0fe6/attachment.html

From mallman at icir.org Fri Mar 3 05:31:40 2006 From: mallman at icir.org (Mark Allman) Date: Fri, 03 Mar 2006 08:31:40 -0500 Subject: [e2e] call for participation: PAM 2006 Message-ID: <20060303133140.32CCD3CFCDE@lawyers.icir.org>

An embedded and charset-unspecified text was scrubbed... Name: not available Url: http://www.postel.org/pipermail/end2end-interest/attachments/20060303/ec59ba36/attachment.ksh -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 185 bytes Desc: not available Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060303/ec59ba36/attachment.bin

From shrin.krishnan at gmail.com Thu Mar 2 09:03:17 2006 From: shrin.krishnan at gmail.com (Srinivas Krishnan) Date: Thu, 2 Mar 2006 12:03:17 -0500 Subject: [e2e] Application Layer Video Adaptation Message-ID:

Recently there has been discussion on the list over using the application layer more effectively to boost network performance. The work Ketan Mayer-Patel and I are doing at UNC is based on using application conditions to drive the layers below for video streaming, i.e. which packets to send, how long to wait, etc.
The interest behind this research stems from a teleimmersion system for Remote Medical Consultation [http://www.cs.unc.edu/Research/nlm/] being built at UNC. At any given time a set of 8 cameras is looking at a given subject and transmitting images back to the server. The server then has to send these images/video on demand to a client. The cameras are also controlled remotely by the user and his region of interest: if the user is interested in the lower quadrant, the camera closest to that quadrant should be the one sending the images.

However, if we use a traditional, fully reliable network protocol like TCP, it will send the video frames to the user quite unintelligently. TCP sends each frame regardless of any temporal and spatial dependencies that might be present in the video stream. To overcome this problem we have developed an adaptation layer that sits between the application and network layers and drives which frame/packet to send next. The original algorithm was developed by David Gotz here at UNC for providing multicast adaptation at the client side [http://www.cs.unc.edu/~kmp/publications/mm2004_gotz/mm2004_gotz.pdf]. We have taken this algorithm and changed it to perform server-side video adaptation, i.e. the server decides, based on user input, which packets are best suited to the user's current needs.

The algorithm is based on a graph utility space, and all the video encoders in the application layer add their video frames as nodes to this graph. In the teleimmersion space we have 8 encoders producing video streams at 3 different resolutions. This results in a 3-dimensional (Time, Camera, Resolution) utility space over which we need to make adaptation decisions. The edges between nodes represent intra-frame dependencies (I-P-P-P-I, or lower to higher resolution). Each node also has an ID that encodes Time*Camera*Resolution.
Finally, each node can be in one of 3 states: Idle (node just added), Available (in the process of being sent; previous dependencies have been resolved) and Resolved (information sent over). The adaptation decision is then based on a Utility-Cost Ratio, where utility is the Euclidean distance from a given reference frame closest to the point of interest to the frame being sent, and cost is simply the frame size.

Regarding DCCP: the UDP-TFRC protocol I described in an earlier post works quite similarly to DCCP but has some added features. A notion of smart reliability has been added: in the graph model I described, we add an extra node for each existing node and associate with it the cost of retransmission. So each frame has a set number of packets associated with it, and if we encounter a loss we add this loss as a cost to the new node. Nodes which have been successfully sent have zero cost and can be used as a reference frame for sending any new frame. For example, say we experience a packet loss; by the time we get the loss feedback, an I frame has been produced, so our adaptation decision will send the I frame and never send the old lost packet. Or say our region of interest changes; we will then use a new reference frame, send those packets, and forget about the loss. Also, we have the concept of a time bound, i.e. we only look at nodes up to, say, 30 frames back; the rest are not used in the computation, which takes the 500 ms latency into account.

The best way to describe the adaptation process is that it is like late/dynamic binding: only the appropriate packets are sent, and the decision is made on the fly in real time.

-- ********************************* Srinivas Krishnan Graduate Student Dept.
Of Computer Science UNC-Chapel Hill ++++++++++++++++++++++++++++++++ Phone: **Office: (919) 962-1920 **Cell: (585)295-3359 Email: krishnan at cs.unc.edu, shrin.krishnan at gmail.com ++++++++++++++++++++++++ Web: www.cs.unc.edu/~krishnan ************************************** From avg at kotovnik.com Fri Mar 3 11:29:45 2006 From: avg at kotovnik.com (Vadim Antonov) Date: Fri, 3 Mar 2006 11:29:45 -0800 (PST) Subject: [e2e] tcp connection timeout In-Reply-To: <440866A5.9090006@isi.edu> Message-ID: On Fri, 3 Mar 2006, Joe Touch wrote: > Please explain the 20MB case. > > AFAICT, that only happens when the receiver window is 20MB, AND: TCP state is not the only state kept around when a connection is open. I thought that it was obvious that I meant the user-space state. --vadim From touch at ISI.EDU Fri Mar 3 11:30:21 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 03 Mar 2006 11:30:21 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <4408994D.30900@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Vadim Antonov wrote: > On Fri, 3 Mar 2006, Joe Touch wrote: > >> Please explain the 20MB case. >> >> AFAICT, that only happens when the receiver window is 20MB, AND: > > TCP state is not the only state kept around when a connection is open. > > I thought that it was obvious that I meant the user-space state. > > --vadim Wouldn't that be the user-space (application-layer)'s decision then? 
Joe -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFECJlNE5f5cImnZrsRAiLAAKC3dDq+cppHIIUXPMOZqQuSMiKOcACgxC2d vzp6/+NoDZxBzYSA/ObYTa0= =v604 -----END PGP SIGNATURE----- From avg at kotovnik.com Fri Mar 3 12:02:40 2006 From: avg at kotovnik.com (Vadim Antonov) Date: Fri, 3 Mar 2006 12:02:40 -0800 (PST) Subject: [e2e] tcp connection timeout In-Reply-To: <4408994D.30900@isi.edu> Message-ID: On Fri, 3 Mar 2006, Joe Touch wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > > Vadim Antonov wrote: > > On Fri, 3 Mar 2006, Joe Touch wrote: > > > >> Please explain the 20MB case. > >> > >> AFAICT, that only happens when the receiver window is 20MB, AND: > > > > TCP state is not the only state kept around when a connection is open. > > > > I thought that it was obvious that I meant the user-space state. > > > > --vadim > > Wouldn't that be the user-space (application-layer)'s decision then? It does not matter whose decision it is as long as you have a way of purging it for dead connections. It was merely a counter-argument to the assertion that dead connections are "cheap". There are numerous cases when they are not. I think it is always a good idea to look at the whole picture, because concentrating on one layer is bound to mislead. --vadim From touch at ISI.EDU Fri Mar 3 12:46:46 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 03 Mar 2006 12:46:46 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <4408AB36.3000307@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Vadim Antonov wrote: > On Fri, 3 Mar 2006, Joe Touch wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> >> >> Vadim Antonov wrote: >>> On Fri, 3 Mar 2006, Joe Touch wrote: >>> >>>> Please explain the 20MB case. >>>> >>>> AFAICT, that only happens when the receiver window is 20MB, AND: >>> TCP state is not the only state kept around when a connection is open. 
>>> >>> I thought that it was obvious that I meant the user-space state. >>> >>> --vadim >> Wouldn't that be the user-space (application-layer)'s decision then? > > It does not matter whose decision it is as long as you have a way of > purging it for dead connections. It was merely a counter-argument to the > assertion that dead connections are "cheap". There are numerous cases when > they are not. We HAVE a way for applications to purge resources for connections THEY think are dead - CLOSE or ABORT. > I think it is always a good idea to look at the whole picture, because > concentrating on one layer is bound to mislead. Indeed - I (and others) are suggesting TCP isn't the place to focus in this case. Joe -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFECKs2E5f5cImnZrsRApsqAKCz8i/da3S8Rd5XCpR1DUcV3w8vUACg6fUW XqnS3By02YcJqQL7lC3rc18= =26x+ -----END PGP SIGNATURE----- From touch at ISI.EDU Fri Mar 3 13:13:13 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 03 Mar 2006 13:13:13 -0800 Subject: [e2e] Important Statement to Review for Signing -- DO NOT POST REPLIES In-Reply-To: <44078967.2B43001D@RealMeasures.dyndns.org> References: <44078967.2B43001D@RealMeasures.dyndns.org> Message-ID: <4408B169.9020900@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Seth Johnson wrote: > (With apologies for this not-strictly-technical post. ... > -- Seth Johnson) Hi, all, This issue is NOT APPROPRIATE for this mailing list. DO NOT POST replies regarding this to this list. *DO* review the list policy online if you have any questions. 
Joe -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFECLFpE5f5cImnZrsRAu8gAJ4wHalNsUVK/dee7Nm9pYMOz/oJ3gCdHADi ZSRRuEkAtxprImJ6LUiU6jA= =G8vM -----END PGP SIGNATURE----- From david.borman at windriver.com Fri Mar 3 14:29:04 2006 From: david.borman at windriver.com (David Borman) Date: Fri, 3 Mar 2006 16:29:04 -0600 Subject: [e2e] tcp connection timeout In-Reply-To: <4408AB36.3000307@isi.edu> References: <4408AB36.3000307@isi.edu> Message-ID: On Mar 3, 2006, at 2:46 PM, Joe Touch wrote: > Vadim Antonov wrote: >> On Fri, 3 Mar 2006, Joe Touch wrote: >> >> >> It does not matter whose decision it is as long as you have a way of >> purging it for dead connections. It was merely a counter-argument >> to the >> assertion that dead connections are "cheap". There are numerous >> cases when >> they are not. > > We HAVE a way for applications to purge resources for connections THEY > think are dead - CLOSE or ABORT. Well, of course. But that isn't the issue, the issue is how do they decide that the connection is dead? Keep-alives are one mechanism. > >> I think it is always a good idea to look at the whole picture, >> because >> concentrating on one layer is bound to mislead. > > Indeed - I (and others) are suggesting TCP isn't the place to focus in > this case. > > Joe TCP doesn't provide a mechanism for the application to probe to see if the other side of the connection is still alive, other than by actually sending data and waiting for the connection to time out due to retransmissions, or read the connection for a response that never arrives. Whatever the mechanism, there are 3 responses you can get: an ack (not just a TCP ACK), and you know the connection is still alive; a TCP RST and you know the connection has gone away, or nothing, and you have to decide how to deal with that. 
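For applications that do want the probe, the RFC 1122 behavior is a per-socket opt-in. A minimal sketch of enabling it (the tuning options such as TCP_KEEPIDLE are Linux-specific, hence the hasattr guards; the defaults below mirror RFC 1122's 2-hour minimum):

```python
import socket

def enable_keepalive(sock, idle=7200, interval=75, count=9):
    """Opt a TCP socket into keep-alives (off by default, per RFC 1122)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The tuning knobs are OS-specific; RFC 1122 requires the idle
    # default to be at least 2 hours (7200 s).
    if hasattr(socket, "TCP_KEEPIDLE"):    # seconds idle before first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):   # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):     # failed probes before dropping
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```

SO_KEEPALIVE itself is portable; only the interval knobs vary by OS, which is why they are guarded.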
It is dropping the connection in the third case that causes angst, when the lack of response is due to an outage in the network, not due to the other side having gone away. But that has to be dealt with no matter how you test for connectivity, be it a keep-alive, or a Telnet AYT command, or whatever. Once the application actually tries to send data, most TCPs will eventually time out the connection if no response is received, so an application-level probe during an outage will probably result in the connection being dropped, just like a lack of response to the keep-alives.

Keep-alives provide a specific test for liveliness, and if that works for the server application, then use it. If it doesn't, then don't. RFC 1122 addressed the more egregious issues with keep-alives by saying that they are off by default, that you have to wait at least 2 hours before sending them, and that lack of response to any single keep-alive is not enough of a reason to drop the connection.

-David Borman

From 3dfx232 at sohu.com Fri Mar 3 16:50:46 2006 From: 3dfx232 at sohu.com (shaohe) Date: Sat, 4 Mar 2006 08:50:46 +0800 (CST) Subject: [e2e] A simple question about handling the dump files Message-ID: <25619834.1141433446145.JavaMail.postfix@mx57.mail.sohu.com>

Thanks for all the keen replies; the tools such as Netdude, libtrace and Ethereal are suitable. Thanks very much!

Netdude and libtrace run on Linux/Unix, and Ethereal runs on Linux/Unix/Windows.

All of this helps me greatly, and I should say thanks again to my friendly and enthusiastic helpers Zhani Mohamed Faten, Christian Kreibich and Sampad Mishra

Shaohe Lv

-------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060304/b55d1563/attachment.html From avg at kotovnik.com Fri Mar 3 17:23:07 2006 From: avg at kotovnik.com (Vadim Antonov) Date: Fri, 3 Mar 2006 17:23:07 -0800 (PST) Subject: [e2e] tcp connection timeout In-Reply-To: <4408AB36.3000307@isi.edu> Message-ID: On Fri, 3 Mar 2006, Joe Touch wrote: > Indeed - I (and others) are suggesting TCP isn't the place to focus in > this case. Joe, you're getting stuck on "repeat", sorry. I've heard that assertion two dozen times already, repeating it does not convey any new information. I'm not interested in hearing opinions (not even in opinions of a large group of people with impressive credentials) - I'm interested in hearing rational argumentation. Since I haven't heard any logically meaningful counter-arguments showing why the offered reasons to implement keepalives in the transport layer rather than at application layer are not good (and neither did I hear anything more specific than "application knows better" - no scenarios, no demonstration of inability to do something in the alternative way - in favour of app-level implementation of keepalives) I assume you don't have any. This makes further discussion pointless. I'm sorry that I failed to explain my position adequately, and my apologies to those who had to read the long rants. --vadim From touch at ISI.EDU Fri Mar 3 17:35:38 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 03 Mar 2006 17:35:38 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: Message-ID: <4408EEEA.2060406@isi.edu> Vadim Antonov wrote: > On Fri, 3 Mar 2006, Joe Touch wrote: > >> Indeed - I (and others) are suggesting TCP isn't the place to focus in >> this case. > > Joe, you're getting stuck on "repeat", sorry. I've heard that assertion > two dozen times already, repeating it does not convey any new information. 
> I'm not interested in hearing opinions (not even in opinions of a large > group of people with impressive credentials) - I'm interested in hearing > rational argumentation. You're interested in hearing a solution that requires modification to TCP, despite reasons why this could be solved appropriately at either the application layer (app keepalives) or a link layer (e.g., PPP). I.e., you want to rational_*ize* an excuse to play with TCP. If that's your goal, please stop asking for validation and just do it. Joe From eblanton at cs.ohiou.edu Sat Mar 4 08:37:47 2006 From: eblanton at cs.ohiou.edu (Ethan Blanton) Date: Sat, 4 Mar 2006 11:37:47 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: References: <4408AB36.3000307@isi.edu> Message-ID: <20060304163746.GA22560@localhost.localdomain> Vadim Antonov spake unto us the following wisdom: > Since I haven't heard any logically meaningful counter-arguments showing > why the offered reasons to implement keepalives in the transport layer > rather than at application layer are not good (and neither did I hear > anything more specific than "application knows better" - no scenarios, no > demonstration of inability to do something in the alternative way - in > favour of app-level implementation of keepalives) I assume you don't have > any. I think there is some dissonance here in that you are approaching this problem from a different direction from much of the community; for us, the question is not "is there a reason this should not be in the transport layer", but "is there a reason this SHOULD be in the transport layer". In this case, for many of us the answer is that there is no compelling reason to push any additional complexity into the transport layer; as previously mentioned, a non-negligible portion of the community feels that the complexity which is already there (viz. Berkeley keep-alives) is already too much.
So, the bottom line for some of us is that since keepalives are perfectly suited to application space, and in fact are quite application-dependent, they don't _belong_ in the transport layer. It doesn't matter that they could conceivably be put there -- because there's no reason HTTP *couldn't* be implemented at the transport layer, or SMTP, or ssh, so clearly keepalives (or whatever feature) could be done at the transport layer -- it just wouldn't be a good idea to do so. > This makes further discussion pointless. I'm sorry that I failed to > explain my position adequately, and my apologies to those who had to read > the long rants. I think your position was clear, and I think the opposing position was clear; the problem is that the two positions are coming from a different guiding principle, and the "betterness" of each position lies not in technical correctness, but in design philosophy and elegance. Ethan -- The laws that forbid the carrying of arms are laws [that have no remedy for evils]. They disarm only those who are neither inclined nor determined to commit crimes. -- Cesare Beccaria, "On Crimes and Punishments", 1764 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: Digital signature Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060304/a88dff43/attachment.bin From saikat at cs.cornell.edu Sat Mar 4 15:13:02 2006 From: saikat at cs.cornell.edu (Saikat Guha) Date: Sat, 04 Mar 2006 18:13:02 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: <20060304163746.GA22560@localhost.localdomain> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> Message-ID: <1141513982.517.10.camel@localhost.localdomain> On Sat, 2006-03-04 at 11:37 -0500, Ethan Blanton wrote: > there's no reason HTTP *couldn't* be implemented at the transport > layer, or SMTP, or ssh, so clearly keepalives (or whatever feature) > could be done at the transport layer -- it just wouldn't be a good > idea to do so. What makes it a "good idea" to implement something at the transport level or application level? Clearly all of TCP itself could be app. level, but I suspect the reason to put it in transport has more to do with: a) How complex a particular feature is b) How many applications require it Congestion control, reliability etc. are complex enough and widely used enough to warrant their inclusion into the transport. HTTP, while complex, isn't used nearly as widely and is kept out (although there are calls for a bulk-transfer service in the transport layer by Dave Andersen et al.) The case for Keep-alives isn't clear. It is trivially simple. Long ago, it was rarely used (connectivity mattered only when data needed to be sent). Now with NATs and firewalls, establishing connectivity has a significant overhead, idle time matters (both as state on middle-boxes, and as the subsequent high-overhead connection setup later). Yes, long ago, there was no reason to put keep-alives into the transport layer because applications rarely used it. 
But with more applications trying to evolve into the NAT/firewalled world, today the situation may have changed. cheers, -- Saikat -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060304/bc4d826f/attachment.bin From faber at ISI.EDU Mon Mar 6 11:08:09 2006 From: faber at ISI.EDU (Ted Faber) Date: Mon, 6 Mar 2006 11:08:09 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <1141513982.517.10.camel@localhost.localdomain> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> Message-ID: <20060306190808.GK69460@hut.isi.edu> On Sat, Mar 04, 2006 at 06:13:02PM -0500, Saikat Guha wrote: > On Sat, 2006-03-04 at 11:37 -0500, Ethan Blanton wrote: > > there's no reason HTTP *couldn't* be implemented at the transport > > layer, or SMTP, or ssh, so clearly keepalives (or whatever feature) > > could be done at the transport layer -- it just wouldn't be a good > > idea to do so. > > What makes it a "good idea" to implement something at the transport > level or application level? In the case of keepalives, the issue is that transport-level keepalive packets are rarely what an application needs; one usually needs application-level keepalives. Sometimes you can make a thin app layer that's just TCP, but let's ignore that for the moment. It's rarely valuable to know that the transport connection is up. What you want to know is if the other end of the application (web server, TCP-NFS server, realplayer streamer) is still talking to you. A deadlocked web server still ACKs TCP segments. A working TCP connection is necessary but not sufficient, so you need an application keepalive. End-to-end 101.
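The argument that a live TCP connection says nothing about a live application is usually addressed with an application-level heartbeat. A minimal sketch follows (the class and its names are hypothetical, not from the thread); it declares the peer dead only after several probe intervals pass with no application-level response, never after a single miss:

```python
import time

class HeartbeatMonitor:
    """Tracks application-level ping/pong liveness for one peer.

    The peer is declared dead only after `max_missed` consecutive
    probe intervals pass with no application-level response -- a
    working TCP connection alone never counts as a response.
    """

    def __init__(self, interval=30.0, max_missed=3, clock=time.monotonic):
        self.interval = interval
        self.max_missed = max_missed
        self.clock = clock
        self.last_response = clock()

    def on_response(self):
        # Call whenever the peer answers a ping *at the application level*.
        self.last_response = self.clock()

    def peer_is_dead(self):
        silent_for = self.clock() - self.last_response
        return silent_for > self.interval * self.max_missed

# Demo with a fake clock so the example runs instantly.
now = [0.0]
mon = HeartbeatMonitor(interval=1.0, max_missed=3, clock=lambda: now[0])
now[0] = 2.9              # two missed intervals: still within tolerance
ok_at_2_9 = not mon.peer_is_dead()
now[0] = 3.1              # more than three intervals of silence
dead_at_3_1 = mon.peer_is_dead()
mon.on_response()         # a late application-level pong revives the peer
alive_again = not mon.peer_is_dead()
print(ok_at_2_9, dead_at_3_1, alive_again)
```

Because the monitor only sees application-level responses, a deadlocked server that still ACKs TCP segments is correctly reported as dead.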
As with other implementations at lower layers, everyone believes that you can implement the keepalive function lower. The questions you mentioned are close to the right ones, but I assert that they include: How much does it help? How much does it hurt? I think the first answer is "not much". If your app is properly designed, a keepalive at the app layer tells you what you wanted to know. TCP timeouts will also tell you that the app connection has failed, but the absence of a TCP timeout is not conclusive. A false positive (TCP checking the connection when the app doesn't care) is actually harmful. One advantage is that applications (mis)designed without a keepalive can use TCP keepalives, but a locked-up application remains undetectable. How much it hurts is pretty much the design issue. There's extra configuration work to be done for apps to set TCP keepalive parameters (and if you can't at least turn them off, they're in the way of applications with long idle connections) and for putting the code into the stack. The code is a wash - setting params is going to happen; the timeout code can be in the stack or a library - but you've put another thing into TCP to interact with everything else that's in TCP. IMHO, adding timeouts is a long run for a short slide. Applications that actually care have to add the keepalive functionality anyway. The vague assurance of a TCP keepalive is insufficient reason to add complexity. Generally lower level functionality that gets added in the face of an end-to-end argument that it's unnecessary is a performance hack, ahem, sorry, enhancement, but keepalives are a matter of correctness. Not much help at levels lower than the one where you need the assurance. Knowing the phone's connected while the person you're talking to is out cold doesn't help enough. Now, I'm obviously wrong, because the option exists, but there's an argument against them and it has nothing to do with tradition or old OS v. network designer disagreements.
-- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060306/74ff3715/attachment.bin From david.borman at windriver.com Mon Mar 6 14:44:20 2006 From: david.borman at windriver.com (David Borman) Date: Mon, 6 Mar 2006 14:44:20 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <20060306190808.GK69460@hut.isi.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> Message-ID: On Mar 6, 2006, at 11:08 AM, Ted Faber wrote: > It's rarely valuable to know that the transport connection is up. > What > you want to know is if the other end of the application (web server, > TCP-NFS server, realplayer streamer) is still talking to you. A > deadlocked web server still ACKs TCP segments. A working > TCP connection is necessary but not sufficient, so you need an > application keepalive. End-to-end 101. ... > Knowing the phone's connected while the person you're talking to is > out > cold doesn't help enough. But it is valuable to know that the transport connection is down, and when the keepalive triggers a TCP RST, then it is providing a useful service. No sense in hanging onto the phone if the other side has hung up. Keepalives should be viewed as a way to determine if a connection is dead, not if it is alive. It's a subtle but important difference, and the main issue is when you get no response, does that indicate that the connection is dead? It seems to me that that is where most of the objections to keepalives come, where lack of response due to an outage kills a perfectly good idle connection. 
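On modern stacks the RFC 1122 constraints mentioned in this thread (off by default, at least two hours of idleness before probing, no drop on a single missed probe) show up as per-socket knobs. The following sketch, assuming Python's socket module, shows how an application opts in; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT option names are Linux-specific, hence the hasattr guard:

```python
import socket

def enable_keepalive(sock, idle=7200, interval=75, count=9):
    """Turn on TCP keep-alive probing for one socket.

    The defaults mirror the RFC 1122 spirit: probing starts only after
    two hours of idleness, and the connection is dropped only after
    several unanswered probes, never a single one.  The TCP_KEEP*
    option names are Linux-specific; other systems spell them
    differently (or don't expose them at all).
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    for opt, val in (("TCP_KEEPIDLE", idle),      # seconds idle before first probe
                     ("TCP_KEEPINTVL", interval), # seconds between probes
                     ("TCP_KEEPCNT", count)):     # unanswered probes before drop
        if hasattr(socket, opt):  # not all platforms expose these
            sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, opt), val)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(sock)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
```

Note that the knobs are per-socket: keep-alives stay off for every connection that does not explicitly ask for them, exactly as the Host Requirements group intended.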
-David Borman From lynne at telemuse.net Mon Mar 6 18:19:28 2006 From: lynne at telemuse.net (Lynne Jolitz) Date: Mon, 6 Mar 2006 18:19:28 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: Message-ID: <000901c6418d$8fb2ed20$6e8944c6@telemuse.net> What is the point of having a long-lived TCP session with keepalive in the 21st century? Is this not a security hole ripe for exploitation in an age of ubiquitous bandwidth and zombie machines? This is the heart of the issue. The resource allocated to build a socket is minimal if well-designed in a processor / memory rich environment. Fears about OS resource allocation should not be considered as justification for putting keepalive in transport - that's a red herring, about as silly as libtelnet. ;-) Lynne Jolitz. ---- We use SpamQuiz. If your ISP didn't make the grade try http://lynne.telemuse.net From tim at ivisit.com Mon Mar 6 21:57:10 2006 From: tim at ivisit.com (Tim Dorcey) Date: Mon, 06 Mar 2006 21:57:10 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <000901c6418d$8fb2ed20$6e8944c6@telemuse.net> Message-ID: <03ad01c641ab$fa7b3770$0300a8c0@int.eyematic.com> > What is the point of having a long-lived TCP session with > keepalive in the 21st century? One possible consideration is that many applications operate over a network where traffic is not allowed to flow in one direction unless traffic has recently been sent in the other direction. I am thinking of NAT's and firewalls. If an application wants a host "outside" the NAT/firewall to be able to send something to it at an arbitrary time, there seems no other option than to periodically send something out to it. Since these devices often operate at the transport level, there might be some rationale for putting this functionality in the host transport layer.
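The periodic outbound traffic described here is easy to sketch at the application layer. The following hypothetical Python sketch (names and parameters are illustrative, not from the thread) sends a tiny datagram on a timer to refresh a NAT or firewall binding; the demo runs against a loopback receiver, with an interval far shorter than the tens of seconds to minutes a real NAT binding usually allows:

```python
import socket
import threading

def nat_keepalive(sock, peer, interval, stop):
    """Send a tiny datagram every `interval` seconds so that a NAT or
    stateful firewall on the path keeps the (address, port) binding
    alive.  The payload is arbitrary; the traffic itself is what
    refreshes the mapping.
    """
    while not stop.wait(interval):
        sock.sendto(b"\x00", peer)

# Demo on the loopback: a local receiver stands in for the remote
# rendezvous point, and the interval is deliberately tiny.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

stop = threading.Event()
t = threading.Thread(target=nat_keepalive,
                     args=(tx, rx.getsockname(), 0.05, stop))
t.start()

rx.settimeout(2.0)
received = [rx.recvfrom(16)[0] for _ in range(3)]  # three keepalives arrive
stop.set()
t.join()
print(len(received))
```

In a real deployment the interval would be chosen just under the NAT's idle timeout for the protocol in use, which is exactly the sort of application-specific knowledge the thread keeps returning to.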
Tim From touch at ISI.EDU Tue Mar 7 07:06:08 2006 From: touch at ISI.EDU (Joe Touch) Date: Tue, 07 Mar 2006 07:06:08 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <03ad01c641ab$fa7b3770$0300a8c0@int.eyematic.com> References: <03ad01c641ab$fa7b3770$0300a8c0@int.eyematic.com> Message-ID: <440DA160.2070705@isi.edu> Tim Dorcey wrote: >> What is the point of having a long-lived TCP session with >> keepalive in the 21st century? > > One possible consideration is that many applications operate over a network > where traffic is not allowed to flow in one direction unless traffic has > recently been sent in the other direction. I am thinking of NAT's and > firewalls. If an application wants a host "outside" the NAT/firewall to be > able to send something to it at an arbitrary time, there seems no other > option then to periodically send something out to it. Since these devices > often operate at the transport level, there might be some rational for > putting this functionality in the host transport layer. > > Tim That sounds like a "NAT-traversing tunnel" property, such as might already be provided by PPP (which already has keepalives). Joe From dpreed at reed.com Tue Mar 7 07:30:40 2006 From: dpreed at reed.com (David P. Reed) Date: Tue, 07 Mar 2006 10:30:40 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> Message-ID: <440DA720.1090300@reed.com> David Borman wrote: > > But it is valuable to know that the transport connection is down, and > when the keepalive triggers a TCP RST, then it is providing a useful > service. No sense in hanging onto the phone if the other side has > hung up. David - you obviously missed my note that started this part of the discussion. TCP connections are NOT phone calls. They are NOT hung up in that way (a CLOSE is sent as part of disconnection). 
They don't tie up wires, they don't tie up routers, they don't tie up bandwidth. Reasoning about phone calls adds nothing to this discussion. Might as well think about the analogy to alligator wrestling for all that helps. > > Keepalives should be viewed as a way to determine if a connection is > dead, not if it is alive. It's a subtle but important difference, and > the main issue is when you get no response, does that indicate that > the connection is dead? It seems to me that that is where most of the > objections to keepalives come, where lack of response due to an outage > kills a perfectly good idle connection. > Keepalives don't tell if a connection is dead. If their function is definable at all, the precise definition is that: they tell you that a particular internally generated packet is lost or misrouted at a particular point in time, chosen without the control of the application that cares, and they do more than that: they don't just tell you, they KILL the connection, and you discover that later when you get around to checking or try to send data, if you ever do. They work only when the application itself is not sending data, which is most likely when the application doesn't care (if it cared, it would either be sending data, or doing its own polling of the responsiveness of the *application* at the other end, which is all it cares about). Sometimes it really helps to think, not about what something "says it does", but actually what it does. They also don't keep a connection alive! But that is a misnaming of the function. Can we stop this debate? From dpreed at reed.com Tue Mar 7 07:38:29 2006 From: dpreed at reed.com (David P. 
Reed) Date: Tue, 07 Mar 2006 10:38:29 -0500 Subject: [e2e] tcp connection timeout In-Reply-To: <03ad01c641ab$fa7b3770$0300a8c0@int.eyematic.com> References: <03ad01c641ab$fa7b3770$0300a8c0@int.eyematic.com> Message-ID: <440DA8F5.7000000@reed.com> Because NAT doesn't work, we have to change all the perfectly correct TCP stacks and make keepalives the standard. And to make NATs more memory efficient, we should make the default keepalive interval 100 msec. lest the "losers" who write applications open a lot of connections and forget to close them. I suppose that is reasonable, given that NAT has become the Internet architecture by default. Certainly IPv6 is stillborn in the US residential and corporate market. Perhaps we should next move to the true future: IBM SNA 2007. The future will be about putting the semantics of unit record devices into the routers. All hail Physical Unit Type 2. Tim Dorcey wrote: >> What is the point of having a long-lived TCP session with >> keepalive in the 21st century? >> > > One possible consideration is that many applications operate over a network > where traffic is not allowed to flow in one direction unless traffic has > recently been sent in the other direction. I am thinking of NAT's and > firewalls. If an application wants a host "outside" the NAT/firewall to be > able to send something to it at an arbitrary time, there seems no other > option then to periodically send something out to it. Since these devices > often operate at the transport level, there might be some rational for > putting this functionality in the host transport layer. 
> > Tim > > > > From kempf at docomolabs-usa.com Tue Mar 7 09:13:29 2006 From: kempf at docomolabs-usa.com (James Kempf) Date: Tue, 7 Mar 2006 09:13:29 -0800 Subject: [e2e] 0% NAT - checkmating the disconnectors References: <20060222085720.6477C1A0214@smtp-1.hotpop.com><43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> Message-ID: <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> > Does anyone have any good thoughts on how to collectively create the next > generation *Inter* Net - one that actually provides the interoperability > that all of us old codgers dreamed was possible when Licklider, Taylor, > Englebart, etc. first imagined it and Vint Cerf and Bob Kahn made it > happen? > If you want it to be secure and open, keep the NATs out but put in place a legal/social/commercial solution for security, kind of an Internet CSI. One thing I think we should have learned from the Cold War is that depending only on technical measures for security just leads to arms races. jak From dhc2 at dcrocker.net Tue Mar 7 10:12:53 2006 From: dhc2 at dcrocker.net (Dave Crocker) Date: Tue, 07 Mar 2006 10:12:53 -0800 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com><43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> Message-ID: <440DCD25.6050008@dcrocker.net> James Kempf wrote: >> Does anyone have any good thoughts on how to collectively create the >> next generation *Inter* Net - one that actually provides the >> interoperability that all of us old codgers dreamed was possible when >> Licklider, Taylor, Englebart, etc. first imagined it and Vint Cerf and >> Bob Kahn made it happen? >> > > If you want it to be secure and open, keep the NATs out but put in place > a legal/social/commercial solution for security, kind of an Internet > CSI. 
One thing I think we should have learned from the Cold War is that > depending only on technical measures for security just leads to arms races. Let's consider something completely different: Assume that a NAT represents more than just a device to do address administration. Assume that it is part of a function that represents a desire of internet operators to have a clear distinction between inside and outside. To some extent, routers do the same thing. (Yes, NATs are more complex and are stateful, but I'm going for a basic issue, here, so please just tolerate my hand-waving.) Note that routers do address translation too. They change the current link-layer address to be a new one. (Dontcha just luv layers?) For all of the implied lessons in distinguishing internal routing from exterior routing, we seem to resist re-applying the lesson to other parts of the architecture. I've come to believe that most of the approach to dealing with NATs almost comes for free if we do locator/identifier properly and provide a useful 'session' layer (or equivalent function with the app layer.) d/ -- Dave Crocker Brandenburg InternetWorking From dga+ at cs.cmu.edu Tue Mar 7 11:11:53 2006 From: dga+ at cs.cmu.edu (David Andersen) Date: Tue, 7 Mar 2006 14:11:53 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <440DCD25.6050008@dcrocker.net> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com><43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> <440DCD25.6050008@dcrocker.net> Message-ID: On Mar 7, 2006, at 1:12 PM, Dave Crocker wrote: > Note that routers do address translation too. They change the > current link-layer address to be a new one. (Dontcha just luv > layers?) > > For all of the implied lessons in distinguishing internal routing > from exterior routing, we seem to resist re-applying the lesson to > other parts of the architecture.
> > I've come to believe that most of the approach to dealing with NATs > almost comes for free if we do locator/identifier properly and > provide a useful 'session' layer (or equivalent function with the > app layer.) Most, but not all. The "session" identifier or other equivalent end- to-end identity tokens (e.g., the identifiers used in HIP, in TCP Migrate, etc.) are great for improving communication between two endpoints. They have all sorts of benefits other than NATs: they facilitate mobility, multi-homing, and probably other things that begin with "m" (but not multicast, thank you). Unfortunately, they aren't enough by themselves to provide a global identifier that retains its validity when passed between hosts (i.e., the introduction problem with NATs: You tell me to talk to David P Reed, but the identifier "David P. Reed" is not valid in my scope). Note that I said "by themselves" - you can certainly add extra things (e.g., the way i3 does) to enable this. But most such solutions are really changing the fundamental unit of addressing, not just adding "session" identifiers. This situation is parallel to the one you cited. Layer two addresses are not global (though by fate of manufacturing they are mostly unique), and have no validity outside the local scope. If we make IP behave the same way, then we'll just end up replacing it with some higher layer addressing and routing space. I like overlays, and I still think it's a waste to have to use them in this manner when we've got a perfectly salvagable addressing scheme in ipv6. Yours in additional levels of indirection, -Dave -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060307/de8bde22/PGP.bin From agthorr at cs.uoregon.edu Tue Mar 7 12:03:48 2006 From: agthorr at cs.uoregon.edu (Daniel Stutzbach) Date: Tue, 7 Mar 2006 12:03:48 -0800 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: References: <43FC9D43.7010800@reed.com> <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> <440DCD25.6050008@dcrocker.net> Message-ID: <20060307200347.GA1844@cs.uoregon.edu> On Tue, Mar 07, 2006 at 02:11:53PM -0500, David Andersen wrote: > This situation is parallel to the one you cited. Layer two addresses > are not global (though by fate of manufacturing they are mostly > unique), and have no validity outside the local scope. If we make IP > behave the same way, then we'll just end up replacing it with some > higher layer addressing and routing space. I like overlays, and I > still think it's a waste to have to use them in this manner when > we've got a perfectly salvagable addressing scheme in ipv6. I'm hoping for standardized protocols for doing IPv6-over-UDP-over-IPv4, something like STUN for NAT penetration, a DHT-based rendezvous service, and an Anycast bootstrapping mechanism to make initial contact with the DHT. We have all the pieces; we just need to put them together. Otherwise we're going to wake up one day to discover that Peer-to-Peer developers have invented twelve different incompatible protocols that all accomplish this goal. (each with their own buggy "TCP-friendly" congestion control algorithms--yuck) If a feature fits logically in the transport layer, but isn't, it will end up in the application. 
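The STUN-like piece of the list above boils down to address reflection: a host outside the NAT reports back the source address it observed, which (behind a NAT) is the public-side binding. The toy sketch below models only that core idea, none of the real STUN wire format, transaction IDs, or attributes, and runs over the loopback:

```python
import socket
import threading

def reflector(sock):
    """Toy address reflector: answer one datagram with the source
    address and port as the reflector saw them.  This is the core idea
    of a STUN Binding response; the real protocol has a defined wire
    format, transaction IDs, etc., none of which is modeled here.
    """
    data, addr = sock.recvfrom(512)
    sock.sendto(("%s:%d" % addr).encode(), addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=reflector, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2.0)
cli.sendto(b"what is my address?", srv.getsockname())
mapped, _ = cli.recvfrom(512)
# Behind a NAT, `mapped` would be the public-side binding created for
# this flow; on the loopback it is simply the client's local address.
print(mapped.decode())
```

Discovering the mapped address is only the first step of NAT penetration; the peers still need a rendezvous channel (the DHT in the proposal above) to exchange what they each learned.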
-- Daniel Stutzbach Computer Science Ph.D Student http://www.barsoom.org/~agthorr University of Oregon From david.borman at windriver.com Tue Mar 7 12:38:17 2006 From: david.borman at windriver.com (David Borman) Date: Tue, 7 Mar 2006 14:38:17 -0600 Subject: [e2e] tcp connection timeout In-Reply-To: <440DA720.1090300@reed.com> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <440DA720.1090300@reed.com> Message-ID: <211A23FE-7921-4188-BBDB-21DCB7092D59@windriver.com> Hi David, On Mar 7, 2006, at 9:30 AM, David P. Reed wrote: > David Borman wrote: >> >> But it is valuable to know that the transport connection is down, >> and when the keepalive triggers a TCP RST, then it is providing a >> useful service. No sense in hanging onto the phone if the other >> side has hung up. > David - you obviously missed my note that started this part of the > discussion. TCP connections are NOT phone calls. They are NOT > hung up in that way (a CLOSE is sent as part of disconnection). > They don't tie up wires, they don't tie up routers, they don't tie > up bandwidth. Reasoning about phone calls adds nothing to this > discussion. Might as well think about the analogy to alligator > wrestling for all that helps. I know how TCP works. :-) I only referred to a phone call as it was already used as an analogy in a previous message. My apologies if that muddled up my message. My only point is that keepalives also can elicit an active response in the form of a RST for idle connections where the other side has terminated abnormally, or gone away during a period of network outage; they are not just about killing perfectly good idle connections due to lack of a response, but that's what everyone focuses on. I'll be the first to say they have a very limited scope/purpose, and shouldn't be used beyond that. 
-David Borman From kempf at docomolabs-usa.com Tue Mar 7 13:16:59 2006 From: kempf at docomolabs-usa.com (James Kempf) Date: Tue, 7 Mar 2006 13:16:59 -0800 Subject: [e2e] 0% NAT - checkmating the disconnectors References: <20060222085720.6477C1A0214@smtp-1.hotpop.com><43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com><027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> <440DCD25.6050008@dcrocker.net> Message-ID: <075201c6422c$78967b80$026115ac@dcml.docomolabsusa.com> Hi Dave, So here's a security scenario that, I'm told, is fairly common today. A spammer exchanges what is known as a "pink letter" with an ISP. The ISP promises not to cut off the spammer in exchange for a kickback. How would your proposal solve this problem? jak ----- Original Message ----- From: "Dave Crocker" To: Sent: Tuesday, March 07, 2006 10:12 AM Subject: Re: [e2e] 0% NAT - checkmating the disconnectors > James Kempf wrote: >>> Does anyone have any good thoughts on how to collectively create the >>> next generation *Inter* Net - one that actually provides the >>> interoperability that all of us old codgers dreamed was possible when >>> Licklider, Taylor, Englebart, etc. first imagined it and Vint Cerf and >>> Bob Kahn made it happen? >>> >> >> If you want it to be secure and open, keep the NATs out but put in place >> a legal/social/commercial solution for security, kind of an Internet CSI. >> One thing I think we should have learned from the Cold War is that >> depending only on technical measures for security just leads to arms >> races. > > > > Let's consider something completely different: > > Assume that a NAT represent more than just a device to do address > administration. Assume that it is part of a function the represents a > desire of intrnet operators to have a clear distinction between inside and > outside. > > To some extent, routers do the same thing.
(Yes, NATs are more complex and > are stateful, but I'm going for a basic issue, here, so please just > tolerate my hand-waving.) > > Note that routers do address translation too. They change the current > link-layer address to be a new one. (Dontcha just luv layers?) > > For all of the implied lessons in distinguishing internal routing from > exterior routing, we seem to resist re-applying the lesson to other parts > of the architecture. > > I've come to believe that most of the approach to dealing with NATs almost > comes for free if we do locator/identifier properly and provide a useful > 'session' layer (or equivalent function with the app layer.) > > d/ > -- > > Dave Crocker > Brandenburg InternetWorking > > From touch at ISI.EDU Tue Mar 7 18:23:50 2006 From: touch at ISI.EDU (Joe Touch) Date: Tue, 07 Mar 2006 18:23:50 -0800 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <440DCD25.6050008@dcrocker.net> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com><43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> <440DCD25.6050008@dcrocker.net> Message-ID: <440E4036.1030905@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Dave Crocker wrote: ... > Let's consider something completely different: > > Assume that a NAT represent more than just a device to do address > administration. Assume that it is part of a function the represents a > desire of intrnet operators to have a clear distinction between inside > and outside. > > To some extent, routers do the same thing. (Yes, NATs are more complex > and are stateful, but I'm going for a basic issue, here, so please just > tolerate my hand-waving.) > > Note that routers do address translation too. They change the current > link-layer address to be a new one. (Dontcha just luv layers?) They don't translate anything. They remove the incoming link header and write a new outgoing link header. 
The two link headers are not related to each other: the outgoing header may be a function of the incoming link and IP header, but is NOT a function of the incoming link header per se. Joe -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFEDkA1E5f5cImnZrsRAiziAJ48hhh4y4BIydesrpRAZIfGQuBYDwCg0cv5 uLAtvk8aPg/1ZmZBObjUaRQ= =EMtR -----END PGP SIGNATURE----- From saikat at cs.cornell.edu Tue Mar 7 20:44:58 2006 From: saikat at cs.cornell.edu (Saikat Guha) Date: Tue, 07 Mar 2006 23:44:58 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <43FC9D43.7010800@reed.com> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> Message-ID: <1141793098.14453.30.camel@localhost.localdomain> On Wed, 2006-02-22 at 12:20 -0500, David P. Reed wrote: > We know that NATs don't protect us very well, and we know that firewalls > don't either. Yet they sure get in the way and create points of power > for those who would keep us disconnected. Not to detract from the original discussion, but "those who would keep us disconnected" can simply yank the cable. IT departments turn off ports in response to suspicious activity, and no Internet architecture can _force_ them to provide connectivity if they don't want to. NATs and firewalls aren't the problem here. The problem is that the Internet architecture has largely marginalized the voice of the network operator. They have a say. They will enforce it whether or not end users like it. If they have a problem with one application, the Internet should provide them tools to squelch that one application, otherwise they'll be forced to squelch them all. Instead of "checkmating the disconnectors", I believe it worth looking at how to work _with_ them. Is there a way to architect the Internet to give the network operator full control over his network? 
So, when his boss (who paid for the wires and routers) asks him to block application X, he can do just that and not cause the collateral damage that firewall-hacks cause today. Shameless plug: we believe signaling is one way to work _with_ the network, and not against it (http://saikat.guha.cc/pub/sosp05wip-guha.pdf). But, this is just one solution. My 2c. -- Saikat -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060307/eeb5737a/attachment.bin From perfgeek at mac.com Tue Mar 7 21:11:07 2006 From: perfgeek at mac.com (rick jones) Date: Tue, 7 Mar 2006 21:11:07 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <20060306190808.GK69460@hut.isi.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> Message-ID: How, if at all, might the TCP FIN_WAIT_2 state fit into all of this? Either in the "detached" case of the application having called close(), or in the simplex case of the application having called shutdown(SHUT_WR)? rick jones there is no rest for the wicked, yet the virtuous have no pillows From agthorr at cs.uoregon.edu Tue Mar 7 21:18:50 2006 From: agthorr at cs.uoregon.edu (Daniel Stutzbach) Date: Tue, 7 Mar 2006 21:18:50 -0800 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <1141793098.14453.30.camel@localhost.localdomain> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> Message-ID: <20060308051849.GE5696@cs.uoregon.edu> On Tue, Mar 07, 2006 at 11:44:58PM -0500, Saikat Guha wrote: > Is there a way to architect the Internet to give the network operator > full control over his network? 
So, when his boss (who paid for the wires > and routers) asks him to block application X, he can do just that and > not cause the collateral damage that firewall-hacks cause today. I believe this could be easily accomplished as an extension to the protocol defined in RFC 3514. ;) -- Daniel Stutzbach Computer Science Ph.D Student http://www.barsoom.org/~agthorr University of Oregon From randy at psg.com Wed Mar 8 01:20:14 2006 From: randy at psg.com (Randy Bush) Date: Tue, 7 Mar 2006 23:20:14 -1000 Subject: [e2e] 0% NAT - checkmating the disconnectors References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> Message-ID: <17422.41422.346821.764989@roam.psg.com> > If you want it to be secure and open, keep the NATs out you mean shim6? randy From dhc2 at dcrocker.net Wed Mar 8 02:27:58 2006 From: dhc2 at dcrocker.net (Dave Crocker) Date: Wed, 08 Mar 2006 02:27:58 -0800 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: References: <20060222085720.6477C1A0214@smtp-1.hotpop.com><43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <027401c6420a$74a081a0$026115ac@dcml.docomolabsusa.com> <440DCD25.6050008@dcrocker.net> Message-ID: <440EB1AE.7030101@dcrocker.net> >> I've come to believe that most of the approach to dealing with NATs >> almost comes for free if we do locator/identifier properly and provide >> a useful 'session' layer (or equivalent function with the app layer.) > > Most, but not all. The "session" identifier or other equivalent > end-to-end identity tokens (e.g., the identifiers used in HIP, in TCP > Migrate, etc.) are great for improving communication between two > endpoints. right. > Unfortunately, they aren't enough by themselves to provide a global > identifier that retains its validity when passed between hosts That's ok. I didn't suggest (or have) that as a goal. 
It's a perfectly nice goal, but it goes far, far beyond a) common practice, independent of NATs, and b) seems to have even less market demand than mobility... (Mind you, I'm a great fan of mobile IP -- and I think being able to have an inter-process link migrate across host-platforms is delightful -- but the market pull doesn't seem to be creating any urgency for either of them. It would, if it were strong.) > This situation is parallel to the one you cited. Layer two addresses > are not global (though by fate of manufacturing they are mostly unique), > and have no validity outside the local scope. If we make IP behave the > same way, then we'll just end up replacing it with some higher layer > addressing and routing space. I like overlays, Me too. One might even think of a meta-net layer, on top of the current inter-net layer... (Hey, it's been about 30 years since that stunt was pulled in the networking game. Maybe it's time to do it again...) James Kempf wrote: > So here's a security scenario that, I'm told, is fairly common today. A > spammer exchanges what is known as a "pink letter" with an ISP. The ISP > promises not to cut off the spammer in exchange for a kickback. > > How would your proposal solve this problem? I obviously do not understand the question, because all I can think of is the infinite number of problems that this does not solve, because they are not related. It does not make a milkshake, or create world peace, and it certainly does not solve collusion between a spammer and an ISP. How the heck would you expect a mechanism intended to do a few specific things, like making NATs tolerable, to have anything to do with the example you raise? Joe Touch wrote: > They don't translate anything. They remove the incoming link header and > write a new outgoing link header. Sounds a bit like removing the incoming IP header and adding a new, outgoing IP header. That, at least, was the image I was intending to invoke.
It's a tad uncomfortable, but I claim it is not unreasonable. The bottom line that this perspective promotes is that IP is not end-to-end -- anymore, if it ever truly was -- but that some stuff on top of it (still) needs to be. More generally, end-to-end is always rather relative, particularly seeming to exist relative to the layer below, but rarely to the layer above. d/ -- Dave Crocker Brandenburg InternetWorking From dpreed at reed.com Wed Mar 8 04:56:12 2006 From: dpreed at reed.com (David P. Reed) Date: Wed, 08 Mar 2006 07:56:12 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <1141793098.14453.30.camel@localhost.localdomain> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> Message-ID: <440ED46C.4040207@reed.com> Saikat Guha wrote: > Is there a way to architect the Internet to give the network operator > full control over his network? So, when his boss (who paid for the wires > and routers) asks him to block application X, he can do just that and > not cause the collateral damage that firewall-hacks cause today. > Shameless plug: we believe signaling is one way to work _with_ the > network, and not against it > (http://saikat.guha.cc/pub/sosp05wip-guha.pdf). But, this is just one > solution. > I'm amazed. The network operator in this case wants to join the Internet, but not join the Internet. The Internet is a fully interoperable network. That means inherently that all operators that carry Internet traffic agree to carry their fair share. What you are describing is not the Internet, but something else. The "cooperation-optional" network, perhaps? Or maybe the "screw you" network? If the network advertises that it routes packets to a destination, how is the source to know that its packets will be destroyed based on their content? 
At that point, it's time for those who agree to the original terms of the Internet social compact (which is far more than social) to blackball, boycott, and refuse to connect to that operator. Screw him. From Jon.Crowcroft at cl.cam.ac.uk Wed Mar 8 08:25:55 2006 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Wed, 08 Mar 2006 16:25:55 +0000 Subject: [e2e] layered neutrality & end2end Message-ID: So Vint Cerf gave a cool talk at Google in London last night... Lots of nice examples of the way things will probably head :- In the Q&A afterwards, one thing no-one really asked much about (there was 1 question about governance that hinted at it) was neutrality - It seems to me that the sort of arguments we have about once per nano-year on this list, about lack of end2end transparency, can be applied at any layer (actually, this was the topic of the edge and near edge middleboxes ctl debate a few years back in the IETF) :- So telco-style ISPs would love to remove net transparency for users via differential charging for the QoS and security services. They charge you more for add-ons like jitter&delay bounding for VOIP (essentially by disabling it for others) :- Fact is that since most people with enough capacity for X will use X, for any X = {audio/video/games etc.}, there cannot be _any_ internet- or router-scale mechanism (that it makes sense to turn off for a minority of users) that actually really saves any significant real resources imho. Anyhow, so net neutrality is a cool thing, but what about transport and application layer neutrality? So google's defense for running the great firewall of China is that they already do one of these for everyone else :- (e.g. no holocaust denial websites in austria, no pictures of naked men in the UK, no pictures of English food for French users, you get the idea) Of course, we can build an overlay around that... As in the argument that the Internet will evolve to route around any damage to e2e (e.g.
do NAT traversal, RON to policy route etc), the web will evolve to route around any differential search engine results etc etc....maybe.... discuss:- cheers j. ------------> REALMAN DEMO SUBMISSION DEADLINE HAS BEEN EXTENDED 19 MARCH 2006, midnight PST CALL FOR DEMOS Second International Workshop on Multi-hop Ad hoc Networks: from theory to reality REALMAN 2006 http://www.cl.cam.ac.uk/realman Sponsored by ACM SIGMOBILE in conjunction with MobiHoc 2006 From saikat at cs.cornell.edu Wed Mar 8 11:09:09 2006 From: saikat at cs.cornell.edu (Saikat Guha) Date: Wed, 08 Mar 2006 14:09:09 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <440ED46C.4040207@reed.com> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> Message-ID: <1141844949.14453.47.camel@localhost.localdomain> On Wed, 2006-03-08 at 07:56 -0500, David P. Reed wrote: > At that point, it's time for those who agree to the original terms of > the Internet social compact (which is far more than social) to > blackball, boycott, and refuse to connect to that operator. Screw him. As someone said: Talking about firewalls is a lot like sex education. Preaching abstinence only goes so far. Telling people how to do it right avoids serious problems down the road. Empowering network owners is just my take on things, others need not subscribe to it. I just figured it is worth bringing up when talking about designing the next-generation Internet. peace, -- Saikat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060308/7c1a3a3d/attachment.bin From swb at employees.org Wed Mar 8 11:33:08 2006 From: swb at employees.org (Scott W Brim) Date: Wed, 8 Mar 2006 14:33:08 -0500 Subject: [e2e] layered neutrality & end2end In-Reply-To: References: Message-ID: <20060308193306.GC4884@sbrim-wxp01> On Wed, Mar 08, 2006 04:25:55PM +0000, Jon Crowcroft allegedly wrote: > So Vint Cerf gave a cool talk at Google in London last night... > > Lots of nice examples of the way things will probably head :- Please, say more, what is his thinking these days? From Jon.Crowcroft at cl.cam.ac.uk Thu Mar 9 02:32:02 2006 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Thu, 09 Mar 2006 10:32:02 +0000 Subject: [e2e] layered neutrality & end2end In-Reply-To: Message from Scott W Brim of "Wed, 08 Mar 2006 14:33:08 EST." <20060308193306.GC4884@sbrim-wxp01> Message-ID: In missive <20060308193306.GC4884 at sbrim-wxp01>, Scott W Brim typed: >>On Wed, Mar 08, 2006 04:25:55PM +0000, Jon Crowcroft allegedly wrote: >>> So Vint Cerf gave a cool talk at Google in London last night... >>> Lots of nice examples of the way things will probably head :- >>Please, say more, what is his thinking these days? mostly as you'd expect - i'm assuming the talk will show up on his blog or whatever soon, so don't want to speak for him! the main thing was not really futurology - just restatement of the e2e principle (fairly strongly) and a bunch of stats about user base, and quite a lot on mobile (since there's 2B+ mobile users which sort of dwarfs the fixed line internet user base and is about 2 times the fixed phone line base now) .... most of the event was about hiring: schmoozing a bunch of alumni from top UK universities before and after Vint's keynote... viz: http://networks.silicon.com/webwatch/0,39024667,39157022,00.htm for example...
London is a cool place to work and they have a nice building in the center... plus obviously if you want to be near europe (:-) where a lot of the handset design goes on, or indeed, engage with the financial service sector, then it seems like a smart place to be (although if i was gonna do wireless this century, i'd base something in korea and china, and for other stuff, i'd have a lab in india near one of the IITs - they are _especially_ imaginative as well as very very good technically (i.e. can program AND do math:-)... the last bit of the talk was a quick update on interplanetary stuff, which is far out, as far as i am concerned:) cheers jon From dhc2 at dcrocker.net Thu Mar 9 05:22:05 2006 From: dhc2 at dcrocker.net (Dave Crocker) Date: Thu, 09 Mar 2006 05:22:05 -0800 Subject: [e2e] layered neutrality & end2end In-Reply-To: References: Message-ID: <44102BFD.4030908@dcrocker.net> > the last bit of the talk was a quick update on interplanetary stuff, > which is far out, as far as i am concerned:) it's supposed to be... d/ -- Dave Crocker Brandenburg InternetWorking From Jon.Crowcroft at cl.cam.ac.uk Thu Mar 9 06:15:36 2006 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Thu, 09 Mar 2006 14:15:36 +0000 Subject: [e2e] layered neutrality & end2end In-Reply-To: Message from Dave Crocker of "Thu, 09 Mar 2006 05:22:05 PST." <44102BFD.4030908@dcrocker.net> Message-ID: well gosh - i think nanonets is more interesting - after all, there are more molecules in a glass of water than there are glasses of water in the sea... the big problem is that IP packets just won't fit in between the hydrogen bonds In missive <44102BFD.4030908 at dcrocker.net>, Dave Crocker typed: >>> the last bit of the talk was a quick update on interplanetary stuff, >>> which is far out, as far as i am concerned:) >> >>it's supposed to be...
>> >>d/ >> >>-- >> >>Dave Crocker >>Brandenburg InternetWorking >> cheers jon From touch at ISI.EDU Thu Mar 9 09:54:13 2006 From: touch at ISI.EDU (Joe Touch) Date: Thu, 09 Mar 2006 09:54:13 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> Message-ID: <44106BC5.4090400@isi.edu> rick jones wrote: > How, if at all, might the TCP FIN_WAIT_2 state fit into all of this? > Either in the "detached" case of the application having called close(), > or in the simplex case of the application having called shutdown(SHUT_WR)? FIN_WAIT_2 should happen only after a CLOSE() call is issued, that side sends a FIN, and then that side receives an ACK of that FIN. There is no timeout for FIN_WAIT_2 (at least none I could find in RFC 793). shutdown(SHUT_WR) isn't specified in the TCP API; I don't have the Linux source code, but it should issue a CLOSE() call as well. The FIN_WAIT_2 results in kept state until a new connection is tried that collides. That happens on the local end when an app tries to open a new connection or send data on the old one; either return an error, at which point the application can decide to issue an ABORT() so it can proceed with a new connection. The same would occur when an application dies, i.e., when it 'disconnects' from the socket, where the OS can issue an ABORT. It happens on the remote end when the old connection tries to open a new connection, where at some point the other side sends a RST or a FIN. I.e., overall, the APPLICATION on one side or the other has to decide what to do. This can be accomplished by a global OS parameter that effectively emulates the timeout for the application, but to TCP it's an application-layer decision. 
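That application-layer decision can be made concrete with a toy example: the idle timeout lives above TCP, in whatever code owns the socket. The following is an illustrative Python sketch, not anything posted in the thread; the one-second limit is an arbitrary choice.

```python
import socket

# A listener that accepts one connection and then stays silent,
# standing in for a peer that has stopped sending.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.create_connection(server.getsockname())
peer, _ = server.accept()

# The application-layer "timeout": give up after one second of
# silence. TCP itself would keep this idle connection open forever.
client.settimeout(1.0)
try:
    data = client.recv(1024)
except socket.timeout:
    # The application decides the idle connection is dead and closes it.
    client.close()
    data = None

print(data)  # prints: None

peer.close()
server.close()
```

A global OS parameter provides the same effect wholesale; either way, the decision is made above TCP, which never times the connection out on its own.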
Joe From touch at ISI.EDU Thu Mar 9 10:01:33 2006 From: touch at ISI.EDU (Joe Touch) Date: Thu, 09 Mar 2006 10:01:33 -0800 Subject: [e2e] layered neutrality & end2end In-Reply-To: References: Message-ID: <44106D7D.6010705@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Jon Crowcroft wrote: ... > So telco-style ISPs would love to remove net transparency > for users via differential charging for the QoS and security > services. They charge you more for add-ons like jitter&delay > bounding for VOIP (essentialyl by disabling it for others :- ... > Anyhow, so net neutrality is a cool thing, > but what about transport and application layer neutrality? According to recent reports in the US, they also think they can charge differential prices for different applications. http://www.latimes.com/business/la-fi-golden9mar09,1,6155758.column?coll=la-utilities-business I guess they haven't heard about tunnels ;-) Joe -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFEEG19E5f5cImnZrsRAnLaAKColxUYqZTVE2xoUnfdE9lMGQLQswCgu3x+ EMblQpQddQ9fqU57DW4gqo0= =fCTu -----END PGP SIGNATURE----- From gds at best.com Thu Mar 9 14:17:38 2006 From: gds at best.com (Greg Skinner) Date: Thu, 9 Mar 2006 15:17:38 -0700 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <440ED46C.4040207@reed.com>; from dpreed@reed.com on Wed, Mar 08, 2006 at 07:56:12AM -0500 References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> Message-ID: <20060309151738.A52703@gds.best.vwh.net> On Wed, Mar 08, 2006 at 07:56:12AM -0500, David P. Reed wrote: > Saikat Guha wrote: > > Is there a way to architect the Internet to give the network operator > > full control over his network? 
So, when his boss (who paid for the wires > > and routers) asks him to block application X, he can do just that and > > not cause the collateral damage that firewall-hacks cause today. > > Shameless plug: we believe signaling is one way to work _with_ the > > network, and not against it > > (http://saikat.guha.cc/pub/sosp05wip-guha.pdf). But, this is just one > > solution. > > > I'm amazed. The network operator in this case wants to join the > Internet, but not join the Internet. > > The Internet is a fully interoperable network. That means inherently > that all operators that carry Internet traffic agree to carry their fair > share. Hmmm ... I don't remember offhand any Internet design document that states this. There were restrictive policies implemented, even in the original Internet, for cause (such as the Mailbridges that could be configured to deny traffic from the ARPAnet to the MILnet except for destination SMTP port). > What you are describing is not the Internet, but something else. The > "cooperation-optional" network, perhaps? Or maybe the "screw you" network? Rather than arguing about whether this is or is not the Internet, perhaps the question should be reframed as whether this constitutes a set of principles upon which the next generation network can be built. > If the network advertises that it routes packets to a destination, how > is the source to know that its packets will be destroyed based on their > content? The way I read the paper, the source would be notified that the attempt was refused due to insufficient privilege. > At that point, it's time for those who agree to the original terms of > the Internet social compact (which is far more than social) to > blackball, boycott, and refuse to connect to that operator. Screw him. Why? Because he wants to protect his network? 
From perfgeek at mac.com Thu Mar 9 19:18:23 2006 From: perfgeek at mac.com (rick jones) Date: Thu, 9 Mar 2006 19:18:23 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <44106BC5.4090400@isi.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> Message-ID: <0e3bbfed062bc5ff42bc815c4c399858@mac.com> > shutdown(SHUT_WR) isn't specified in the TCP API; I don't have the > Linux > source code, but it should issue a CLOSE() call as well. that basically says it won't be sending any more data but might be receiving more data. > The FIN_WAIT_2 results in kept state until a new connection is tried > that collides. That could be a very long time indeed. > That happens on the local end when an app tries to open a new > connection > or send data on the old one; either return an error, at which point the > application can decide to issue an ABORT() so it can proceed with a new > connection. The same would occur when an application dies, i.e., when > it > 'disconnects' from the socket, where the OS can issue an ABORT. Perhaps it is implementation specific, but how about when another application, or an instance of that application, tries to start a listen endpoint? I guess you would consider that a new connection being tried that collides? > It happens on the remote end when the old connection tries to open a > new > connection, where at some point the other side sends a RST or a FIN. Which may never happen when the remote simply goes poof. > I.e., overall, the APPLICATION on one side or the other has to decide > what to do. This can be accomplished by a global OS parameter that > effectively emulates the timeout for the application, but to TCP it's > an > application-layer decision. Isn't that global OS parameter that effectively emulates the timeout for the application effectively a TCP keepalive?
rick jones there is no rest for the wicked, yet the virtuous have no pillows From touch at ISI.EDU Thu Mar 9 21:19:29 2006 From: touch at ISI.EDU (Joe Touch) Date: Thu, 09 Mar 2006 21:19:29 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <0e3bbfed062bc5ff42bc815c4c399858@mac.com> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> Message-ID: <44110C61.2030301@isi.edu> rick jones wrote: >> shutdown(SHUT_WR) isn't specified in the TCP API; I don't have the Linux >> source code, but it should issue a CLOSE() call as well. > > that basically says it won't be sending any more data but might be > receiving more data. Why isn't that a CLOSE()? (or is it a TCP CLOSE(), but not a socket 'close'?) >> The FIN_WAIT_2 results in kept state until a new connection is tried >> that collides. > > That could be a very long time indeed. The point is that it doesn't matter. The state gets cleaned up ONLY when it interferes with a new connection. Cleaning up old state isn't part of how TCP is designed. >> That happens on the local end when an app tries to open a new connection >> or send data on the old one; either return an error, at which point the >> application can decide to issue an ABORT() so it can proceed with a new >> connection. The same would occur when an application dies, i.e., when it >> 'disconnects' from the socket, where the OS can issue an ABORT. > > Perhaps it is implementation specific, but how about when another > application, or instance of that application tries to start a listen > endpoint. I guess you would consider that a new connection being tried > that collides? Starting a listen isn't a collision - it doesn't do anything at the TCP level. 
The collision happens only when the new connection is attempted, which, for that model, assumes the remote side sends the SYN. If the SYN is from a different port, there's no interference and it proceeds as usual. It's only when the SYN received indicates the port from the old connection that the state gets cleaned up - or needs to. >> It happens on the remote end when the old connection tries to open a new >> connection, where at some point the other side sends a RST or a FIN. > > Which may never happen when the remote simply goes poof. In which case there's nothing to interfere, and thus no reason to cleanup. >> I.e., overall, the APPLICATION on one side or the other has to decide >> what to do. This can be accomplished by a global OS parameter that >> effectively emulates the timeout for the application, but to TCP it's an >> application-layer decision. > > Isn't that global OS parameter that effectively emulates the timeout for > the application effectively a TCP keepalive? A keepalive would send TCP packets with no data, or - in the absence of such - decide to terminate the connection. Having the 'application' (OS, in this case, but as far as TCP is concerned, it's just anything above TCP) terminate the connection is the application's decision. It has nothing to do with TCP - or how much or little progress it is making. 
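For concreteness, here is what the opt-in knob being debated looks like from an application. This is an illustrative Python sketch, not anything posted in the thread; the TCP_KEEP* option names are Linux-specific extensions, and the values shown are arbitrary.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Keepalive defaults to off, as the Host Requirements rules demand.
assert s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) == 0

# The application must opt in explicitly...
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# ...and, on Linux, may shorten the (very large) default timers.
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)  # idle secs before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)  # secs between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before reset

assert s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
s.close()
```

Consistent with the rules Bob Braden described earlier in the thread: optional, off by default, and with a default idle interval of around two hours unless the application overrides it.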
Joe From agthorr at cs.uoregon.edu Thu Mar 9 22:05:55 2006 From: agthorr at cs.uoregon.edu (Daniel Stutzbach) Date: Thu, 9 Mar 2006 22:05:55 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <44110C61.2030301@isi.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> Message-ID: <20060310060554.GD4452@cs.uoregon.edu> On Thu, Mar 09, 2006 at 09:19:29PM -0800, Joe Touch wrote: > >> The FIN_WAIT_2 results in kept state until a new connection is tried > >> that collides. > > > > That could be a very long time indeed. > > The point is that it doesn't matter. The state gets cleaned up ONLY when > it interferes with a new connection. Cleaning up old state isn't part of > how TCP is designed. RFCs or no, most real-world TCP implementations do exit FIN_WAIT_2 after a timeout. Windows, Linux, OpenBSD, NetBSD, FreeBSD, and Solaris all do it. -- Daniel Stutzbach Computer Science Ph.D Student http://www.barsoom.org/~agthorr University of Oregon From dpreed at reed.com Fri Mar 10 07:09:47 2006 From: dpreed at reed.com (David P. Reed) Date: Fri, 10 Mar 2006 10:09:47 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <20060309151738.A52703@gds.best.vwh.net> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> Message-ID: <441196BB.7060000@reed.com> Greg - don't be part of a revisionist game of language. The Internet "specs" are ex post, and rest on a great deal of history going back to 1967 and even earlier. I was there. Vint was there. You were not. J.C.R. 
Licklider (DARPA IPTO founder and carrier of the fundamental dream of an interoperable network-of-networks) was a close personal mentor of mine from almost the day I arrived at MIT in 1969. Bob Kahn, Bob Taylor, Jon Postel, and Vint were all carrying that vision every day explicitly and tacitly. This should NOT be revisionistically defined as merely an aspirational vision for the "next Internet". It's about preserving the #1, premier design goal on which the Internet of today was founded - achieving global interoperability and connectivity among diverse networks (you can read about it if you like in Licklider & Taylor's papers in the '60's, including a Scientific American article in 1967). There may be some discussion around the edges of what that means, but what it never meant was allowing the owners of a wire to micromanage the end-to-end applications. The view was clearly to allow each network to contribute its reach in exchange for obtaining global reach. It's absurdly twisted to rephrase my comments, about the right of someone voluntarily joining the Internet social compact to selectively free-ride on that compact by ignoring the ground rules, to be about "punishing" somebody for "protecting their network." It would be amusing if it were not so typical of what passes for logic in today's world of Hardball and other anti-rational noise fests that confuse conservative thought with hate-filled rudeness. At best the motivation is about charging more rents or extending the powers of ownership to enforce opinions - conflating that with "protection" is a new concept to me. Tell me exactly how allowing an application that a network owner does not like (perhaps promoting abortion for all or advocating murder of abortionists, depending on your preferences for extremism, or perhaps just charging usurious rates of interest or promoting gift economies) would "damage the network"?
If that damages the owner's network, there must be lots of fibers melting out there because of applications and content the management or stockholders of Verizon don't personally like. (I regularly criticize RCN over links that I purchase from them - is that damaging their network? - I presume it is something they don't like, but perhaps they enjoy it?) I'm merely saying "you can't have your cake and eat it too" - you can't claim to be part of the INTERoperability-defined NETwork and claim rights to arbitrarily and unilaterally be non-interoperable. The only "punishment" is disconnection. That leaves the operator completely free (though it may lose lots of customers because of it - is there a natural right to business success independent of strategic choices to fail?). There are certainly many issues about protecting networks and the Internet-as-a-whole that are actually real in the context of the design goals of the Internet. For example, finding ways to prevent actual damage to networks, preventing "teaming up" by operators of disparate networks to act against the interests of all, etc. These require careful thinking, of the sort that can only be done by listening to the meaning of someone's words, not twisting their meaning. Greg Skinner wrote: > On Wed, Mar 08, 2006 at 07:56:12AM -0500, David P. Reed wrote: > >> Saikat Guha wrote: >> >>> Is there a way to architect the Internet to give the network operator >>> full control over his network? So, when his boss (who paid for the wires >>> and routers) asks him to block application X, he can do just that and >>> not cause the collateral damage that firewall-hacks cause today. >>> Shameless plug: we believe signaling is one way to work _with_ the >>> network, and not against it >>> (http://saikat.guha.cc/pub/sosp05wip-guha.pdf). But, this is just one >>> solution. >>> >>> >> I'm amazed. The network operator in this case wants to join the >> Internet, but not join the Internet.
>> >> The Internet is a fully interoperable network. That means inherently >> that all operators that carry Internet traffic agree to carry their fair >> share. >> > > Hmmm ... I don't remember offhand any Internet design document that > states this. There were restrictive policies implemented, even in the > original Internet, for cause (such as the Mailbridges that could be > configured to deny traffic from the ARPAnet to the MILnet except for > destination SMTP port). > > >> What you are describing is not the Internet, but something else. The >> "cooperation-optional" network, perhaps? Or maybe the "screw you" network? >> > > Rather than arguing about whether this is or is not the Internet, > perhaps the question should be reframed as whether this constitutes a > set of principles upon which the next generation network can be > built. > > >> If the network advertises that it routes packets to a destination, how >> is the source to know that its packets will be destroyed based on their >> content? >> > > The way I read the paper, the source would be notified that the > attempt was refused due to insufficient privilege. > > >> At that point, it's time for those who agree to the original terms of >> the Internet social compact (which is far more than social) to >> blackball, boycott, and refuse to connect to that operator. Screw him. >> > > Why? Because he wants to protect his network? 
> > > From touch at ISI.EDU Fri Mar 10 07:08:35 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 10 Mar 2006 07:08:35 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <20060310060554.GD4452@cs.uoregon.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> <20060310060554.GD4452@cs.uoregon.edu> Message-ID: <44119673.5030604@isi.edu> Daniel Stutzbach wrote: > On Thu, Mar 09, 2006 at 09:19:29PM -0800, Joe Touch wrote: >>>> The FIN_WAIT_2 results in kept state until a new connection is tried >>>> that collides. >>> That could be a very long time indeed. >> The point is that it doesn't matter. The state gets cleaned up ONLY when >> it interferes with a new connection. Cleaning up old state isn't part of >> how TCP is designed. > > RFCs or no, most real-world TCP implementations do exit FIN_WAIT_2 > after a timeout. Windows, Linux, OpenBSD, NetBSD, FreeBSD, and > Solaris all do it. Perhaps to be added to a future update to RFC 2525. I.e., just because it's implemented doesn't mean it's not a bug. Joe From perfgeek at mac.com Fri Mar 10 07:42:23 2006 From: perfgeek at mac.com (rick jones) Date: Fri, 10 Mar 2006 07:42:23 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <44110C61.2030301@isi.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> Message-ID: >> that basically says it won't be sending any more data but might be >> receiving more data. > > Why isn't that a CLOSE()? (or is it a TCP CLOSE(), but not a socket > 'close'?) indeed, it is not a socket close(). the socket remains "open" but just in simplex (receive only). 
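[Editor's note] The half-closed, "simplex (receive only)" state rick describes can be exercised directly from the sockets API. A minimal sketch (Python over loopback; all names are local to this example, not from any message in the thread):

```python
import socket
import threading

# Sketch of TCP half-close: the client shuts down its send side
# (which sends a FIN) yet keeps receiving. The socket is not closed;
# it becomes a receive-only, simplex channel.
def serve(listener):
    conn, _ = listener.accept()
    assert conn.recv(1024) == b""      # the client's FIN reads as EOF here
    conn.sendall(b"still talking")     # the server's send side is still open
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=serve, args=(listener,))
t.start()

client = socket.socket()
client.connect(listener.getsockname())
client.shutdown(socket.SHUT_WR)        # half-close: no more sends from us
chunks = []
while True:                            # ...but the receive path still works
    b = client.recv(1024)
    if not b:
        break
    chunks.append(b)
data = b"".join(chunks)
client.close()
t.join()
listener.close()
```

After the `shutdown(SHUT_WR)`, `data` still carries everything the peer sent, which is exactly the distinction rick draws between a socket `close()` and a TCP-level half-close.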
>>> The FIN_WAIT_2 results in kept state until a new connection is tried >>> that collides. >> >> That could be a very long time indeed. > > The point is that it doesn't matter. The state gets cleaned up ONLY > when > it interferes with a new connection. Cleaning up old state isn't part > of > how TCP is designed. Is that why we had such "fun" with FIN_WAIT_2 and web servers? > Starting a listen isn't a collision - it doesn't do anything at the TCP > level. The collision happens only when the new connection is attempted, > which, for that model, assumes the remote side sends the SYN. If the > SYN > is from a different port, there's no interference and it proceeds as > usual. It's only when the SYN received indicates the port from the old > connection that the state gets cleaned up - or needs to. Should the semantics of SO_REUSEADDR be the default then? >>> It happens on the remote end when the old connection tries to open a >>> new >>> connection, where at some point the other side sends a RST or a FIN. >> >> Which may never happen when the remote simply goes poof. > > In which case there's nothing to interfere, and thus no reason to > cleanup. _No_ reason? Admittedly, the TCP connection state doesn't have to be particularly large, but there can be a particularly large number of them. >>> I.e., overall, the APPLICATION on one side or the other has to decide >>> what to do. This can be accomplished by a global OS parameter that >>> effectively emulates the timeout for the application, but to TCP >>> it's an >>> application-layer decision. >> >> Isn't that global OS parameter that effectively emulates the timeout >> for >> the application effectively a TCP keepalive? > > A keepalive would send TCP packets with no data, or - in the absence of > such - decide to terminate the connection. Having the 'application' > (OS, > in this case, but as far as TCP is concerned, it's just anything above > TCP) terminate the connection is the application's decision. 
It has > nothing to do with TCP - or how much or little progress it is making. So the receive-only side of a simplex TCP connection should have an an application-level timeout that goes pop when no data has been received from the remote side, since by definition it cannot send an application-level keepalive yes? That TCP could send something to at least reasonably guess if the remote is still there is a don't care. rick jones there is no rest for the wicked, yet the virtuous have no pillows From Jon.Crowcroft at cl.cam.ac.uk Fri Mar 10 07:54:34 2006 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Fri, 10 Mar 2006 15:54:34 +0000 Subject: [e2e] tcp connection timeout Message-ID: I keep getting these tcp connection timeout messages. Is it because the end2end list is broken? j. From touch at ISI.EDU Fri Mar 10 11:53:37 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 10 Mar 2006 11:53:37 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> Message-ID: <4411D941.20206@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 rick jones wrote: >>> that basically says it won't be sending any more data but might be >>> receiving more data. >> >> Why isn't that a CLOSE()? (or is it a TCP CLOSE(), but not a socket >> 'close'?) > > indeed, it is not a socket close(). the socket remains "open" but just > in simplex (receive only). > >>>> The FIN_WAIT_2 results in kept state until a new connection is tried >>>> that collides. >>> >>> That could be a very long time indeed. >> >> The point is that it doesn't matter. The state gets cleaned up ONLY when >> it interferes with a new connection. Cleaning up old state isn't part of >> how TCP is designed. 
> > Is that why we had such "fun" with FIN_WAIT_2 and web servers? That was largely TIME_WAIT. There were versions that had problems with FIN_WAIT_2, but mostly due to poor application behavior. >> Starting a listen isn't a collision - it doesn't do anything at the TCP >> level. The collision happens only when the new connection is attempted, >> which, for that model, assumes the remote side sends the SYN. If the SYN >> is from a different port, there's no interference and it proceeds as >> usual. It's only when the SYN received indicates the port from the old >> connection that the state gets cleaned up - or needs to. > > Should the semantics of SO_REUSEADDR be the default then? That only helps if you're in TIME_WAIT. If you want to do a listen and your OS tells you there's an error (because you've bound to the whole tuple, rather than leaving some aspects open), you get an error which should help YOU decide to issue a corresponding abort call - IF YOU WANT. >>>> It happens on the remote end when the old connection tries to open a >>>> new >>>> connection, where at some point the other side sends a RST or a FIN. >>> >>> Which may never happen when the remote simply goes poof. >> >> In which case there's nothing to interfere, and thus no reason to >> cleanup. > > _No_ reason? Admittedly, the TCP connection state doesn't have to be > particularly large, but there can be a particularly large number of them. TCP is not optimized for this anywhere else. If you want to redesign the WHOLE protocol to minimize the amount of spare state hanging around, that'd be interesting, but a significant redesign. TCP is currently designed to clean state _only_ when it interferes with a new connection. >>>> I.e., overall, the APPLICATION on one side or the other has to decide >>>> what to do. This can be accomplished by a global OS parameter that >>>> effectively emulates the timeout for the application, but to TCP >>>> it's an >>>> application-layer decision. 
>>> Isn't that global OS parameter that effectively emulates the timeout for >>> the application effectively a TCP keepalive? >> >> A keepalive would send TCP packets with no data, or - in the absence of >> such - decide to terminate the connection. Having the 'application' (OS, >> in this case, but as far as TCP is concerned, it's just anything above >> TCP) terminate the connection is the application's decision. It has >> nothing to do with TCP - or how much or little progress it is making. > > So the receive-only side of a simplex TCP connection should have an > application-level timeout that goes pop when no data has been received > from the remote side, since by definition it cannot send an > application-level keepalive, yes? You chose to use TCP as a simplex channel by leaving it half-closed and continuing to send new data and "it hurts when I do that"-- so don't _do_ that. I.e., if your application decides to half-close the connection, it has little right to complain it no longer has that data channel to play with. > That TCP could send something to at > least reasonably guess if the remote is still there is a don't care. Sure - TCP could do lots of things to compensate for a poorly written application; as others have noted, it's not an issue as to what can be put into TCP, but rather what should. Adding features to compensate for the deliberate misbehavior of an application seems a particularly poor motivation. Joe -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFEEdlBE5f5cImnZrsRAkiVAJ94WipB+8/4WCvB4Q2w7RrInvp8sACglG5I Chio3mIyYyzfzaT34fNMMzg= =pKlX -----END PGP SIGNATURE----- From michael.welzl at uibk.ac.at Fri Mar 10 13:32:11 2006 From: michael.welzl at uibk.ac.at (Michael Welzl) Date: Fri, 10 Mar 2006 22:32:11 +0100 Subject: [e2e] Since we're already learning TCP fundamentals... 
References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> <4411D941.20206@isi.edu> Message-ID: <000501c6448a$17cdd9a0$0200a8c0@fun> ... here's a stupid question: Why does TCP count bytes and not segments? Bytestream semantics, I know - but what's the benefit? TCP is not supposed to receive half a segment AFAIK, and counting segments = less space, or less risk of wrapping. There must have been a reason for this design choice. I expected to find an explanation in RFC 793, but I couldn't find anything - might have overlooked it though... Cheers, Michael From fred at cisco.com Fri Mar 10 14:10:19 2006 From: fred at cisco.com (Fred Baker) Date: Fri, 10 Mar 2006 14:10:19 -0800 Subject: [e2e] Since we're already learning TCP fundamentals... In-Reply-To: <000501c6448a$17cdd9a0$0200a8c0@fun> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> <4411D941.20206@isi.edu> <000501c6448a$17cdd9a0$0200a8c0@fun> Message-ID: <45F230BA-ECEE-4BB5-9C95-C5BF28195AC1@cisco.com> oh my goodness. this was a long debate... Let's say that there is no "right" way to do it - one could count octets or bits or segments and it would work. In 1980, there were a number of transports that in fact counted segments - XNS SPP, DECNET, Apple, ISO, and so on. When the proponents of one or another were asked about the claims of the other, they would generally note that an API could be built using their version that simulated the other. 
And they were correct; RFC 1006 describes a way to run the ISO transport over TCP, and there are several API implementations that run streams over packet-based transports. When I implemented the XNS SPP for CDC (seriously many moons ago), there was one problem that I encountered in which counting packets (as the Sequenced Packet Protocol called them) worked against me. That was when the network sent an error message indicating that I had sent a packet that was too large (IDP doesn't fragment). In that case, I might well have sent subsequent packets, and I therefore didn't have the option of resending the packet as several sequential smaller packets. I had to close the connection and restart. I didn't have that problem with TCP; I could simply send several smaller segments. See PMTU. I would also note that the reverse is true; if the Nagle option is implemented and the one outstanding segment is lost and therefore needs to be retransmitted, and if I happen to have more data than I originally sent, I can actually send new data in the retransmitted segment if I like. On Mar 10, 2006, at 1:32 PM, Michael Welzl wrote: > ... here's a stupid question: > > Why does TCP count bytes and not segments? > > Bytestream semantics, I know - but what's the benefit? TCP is > not supposed to receive half a segment AFAIK, and counting > segments = less space, or less risk of wrapping. > > There must have been a reason for this design choice. > I expected to find an explanation in RFC 793, but I couldn't > find anything - might have overlooked it though... > > Cheers, > Michael From craig at aland.bbn.com Fri Mar 10 15:07:17 2006 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 10 Mar 2006 18:07:17 -0500 Subject: [e2e] Since we're already learning TCP fundamentals... In-Reply-To: Your message of "Fri, 10 Mar 2006 22:32:11 +0100." 
<000501c6448a$17cdd9a0$0200a8c0@fun> Message-ID: <20060310230717.355CB67@aland.bbn.com> Maintaining segment boundaries is a performance challenge. You cannot trust applications to pick segment boundaries (a size that is right for the application may be too small [see Nagle algorithm] or too big [see Mogul/Kent on why fragmentation is bad]). If you let TCP pick segment boundaries, what happens if TCP picks wrong (MTU changes? -- see previous paragraph) or needs to retransmit (aggregating segments into one big packet is a win..., but aggregation is a complex mechanism...) Craig PS: You're right that segments make some mechanisms a lot simpler (if you want to take a peek, I can send you a UNIX RDP implementation from c. 1989 -- very clean). But the semantics of segments is lousy for performance and flexibility. In message <000501c6448a$17cdd9a0$0200a8c0 at fun>, "Michael Welzl" writes: >... here's a stupid question: > >Why does TCP count bytes and not segments? > >Bytestream semantics, I know - but what's the benefit? TCP is >not supposed to receive half a segment AFAIK, and counting >segments = less space, or less risk of wrapping. > >There must have been a reason for this design choice. >I expected to find an explanation in RFC 793, but I couldn't >find anything - might have overlooked it though... >Cheers, >Michael From Anil.Agarwal at viasat.com Fri Mar 10 17:39:15 2006 From: Anil.Agarwal at viasat.com (Agarwal, Anil) Date: Fri, 10 Mar 2006 17:39:15 -0800 Subject: [e2e] Since we're already learning TCP fundamentals... Message-ID: We can envision TCP offering a byte-stream service to applications (no user segment boundaries), as it does now, but internally use segments (appropriately sized, with PMTUD) and segment numbers. If PMTU changes in the middle of a connection, then rely on IP fragmentation/reassembly or some new procedure. 
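[Editor's note] The repacketization freedom that byte counting buys (Fred's SPP anecdote, and the PMTU point above) can be sketched in a few lines. This is a toy illustration only, not real TCP code; the function name and values are invented for the example:

```python
# Toy sketch of byte-based sequence numbering: because sequence numbers
# count bytes, unacknowledged data can be re-split into segments of ANY
# size on retransmission without renumbering anything. Segment-based
# numbering would renumber every segment, invalidating the peer's acks.
def segments(buf, start_seq, mss):
    """Split a byte buffer into (seq, payload) pieces of at most mss bytes."""
    out = []
    seq = start_seq
    for i in range(0, len(buf), mss):
        chunk = buf[i:i + mss]
        out.append((seq, chunk))
        seq += len(chunk)          # sequence space advances by bytes sent
    return out

data = b"abcdefghij"               # 10 unacknowledged bytes at seq 1000
original = segments(data, 1000, 8) # first transmission: 8-byte "MSS"
resent = segments(data, 1000, 4)   # retransmission after a PMTU drop
```

Here `original` is `[(1000, b'abcdefgh'), (1008, b'ij')]` while `resent` is `[(1000, b'abcd'), (1004, b'efgh'), (1008, b'ij')]`: same byte ranges, different packetization, and any ack for "bytes up to 1008" remains valid across both.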
That can lead to a substantial CPU performance boost and code simplification for TCP and TCP buffer management, especially with SACK and PAWS. Would that be worth something? Perhaps, there are historical reasons why TCP was designed to use byte numbers - telnet over 2.4 kbps links, aggregation, 2^32 seemed like a very large number, etc. Regards, Anil -----Original Message----- From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] Sent: Friday, March 10, 2006 6:07 PM To: Michael Welzl Cc: End to End Subject: Re: [e2e] Since we're already learning TCP fundamentals... Maintaining segment boundaries is a performance challenge. You cannot trust applications to pick segment boundaries (size that is right for application may be too small [see Nagle algorithm] or too big [see Mogul/Kent on why fragmentation is bad]). If you let TCP pick segment boundaries, what happens if TCP picks wrong (MTU changes? -- see previous paragraph) or needs to retransmit (aggregating segments into one big packet is a win..., but aggregation is an complex mechanism...) Craig PS: You're right that segments make some mechanisms a lot simpler (if you want to take a peek, I can send you a UNIX RDP implementation from c. 1989 -- very clean). But the semantics of segments is lousy for performance and flexibility. In message <000501c6448a$17cdd9a0$0200a8c0 at fun>, "Michael Welzl" writes: >... here's a stupid question: > >Why does TCP count bytes and not segments? > >Bytestream semantics, I know - but what's the benefit? TCP is >not supposed to receive half a segment AFAIK, and counting >segments = less space, or less risk of wrapping. > >There must have been a reason for this design choice. >I expected to find an explanation in RFC 793, but I couldn't >find anything - might have overlooked it though... 
> >Cheers, >Michael From gds at best.com Fri Mar 10 19:35:20 2006 From: gds at best.com (Greg Skinner) Date: Fri, 10 Mar 2006 20:35:20 -0700 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <441196BB.7060000@reed.com>; from dpreed@reed.com on Fri, Mar 10, 2006 at 10:09:47AM -0500 References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> <441196BB.7060000@reed.com> Message-ID: <20060310203520.A58167@gds.best.vwh.net> On Fri, Mar 10, 2006 at 10:09:47AM -0500, David P. Reed wrote: > Tell me exactly how does allowing an application that a network owner > does not like (perhaps promoting abortion for all or advocating murder > of abortionists, depending on your preferences for extremism, or > perhaps just charging usurious rates of interest or promoting gift > economies) "damage the network"? I went back and reread Saikat's paper. I did not view his remarks in the light that you seem to. I read them as "a network operator would like to protect his network from abuse, and enable its authorized users to freely communicate." --gregbo From perfgeek at mac.com Sat Mar 11 09:12:03 2006 From: perfgeek at mac.com (rick jones) Date: Sat, 11 Mar 2006 09:12:03 -0800 Subject: [e2e] tcp connection timeout In-Reply-To: <4411D941.20206@isi.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> <4411D941.20206@isi.edu> Message-ID: <0fba95170d56368d26ef9e6ce6cc8cef@mac.com> >> Is that why we had such "fun" with FIN_WAIT_2 and web servers? > > That was largely TIME_WAIT. TIME_WAIT was a benchmarking issue with which we dealt reasonably easily simply by hashing the TCP connection list. 
I was very deliberately picking FIN_WAIT_2. > There were versions that had problems with FIN_WAIT_2, but mostly due > to > poor application behavior. > >>> Starting a listen isn't a collision - it doesn't do anything at the >>> TCP >>> level. The collision happens only when the new connection is >>> attempted, >>> which, for that model, assumes the remote side sends the SYN. If the >>> SYN >>> is from a different port, there's no interference and it proceeds as >>> usual. It's only when the SYN received indicates the port from the >>> old >>> connection that the state gets cleaned up - or needs to. >> >> Should the semantics of SO_REUSEADDR be the default then? > > That only helps if you're in TIME_WAIT. Really? I thought that in just about all implementations the SO_REUSEADDR would allow a new listen endpoint while there were others in anything but LISTEN. > If you want to do a listen and your OS tells you there's an error > (because you've bound to the whole tuple, rather than leaving some > aspects open), you get an error which should help YOU decide to issue a > corresponding abort call - IF YOU WANT. That then would only be if you _didn't_ do the application close() so you still had a way to reach-out and touch the TCP endpoint. >> _No_ reason? Admittedly, the TCP connection state doesn't have to be >> particularly large, but there can be a particularly large number of >> them. > > TCP is not optimized for this anywhere else. If you want to redesign > the > WHOLE protocol to minimize the amount of spare state hanging around, > that'd be interesting, but a significant redesign. TCP is currently > designed to clean state _only_ when it interferes with a new > connection. TCP doesn't have a hole anywhere else. Any other state always has either a reference open to the application so will go away when the application goes away, or has a timer running that can get it along to the next state (perhaps closed). 
Only with FIN_WAIT_2 is there the prospect of a TCP endpoint with no connection to an application on "this" end, and waiting on an action from the other end that may never happen. >> So the receive-only side of a simplex TCP connection should have an an >> application-level timeout that goes pop when no data has been received >> from the remote side, since by definition it cannot send an >> application-level keepalive yes? > > You chose to use TCP as a simplex channel by leaving it in half-closed > and continuing to send new data and "it hurts when I do that"-- so > don't > _do_ that. > > I.e., if your application decides to half-close the connection, it has > little right to complain it no longer has that data channel to play > with. It feels a bit like a trap set by TCP then - look, but don't touch the simplex functionality we provide, because if you do, you cannot do what we assert _you_ must do which is provide application-level probes to make sure everything remains OK. rick jones Wisdom teeth are impacted, people are affected by the effects of events. From braden at ISI.EDU Sun Mar 12 19:50:25 2006 From: braden at ISI.EDU (Bob Braden) Date: Sun, 12 Mar 2006 19:50:25 -0800 Subject: [e2e] Since we're already learning TCP fundamentals... In-Reply-To: <000501c6448a$17cdd9a0$0200a8c0@fun> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> <4411D941.20206@isi.edu> Message-ID: <5.1.0.14.2.20060312194654.02903c08@boreas.isi.edu> You are confusing a segment with a record. You meant to ask, why doesn't TCP count records? (A segment is just another name for a TCP packet; the sender is free to packetize any way it pleases, so the count of segments has no significance.) 
One simple answer to counting records is: because if you count records, the protocol has to negotiate record sizes. That is an extra mechanism. Bob Braden At 10:32 PM 3/10/2006 +0100, Michael Welzl wrote: >... here's a stupid question: > >Why does TCP count bytes and not segments? > >Bytestream semantics, I know - but what's the benefit? TCP is >not supposed to receive half a segment AFAIK, and counting >segments = less space, or less risk of wrapping. > >There must have been a reason for this design choice. >I expected to find an explanation in RFC 793, but I couldn't >find anything - might have overlooked it though... > >Cheers, >Michael From dpreed at reed.com Mon Mar 13 04:52:28 2006 From: dpreed at reed.com (David P. Reed) Date: Mon, 13 Mar 2006 07:52:28 -0500 Subject: [e2e] Since we're already learning TCP fundamentals... In-Reply-To: <5.1.0.14.2.20060312194654.02903c08@boreas.isi.edu> References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> <4411D941.20206@isi.edu> <5.1.0.14.2.20060312194654.02903c08@boreas.isi.edu> Message-ID: <44156B0C.7070102@reed.com> And in fact a TCP source may use different segmentation in a retransmission from the segmentation it used in the original transmission. This has advantages, too, for example when using TCP to send typed characters one at a time. There were those who advocated record-based counting, flow control, and error control in the original TCP design group, but the reason Bob states is one of several arguments against that (e.g., what if a record is bigger than the maximum segment size?) In general the TCP designers tended to avoid putting into the protocol anything that was argued solely because it would be "useful to most applications". 
This is one of the many places where end-to-end arguments were clearly uttered (though it's important to note that Saltzer, Clark and myself didn't describe and name the category of such arguments until a couple of years later, when we recognized the pattern as important). Bob Braden wrote: > You are confusing a segment with a record. You meant to ask, > why doesn't TCP count records? (A segment is just another > name for a TCP packet; the sender is free to packetize any way it > pleases, so the count of segments has no significance.) > > One simple answer to counting records is: because > if you count records, the protocol has to negotiate record sizes. > That is an extra mechanism. > > Bob Braden > > At 10:32 PM 3/10/2006 +0100, Michael Welzl wrote: >> ... here's a stupid question: >> >> Why does TCP count bytes and not segments? >> >> Bytestream semantics, I know - but what's the benefit? TCP is >> not supposed to receive half a segment AFAIK, and counting >> segments = less space, or less risk of wrapping. >> >> There must have been a reason for this design choice. >> I expected to find an explanation in RFC 793, but I couldn't >> find anything - might have overlooked it though... >> >> Cheers, >> Michael > > > From dpreed at reed.com Mon Mar 13 05:14:10 2006 From: dpreed at reed.com (David P. Reed) Date: Mon, 13 Mar 2006 08:14:10 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <20060310203520.A58167@gds.best.vwh.net> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> <441196BB.7060000@reed.com> <20060310203520.A58167@gds.best.vwh.net> Message-ID: <44157022.7020903@reed.com> Greg Skinner wrote: > I went back and reread Saikat's paper. I did not view his remarks in > the light that you seem to. 
> the light that you seem to. I read them as "a network operator would > like to protect his network from abuse, and enable its authorized > users to freely communicate." > I did not read the following paragraph from Saikat's email that way: > Is there a way to architect the Internet to give the network operator > full control over his network? So, when his boss (who paid for the wires > and routers) asks him to block application X, he can do just that and > not cause the collateral damage that firewall-hacks cause today. > It's important to realize that the Hushaphone decision was argued (and won) against AT&T's claim that ANY application they didn't like had a risk of "damaging" the network, which was demonstrably owned by AT&T. So there is a plausible (but outlandish) risk that any user action can damage the network (even attaching a piece of plastic to the phone handset!) The resolution of Carterfone was not based on a demonstration that there was NO risk to the network from attached devices. It was based on AT&T abusing its social contract with the US Government, whereby the government acknowledged a de facto monopoly, in exchange for a variety of public goods that it promised (such as investing in and deploying new technology via Bell Labs) and its failure to deliver those public goods. The same deal exists in the implicit Internet Compact (such as it is) - if you offer to carry IP traffic, you offer to carry all of it, just as all other AS's do. Subject of course to making yourself a target of directed attacks that are in fact real. The Internet as a whole aids each other in finding and fixing such problems. Unilateral behavior leads to balkanization, and at that point there is no Internet. From dpreed at reed.com Mon Mar 13 06:40:56 2006 From: dpreed at reed.com (David P. 
Reed) Date: Mon, 13 Mar 2006 09:40:56 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <44157022.7020903@reed.com> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> <441196BB.7060000@reed.com> <20060310203520.A58167@gds.best.vwh.net> <44157022.7020903@reed.com> Message-ID: <44158478.8050509@reed.com> Note that in the commentary about "social compacts" below I was making an analogy between the Internet compact and the AT&T-USGovt. compact. This analogy is, of course, limited. In particular the counterparties of the Internet compact are the set of all other Internet participants, not the government. The Internet is not an entity that fits within the jurisdiction of the US Govt. It transcends that boundary by its very nature as a framework for cooperation. Similarly English language culture transcends the US Government (though the French seem to think they define the French language by governmental fiat). David P. Reed wrote: > > Greg Skinner wrote: >> I went back and reread Saikat's paper. I did not view his remarks in >> the light that you seem to. I read them as "a network operator would >> like to protect his network from abuse, and enable its authorized >> users to freely communicate." >> > I did not read the following paragraph from Saikat's email that way: >> Is there a way to architect the Internet to give the network operator >> full control over his network? So, when his boss (who paid for the wires >> and routers) asks him to block application X, he can do just that and >> not cause the collateral damage that firewall-hacks cause today. 
>> > It's important to realize that the Hushaphone decision was argued (and > won) on the basis that AT&T's claim that ANY application they didn't > like had a risk of "damaging" the network, which was demonstrably > owned by AT&T. So there is a plausible (but outlandish) risk that > any user action can damage the network (even attaching a piece of > plastic to the phone handset!) > > The resolution of Carterfone was not based on a demonstration the > there was NO risk to the network from attached devices. It was based > on AT&T abusing its social contract with the US Government, whereby > the government acknowledged a de facto monopoly, in exchange for a > variety of public goods that it promised (such as investing in and > deploying new technology via Bell Labs) and its failure to deliver > those public goods. > > The same deal exists in the implicit Internet Compact (such as it is) > - if you offer to carry IP traffic, you offer to carry all of it, just > as all other AS's do. Subject of course to making yourself a target > of directed attacks that are in fact real. The Internet as a whole > aids each other in finding and fixing such problems. Unilateral > behavior leads to balkanization, and at that point there is no Internet. > > > From dmchiu at ie.cuhk.edu.hk Mon Mar 13 08:29:43 2006 From: dmchiu at ie.cuhk.edu.hk (Dah Ming Chiu) Date: Tue, 14 Mar 2006 00:29:43 +0800 Subject: [e2e] 0% NAT - checkmating the disconnectors References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> <441196BB.7060000@reed.com> <20060310203520.A58167@gds.best.vwh.net><44157022.7020903@reed.com> <44158478.8050509@reed.com> Message-ID: <73f901c646bb$59256750$db60bd89@iepclan.ie.cuhk.edu.hk> I have only read the last 2-3 posts of this thread - interesting discussion. 
What puzzles me is why it is necessary to have such a "social compact" to ensure global transit. Aren't market forces strong enough to guarantee connectivity? In other words, if you are an ISP preferring to limit transit, maybe some of your customers will find other transit providers? Of course, at some levels, when there is a monopoly, there still needs to be government regulation.

DMC
From tvest at pch.net Mon Mar 13 09:03:05 2006
From: tvest at pch.net (Tom Vest)
Date: Mon, 13 Mar 2006 12:03:05 -0500
Subject: [e2e] 0% NAT - checkmating the disconnectors
In-Reply-To: <73f901c646bb$59256750$db60bd89@iepclan.ie.cuhk.edu.hk>
References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> <441196BB.7060000@reed.com> <20060310203520.A58167@gds.best.vwh.net> <44157022.7020903@reed.com> <44158478.8050509@reed.com> <73f901c646bb$59256750$db60bd89@iepclan.ie.cuhk.edu.hk>
Message-ID: <088E4DAD-864D-4736-8EEB-FE5EE0DCA2A9@pch.net>

Hi Dah Ming,

Forgive me for belaboring the obvious, but IP transit services have to be delivered/terminated somewhere, and many of the world's "somewheres" are characterized by IP transit options that are highly constrained by non-economic factors and not very sensitive to economic pressures. Over time, some of these places do seem to respond to ecological pressures (i.e., would-be local operators shift production out to more favorable local environments, causing exaggerated differential growth rates), but the transit-constrained localities are often the very same places where exit is also vigorously discouraged.

Or so it seems to me...

Tom

From houyou at fmi.uni-passau.de Mon Mar 13 09:25:05 2006
From: houyou at fmi.uni-passau.de (Amine M. Houyou)
Date: Mon, 13 Mar 2006 18:25:05 +0100
Subject: [e2e] CFP: IWSOS 2006 - Passau (Germany) - Submission deadline approaching (1st International Workshop on Self-Organizing Systems)
Message-ID: <4415AAF1.4030702@fmi.uni-passau.de>

Dear all,

I would like to remind you of the approaching deadline for the first self-organizing systems related workshop: IWSOS 2006.

Important Dates
* Paper abstract deadline: March 23rd 2006, 11:59pm CET
* Paper submission deadline: April 1st 2006, 11:59pm CET

Apologies for the multiple copies.

-------------------------------------------------------------------------

CALL FOR PAPERS

New Trends in Network Architectures and Services:
International Workshop on Self-Organizing Systems (IWSOS 2006)
September 18 - 20, 2006
University of Passau, Germany
http://www.fmi.uni-passau.de/iwsos

Overview:

The evolution of the Internet reveals surprising turns and obstacles.
Centralized approaches to introducing new services and architectures have consistently failed to materialize at large scale. Quality of Service, group communication, and mobility support are only some examples of the difficulty with orchestrated approaches. The success story of the Internet, on the other hand, is strongly linked to decentralization. Robustness to failures and flexibility in introducing new applications such as the World Wide Web or Peer-to-Peer systems have provided key momentum for technological and economic advancement.

Future networks are envisioned to be highly complex and difficult to manage due to the heterogeneity of networks, the spontaneous set-up of networks, and the envisioned number of interconnected devices, appliances, and artifacts. The question is whether self-organization can be exploited on a larger scale to solve some of the pending problems of such future networks. Self-organization may even play a key architectural role in the future Internet, providing enhanced flexibility and evolvability. It is the goal of this workshop to bring together leading international and multi-disciplinary researchers to create a visionary forum for investigating the potential of self-organization and the means to achieve it.

Keynote Speaker:

* A Panel on "Would self-organized or self-managed networks lead to a better networking world?"
* A Works in Progress Session focusing on emerging research
* Industrial exhibitions and demonstrations

Important Dates:
* Paper abstract deadline: March 23rd 2006, 11:59pm CET
* Paper submission deadline: April 1st 2006, 11:59pm CET
* Notification of acceptance: May 18th 2006
* Camera-ready papers due: June 7th 2006, 11:59pm CET
* Author registration deadline: June 7th 2006, 11:59pm CET
* Early registration deadline: August 1st, 2006
* Hotel reservation cut-off date: August 15th, 2006
* Workshop dates: September 18th - 20th 2006
* Workshop reception and welcome party: September 17th 2006

Steering Committee:
* Hermann de Meer, University of Passau, Germany
* David Hutchison, Lancaster University, UK
* Bernhard Plattner, ETH Zurich, Switzerland
* James Sterbenz, University of Kansas, USA

Program Chairs:
* Hermann De Meer, University of Passau, Passau, Germany
* James Sterbenz, University of Kansas, Lawrence, KS, USA

--
Contacts: Amine Houyou, Richard Holzer, University of Passau, Germany

--
MEng. EE. Amine M. Houyou
Chair for Computer Communications and Networks - Prof. De Meer -
Faculty for Computer Science and Mathematics
University of Passau
Innstr. 43
D-94032 Passau
Germany
http://www.net.fmi.uni-passau.de

From WLiu at ntu.edu.sg Sun Mar 12 18:10:34 2006
From: WLiu at ntu.edu.sg (Liu Wei)
Date: Mon, 13 Mar 2006 10:10:34 +0800
Subject: [e2e] CFP netgames 2006
Message-ID: <7CD06E15ADF4104A9F2E4DC2DE678F89021F4405@EXCHANGE21.staff.main.ntu.edu.sg>

NetGames 2006
5th Workshop on Network & System Support for Games 2006
30-31 October 2006, Singapore

The field of networked games and entertainment has aroused great interest among researchers and developers in both academia and industry, as it is widely recognized as holding great promise for bringing exciting new features to computer games and attracting more and more players.
For example, next-generation console gaming platforms such as the Xbox 360 by Microsoft and the PlayStation 3 by Sony allow users to play networked games. Also, the undeniable attraction of online role-playing games (RPGs) has drawn an increasing number of gamers every year. The purpose of NetGames 2006 is to bring together researchers and developers from academia and industry to share ideas and present new research in understanding networked games and entertainment and in enabling the next generation of online games. The conference is organised by the Interaction and Entertainment Research Centre (Nanyang Technological University, Singapore) and the Association for Computing Machinery - Computer Human Interaction (Singapore) in co-operation with ACM SIGCOMM.

This year's workshop will be held in Singapore. Located in the heart of fascinating Southeast Asia, Singapore is a dynamic city rich in contrast and color where you'll find a harmonious blend of culture, cuisine, arts and architecture. As a thriving center of commerce and industry, Singapore has the busiest port in the world. Brimming with unbridled energy and bursting with exciting events, the city offers countless unique, memorable experiences waiting to be discovered.

Prospective authors are now invited to submit papers and extended abstracts describing research related to networked games and entertainment on all platforms via the conference website, http://www.netgames2006.org . Submission Deadline: July 15, 2006. ACM, the world's leading computer science society, will be publishing all accepted papers.

Submissions are sought in any area related to networked games and entertainment. Topics of interest include, but are not limited to:

1. Multi-player game architectures and platforms
2. Games on mobile and resource-scarce devices
3. Protocols for peer-to-peer networked games
4. Text and voice messaging in games
5. Prevention and detection of cheating
6. Latency compensation and hiding
7. Modeling, usage studies, and characterization
8. Systems support for authentication and accounting
9. Multiplayer mobile and ubiquitous games
10. Networks of sensors and actuators for games
11. Advanced network services for mobile games (localisation, identification, group management)
12. Security and rights management in games
13. Network and mobile game engines
14. Grid architectures for games
15. Advanced protocols and games (reliable multicast, group streaming, time and state synchronisation)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060313/1cc05efa/attachment.html

From saikat at cs.cornell.edu Mon Mar 13 22:02:33 2006
From: saikat at cs.cornell.edu (Saikat Guha)
Date: Tue, 14 Mar 2006 01:02:33 -0500
Subject: [e2e] 0% NAT - checkmating the disconnectors
In-Reply-To: <44157022.7020903@reed.com>
References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> <441196BB.7060000@reed.com> <20060310203520.A58167@gds.best.vwh.net> <44157022.7020903@reed.com>
Message-ID: <1142316153.13622.59.camel@localhost.localdomain>

On Mon, 2006-03-13 at 08:14 -0500, David P. Reed wrote:
> It was based on AT&T
> abusing its social contract with the US Government

Based on your example of a few lawsuits, it seems censorship and such need to be tackled at the social layer (Layer 10?), by the Government, through laws etc. I know this is an antagonistic thing to say, but while you can guarantee some "freedom" (of speech) at the transport layer, e2e suggests there is little value in doing so.

That said, as Greg rightly pointed out, the paper I linked is not about censorship. It is about letting authorized users freely communicate.

cheers,
--
Saikat

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 191 bytes
Desc: This is a digitally signed message part
Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060314/e3d0f773/attachment.bin

From michael.welzl at uibk.ac.at Tue Mar 14 02:44:40 2006
From: michael.welzl at uibk.ac.at (Michael Welzl)
Date: 14 Mar 2006 11:44:40 +0100
Subject: [e2e] Since we're already learning TCP fundamentals...
In-Reply-To: <44156B0C.7070102@reed.com>
References: <4408AB36.3000307@isi.edu> <20060304163746.GA22560@localhost.localdomain> <1141513982.517.10.camel@localhost.localdomain> <20060306190808.GK69460@hut.isi.edu> <44106BC5.4090400@isi.edu> <0e3bbfed062bc5ff42bc815c4c399858@mac.com> <44110C61.2030301@isi.edu> <4411D941.20206@isi.edu> <5.1.0.14.2.20060312194654.02903c08@boreas.isi.edu> <44156B0C.7070102@reed.com>
Message-ID: <1142333080.4798.107.camel@lap10-c703.uibk.ac.at>

Hi all,

Many thanks to everybody for your comments - it's really interesting to see how many facets there are to this question.

I was just about to ask why you can't use segment counters with variable-size segments (which some of you mentioned as the reason for using byte counters), but this:

On Mon, 2006-03-13 at 13:52, David P. Reed wrote:
> And in fact a TCP source may use different segmentation in a
> retransmission from the segmentation it used in the original
> transmission. This has advantages, too, for example when using TCP to
> send typed characters one at a time. There were those who advocated

explains it; and perhaps there are other reasons too.

Anyway, the answers to my question are far more interesting than I expected. Thanks again!

Cheers,
Michael

From dpreed at reed.com Tue Mar 14 05:55:00 2006
From: dpreed at reed.com (David P.
Reed) Date: Tue, 14 Mar 2006 08:55:00 -0500 Subject: [e2e] 0% NAT - checkmating the disconnectors In-Reply-To: <1142316153.13622.59.camel@localhost.localdomain> References: <20060222085720.6477C1A0214@smtp-1.hotpop.com> <43FC87C8.40706@isi.edu> <43FC9D43.7010800@reed.com> <1141793098.14453.30.camel@localhost.localdomain> <440ED46C.4040207@reed.com> <20060309151738.A52703@gds.best.vwh.net> <441196BB.7060000@reed.com> <20060310203520.A58167@gds.best.vwh.net> <44157022.7020903@reed.com> <1142316153.13622.59.camel@localhost.localdomain> Message-ID: <4416CB34.2070904@reed.com> Saikat Guha wrote: > I know this is an antagonistic thing to say, but while you can guarantee > some "freedom" (of speech) at the transport layer, e2e suggests there is > little value in doing so. > > I don't care about antagonism - I care about careful thinking: Don't twist my words. I was not suggesting (and would never suggest) "guaranteeing" anything at the transport layer - freedom of speech or anything else. I was suggesting that the transport layer of the Internet was an inappropriate layer to attempt to enforce application-level policies. Note I said the Internet. Hence my point that the structural definition of the Internet is that it is the collective agreement of many nets to carry each others' traffic. I cited the court cases, not to prove any point about the Internet, but to illustrate that there are subtleties in the issues we are discussing that relate to "rights" which are never absolute, and which are always competing with each other in context. Hushaphone focuses on the rights of owners. Carterfone focuses on social good (antitrust) and the right of the public to be protected from bad actions by monopolies (in its case a government-controlled and granted monopoly, rather than one earned by economic competition). 
Our legal system in the US has many aspects that are weighed by the courts - since the antitrust argument was not invoked in Hushaphone, the Hushaphone company lost its bid to offer add-on products (under that portion of the law). Separate from the law is a set of issues related to desirable policies (perhaps implemented by revisions of the law, either by statute or common law precedent - but as we computer scientists understand, the law is an implementation of policy - the same policy can be achieved by many implementation techniques).

The end-to-end arguments are predominantly arguments about architecture, and the Internet was designed around an architecture of cooperation and moving special functions to the edges as a means to implement the policy goals of a single world-wide unified network and of capability for incorporating unanticipated innovation (both below the neck of the hourglass in AS's and above it in applications).

The "rights" of network owners begin and end at the right to connect to the Internet (and to be accepted by the Internet as an AS). As I said, the network owner is not entitled by any rule I know of to be allowed to transport Internet traffic - so the remedy for not meeting the specs of the Internet should be disconnection (total and complete), also known as "routing around damage". If there were an actual law that guaranteed a network owner the right to carry traffic and to earn the related profits, that would conflict with the Internet's architectural assumptions.

In the original Internet design, we ignored such networks - attempting to include them was a waste of time and technical effort. Perhaps we should revisit that design choice. But in doing so, it would be wise to avoid assuming that such a state of affairs is so important that it must distort the Internet architecture and its primary goal of providing universal network-of-networks functionality.
The very reason pipsqueaks like Verizon are calling for franchise laws that exclude competitors is to disrupt the impact the Internet threatens to their business merely by providing competition. That's because the Internet works well, not because it works poorly.

From supratik at sprintlabs.com Tue Mar 14 13:50:14 2006
From: supratik at sprintlabs.com (Supratik Bhattacharyya)
Date: Tue, 14 Mar 2006 13:50:14 -0800
Subject: [e2e] ICNP 2006 Call for Papers
Message-ID: <43DA00CDF5C0EA43BA8ABCB662536305014AD3B9@PDAWB15C.ad.sprint.com>

=========================================================================

Call for Papers

14th IEEE International Conference on Network Protocols (ICNP 2006)
Santa Barbara, California, USA
November 12 - 15, 2006

ICNP 2006, the fourteenth IEEE International Conference on Network Protocols, is a single-track conference covering all aspects of network protocols including design, analysis, specification, verification, implementation, and performance. ICNP 2006 will be held in Santa Barbara, California, USA on Nov. 12-15, 2006. Papers describing significant research contributions to the field of network protocols are solicited for submission.
Topics of interest include, but are not limited to:
- Protocol testing, analysis, design and implementation
- Measurement and monitoring of protocols
- Protocols designed for specific functions, such as: routing, flow and congestion control, QoS, signaling, security, and resiliency
- Protocols designed for specific networks, such as: wireless and mobile networks, ad hoc and sensor networks, and ubiquitous networks

Papers must be no longer than ten (10) pages in IEEE double-column format with standard margins and at least a 10 point font. They should be formatted for printing on US LETTER (8.5" by 11") size paper. Only electronic submissions will be accepted, via the following URL. Papers must be submitted in PDF (Portable Document Format).

http://www.ieee-icnp.org/2006/submission.html

Detailed paper submission instructions are provided there. Papers must not have been previously published nor be under review by another conference or journal. Longer submissions will not be reviewed.

Please note the HARD deadline for registering the title and the abstract of the paper: 11:59pm Pacific Standard Time (PST), May 9, 2006. The deadline for submitting the actual paper (that has already been registered) is May 16, 2006, 11:59pm PST.

----------------------
Important Dates
----------------------
- Paper registration : May 9, 2006
- Paper submission : May 16, 2006
- Notification of acceptance : July 21, 2006
- Camera ready version : August 11, 2006

----------------------
Conference Organizers
----------------------
Executive Committee :
David Lee, Ohio State University, USA (chair)
Mostafa Ammar, Georgia Tech, USA
Ken Calvert, University of Kentucky, USA
Teruo Higashino, Osaka University, Japan
Raymond Miller, University of Maryland, USA

Organizing Committee :
General Chair : Kevin C.
Almeroth, University of California Santa Barbara, USA
Vice General Chair : David Yau, Purdue University, USA
Program Chair : Teruo Higashino, Osaka University, Japan
Panel Chair : Michalis Faloutsos, University of California, Riverside, USA
Local Arrangement Chair : Elizabeth Belding, University of California Santa Barbara, USA
Web Chair : Ben Zhao, University of California Santa Barbara, USA
Publicity Chair : Supratik Bhattacharyya, Sprint Advanced Technology Laboratories, USA
Poster Chair : Sonia Fahmy, Purdue University, USA

Technical Program Committee :
Sudhir Aggarwal, Florida State University, USA
Kevin Almeroth, UC Santa Barbara, USA
Ehab Al-Shaer, DePaul University, USA
Mostafa Ammar, Georgia Tech., USA
Anish Arora, The Ohio State University, USA
Bobby Bhattacharjee, Univ. of Maryland, USA
Bob Briscoe, British Telecom & UCL, UK
Ken Calvert, University of Kentucky, USA
Guohong Cao, Penn. State University, USA
Ana Cavalli, Institut National des Telecommunications, France
Shigang Chen, University of Florida, USA
Chan Mun Choon, National University of Singapore, Singapore
Jorge Cobb, University of Texas at Dallas, USA
Reuven Cohen, Technion, Israel
John Crowcroft, University of Cambridge, UK
Magda El Zarki, UC Irvine, USA
Cristian Estan, University of Wisconsin-Madison, USA
Sonia Fahmy, Purdue University, USA
Michalis Faloutsos, UC Riverside, USA
Mohamed Gouda, University of Texas at Austin, USA
Timothy G. Griffin, University of Cambridge, UK
Jim Griffioen, University of Kentucky, USA
Roland Groz, INPG-ENSIMAG Lab, France
Carl A. Gunter, UIUC, USA
Takahiro Hara, Osaka Univ., Japan
Ahmed Helmy, USC/ISI, USA
Jennifer C. Hou, UIUC, USA
Raj Jain, Washington University in St. Louis, USA
Kevin Jeffay, UNC, Chapel Hill, USA
Shudong Jin, Case Western Reserve University, USA
Yoshiaki Kakuda, Hiroshima City Univ., Japan
Hartmut Koenig, Brandenburg University of Technology, Germany
Sandeep Kulkarni, Michigan State University, USA
Tom La Porta, Penn. State University, USA
T. V.
Lakshman, Lucent Bell Labs, USA
David Lee, The Ohio State University, USA
Wang-Chien Lee, Penn. State University, USA
Baochun Li, Univ. of Toronto, Canada
Li (Erran) Li, Lucent Bell Labs, USA
Victor O. K. Li, The Univ. of Hong Kong, Hong Kong
Jorg Liebeherr, University of Virginia, USA
Yow-Jian Lin, Telcordia Technologies, USA
John C.S. Lui, Chinese University of Hong Kong, Hong Kong
Ibrahim Matta, Boston University, USA
Raymond Miller, Univ. of Maryland, USA
Kihong Park, Purdue University, USA
Thomas Plagemann, University of Oslo, Norway
K. K. Ramakrishnan, AT&T Research, USA
Jonathan Shapiro, Michigan State University, USA
Hiroshi Shigeno, Keio Univ., Japan
Raghupathy Sivakumar, Georgia Tech., USA
Peter Steenkiste, Carnegie Mellon University, USA
Terry Todd, McMaster University, Canada
Joe Touch, USC/ISI, USA
Jon Turner, Washington University in St. Louis, USA
Hasan Ural, University of Ottawa, Canada
Takashi Watanabe, Shizuoka Univ., Japan
Thomas Woo, Bell Labs, USA
Jianping Wu, Tsinghua Univ., China
Dong Xuan, The Ohio State University, USA
Hirozumi Yamaguchi, Osaka University, Japan
Keiichi Yasumoto, Nara Institute of Science and Technology, Japan
David Yau, Purdue University, USA
Bulent Yener, Rensselaer Polytechnic Institute (RPI), USA
Lixia Zhang, UCLA, USA
Zhi-Li Zhang, Univ. of Minnesota, USA
Ben Y. Zhao, UC Santa Barbara, USA

Steering Committee :
Simon Lam, University of Texas, USA (co-chair)
David Lee, Ohio State University, USA (co-chair)
Mostafa Ammar, Georgia Tech, USA
Ken Calvert, University of Kentucky, USA
Mohamed Gouda, University of Texas, USA
Teruo Higashino, Osaka University, Japan
Mike T.
Liu, Ohio State University, USA
Raymond Miller, University of Maryland, USA
Krishan Sabnani, Bell Labs, USA

Further Information :
Web site: http://www.ieee-icnp.org/2006/
E-mail: icnp2006-org AT cs.ucsb.edu
=======

From pdini at cisco.com Wed Mar 15 03:13:50 2006
From: pdini at cisco.com (Petre Dini (pdini))
Date: Wed, 15 Mar 2006 03:13:50 -0800
Subject: [e2e] ICISP 2006 || International Conference on Internet Surveillance and Protection || Côte d'Azur, France, August 27-29, 2006
Message-ID: <6B9C4B97B82F924485E26968EB05A6EE0150E68D@xmb-sjc-224.amer.cisco.com>

==============================
Apologies if you receive multiple copies

FIRST Call for Submissions

ICISP 2006
International Conference on Internet Surveillance and Protection
Côte d'Azur, France, August 27-29, 2006

For submissions, go to the ICISP 2006 page at http://www.iaria.org/conferences/ICISP.htm and click "Submit a paper".

Important deadlines:
Full paper submission: April 5, 2006
Authors Notification: April 25, 2006
Camera ready, full papers due: May 15, 2006

The International Conference on Internet Surveillance and Protection (ICISP 2006) initiates a series of special events targeting security, performance, and vulnerabilities in the Internet, as well as disaster prevention and recovery. Dedicated events focus on measurement, monitoring and lessons learnt in protecting the user. We solicit academic, research, and industrial contributions. ICISP 2006 will offer tutorials, plenary sessions, and panel sessions.

The ICISP 2006 Proceedings will be published by the IEEE Computer Society, posted in the IEEE Xplore system, and indexed by SCI.
The conference has the following specialized events: TRASI 2006: Internet traffic surveillance and interception IPERF 2006: Internet performance RTSEC 2006: Security for Internet-based real-time systems SYNEV 2006: Systems and networks vulnerabilities DISAS 2006: Disaster prevention and recovery EMERG 2006: Networks and applications emergency services MONIT 2006: End-to-end sampling, measurement, and monitoring REPORT 2006: Experiences & lessons learnt in securing networks and applications USSAF 2006: User safety, privacy, and protection over the Internet We welcome technical papers presenting research and practical results, position papers addressing the pros and cons of specific proposals, such as those being discussed in the standard fora or in industry consortia, survey papers addressing the key problems and solutions on any of the above topics, short papers on work in progress, and panel proposals. The topics suggested by the conference can be discussed in terms of concepts, state of the art, standards, implementations, running experiments and applications. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following, but not limited to, topic areas. Industrial presentations are not subject to these constraints. Tutorials on specific related topics and panels on challenging areas are encouraged. Regular papers Only .pdf or .doc files will be accepted for paper submission. All received papers will have a unique ID sent to the contact author by the EDAS system when submitting. Final author manuscripts will be 8.5" x 11" (two-column IEEE format), not exceeding 6 pages; max 4 extra pages allowed at additional cost. 
The formatting instructions can be found on the anonymous FTP site at: ftp://pubftp.computer.org/Press/Outgoing/proceedings/8.5x11%20-%20Formatting%20files/instruct.pdf Once you receive the notification of paper acceptance, the IEEE CS Press will provide you with an online author kit with all the steps an author needs to follow to submit the final version. The author kit's URL will be included in the letter of acceptance. Technical marketing/business/positioning presentations The conference initiates a series of business, technical marketing, and positioning presentations on the same topics. Speakers must submit a 10-12 slide presentation deck with substantial notes accompanying the slides, in the .ppt format (.pdf-ed). The slide deck will be published in the conference's CD collection, together with the regular papers. Please send your presentations to petre at iaria.org. Tutorials Tutorials provide overviews of current high-interest topics. Proposals can be for half- or full-day tutorials. Please send your proposals to petre at iaria.org Panel proposals: The organizers encourage scientists and industry leaders to organize dedicated panels dealing with controversial and challenging topics and paradigms. Panel moderators are asked to identify their guests and to ensure that their talks are prepared in time to meet our deadlines. Moderators must submit an official proposal, indicating their background, panelist names, their affiliations, the topic of the panel, as well as short biographies. For more information, petre at iaria.org Workshop proposals We welcome workshop proposals on issues complementary to the topics of this conference. 
Your requests should be forwarded to petre at iaria.org Committees: ICISP Advisory Committee: David Bonyuet, Delta Search Labs, USA Petre Dini, Cisco Systems Inc., USA // Concordia Univ., Canada Lothar Fritsch, Johann Wolfgang Goethe-University, Germany Stein Gjessing, Simula Research Laboratory, Norway Danielle Kaminsky, CERPAC, France John Kristoff, UltraDNS, USA Michael Logothetis, University of Patras, Greece Pascal Lorenz, University of Haute Alsace, France Bruce Maggs, Carnegie Mellon University and Akamai, USA Gerard Parr, University of Ulster Coleraine Campus, Northern Ireland Igor Podebrad, Commerzbank, Germany Raul Siles, Hewlett-Packard, USA Joseph (Joe) Touch, Information Sciences Institute, USA Henk Uijterwaal, RIPE, The Netherlands Rob van der Mei, Vrije Universiteit, The Netherlands ICISP 2006 Technical Program Committee: Ehab Al-Shaer, DePaul University, USA Ernst Biersack, Eurecom, France David Bonyuet, Delta Search Labs, USA Herbert Bos, VU Amsterdam, The Netherlands Wojciech Burakowski, Warsaw University of Technology, Poland Baek-Young Choi, University of Missouri-Kansas City, USA Benoit Claise, Cisco Systems, Inc., Belgium Petre Dini, Cisco Systems Inc., USA // Concordia Univ., Canada Thomas Dübendorfer, Google, Switzerland Nick Feamster, Georgia Tech, USA Lothar Fritsch, Johann Wolfgang Goethe-University, Germany Sorin Georgescu, Ericsson Research, Canada Stein Gjessing, Simula Research Laboratory, Norway Stefanos Gritzalis, University of the Aegean, Greece Fabrice Guillemin, France Telecom R&D, France Abdelhakim Hafid, University of Montreal, Canada Danielle Kaminsky, CERPAC, France Frank Hartung, Ericsson Research, Germany John Kristoff, UltraDNS, USA Pascal Lorenz, University of Haute Alsace, France Simon Leinen, Switch, Switzerland Michael Logothetis, University of Patras, Greece Tulin Mangir, California State University at Long Beach, USA Tony McGregor, Waikato University, New Zealand Muthu Muthukrishnan, Rutgers Polytechnic Institute, USA 
Jaime Lloret Mauri, Universidad Politécnica de Valencia, Spain Javier Lopez, University of Malaga, Spain Ioannis Moscholios, University of Patras, Greece Philippe Owezarski, LAAS, France Dina Papagiannaki, Intel Research, UK Gerard Parr, University of Ulster Coleraine Campus, Northern Ireland Igor Podebrad, Commerzbank, Germany Reza Rejaie, University of Oregon, USA Fulvio Risso, Politecnico di Torino, Italy Heiko Rossnagel, Johan Wolfgang Goethe-University, Germany Matthew Roughan, University of Adelaide, Australia Kamil Saraç, The University of Texas at Dallas, USA Raul Siles, Hewlett-Packard, USA Charalabos Skianis, National Centre for Scientific Research Demokritos, Greece Joel Sommers, University of Wisconsin, USA Joseph (Joe) Touch, Information Sciences Institute, USA Steve Uhlig, Université catholique de Louvain, Belgium Henk Uijterwaal, RIPE, The Netherlands Rob van der Mei, Vrije Universiteit and CWI, Amsterdam, Netherlands Darryl Veitch, University of Melbourne, Australia Arno Wagner, ETH Zurich, Switzerland Paul Watters, Macquarie University, Australia James Won-Ki Hong, POSTECH, Korea Weider D. Yu, San Jose State University, USA Zhi-Li Zhang, University of Minnesota, USA Tanja Zseby, Fraunhofer FOKUS, Germany Details: petre at iaria.org, conf at iaria.org, dumitru.roman at deri.org ============================================== -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060315/80bb9254/attachment-0001.html From pdini at cisco.com Wed Mar 15 03:27:25 2006 From: pdini at cisco.com (Petre Dini (pdini)) Date: Wed, 15 Mar 2006 03:27:25 -0800 Subject: [e2e] =?iso-8859-1?q?ICDT_2006_=7C=7C_International_Conference_on?= =?iso-8859-1?q?_Digital_Telecommunications_=7C=7C_C=F4te_d=27Azur?= =?iso-8859-1?q?=2C_France=2C_August_30-31=2C_2006?= Message-ID: <6B9C4B97B82F924485E26968EB05A6EE0150E68F@xmb-sjc-224.amer.cisco.com> ============================== Apologies if you receive multiple copies FIRST Call for Submissions ICDT 2006 International Conference on Digital Telecommunications Côte d'Azur, France, August 30 - September 2, 2006 For submissions, go to the ICDT 2006 page at http://www.iaria.org/conferences/ICDT.htm and click Submit a paper. Important deadlines: Full paper submission April 5, 2006 Authors Notification: April 25, 2006 Camera ready, full papers due: May 15, 2006 The International Conference on Digital Telecommunications (ICDT 2006) initiates a series of special events focusing on telecommunications aspects in multimedia environments. The scope of the conference is to focus on the lower layers of systems interaction and identify the technical challenges and the most recent achievements. 
The conference will serve as a forum for researchers from both academia and industry, professionals, and practitioners to present and discuss the current state of the art in research and best practices, as well as future trends and needs (both in research and practice), in the areas of multimedia telecommunications, signal processing in telecommunications, data processing, audio transmission and reception systems, voice over packet networks, video, conferencing, telephony, as well as image producing, sending, and mining, speech producing and processing, IP/Mobile TV, Multicast/Broadcast Triple-Quadruple-play, content production and distribution, multimedia protocols, H-series towards SIP, and control and management of multimedia telecommunications. We welcome technical papers presenting research and practical results, position papers addressing the pros and cons of specific proposals, such as those being discussed in the standard fora or in industry consortia, survey papers addressing the key problems and solutions on any of the above topics, short papers on work in progress, and panel proposals. The topics suggested by the conference can be discussed in terms of concepts, state of the art, standards, implementations, running experiments and applications. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following, but not limited to, topic areas. Industrial presentations are not subject to these constraints. Tutorials on specific related topics and panels on challenging areas are encouraged. ICDT 2006 will offer tutorials, plenary sessions, and panel sessions. The ICDT 2006 Proceedings will be published by the IEEE Computer Society Press, posted on the Xplore IEEE system, and indexed by SCI. 
The conference has the following specialized events: MULTE 2006: Multimedia Telecommunications SIGNAL 2006: Signal processing in telecommunications DATA 2006: Data processing AUDIO 2006: Audio transmission and reception systems VOICE 2006: Voice over packet networks VIDEO 2006: Video, conferencing, telephony IMAGE 2006: Image producing, sending, and mining SPEECH 2006: Speech producing and processing IPTV 2006: IP/Mobile TV MULTI 2006: Multicast/Broadcast Triple-Quadruple-play CONTENT 2006: Production, distribution HXSIP 2006: H-series towards SIP MEDMAN 2006: Control and management of multimedia telecommunications Regular papers Only .pdf or .doc files will be accepted for paper submission. All received papers will have a unique ID sent to the contact author by the EDAS system when submitting. Final author manuscripts will be 8.5" x 11" (two-column IEEE format), not exceeding 6 pages; max 4 extra pages allowed at additional cost. 
The formatting instructions can be found on the anonymous FTP site at: ftp://pubftp.computer.org/Press/Outgoing/proceedings/8.5x11%20-%20Formatting%20files/instruct.pdf Once you receive the notification of paper acceptance, the IEEE CS Press will provide you with an online author kit with all the steps an author needs to follow to submit the final version. The author kit's URL will be included in the letter of acceptance. Technical marketing/business/positioning presentations The conference initiates a series of business, technical marketing, and positioning presentations on the same topics. Speakers must submit a 10-12 slide presentation deck with substantial notes accompanying the slides, in the .ppt format (.pdf-ed). The slide deck will be published in the conference's CD collection, together with the regular papers. Please send your presentations to petre at iaria.org. Tutorials Tutorials provide overviews of current high-interest topics. Proposals can be for half- or full-day tutorials. Please send your proposals to petre at iaria.org Panel proposals: The organizers encourage scientists and industry leaders to organize dedicated panels dealing with controversial and challenging topics and paradigms. Panel moderators are asked to identify their guests and to ensure that their talks are prepared in time to meet our deadlines. Moderators must submit an official proposal, indicating their background, panelist names, their affiliations, the topic of the panel, as well as short biographies. For more information, petre at iaria.org Workshop proposals We welcome workshop proposals on issues complementary to the topics of this conference. 
Your requests should be forwarded to petre at iaria.org Committees: ICDT Advisory Committee: Tulin Atmaca, INT, France Claus Bauer, Dolby Laboratories, USA Petre Dini, Cisco Systems Inc., USA // Concordia Univ., Canada Pascal Lorenz, University of Haute Alsace, France Mihai Nadin, University of Texas - Dallas, USA Gerald Schaefer, Nottingham Trent University, UK Charalabos Skianis, University of the Aegean, Greece Hans-Jürgen Zepernick, Blekinge Institute of Technology, Sweden Thanh van Do, Telenor, Norway Carlos Becker Westphall, Universidade Federal de Santa Catarina, Brazil ICDT 2006 Technical Program Committee: Ralf Ackermann, Darmstadt University of Technology, Germany Ozgur B. Akan, Middle East Technical University Ankara, Turkey Antônio Marcos Alberti, INATEL, Brazil Maria Teresa Andrade, Universidade do Porto, Portugal Regina B. Araujo, Universidade Federal de São Carlos, Brazil Stefan Arbanowski, Fraunhofer FOKUS, Germany Tulin Atmaca, INT, France Koichi Asatani, Kogakuin University, Japan Irfan Awan, University of Bradford, UK Dragana Bajic, University of Novi Sad, Serbia and Montenegro Luis Orozco Barbosa, Universidad de Castilla La Mancha, Spain Claus Bauer, Dolby Laboratories, USA Paolo Bellavista, Università di Bologna, Italy Yolande Berbers, Katholieke Universiteit Leuven, Belgium Chatschik Bisdikian, IBM T.J. Watson Research Center, USA Vasile Bota, Technical University of Cluj-Napoca, Romania Nizar Bouabda, IRISA, France Torsten Braun, University of Bern, Switzerland Stefano Bregni, Politecnico di Milano, Italy Richard Brooks, Clemson University, USA Wes Carter, Martel Europe, UK Wojciech Cellary, The Poznan University of Economics, Poland Tiziana Catarci, Università 
degli Studi di Roma "La Sapienza", Italy Han-Chieh Chao, National Ilan University, Taiwan Claude Chaudet, ENST, France Thomas Chen, Southern Methodist University, USA Liang Cheng, Lehigh University, USA Song Ci, University of Massachusetts - Boston, USA Hsiao-Hwa Chen, National Sun Yat-Sen University, Taiwan, P. R. China Adrian Conway, Verizon, USA Pedro Angel Cuenca Castillo, Universidad de Castilla-La Mancha, Spain Gerard Damm, Alcatel - Dallas, USA Francisco Delicado, Universidad de Castilla La Mancha, Spain Petre Dini, Cisco Systems Inc., USA // Concordia Univ., Canada Christos Douligeris, University of Piraeus, Greece Schahram Dustdar, Technical University of Vienna, Austria José Luis González Sánchez, Universidad de Extremadura Lisandro Zambenedetti Granville, Federal University of Rio Grande do Sul, Brazil Fabrizio Granelli, University of Trento, Italy Stefanos Gritzalis, University of the Aegean, Greece Hélio Crestana Guardia, Universidade Federal de São Carlos, Brazil Rock Ha, University of Ottawa, Canada Abdelhakim Hafid, University of Montreal, Canada Ridha Hamila, Etisalat University College, UAE Stefan Håkansson, LK (LU/EAB) Ericsson, Sweden Frank Hartung, Ericsson Research, Germany Oliver Heckmann, Darmstadt University of Technology, Germany Mario Huemer, University of Erlangen, Germany Georgi Iliev, Technical University of Sofia, Bulgaria Sandor Imre, Budapest University of Technology and Economy, Hungary Kai Jakobs, Technical University of Aachen (RWTH), Germany Xiang Ji, University of Kansas, USA Raja Jurdak, University of California - Irvine, USA Hyun Kook Kahng, Korea University, Korea Markus Kampmann, Ericsson Research, Germany Rajgopal Kannan, Louisiana State University, USA Sokratis K. 
Katsikas, University of the Aegean, Greece Ousmane Koné, University of Toulouse - CNRS Kimon Kontovasilis, Demokritos Research Institute, Greece George Kormentzas, University of the Aegean, Greece Tasos Kourtis, Demokritos Research Institute, Greece Anastasios Kourtis, Demokritos Research Institute, Greece Tho LeNgoc, McGill University, Canada Dave Lewis, Trinity College Dublin, Ireland Qilian Liang, University of Texas - Arlington, USA Joan Lluis Pijoan, Ramon Llull University, Spain Simona Lohan, Tampere University of Technology, Finland Pascal Lorenz, University of Haute Alsace, France Ilias Maglogiannis, University of Aegean, Greece Shiwen Mao, Virginia Tech, USA Dario Maggiorini, University of Milano, Italy Tulin Mangir, California State University at Long Beach, USA Pietro Manzoni, Universidad Politécnica de Valencia, Spain Pedro Jose Marron, University of Stuttgart, Germany Marco Mattavelli, (Swiss Federal Institute of Technology) EPFL, Switzerland Ahmed Mehaoua, University of Versailles, France Tommaso Melodia, Georgia Tech, USA Marco Mesiti, University of Milano, Italy Francisco Micó Enguídanos, Universidad de Valencia, Spain Nader F. Mir, San Jose State University, USA Ali Miri, School of Information Technology and Engineering (SITE), Canada Refik Molva, EURESCOM, France Marie-Jose Montpetit, Motorola, USA Matt Mutka, Michigan State University, USA Mihai Nadin, University of Texas - Dallas, USA Daniel Negru, Université 
de Versailles, France Uyen Trang Nguyen, York University, Canada Sotiris Nikoletseas, University of Patras and CTI, Greece Giorgio Nunzi, NEC Europe Ltd., Germany Santashil PalChaudhuri, Meru Networks, USA Evangelos Pallis, Centre for Technological Research of Crete, Greece Pubudu Pathirana, Deakin University, Australia Dmitri Perkins, University of Louisiana - Lafayette, USA Samuel Pierre, Ecole Polytechnique de Montreal, Canada Low Chor Ping, Information Communication Institute of Singapore, Singapore Ednaldo Pizzolato, Universidade Federal de São Carlos, Brazil Ciprian Popoviciu, Cisco Systems, Inc., USA George N. Prezerakos, TEI Piraeus // National Technical University of Athens, Greece Francisco J. Quiles, Universidad de Castilla-La Mancha, Spain Branimir Reljin, University of Belgrade, Serbia and Montenegro Kurt Rothermel, University of Stuttgart, Germany Angelos N. Rouskas, University of the Aegean, Greece Marcelo G. Rubinstein, Universidade do Estado do Rio de Janeiro, Brazil Pedro M. Ruiz, University of Murcia, Spain Debashis Saha, Indian Institute of Management (IIM), India Jean-Michel Sahut, Groupe Sup de Co La Rochelle and INT, France Gerald Schaefer, Nottingham Trent University, UK Stefan Schmid, NEC Europe Ltd., Germany Vojin Senk, University of Novi Sad, Serbia and Montenegro Charalabos Skianis, University of the Aegean, Greece Caterina Scoglio, Kansas State University, USA David Simplot-Ryl, Université 
Lille 1, France Waleed Smari, University of Dayton, USA John Soldatos, Athens Information Technology, Greece Leonel Sousa, IST/INESC-ID, Portugal Dirk Staehle, University of Würzburg, Germany Ivan Stojmenovic, SITE, Canada Weilian Su, Naval Postgraduate School - Monterey, USA Zhili Sun, University of Surrey, UK Yutaka Takahashi, Kyoto University, Kyoto, Japan Ashit Talukder, Jet Propulsion Laboratory, USA Tomo Taniguchi, Fujitsu Laboratories Limited, Japan Velio Tralli, University of Ferrara, Italy Murat Uysal, University of Waterloo, Canada Thanh van Do, Telenor, Norway Iakovos S. Venieris, National Technical University of Athens, Greece Dimitrios D. Vergados, University of the Aegean, Greece Krzysztof Walczak, The Poznan University of Economics, Poland Andy Ju An Wang, Southern Polytechnic State University, USA Carlos Becker Westphall, Universidade Federal de Santa Catarina, Brazil Hongyi Wu, University of Louisiana - Lafayette, USA Qishi Wu, Oak Ridge National Laboratory, USA Xiaotao Wu, Columbia University, USA Miki Yamamoto, Kansai University, Japan Katsuyuki Yamazaki, KDDI Labs / Kyushu Institute of Technology, Japan Chihsiang Yeh, Queen's University, Canada Moustafa Youssef, University of Maryland - College Park, USA Meng Yu, Monmouth University, USA Anne Ya Zhang, University of Kansas, USA Kenichi Yamazaki, NTT DoCoMo, Inc., Japan Lisandro Zambenedetti Granville, Federal University of Rio Grande do Sul, Brazil Hans-Jürgen Zepernick, Blekinge Institute of Technology, Sweden Qing Zhao, University of California - Davis, USA Dong Zhou, DoCoMo USA Labs, USA Details: petre at iaria.org, conf at iaria.org, dumitru.roman at deri.org ========================================= -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060315/99350fd4/attachment-0001.html From touch at ISI.EDU Fri Mar 17 21:33:06 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 17 Mar 2006 21:33:06 -0800 Subject: [e2e] CFP - Computer Networks Special Issue on fast/long distance network protocols Message-ID: <441B9B92.4060405@isi.edu> Call For Papers SPECIAL ISSUE OF COMPUTER NETWORKS Hot Topics in Transport Protocols for Very Fast and Very Long Distance Networks For more information see: http://www.ens-lyon.fr/LIP/RESO/COMNET_pfldnet/ Dates: May 1st, 2006 Deadline for paper submissions June 10th, 2006 Notification of acceptance/rejection July 15, 2006 Submission of final version Editors Pascale Vicat-Blanc Primet Joe Touch Katsushi Kobayashi INRIA USC/ISI National Inst. for Info. & Comm. Technology Pascale.Primet at inria.fr touch at isi.edu ikob at koganei.wide.ad.jp From Jon.Crowcroft at cl.cam.ac.uk Wed Mar 22 22:57:20 2006 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Thu, 23 Mar 2006 06:57:20 +0000 Subject: [e2e] HTTP Compliance w/ e2e principles? Message-ID: I believe I have found a flaw in all the implementations of HTTP GET - it's clear that leaving the content behind at the server means that most HTTP protocol implementations are actually implementing an HTTP copy. This has legal as well as technical consequences (especially in countries where fair re-use is not permitted, such as the UK) - I believe that we must now implement HTTP GET properly - It turns out that on one of my research projects I have got 56K pounds which I can use for equipment, and I propose to implement what I call Hype Text Transport Physically GET - I will hire a removal man, and whenever I type a URL at _my_ browser implementation, this will give him a dispatch order - he will forthwith go to the lat-long site associated with the server, identify the machine with the disks and so forth, and bring it to me. 
This also has the added bonus that it saves power (in fact I won't need disks on my client side at all), and that if one wants to implement a distributed +update+ protocol for the web, concurrency control is enforced by strict locks. I've been talking to the Royal Mail about implementing HTTP POST correctly too, but they don't take me seriously. I'm guessing they see this as potential competition with Amazon or Netflix or something... j. From shemminger at osdl.org Thu Mar 23 09:59:46 2006 From: shemminger at osdl.org (Stephen Hemminger) Date: Thu, 23 Mar 2006 09:59:46 -0800 Subject: [e2e] HTTP Compliance w/ e2e principles? In-Reply-To: References: Message-ID: <20060323095946.6ba5b661@localhost.localdomain> On Thu, 23 Mar 2006 06:57:20 +0000 Jon Crowcroft wrote: > I believe I have found a flaw in all the implementations of HTTP GET - it's clear that leaving the > content behind at the server means that most HTTP protocol implementations are actually implementing an > HTTP copy. This has legal as well as technical consequences (especially in countries where fair re-use is not > permitted, such as the UK) - I believe that we must now implement HTTP GET properly - It turns out that on one of my > research projects I have got 56K pounds which I can use for equipment, and I propose to implement what I call > Hype Text Transport Physically GET - I will hire a removal man, and whenever I type a URL at _my_ browser > implementation, this will give him a dispatch order - he will forthwith go to the lat-long site associated with the > server, identify the machine with the disks and so forth, and bring it to me. > > This also has the added bonus that it saves power (in fact I won't need disks on my client side at all), and that if > one wants to implement a distributed +update+ protocol for the web, concurrency control is enforced by strict > locks. > > I've been talking to the Royal Mail about implementing HTTP POST correctly too, but they don't take me seriously. 
> I'm guessing they see this as potential competition with Amazon or Netflix or something... > > j. Hey mate, April Fools is next week From braden at ISI.EDU Fri Mar 24 11:11:15 2006 From: braden at ISI.EDU (Bob Braden) Date: Fri, 24 Mar 2006 11:11:15 -0800 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <001301c60a4a$9831dc60$0200a8c0@fun> Message-ID: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> At 07:31 PM 12/26/2005 +0100, Michael Welzl wrote: >Hi everybody, > >Here's something that I've had on my mind for quite a while now: >I'm wondering why T/TCP ( RFC 1644 ) failed. I mean, nobody seems >to use it. I believe someone explained this to me once (perhaps even >on this list? but I couldn't find this in the archives...), saying that >there >were security concerns with it, but I don't remember any other details. As the designer of T/TCP, I think I can answer this. There are three reasons, I believe. (1) There are very few situations in which single-packet exchanges are possible, so T/TCP is very seldom a significant performance improvement. But it does have significant complexity. (2) Since the server is asked to do a perhaps significant computation before the 3WHS has completed, it is an open invitation to DoS attacks. (This would be OK if you could assume that all T/TCP clients were authenticated using IPsec.) (3) I have heard rumors that someone has found an error in the specific state transitions of T/TCP, although I have never seen the details. Bob Braden From mycroft at netbsd.org Fri Mar 24 12:48:04 2006 From: mycroft at netbsd.org (Charles M. Hannum) Date: Fri, 24 Mar 2006 15:48:04 -0500 Subject: [e2e] Can we revive T/TCP ? 
In-Reply-To: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> References: <001301c60a4a$9831dc60$0200a8c0@fun> <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> Message-ID: <20060324204804.GQ9416@multics.mit.edu> On Fri, Mar 24, 2006 at 11:11:15AM -0800, Bob Braden wrote: > (3) I have heard rumors that someone has found an error in the > specific state transitions of T/TCP, although I have never seen > the details. I'm not sure whether you're referring to me or not, since I was the one who originally made this claim (back in 1996). The specific problem is that the state diagrams in RFC 793 indicate that a SYN-FIN packet should be *dropped*. T/TCP systems will sometimes send SYN-FIN packets even to non-T/TCP systems. I had to change the TCP processing in NetBSD (back in 1996) to work around this (by simply ignoring the FIN and letting it be retransmitted later) and remain compatible with BSD/OS and FreeBSD hosts. ISTR ka9q made a similar change, for the same reason. I highly recommend Googling for "T/TCP security". The first hit is, not surprisingly, my old draft from 1996 -- but there are now a bunch of other papers, comments from IETF working groups, etc., on the same issues. As well as the FreeBSD security advisory about one of the holes that I mentioned (1.5 years after my draft, after the hole was used to break into the FreeBSD CVS repository). From acaro at bbn.com Fri Mar 24 13:44:25 2006 From: acaro at bbn.com (Armando L. Caro, Jr.) Date: Fri, 24 Mar 2006 16:44:25 -0500 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> Message-ID: <44246839.4090209@bbn.com> There is a fourth reason, and I believe it's the main reason T/TCP support was removed from OSes in the late 90s / early 2000s. "An accelerated open is initiated by a client by sending a new TCP option, called CC, to the server. 
The kernel keeps a special cache for each host it communicated with, among others containing the value of the last CC option used by the client. A new accelerated open is allowed when the CC sent is larger than the one in the per-host cache. Thus one can spoof complete connections." http://www.ciac.org/ciac/bulletins/i-051.shtml Armando Bob Braden wrote: > At 07:31 PM 12/26/2005 +0100, Michael Welzl wrote: >> Hi everybody, >> >> Here's something that I've had on my mind for quite a while now: >> I'm wondering why T/TCP ( RFC 1644 ) failed. I mean, nobody seems >> to use it. I believe someone explained this to me once (perhaps even >> on this list? but I couldn't find this in the archives...), saying that >> there >> were security concerns with it, but I don't remember any other details. > > > As the designer of T/TCP, I think I can answer this. There are three > reasons, I believe. > > (1) There are very few situations in which single-packet exchanges > are possible, so T/TCP is very seldom a significant performance > improvement. But it does have significant complexity. > > (2) Since the server is asked to do a perhaps significant computation > before the 3WHS has completed, it is an open invitation to > DoS attacks. (This would be OK if you could assume that all > T/TCP clients were authenticated using IPsec.) > > (3) I have heard rumors that someone has found an error in the > specific state transitions of T/TCP, although I have never seen > the details. > > Bob Braden From touch at ISI.EDU Fri Mar 24 17:37:46 2006 From: touch at ISI.EDU (Joe Touch) Date: Fri, 24 Mar 2006 17:37:46 -0800 Subject: [e2e] Can we revive T/TCP ? 
In-Reply-To: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> Message-ID: <44249EEA.9050108@isi.edu> Bob Braden wrote: > At 07:31 PM 12/26/2005 +0100, Michael Welzl wrote: >> Hi everybody, >> >> Here's something that I've had on my mind for quite a while now: >> I'm wondering why T/TCP ( RFC 1644 ) failed. I mean, nobody seems >> to use it. I believe someone explained this to me once (perhaps even >> on this list? but I couldn't find this in the archives...), saying that >> there >> were security concerns with it, but I don't remember any other details. > > > As the designer of T/TCP, I think I can answer this. There are three > reasons, I believe. > > (1) There are very few situations in which single-packet exchanges > are possible, so T/TCP is very seldom a significant performance > improvement. But it does have significant complexity. > > (2) Since the server is asked to do a perhaps significant computation > before the 3WHS has completed, it is an open invitation to > DoS attacks. (This would be OK if you could assume that all > T/TCP clients were authenticated using IPsec.) Not just computation - also storage (of the data in the SYN). But I had thought the major issue was more with the sequence number - as discussed (among others) in Hannum as posted to the TCP-IMPL WG in 1996: http://tcp-impl.grc.nasa.gov/list/archive/1292.html Joe From perfgeek at mac.com Fri Mar 24 18:59:20 2006 From: perfgeek at mac.com (rick jones) Date: Fri, 24 Mar 2006 18:59:20 -0800 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> Message-ID: > (1) There are very few situations in which single-packet exchanges > are possible, so T/TCP is very seldom a significant performance > improvement. But it does have significant complexity. 
From what (little) I've understood from some recent conversations in dns-operations, the DNS folks may be having some "interesting" interactions between spoofed source IP addresses on queries, lack of pervasive BCP38, and open DNS servers that might benefit from being able to use TCP as a transport more efficiently than it can be today.

rick jones
there is no rest for the wicked, yet the virtuous have no pillows

From huitema at windows.microsoft.com Fri Mar 24 22:54:51 2006
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Fri, 24 Mar 2006 22:54:51 -0800
Subject: [e2e] Can we revive T/TCP ?
In-Reply-To:
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D0641150CA@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> From what (little) I've understood from some recent conversations in
> dns-operations, the DNS folks may be having some "interesting"
> interactions between spoofed source IP addresses on queries, lack of
> pervasive BCP38, and open DNS servers that might benefit from being
> able to use TCP as a transport more efficiently than it can be today.

If you want some protection against spoofed source addresses, you need a three-way handshake. You definitely do not want to process data received in the first message of the three-way handshake, because that is way too easy to spoof. So you want to use regular TCP, not T/TCP. (Insert here the customary observation that botnets don't bother spoofing source addresses when engaging in DoS attacks.)

-- Christian Huitema

From michael.welzl at uibk.ac.at Sat Mar 25 03:14:54 2006
From: michael.welzl at uibk.ac.at (Michael Welzl)
Date: Sat, 25 Mar 2006 12:14:54 +0100
Subject: [e2e] Can we revive T/TCP ?
References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu>
Message-ID: <005901c64ffd$57c5aa50$0200a8c0@fun>

Hi all,

Thanks for the many answers to my question - in particular, of course, Bob's answer. Let me explain what I had in mind when I asked about T/TCP.
I work on network improvements for the Grid - where people invoke procedure calls using SOAP over HTTP, yet have an interest in performance (I know that this is at odds :-) ). The delay of these function calls (which is apparently the result of SOAP processing more than anything else, but connection setup can also take a while if nodes are very far from each other - which, for instance, is true for some nodes in the EGEE Grid) limits the parallelization granularity in Grids - reducing it would be a real win in my opinion. In a Grid, nodes are (or can be) authenticated. Using IPSec is an option. There are lots of short function calls. So, I figured: why is it necessary to set up connections at all before doing the call? Then, I thought, heck, this question was asked before :) So I enquired about T/TCP. However, for my idea, reasons (1) and (3) > (1) There are very few situations in which single-packet exchanges > are possible, so T/TCP is very seldom a significant performance > improvement. But it does have significant complexity. > (3) I have heard rumors that someone has found an error in the > specific state transitions, of T/TCP although I have never seen > the details. don't apply, and Bob mentions IPSec in reason (2) > (2) Since the server is asked to do a perhaps signficant computation > before the 3WHS has completed, it is an open invitation to > DoS attacks. (This would be OK if you could assume that all > T/TCP clients were authenticated using IPsec,) - exactly my thinking. So skipping the handshake would make sense in such an environment, right? To me, there's just one open question. When all nodes authenticate themselves in a Grid, why don't they just set up and maintain TCP connections to each other forever? The UTO draft could help here. I've been told (by Grid people) that this is completely impossible because it's a big security problem. I fail to see why, and nobody ever explained it to me. 
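For concreteness, a minimal sketch of what "maintain the connection forever" takes on the application side today: just the standard keepalive socket options, tuned so that a NAT/firewall idle timer (often around an hour, as noted earlier in the thread) never fires on a quiet connection. The option names below are the Linux ones exposed by Python's socket module, and the timer values are purely illustrative.

```python
# Sketch only: enable TCP keepalive on a socket so a long-lived, mostly
# idle connection survives NAT/firewall idle timers. Option names are
# the Linux ones; the values are illustrative, not recommendations.
import socket

def make_persistent(sock, idle=60, interval=20, probes=5):
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning knobs; guarded since they are not portable.
    for name, val in (("TCP_KEEPIDLE", idle),      # seconds of idle before first probe
                      ("TCP_KEEPINTVL", interval), # seconds between probes
                      ("TCP_KEEPCNT", probes)):    # failed probes before giving up
        if hasattr(socket, name):
            sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, name), val)
    return sock
```

Note that this is exactly the mechanism the keep-alive objections earlier in the thread are about: probing this aggressively also turns every transient outage into a dead connection, which is why a transport-level "don't abort on idle" knob along the lines of the UTO draft is attractive instead.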
I'd be thankful for your comments, and an answer to this open question in particular (remember, we're considering long-lasting TCP connections in an authenticated environment, let's say with IPSec - then, why can this be a problem?).

Cheers,
Michael

From jgh at wizmail.org Sun Mar 26 10:04:23 2006
From: jgh at wizmail.org (Jeremy Harris)
Date: Sun, 26 Mar 2006 19:04:23 +0100
Subject: [e2e] end2end-interest Digest, Vol 25, Issue 26
In-Reply-To:
References:
Message-ID: <4426D7A7.8070903@wizmail.org>

Michael Welzl wrote:
>>(2) Since the server is asked to do a perhaps signficant computation
>> before the 3WHS has completed, it is an open invitation to
>> DoS attacks. (This would be OK if you could assume that all
>> T/TCP clients were authenticated using IPsec,)
>
> - exactly my thinking. So skipping the handshake would make sense
> in such an environment, right?
>
> To me, there's just one open question. When all nodes authenticate
> themselves in a Grid, why don't they just set up and maintain TCP
> connections to each other forever?

Because processes come and go, I'd think. Plus, perhaps, a dose of "basic TCP can work to anywhere; it saves on management costs to use it everywhere".

On the other side of the coin, in such a trusted environment, I don't see why you shouldn't send

1) -> SYN, query data, FIN
2) <- SYN, response data, FIN, ACK(SYN+query+FIN)
3) -> ACK(SYN+response+FIN)

without going the whole hog on T/TCP.

- Jeremy

From zartash at lums.edu.pk Sun Mar 26 11:40:00 2006
From: zartash at lums.edu.pk (Zartash Afzal Uzmi)
Date: Mon, 27 Mar 2006 00:40:00 +0500
Subject: [e2e] Use of RED in practice?
Message-ID:

Hi all,

I wonder if some practitioners (or academics) can shed some light on the usage of RED (or a variant) in real networks. I recently ran into a situation where there was a difference of opinion about whether RED is activated on deployed networks or not.

My question is not about router manufacturers implementing RED in their routers. The question is about how many routers are enabled with RED (or a variant). Does everyone use it? No one uses it in practice? Some Tier-x ISPs use it?

There is some material on Sally's webpage regarding implementation experiences (http://www.icir.org/floyd/red.html), but it is not clear if those implementations were done on an experimental basis or are currently used in live networks. For example, there is a link which says "WRED is enabled on Cisco GSRs on overloaded links at AS1 (Genuity), to reduce queueing delay. Experience has been positive." (reported back in 2000), but that leaves me wondering if they still enable RED within networks of all (or some) ISPs. Please clarify if the answer depends upon the Tier level of the service provider. Thanks a lot.

Regards,
Zartash

====
Zartash Afzal Uzmi
Associate Professor, Computer Science and Engineering
Lahore University of Management Sciences (LUMS)
Opp. Sector "U", D.H.A., Lahore Cantt. 54792, Pakistan.
Phone: +92 42 572 2670-79 extension: 4425
Fax: +92 42 589 8315

From michael.welzl at uibk.ac.at Sun Mar 26 21:57:52 2006
From: michael.welzl at uibk.ac.at (Michael Welzl)
Date: 27 Mar 2006 07:57:52 +0200
Subject: [e2e] end2end-interest Digest, Vol 25, Issue 26
In-Reply-To: <4426D7A7.8070903@wizmail.org>
References: <4426D7A7.8070903@wizmail.org>
Message-ID: <1143439071.4757.42.camel@lap10-c703.uibk.ac.at>

> > To me, there's just one open question. When all nodes authenticate
> > themselves in a Grid, why don't they just set up and maintain TCP
> > connections to each other forever?
>
> Because processes come and go, I'd think. Plus, perhaps, a dose
> of "basic TCP can work to anywhere; it saves on management costs
> to use it everywhere".
>
> On the other side of the coin, in such a trusted environment, I
> don't see why you shouldn't send
>
> 1) -> SYN, query data, FIN
> 2) <- SYN, response data, FIN, ACK(SYN+query+FIN)
> 3) -> ACK(SYN+response+FIN)
>
> without going the whole hog on T/TCP.
Hm, isn't doing this type of communication what T/TCP is all about? With normal TCP, the host which is contacted in 1) would be allowed to receive the "query data" and buffer it somewhere, but not deliver it to the application before the handshake is over according to RFC 793. While this offers some protection against DoS attacks, I think we could drop this requirement in a trusted environment. The question is really whether this is a big issue for anything except my Grid scenario :-) , and if this particular scenario couldn't also be handled by maintaining connections instead of changing TCP... Cheers, Michael From touch at ISI.EDU Mon Mar 27 07:13:21 2006 From: touch at ISI.EDU (Joe Touch) Date: Mon, 27 Mar 2006 07:13:21 -0800 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <005901c64ffd$57c5aa50$0200a8c0@fun> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> Message-ID: <44280111.30205@isi.edu> Hi, Michael, Michael Welzl wrote: > Hi all, > > Thanks for the many answers to my question - in particular, > of course, Bob's answer. > > Let me explain what I had in mind when I asked about T/TCP. > I work on network improvements for the Grid - where people > invoke procedure calls using SOAP over HTTP, yet have an > interest in performance (I know that this is at odds :-) ). > The delay of these function calls (which is apparently the result > of SOAP processing more than anything else, but connection > setup can also take a while if nodes are very far from each other - > which, for instance, is true for some nodes in the EGEE Grid) > limits the parallelization granularity in Grids - reducing it would > be a real win in my opinion. If that's the case, it would be useful to reexamine the whole of the stack that's causing the problem, rather than trying to fix it at the most ubiquitous and otherwise stable (for the rest of the Internet) layer. > In a Grid, nodes are (or can be) authenticated. Using IPSec > is an option. 
There are lots of short function calls. So, I figured: > why is it necessary to set up connections at all before doing > the call? IPsec sets up security associations between endpoints, not connections. The larger issue is that you have multiple layers of connections that are working against - rather than with - each other. If you're doing short function calls, then why do you need TCP? If you want congestion control, have you considered DCCP? Or SCTP? ... > - exactly my thinking. So skipping the handshake would make sense > in such an environment, right? So would skipping shared state on a per-exchange basis. ;-) > To me, there's just one open question. When all nodes authenticate > themselves in a Grid, why don't they just set up and maintain TCP > connections to each other forever? The UTO draft could help here. > > I've been told (by Grid people) that this is completely impossible > because it's a big security problem. I fail to see why, and nobody > ever explained it to me. If they use IPsec, it'd be useful to understand the security problem that persistent TCP connections present. Joe From michael.welzl at uibk.ac.at Mon Mar 27 07:39:24 2006 From: michael.welzl at uibk.ac.at (Michael Welzl) Date: 27 Mar 2006 17:39:24 +0200 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <44280111.30205@isi.edu> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <44280111.30205@isi.edu> Message-ID: <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> > > Let me explain what I had in mind when I asked about T/TCP. > > I work on network improvements for the Grid - where people > > invoke procedure calls using SOAP over HTTP, yet have an > > interest in performance (I know that this is at odds :-) ). 
> > The delay of these function calls (which is apparently the result > > of SOAP processing more than anything else, but connection > > setup can also take a while if nodes are very far from each other - > > which, for instance, is true for some nodes in the EGEE Grid) > > limits the parallelization granularity in Grids - reducing it would > > be a real win in my opinion. > > If that's the case, it would be useful to reexamine the whole of the > stack that's causing the problem, rather than trying to fix it at the > most ubiquitous and otherwise stable (for the rest of the Internet) layer. I agree about that, and I'm also going in this direction (hence my older (sigh, I just checked - that was in December... my personal research is surely moving sloooowly these days :-( ) question about SOAP and persistent connections). But it seems to me that some things just can't be solved on top, and so I started questioning the usefulness of connection setup in authenticated environments. But I'm also questioning the problem with long lasting connections in such environments... I don't see a problem, but I'm not a security expert, and the fact that nobody has spoken up saying "sure, the problem is..." gives me some confidence. > > In a Grid, nodes are (or can be) authenticated. Using IPSec > > is an option. There are lots of short function calls. So, I figured: > > why is it necessary to set up connections at all before doing > > the call? > > IPsec sets up security associations between endpoints, not connections. I know - doesn't matter here, does it? > The larger issue is that you have multiple layers of connections that > are working against - rather than with - each other. Indeed! > If you're doing short function calls, then why do you need TCP? If you > want congestion control, have you considered DCCP? Or SCTP? We want reliability, so TCP or SCTP would be our protocols of choice. 
I don't see much benefit of SCTP's features for our scenario (but I'd be thankful for inspirational comments in this direction :) ). > > - exactly my thinking. So skipping the handshake would make sense > > in such an environment, right? > > So would skipping shared state on a per-exchange basis. ;-) What do you mean? Sounds like a hint in the right direction, but I can't solve the puzzle behind your words... > > To me, there's just one open question. When all nodes authenticate > > themselves in a Grid, why don't they just set up and maintain TCP > > connections to each other forever? The UTO draft could help here. > > > > I've been told (by Grid people) that this is completely impossible > > because it's a big security problem. I fail to see why, and nobody > > ever explained it to me. > > If they use IPsec, it'd be useful to understand the security problem > that persistent TCP connections present. This was a comment from an anonymous paper review, so I can't ask back :) and the reviewer might not have thought of IPSec, but using it is surely an option in Grids. Maybe it was just a misunderstanding - me considering a Grid with, and her/him considering a Grid without IPSec. Cheers, Michael From touch at ISI.EDU Mon Mar 27 07:59:18 2006 From: touch at ISI.EDU (Joe Touch) Date: Mon, 27 Mar 2006 07:59:18 -0800 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <44280111.30205@isi.edu> <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> Message-ID: <44280BD6.8030000@isi.edu> Michael Welzl wrote: ... > But it seems to me that some things just can't be solved > on top, and so I started questioning the usefulness of > connection setup in authenticated environments. Security associations are different from transport connections. The former defines an identity and an agreement to accept authenticated/encrypted packets. 
The latter is for reliability, reordering, and congestion control.

> But I'm
> also questioning the problem with long lasting connections
> in such environments... I don't see a problem, but I'm not
> a security expert, and the fact that nobody has spoken up
> saying "sure, the problem is..." gives me some confidence.

If the IPsec SA persists, then any connection inside it is (by definition) permitted, including long-lived ones. Long-lived transport connections give a more accurate sense of RTT, congestion info, etc. The concern about long-lived connections of late is with on- and off-path attacks, both of which presume a LACK of IPsec levels of protection.

>> If you're doing short function calls, then why do you need TCP? If you
>> want congestion control, have you considered DCCP? Or SCTP?
>
> We want reliability, so TCP or SCTP would be our protocols of
> choice. I don't see much benefit of SCTP's features for our
> scenario (but I'd be thankful for inspirational comments in
> this direction :) ).

You want short transactions. Things you would care about include:
- multi-packet transactions (even if short)
- reliability
- congestion control

UDP doesn't provide those; TCP with control block sharing (RFC2140) or using a central Congestion Manager would help (parts of 2140 are in Linux), but don't address the 3WHS delays. Pipelining over a single connection would, but you need a muxing and chunking mechanism. You can use:
   per-transaction TCP connections
   BXXP directly over TCP
   SOAP over HTTP over TCP
   SCTP

You might want to compare the delays of each.

>>> - exactly my thinking. So skipping the handshake would make sense
>>> in such an environment, right?
>
>> So would skipping shared state on a per-exchange basis. ;-)
>
> What do you mean? Sounds like a hint in the right direction,
> but I can't solve the puzzle behind your words...

RPC over UDP.
If you don't need the transport layer to manage state, use a stateless transport layer ;-) If you still want congestion control, RPC over DCCP ;-)) >>> To me, there's just one open question. When all nodes authenticate >>> themselves in a Grid, why don't they just set up and maintain TCP >>> connections to each other forever? The UTO draft could help here. >>> >>> I've been told (by Grid people) that this is completely impossible >>> because it's a big security problem. I fail to see why, and nobody >>> ever explained it to me. >> If they use IPsec, it'd be useful to understand the security problem >> that persistent TCP connections present. > > This was a comment from an anonymous paper review, so I can't > ask back :) and the reviewer might not have thought of IPSec, > but using it is surely an option in Grids. Maybe it was just > a misunderstanding - me considering a Grid with, and her/him > considering a Grid without IPSec. That sounds, IMO, more likely. Long-lived connections are worrisome for BGP of late, but that's in the absence of IPsec, or worries about the particular algorithm (e.g., MD5) being broken over time (which can be solved using 3DES, AES, etc.) Joe From michael.welzl at uibk.ac.at Mon Mar 27 08:16:39 2006 From: michael.welzl at uibk.ac.at (Michael Welzl) Date: 27 Mar 2006 18:16:39 +0200 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <44280BD6.8030000@isi.edu> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <44280111.30205@isi.edu> <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> <44280BD6.8030000@isi.edu> Message-ID: <1143476199.7613.69.camel@lap10-c703.uibk.ac.at> On Mon, 2006-03-27 at 17:59, Joe Touch wrote: > Michael Welzl wrote: > ... > > But it seems to me that some things just can't be solved > > on top, and so I started questioning the usefulness of > > connection setup in authenticated environments. > > Security associations are different from transport connections. 
The > former defines an identity and an agreement to accept > authenticated/encrypted packets. The latter is for reliability, > reordering, and congestion control. I think "the usefulness of connection setup" was too vague, or just misleading. What I meant was the usefulness of waiting for the handshake before exchanging data as opposed to a T/TCP like communication model > Pipelining at over a single connection would, but you need a muxing and > chunking mechanism. You can use: > per-transaction TCP connections > BXXP directly over TCP > SOAP over HTTP over TCP > SCTP > > You might want to compare the delays of each. Yep, that sounds like the right approach to me - using one connection that is kept throughout, with IPSec to make sure that it's secure, and the UTO draft implemented in case of TCP. Thanks! > RPC over UDP. If you don't need the transport layer to manage state, use > a stateless transport layer ;-) > > If you still want congestion control, RPC over DCCP ;-)) But any RPC code assumes reliable data delivery underneath?! Cheers, Michael From michael.welzl at uibk.ac.at Mon Mar 27 09:07:08 2006 From: michael.welzl at uibk.ac.at (Michael Welzl) Date: Mon, 27 Mar 2006 19:07:08 +0200 Subject: [e2e] end2end-interest Digest, Vol 25, Issue 26 In-Reply-To: <8F70EB4E-066C-4659-B5B6-C6613328116B@cisco.com> Message-ID: > If you're interested in maintaining connections, then why would you > not use SCTP? SCTP allows you to maintain an overall connection and > then do fast transaction-like sessions at will within the context. I didn't see the benefit for a single grid service call over a single connection - but if I maintain connections and MUX all over them, sure, the multiple stream feature of sctp is useful! Well, on second thought, parameters could be separate app chunks that could be delivered unordered! Hmmm ... 
I like SCTP a lot, anyway :)

Thanks and cheers,
Michael

From touch at ISI.EDU Mon Mar 27 09:14:16 2006
From: touch at ISI.EDU (Joe Touch)
Date: Mon, 27 Mar 2006 09:14:16 -0800
Subject: [e2e] Can we revive T/TCP ?
In-Reply-To: <1143476199.7613.69.camel@lap10-c703.uibk.ac.at>
References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <44280111.30205@isi.edu> <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> <44280BD6.8030000@isi.edu> <1143476199.7613.69.camel@lap10-c703.uibk.ac.at>
Message-ID: <44281D68.7000200@isi.edu>

Michael Welzl wrote:
...
>> If you still want congestion control, RPC over DCCP ;-))
>
> But any RPC code assumes reliable data delivery underneath?!

See RFC1831, sec 4:

   ...On the other hand, if it is running
   on top of an unreliable transport such as UDP [7], it must implement
   its own time-out, retransmission, and duplicate detection policies as
   the RPC protocol does not provide these services.

The most recent version of NFS - v4 - specs the use of RPC over both TCP and UDP, and has its own locking and reordering mechanism (redundant if there's just one TCP connection) - to allow it to run over multiple TCP connections in parallel (to allow request pipelining while avoiding HOL blocking).

Joe

From michael.welzl at uibk.ac.at Mon Mar 27 10:10:46 2006
From: michael.welzl at uibk.ac.at (Michael Welzl)
Date: Mon, 27 Mar 2006 20:10:46 +0200
Subject: [e2e] Can we revive T/TCP ?
References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <44280111.30205@isi.edu> <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> <44280BD6.8030000@isi.edu> <1143476199.7613.69.camel@lap10-c703.uibk.ac.at> <44281D68.7000200@isi.edu>
Message-ID: <000b01c651c9$c59af120$0200a8c0@fun>

> >> If you still want congestion control, RPC over DCCP ;-))
> >
> > But any RPC code assumes reliable data delivery underneath?!
> > See RFC1831, sec 4: > > ...On the other hand, if it is running > on top of an unreliable transport such as UDP [7], it must implement > its own time-out, retransmission, and duplicate detection policies as > the RPC protocol does not provide these services. > > The most recent version of NFS- v4 - specs the use of RPC over both TPC > and UDP, and has its own locking and reordering mechanism (redundant if > there's just one TCP connection) - to allow it to run over multiple TPC > connections in parallel (to allow request pipelining while avoiding HOL > blocking). This sounds like it's duplicating SCTP functionality... But I have a more serious problem with this: it surely is overkill for a racing game! Is that only implemented in NFS Most Wanted, or also in the NFS Underground series? Never heard of a "v4", though ;-) Hm, if you think about it - aren't game designers better at picking version numbers than we are? Compare: Tahoe, Reno, NewReno, SACK and Vegas to Hot Pursuit, Road Challenge, Underground, Most Wanted The goal is the same: every version is supposed to be faster than the previous one. I propose "TCP ED" (Eat Dust) for the next version! Cheers, Michael From touch at ISI.EDU Mon Mar 27 10:29:17 2006 From: touch at ISI.EDU (Joe Touch) Date: Mon, 27 Mar 2006 10:29:17 -0800 Subject: [e2e] Can we revive T/TCP ? In-Reply-To: <000b01c651c9$c59af120$0200a8c0@fun> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <44280111.30205@isi.edu> <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> <44280BD6.8030000@isi.edu> <1143476199.7613.69.camel@lap10-c703.uibk.ac.at> <44281D68.7000200@isi.edu> <000b01c651c9$c59af120$0200a8c0@fun> Message-ID: <44282EFD.3070309@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Michael Welzl wrote: >>>> If you still want congestion control, RPC over DCCP ;-)) >>> But any RPC code assumes reliable data delivery underneath?! 
>> See RFC1831, sec 4:
>>
>> ...On the other hand, if it is running
>> on top of an unreliable transport such as UDP [7], it must implement
>> its own time-out, retransmission, and duplicate detection policies as
>> the RPC protocol does not provide these services.
>>
>> The most recent version of NFS - v4 - specs the use of RPC over both TCP
>> and UDP, and has its own locking and reordering mechanism (redundant if
>> there's just one TCP connection) - to allow it to run over multiple TCP
>> connections in parallel (to allow request pipelining while avoiding HOL
>> blocking).
>
> This sounds like it's duplicating SCTP functionality...

To some extent, that can be said of many protocols, since SCTP appears to have incorporated so many ;-) There may also be utility in a leaner, more efficient solution that is simpler to implement compliantly, though...

> But I have a more serious problem with this: it surely is overkill for a
> racing game! Is that only implemented in NFS Most Wanted, or also in the
> NFS Underground series? Never heard of a "v4", though ;-)

Perhaps you've heard of the Network File System?

Joe

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEKC78E5f5cImnZrsRAkZ7AJ4uA5V7NN3lD5TQJRcW+U0Xiyf+QwCaApoI
KHNus3C7R6UCQgmnR+fo4Vc=
=OhCA
-----END PGP SIGNATURE-----

From michael.welzl at uibk.ac.at Mon Mar 27 10:38:51 2006
From: michael.welzl at uibk.ac.at (Michael Welzl)
Date: Mon, 27 Mar 2006 20:38:51 +0200
Subject: [e2e] Can we revive T/TCP ?
References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <44280111.30205@isi.edu> <1143473964.7613.37.camel@lap10-c703.uibk.ac.at> <44280BD6.8030000@isi.edu> <1143476199.7613.69.camel@lap10-c703.uibk.ac.at> <44281D68.7000200@isi.edu> <000b01c651c9$c59af120$0200a8c0@fun> <44282EFD.3070309@isi.edu> Message-ID: <001b01c651cd$b18189c0$0200a8c0@fun> > > But I have a more serious problem with this: it surely is overkill for a > > racing > > game! Is that only implemented in NFS Most Wanted, or also in the > > NFS Underground series? Never heard of a "v4", though ;-) > > Perhaps you've heard of the Network File System? Sure - I was trying to be funny Cheers, Michael From fred at cisco.com Mon Mar 27 08:47:14 2006 From: fred at cisco.com (Fred Baker) Date: Mon, 27 Mar 2006 08:47:14 -0800 Subject: [e2e] end2end-interest Digest, Vol 25, Issue 26 In-Reply-To: <1143439071.4757.42.camel@lap10-c703.uibk.ac.at> References: <4426D7A7.8070903@wizmail.org> <1143439071.4757.42.camel@lap10-c703.uibk.ac.at> Message-ID: <8F70EB4E-066C-4659-B5B6-C6613328116B@cisco.com> If you're interested in maintaining connections, then why would you not use SCTP? SCTP allows you to maintain an overall connection and then do fast transaction-like sessions at will within the context. http://www.ietf.org/rfc/rfc2960.txt 2960 Stream Control Transmission Protocol. R. Stewart, Q. Xie, K. Morneault, C. Sharp, H. Schwarzbauer, T. Taylor, I. Rytina, M. Kalla, L. Zhang, V. Paxson. October 2000. (Format: TXT=297757 bytes) (Updated by RFC3309) (Status: PROPOSED STANDARD) http://www.ietf.org/rfc/rfc3257.txt 3257 Stream Control Transmission Protocol Applicability Statement. L. Coene. April 2002. (Format: TXT=24198 bytes) (Status: INFORMATIONAL) http://www.ietf.org/rfc/rfc3286.txt 3286 An Introduction to the Stream Control Transmission Protocol (SCTP). L. Ong, J. Yoakum. May 2002. 
(Format: TXT=22644 bytes) (Status: INFORMATIONAL) On Mar 26, 2006, at 9:57 PM, Michael Welzl wrote: >>> To me, there's just one open question. When all nodes authenticate >>> themselves in a Grid, why don't they just set up and maintain TCP >>> connections to each other forever? >> >> Because processes come and go, I'd think. Plus, perhaps, a dose >> of "basic TCP can work to anywhere; it saves on management costs >> to use it everywhere". >> >> On the other side of the coin, in such a trusted environment, I >> don't see why you shouldn't send >> >> 1) -> SYN, query data, FIN >> 2) <- SYN, response data, FIN, ACK(SYN+query+FIN) >> 3) -> ACK(SYN+response+FIN) >> >> without going the whole hog on T/TCP. > > Hm, isn't doing this type of communication what T/TCP is > all about? > > With normal TCP, the host which is contacted in 1) would > be allowed to receive the "query data" and buffer it > somewhere, but not deliver it to the application before > the handshake is over according to RFC 793. While this > offers some protection against DoS attacks, I think we > could drop this requirement in a trusted environment. > > The question is really whether this is a big issue for > anything except my Grid scenario :-) , and if this > particular scenario couldn't also be handled by > maintaining connections instead of changing TCP... > > Cheers, > Michael From M.Handley at cs.ucl.ac.uk Mon Mar 27 08:48:24 2006 From: M.Handley at cs.ucl.ac.uk (Mark Handley) Date: Mon, 27 Mar 2006 16:48:24 +0000 Subject: [e2e] Can we revive T/TCP ? 
In-Reply-To: <005901c64ffd$57c5aa50$0200a8c0@fun>
References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun>
Message-ID: <84a612dd0603270848x28e1dc6fy7aae01139caff79f@mail.gmail.com>

On 3/25/06, Michael Welzl wrote:
> The delay of these function calls (which is apparently the result
> of SOAP processing more than anything else, but connection
> setup can also take a while if nodes are very far from each other -
> which, for instance, is true for some nodes in the EGEE Grid)
> limits the parallelization granularity in Grids - reducing it would
> be a real win in my opinion.

I think the right solution to the T/TCP problem would be to clone an existing connection. This would remove most of the security issues, and let you bootstrap the congestion control state.

Suppose you have an existing connection from A port p1 to B port p2. A then wants a new connection, so it sends a message from A port p3 to B port p2, with the SYN bit set, and a "connection clone" option that gives port p1 and the current sequence number from the first connection, together with the first MSS of data. Basically this is data that could have been sent on the first connection anyway, but demuxing it at the TCP layer makes more sense for many applications.

As this is a cloned connection, the congestion control state in both directions can be copied from the first connection (duplicate the RTT, split cwnd between the two), so there's no need to slow-start, and no startup transients to disrupt other network traffic. You can also clone all negotiated options from the first connection - option negotiation was another problem with the original T/TCP.

[Note: as described above, this wouldn't work too well through NATs. If we care about this, A simply needs a way to find out port p1 from B]

- Mark

From braden at ISI.EDU Tue Mar 28 09:43:53 2006
From: braden at ISI.EDU (Bob Braden)
Date: Tue, 28 Mar 2006 09:43:53 -0800 (PST)
Subject: [e2e] Can we revive T/TCP ?
Message-ID: <200603281743.JAA17352@gra.isi.edu> *> *> I think the right solution to the T/TCP problem would be to clone an *> existing connection. This would remove most of the security issues, *> and let you bootstrap the congestion control state. Mark, I think you have just reinvented Bob Thomas' Reconnection Protocol for the ARPAnet NCP! See RFC 426, January 1973. Good ideas just never disappear, they only hide and reappear. Bob Braden From simon at limmat.switch.ch Tue Mar 28 11:53:33 2006 From: simon at limmat.switch.ch (Simon Leinen) Date: Tue, 28 Mar 2006 21:53:33 +0200 Subject: [e2e] Use of RED in practice? In-Reply-To: (Zartash Afzal Uzmi's message of "Mon, 27 Mar 2006 00:40:00 +0500") References: Message-ID: Zartash Afzal Uzmi writes: > My question is not about router manufacturers implementing RED in > their routers? The question is about how many routers are enabled > with RED (or a variant)? Does everyone use it? No one uses it in > practice? Some Tier-x ISPs use it???? I'm sure NOT everyone uses it, because it is usually not enabled by default on routers that are commonly used by ISPs, and enabling it is somewhat tricky because you have to define somewhat sane parameters for the RED drop characteristics. We used to use RED when we had overloaded (transatlantic) links, very successfully I might add - RED brought down peak-hour delays considerably, despite the relatively high base delay, without increasing loss rates or reducing link utilization. But several years ago we got into a regime where we would run all our links without noticeable queuing virtually all of the time. So we didn't bother configuring RED again. > There is some material on Sally's webpage regarding implementation > experiences (http://www.icir.org/floyd/red.html) but it is not clear > if those implementations were done on an experimental basis or are > currently used in live networks. 
For example, there is a link which > says "WRED is enabled on Cisco GSRs on overloaded links at AS1 > (Genuity), to reduce queueing delay. Experience has been positive." > (reported back in 2000) but leaves me wondering if they still enable > RED within networks of all (or some) ISPs? RED still makes as much sense as it did six years ago where there's persistent congestion of highly aggregated traffic - it can keep those links somewhat usable for interactive use. If you have such oversubscribed links in your network, I strongly encourage you to experiment with RED. > Please clarify if the answer depends upon the Tier level of the > service provider. Thanks a lot. We buy transit from three transit-free (Tier-1) providers, does that make us a Tier-2? No idea. Anyway, I think the answer (whether an ISP actually uses RED) won't depend much on Tier level, except at a Tier- ISP I would expect more people to be aware of RED; on the other hand I don't think Tier-1 ISPs typically run congested links anymore... maybe to customers (if the pricing structure encourages customers to oversubscribe their access links, which I consider a bad idea). -- Simon. From touch at ISI.EDU Tue Mar 28 16:45:21 2006 From: touch at ISI.EDU (Joe Touch) Date: Tue, 28 Mar 2006 16:45:21 -0800 Subject: [e2e] Can we revive T/TCP ? 
In-Reply-To: <84a612dd0603270848x28e1dc6fy7aae01139caff79f@mail.gmail.com> References: <5.2.1.1.2.20060324110234.00b118e0@boreas.isi.edu> <005901c64ffd$57c5aa50$0200a8c0@fun> <84a612dd0603270848x28e1dc6fy7aae01139caff79f@mail.gmail.com> Message-ID: <4429D8A1.2080003@isi.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Mark Handley wrote: > On 3/25/06, Michael Welzl wrote: >> The delay of these function calls (which is apparently the result >> of SOAP processing more than anything else, but connection >> setup can also take a while if nodes are very far from each other - >> which, for instance, is true for some nodes in the EGEE Grid) >> limits the parallelization granularity in Grids - reducing it would >> be a real win in my opinion. > > I think the right solution to the T/TCP problem would be to clone an > existing connection. This would remove most of the security issues, > and let you bootstrap the congestion control state. > > Suppose you have an existing connection from A port p1 to B port p2. A > then wants a new connection, so it sends a message from A port p3 to B > port p2, with the SYN bit set, and a "connection clone" option that > gives port p1 and the current sequence number from the first > connection, together with the first mss of data. > > Basically this is data that could have been sent on the first > connection anyway, but demuxing it at the TCP layer makes more sense > for many applications. > > As this is a cloned connection, the congestion control state in both > directions can be copied from the first connection (duplicate the RTT, > split cwnd between the two), so there's no need to slowstart, and no > startup transients to disrupt other network traffic. That works even for non-cloned connections - e.g., it's what was described in RFC-2140. ;-) However, why doesn't cloning require a handshake? I.e., if you send the data, don't you still need to cache it until the connection is confirmed? 
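To make the quoted "split cwnd between the two" step concrete, here is a minimal sketch of that temporal-sharing idea in the spirit of RFC 2140. The dictionary fields and the numeric values are illustrative assumptions, not code from any real TCP stack:

```python
# Minimal sketch (illustrative only) of sharing congestion state with a
# cloned connection, RFC 2140-style: copy the RTT estimates and split
# the congestion window, so neither connection has to slow-start.

def clone_cc_state(parent):
    """Return (updated_parent, clone) with cwnd split between them."""
    half = max(parent["cwnd"] // 2, parent["mss"])  # keep at least 1 MSS
    clone = {
        "srtt": parent["srtt"],      # duplicate the smoothed RTT
        "rttvar": parent["rttvar"],  # ...and its variance
        "mss": parent["mss"],
        "cwnd": half,
    }
    updated_parent = dict(parent, cwnd=parent["cwnd"] - half)
    return updated_parent, clone

parent, clone = clone_cc_state(
    {"srtt": 0.120, "rttvar": 0.030, "mss": 1460, "cwnd": 20 * 1460})
# Together, the two connections still use exactly the old window.
assert parent["cwnd"] + clone["cwnd"] == 20 * 1460
```

Whether the split should be an even halving (as sketched here) or weighted by expected use is exactly the kind of policy question such state sharing leaves open.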
The only benefit seems to be with the sequence numbers; I recall some trick like this for T/TCP too...

Joe

-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFEKdihE5f5cImnZrsRAhTzAJ0fdlpcpT4tT2xMfDkORlcnhRuOcgCcDobA veb//rovQ4ekcwacvkmaaT8= =vu3b -----END PGP SIGNATURE-----

From Mikael.Latvala at nokia.com Wed Mar 29 06:05:33 2006 From: Mikael.Latvala at nokia.com (Mikael.Latvala@nokia.com) Date: Wed, 29 Mar 2006 17:05:33 +0300 Subject: [e2e] IP options over e2e path Message-ID: <326A98184EC61140A2569345DB3614C102AFBB5B@esebe105.NOE.Nokia.com>

Hello,

The IP option provides a convenient way to add additional information to the IP header. But what is the fate of an IP packet which carries a relatively new IP option inserted by a source host and which is not recognized by most of the routers and/or middleboxes that the packet traverses?

RFC1812 says that "A router MUST ignore IP options which it does not recognize."

However, some people I have talked to claim that such packets with a relatively unknown IP option have no chance of reaching the final destination.

Is this really the case? Do new/unrecognized IP options prevent an IP packet from reaching its final destination? Any research papers which would back up or contradict this claim? Or perhaps this is yet another undocumented NAT feature?
/Mikael

From lars.eggert at netlab.nec.de Wed Mar 29 06:40:59 2006 From: lars.eggert at netlab.nec.de (Lars Eggert) Date: Wed, 29 Mar 2006 16:40:59 +0200 Subject: [e2e] IP options over e2e path In-Reply-To: <326A98184EC61140A2569345DB3614C102AFBB5B@esebe105.NOE.Nokia.com> References: <326A98184EC61140A2569345DB3614C102AFBB5B@esebe105.NOE.Nokia.com> Message-ID: <7727B78E-C2E5-4852-90D1-7D27DCC3CD85@netlab.nec.de>

On Mar 29, 2006, at 16:05, wrote:
> But what is the fate of an IP packet, which carries a
> relatively new IP option inserted by a source host and which is not
> recognized by most of the routers and/or middleboxes that the packet
> traverses through?

They get dropped in roughly 70% of the cases.

Alberto Medina, Mark Allman, Sally Floyd. Measuring the Evolution of Transport Protocols in the Internet. ACM Computer Communication Review, 35(2), April 2005. http://www.icir.org/mallman/papers/tcp-evo-ccr05.ps

Lars -- Lars Eggert NEC Network Laboratories -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3686 bytes Desc: not available Url : http://www.postel.org/pipermail/end2end-interest/attachments/20060329/929c2fe4/smime.bin

From fu at cs.uni-goettingen.de Wed Mar 29 06:44:16 2006 From: fu at cs.uni-goettingen.de (Xiaoming Fu) Date: Wed, 29 Mar 2006 16:44:16 +0200 Subject: [e2e] IP options over e2e path In-Reply-To: <326A98184EC61140A2569345DB3614C102AFBB5B@esebe105.NOE.Nokia.com> References: <326A98184EC61140A2569345DB3614C102AFBB5B@esebe105.NOE.Nokia.com> Message-ID: <442A9D40.8020506@cs.uni-goettingen.de>

This is unfortunately (at least partially) happening: from measurements done by ICIR/BBN colleagues, http://www.icir.org/tbit/TCPevolution-Mar2005.pdf over 70% of connection requests with an unknown IP option were lost, and about 44% of connections were broken when an unknown IP option was inserted in the middle of a transfer.
Xiaoming Mikael.Latvala at nokia.com wrote: > Hello, > > The IP option provides a convinient way to add additional information to > the IP header. But what is the fate of an IP packet, which carries a > relatively new IP option inserted by a source host and which is not > recognized by most of the routers and/or middleboxes that the packet > traverses through? > > RFC1812 says that "A router MUST ignore IP options which it does not > recognize." > > However, some people I have talked to claim that such packets with a > relatively unknown IP option have no chance of reaching the final > destiny. > > Is this really the case? Do new/unrecognized IP options prevent an IP > packet from reaching its final destination? Any research papers which > would back up or contradict this claim? Or perhaps this is yet another > undocumented NAT feature? > > /Mikael From michael.welzl at uibk.ac.at Wed Mar 29 07:44:26 2006 From: michael.welzl at uibk.ac.at (Michael Welzl) Date: 29 Mar 2006 17:44:26 +0200 Subject: [e2e] IP options over e2e path In-Reply-To: <326A98184EC61140A2569345DB3614C102AFBB5B@esebe105.NOE.Nokia.com> References: <326A98184EC61140A2569345DB3614C102AFBB5B@esebe105.NOE.Nokia.com> Message-ID: <1143647066.4801.95.camel@lap10-c703.uibk.ac.at> Hi, "No chance" is a bit too extreme. Measurements can be found in "Measuring the Evolution of Transport Protocols in the Internet", available from http://www.icir.org/tbit/ Cheers, Michael On Wed, 2006-03-29 at 16:05, Mikael.Latvala at nokia.com wrote: > Hello, > > The IP option provides a convinient way to add additional information to > the IP header. But what is the fate of an IP packet, which carries a > relatively new IP option inserted by a source host and which is not > recognized by most of the routers and/or middleboxes that the packet > traverses through? > > RFC1812 says that "A router MUST ignore IP options which it does not > recognize." 
> > However, some people I have talked to claim that such packets with a > relatively unknown IP option have no chance of reaching the final > destiny. > > Is this really the case? Do new/unrecognized IP options prevent an IP > packet from reaching its final destination? Any research papers which > would back up or contradict this claim? Or perhaps this is yet another > undocumented NAT feature? > > /Mikael -- Michael Welzl University of Innsbruck From lynne at telemuse.net Wed Mar 29 10:33:06 2006 From: lynne at telemuse.net (Lynne Jolitz) Date: Wed, 29 Mar 2006 10:33:06 -0800 Subject: [e2e] IP options over e2e path In-Reply-To: <442A9D40.8020506@cs.uni-goettingen.de> Message-ID: <000b01c6535f$38b7b580$6e8944c6@telemuse.net> Since the RFC is very clear, the question becomes "why do they drop packets in contravention to the RFC instead of simply passing them through"? Is it a business reason - one of anticompetitive tactics (I heard this from a businessman)? Is it a deep technical reason, one of security, where, for example, such an ability allows one to pass out-of-band information which elides content filtering on firewalls? Or is it simply that product development groups, given a set of product reqs, tested only against a certain set of IP packets for compliance, and dropped things that didn't matter (and yes, it does happen)? In any case, the Red Queen's Race problem is magnified by this failure to comply with the RFC in the routers if new options are introduced. Lynne Jolitz. ---- We use SpamQuiz. 
If your ISP didn't make the grade try http://lynne.telemuse.net > -----Original Message----- > From: end2end-interest-bounces at postel.org > [mailto:end2end-interest-bounces at postel.org]On Behalf Of Xiaoming Fu > Sent: Wednesday, March 29, 2006 6:44 AM > To: Mikael.Latvala at nokia.com > Cc: end2end-interest at postel.org > Subject: Re: [e2e] IP options over e2e path > > > This is unfortunately (at least partially) happening: from measurements > done by ICIR/BBN colleagues, > http://www.icir.org/tbit/TCPevolution-Mar2005.pdf > over 70% connection requests with an unknown IP option were lost, > about 44% connections were broken if inserting the unknown IP option in > the middle of transfer. > Xiaoming > > Mikael.Latvala at nokia.com wrote: > > Hello, > > > > The IP option provides a convinient way to add additional information to > > the IP header. But what is the fate of an IP packet, which carries a > > relatively new IP option inserted by a source host and which is not > > recognized by most of the routers and/or middleboxes that the packet > > traverses through? > > > > RFC1812 says that "A router MUST ignore IP options which it does not > > recognize." > > > > However, some people I have talked to claim that such packets with a > > relatively unknown IP option have no chance of reaching the final > > destiny. > > > > Is this really the case? Do new/unrecognized IP options prevent an IP > > packet from reaching its final destination? Any research papers which > > would back up or contradict this claim? Or perhaps this is yet another > > undocumented NAT feature? 
> >
> > /Mikael
>

From fu at cs.uni-goettingen.de Wed Mar 29 11:47:54 2006 From: fu at cs.uni-goettingen.de (Xiaoming Fu) Date: Wed, 29 Mar 2006 21:47:54 +0200 Subject: [e2e] IP options over e2e path In-Reply-To: <000b01c6535f$38b7b580$6e8944c6@telemuse.net> References: <000b01c6535f$38b7b580$6e8944c6@telemuse.net> Message-ID: <442AE46A.3070909@cs.uni-goettingen.de>

Hi Lynne,

Lynne Jolitz wrote:
> Since the RFC is very clear, the question becomes "why do they drop
> packets in contravention to the RFC instead of simply passing them through"?

Due to historic reasons, I guess: in practice some initial MUSTs are interpreted as SHOULDs and vice versa, or are updated/clarified by later RFCs. TCP's evolution probably also tells this (http://www.ietf.org/internet-drafts/draft-ietf-tcpm-tcp-roadmap-06.txt)

> Is it a business reason - one of anticompetitive tactics (I heard
> this from a businessman)?

(As also told by some operator folks:) if I am a network provider, why should I allow incoming traffic from another provider to potentially change some behaviors or collect some information about the routers in my own network? Certainly, for traffic traversing only my own network I probably don't care much whether an option is encountered and processed.

> Is it a deep technical reason, one of security, where, for example, such an ability allows one to pass out-of-band information which elides content filtering on firewalls?

There can be different implications for having an "unknown" option in different traffic, out-of-band (such as RSVP(TE) messages) or in-band (e.g., piggybacking options into data or connection setup packets, as TBIT, the tool that paper uses, does). NATs and firewalls, for example, could perform different actions on different traffic.

> Or is it simply that product development groups, given a set of product reqs, tested only against a certain set of IP packets for compliance, and dropped things that didn't matter (and yes, it does happen)?
I also doubt whether there is something like a test specification for IPv4 router products? (I just know a little bit about the v6 case: yes.)

> In any case, the Red Queen's Race problem is magnified by this
> failure to comply with the RFC in the routers if new options are introduced.

Xiaoming

> Lynne Jolitz.

From Mikael.Latvala at nokia.com Wed Mar 29 13:38:57 2006 From: Mikael.Latvala at nokia.com (Mikael.Latvala@nokia.com) Date: Thu, 30 Mar 2006 00:38:57 +0300 Subject: [e2e] IP options over e2e path In-Reply-To: <442AE46A.3070909@cs.uni-goettingen.de> Message-ID: <326A98184EC61140A2569345DB3614C102B84A05@esebe105.NOE.Nokia.com>

Hello,

First of all, thank you for the comments. http://www.icir.org/tbit/TCPevolution-Mar2005.pdf was indeed very interesting. A few comments about the paper.

1. It would be interesting to know whether routers along the packet path or the receiving server dropped the packets with different IP options.

2. I disagree with "One concern is that the use of IP options may significantly increase the overhead in routers, because in some cases packets with IP options are processed on the slow path of the forwarding engine." because, as RFC1812 says, routers can ignore the IP option, excluding source route, if so configured. This type of logic is very easy to implement on the fast path.

3. I also disagree with "A third concern is that of possible denial of service attacks that may be caused by packets with invalid IP options going to network routers." as any well-designed software should not be vulnerable to such attacks.

4. Finally, I cannot follow the logic in "These concerns, together with the fact that the generation and processing of IP options is nonmandatory at both the routers and the end hosts, have led routers, hosts, and middleboxes to simply drop packets with unknown IP options, or even to drop packets with standard and properly formed options."
The fact that processing of IP options is nonmandatory does not justify the behavior where a router drops packets with IP options. Especially when this is very clearly spelled out in RFC1812 and has not been updated in any later RFC.

>> Is it a business reason - one of anticompetitive tactics (I
>heard this
>> from a businessman)?

>(As also told by some operator folks), if I am a network
>provider, why I should allow incoming traffic from another
>provider to potentially change some behaviors or collect some
>information of the routers in its own network? Certainly, for
>traffic traversing within my own network provider I probably
>don't care much whether an option is encountered and processed.
>

It looks like the authors of RFC1812 thought about this by making the processing of the Record Route Option optional. So this sounds more like a FUD factor than a valid reason.

>
>> Or is it simply that product development groups, given a set of
>product reqs, tested only against a certain set of IP packets
>for compliance, and dropped things that didn't matter (and
>yes, it does happen)?
>

It's somewhat sad to see that this type of behavior in the Internet is silently approved. No wonder it's hard to specify the Internet architecture, not to mention to fix some major flaws in it or introduce new features.

/Mikael

From touch at ISI.EDU Wed Mar 29 21:31:10 2006 From: touch at ISI.EDU (Joe Touch) Date: Wed, 29 Mar 2006 21:31:10 -0800 Subject: [e2e] IP options over e2e path In-Reply-To: <326A98184EC61140A2569345DB3614C102B84A05@esebe105.NOE.Nokia.com> References: <326A98184EC61140A2569345DB3614C102B84A05@esebe105.NOE.Nokia.com> Message-ID: <442B6D1E.209@isi.edu>

Mikael.Latvala at nokia.com wrote:
> Hello,
>
> First of all, thank you for the comments.
> http://www.icir.org/tbit/TCPevolution-Mar2005.pdf was indeed very
> interesting. Few comments about the paper.
>
> 1.
It would be interesting to know whether routers along the packet path > or the receiving server dropped the packets with different IP options. > > 2. I disagree with "One concern is that the use of IP options may > significantly increase the overhead in routers, because > in some cases packets with IP options are processed on the slow path of > the forwarding engine." because as RFC1812 says that routers can ignore > the IP option, excluding source route, if so configured. This type of > logic is very easy to implement on the fast path. An IP packet with ANY options, i.e., one with a header length > 20, is considered an exception and typically is processed in the slow path. Slow-path processing embodies 'best effort' - it occurs only when the router's CPU isn't busy doing something more important, including running routing protocols, handling error messages, and monitoring operation. Granted, that accounts only for routers that drop 'some' or even 'most' packets with options. Routers that drop _all_ packets with options are either configured to do so or erroneous; it's hard to know for sure which is happening where, though. 
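The slow-path observation above can be made concrete: the fast path typically handles only packets whose header length (IHL) field equals the minimum of 5 32-bit words, i.e. a 20-byte header, and any option pushes IHL above 5. A minimal sketch follows; the option type value is a made-up illustration, not a registered option, and the checksum is left at zero for simplicity:

```python
import struct

# Sketch: build a minimal IPv4 header, with and without an option, and
# show that any option raises the IHL field above the fast-path value 5.
# Addresses are taken from the 192.0.2.0/24 documentation range.

def ipv4_header(options=b""):
    pad = (-len(options)) % 4            # options are padded to a 32-bit boundary
    opts = options + b"\x00" * pad
    ihl = 5 + len(opts) // 4             # header length in 32-bit words
    ver_ihl = (4 << 4) | ihl             # version nibble + IHL nibble
    hdr = struct.pack("!BBHHHBBH4s4s",
                      ver_ihl, 0, 20 + len(opts),   # ver/IHL, TOS, total length
                      0, 0,                          # ID, flags/fragment offset
                      64, 6, 0,                      # TTL, protocol=TCP, checksum=0
                      bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
    return hdr + opts, ihl

plain, ihl_plain = ipv4_header()
with_opt, ihl_opt = ipv4_header(bytes([0x9e, 4, 0, 0]))  # made-up option type
assert ihl_plain == 5   # 20-byte header: eligible for the fast path
assert ihl_opt == 6     # 24-byte header: the exception/slow-path case
```

The single-nibble IHL test is cheap, which is exactly why many forwarding engines use it to divert anything non-minimal to the slower, CPU-bound path.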
Joe

From pdini at cisco.com Thu Mar 30 20:53:42 2006 From: pdini at cisco.com (Petre Dini (pdini)) Date: Thu, 30 Mar 2006 20:53:42 -0800 Subject: [e2e] ICISP 2006 || International Conference on Internet Surveillance and Protection || Côte d'Azur, France, August 27-29, 2006 Message-ID: <6B9C4B97B82F924485E26968EB05A6EE015EE342@xmb-sjc-224.amer.cisco.com>

==============================
Apologies if you receive multiple copies

LAST Call for Submissions

ICISP 2006, International Conference on Internet Surveillance and Protection
Côte d'Azur, France, August 27-29, 2006

For submissions, go to the ICISP 2006 page at http://www.iaria.org/conferences/ICISP.htm and click Submit a paper

Important deadlines:
Full paper submission: April 5, 2006
Authors Notification: April 25, 2006
Camera ready, full papers due: May 15, 2006

The International Conference on Internet Surveillance and Protection (ICISP 2006) initiates a series of special events targeting security, performance, and vulnerabilities in the Internet, as well as disaster prevention and recovery. Dedicated events focus on measurement, monitoring, and lessons learnt in protecting the user. We solicit academic, research, and industrial contributions. ICISP 2006 will offer tutorials, plenary sessions, and panel sessions. The ICISP 2006 Proceedings will be published by the IEEE Computer Society, posted on the IEEE Xplore system, and indexed by SCI.
The conference has the following specialized events:

TRASI 2006: Internet traffic surveillance and interception
IPERF 2006: Internet performance
RTSEC 2006: Security for Internet-based real-time systems
SYNEV 2006: Systems and networks vulnerabilities
DISAS 2006: Disaster prevention and recovery
EMERG 2006: Networks and applications emergency services
MONIT 2006: End-to-end sampling, measurement, and monitoring
REPORT 2006: Experiences & lessons learnt in securing networks and applications
USSAF 2006: User safety, privacy, and protection over Internet

We welcome technical papers presenting research and practical results, position papers addressing the pros and cons of specific proposals such as those being discussed in the standard fora or in industry consortia, survey papers addressing the key problems and solutions on any of the above topics, short papers on work in progress, and panel proposals. The topics suggested by the conference can be discussed in terms of concepts, state of the art, standards, implementations, running experiments, and applications. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following (but not limited) topic areas. Industrial presentations are not subject to these constraints. Tutorials on specific related topics and panels on challenging areas are encouraged.

Regular papers

Only .pdf or .doc files will be accepted for paper submission. All received papers will have a unique ID sent to the contact author by the EDAS system when submitting. Final author manuscripts will be 8.5" x 11" (two columns IEEE format), not exceeding 6 pages; max 4 extra pages allowed at additional cost.
The formatting instructions can be found via the anonymous FTP site at: ftp://pubftp.computer.org/Press/Outgoing/proceedings/8.5x11%20-%20Formatting%20files/instruct.pdf

Once you receive the notification of paper acceptance, you will be provided by IEEE CS Press with an online author kit covering all the steps an author needs to follow to submit the final version. The author kit's URL will be included in the letter of acceptance.

Technical marketing/business/positioning presentations

The conference initiates a series of business, technical marketing, and positioning presentations on the same topics. Speakers must submit a 10-12 slide deck presentation with substantial notes accompanying the slides, in the .ppt format (.pdf-ed). The slide deck will be published in the conference's CD collection, together with the regular papers. Please send your presentations to petre at iaria.org.

Tutorials

Tutorials provide overviews of current high interest topics. Proposals can be for half or full day tutorials. Please send your proposals to petre at iaria.org

Panel proposals:

The organizers encourage scientists and industry leaders to organize dedicated panels dealing with controversial and challenging topics and paradigms. Panel moderators are asked to identify their guests and to ensure that the talks are prepared in time to meet our deadlines. Moderators must submit an official proposal indicating their background, the panelist names and affiliations, the topic of the panel, and short biographies. For more information, petre at iaria.org

Workshop proposals

We welcome workshop proposals on issues complementary to the topics of this conference.
Your requests should be forwarded to petre at iaria.org

Committees:

ICISP Advisory Committee:
David Bonyuet, Delta Search Labs, USA
Petre Dini, Cisco Systems Inc., USA // Concordia Univ., Canada
Lothar Fritsch, Johann Wolfgang Goethe-University, Germany
Stein Gjessing, Simula Research Laboratory, Norway
Danielle Kaminsky, CERPAC, France
John Kristoff, UltraDNS, USA
Michael Logothetis, University of Patras, Greece
Pascal Lorenz, University of Haute Alsace, France
Bruce Maggs, Carnegie Mellon University and Akamai, USA
Gerard Parr, University of Ulster Coleraine Campus, Northern Ireland
Igor Podebrad, Commerzbank, Germany
Raul Siles, Hewlett-Packard, USA
Joseph (Joe) Touch, Information Sciences Institute, USA
Henk Uijterwaal, RIPE, The Netherlands
Rob van der Mei, Vrije Universiteit, The Netherlands

ICISP 2006 Technical Program Committee:
Ehab Al-Shaer, DePaul University, USA
Ernst Biersack, Eurecom, France
David Bonyuet, Delta Search Labs, USA
Herbert Bos, VU Amsterdam, The Netherlands
Wojciech Burakowski, Warsaw University of Technology, Poland
Baek-Young Choi, University of Missouri-Kansas City, USA
Benoit Claise, Cisco Systems, Inc., Belgium
Petre Dini, Cisco Systems Inc., USA // Concordia Univ., Canada
Thomas Dübendorfer, Google, Switzerland
Nick Feamster, Georgia Tech, USA
Lothar Fritsch, Johann Wolfgang Goethe-University, Germany
Sorin Georgescu, Ericsson Research, Canada
Stein Gjessing, Simula Research Laboratory, Norway
Stefanos Gritzalis, University of the Aegean, Greece
Fabrice Guillemin, France Telecom R&D, France
Abdelhakim Hafid, University of Montreal, Canada
Danielle Kaminsky, CERPAC, France
Frank Hartung, Ericsson Research, Germany
John Kristoff, UltraDNS, USA
Pascal Lorenz, University of Haute Alsace, France
Simon Leinen, Switch, Switzerland
Michael Logothetis, University of Patras, Greece
Tulin Mangir, California State University at Long Beach, USA
Tony McGregor, Waikato University, New Zealand
Muthu Muthukrishnan, Rutgers Polytechnic Institute, USA
Jaime Lloret Mauri, Universidad Politécnica de Valencia, Spain
Javier Lopez, University of Malaga, Spain
Ioannis Moscholios, University of Patras, Greece
Philippe Owezarski, LAAS, France
Dina Papagiannaki, Intel Research, UK
Gerard Parr, University of Ulster Coleraine Campus, Northern Ireland
Igor Podebrad, Commerzbank, Germany
Reza Rejaie, University of Oregon, USA
Fulvio Risso, Politecnico di Torino, Italy
Heiko Rossnagel, Johann Wolfgang Goethe-University, Germany
Matthew Roughan, University of Adelaide, Australia
Kamil Saraç, The University of Texas at Dallas, USA
Raul Siles, Hewlett-Packard, USA
Charalabos Skianis, National Centre for Scientific Research Demokritos, Greece
Joel Sommers, University of Wisconsin, USA
Joseph (Joe) Touch, Information Sciences Institute, USA
Steve Uhlig, Université catholique de Louvain, Belgium
Henk Uijterwaal, RIPE, The Netherlands
Rob van der Mei, Vrije Universiteit and CWI, Amsterdam, Netherlands
Darryl Veitch, University of Melbourne, Australia
Arno Wagner, ETH Zurich, Switzerland
Paul Watters, Macquarie University, Australia
James Won-Ki Hong, POSTECH, Korea
Weider D. Yu, San Jose State University, USA
Zhi-Li Zhang, University of Minnesota, USA
Tanja Zseby, Fraunhofer FOKUS, Germany

Details: petre at iaria.org, conf at iaria.org, dumitru.roman at deri.org
==============================================
-------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.postel.org/pipermail/end2end-interest/attachments/20060330/92b321d7/attachment-0001.html