From detlef.bosau at web.de Sat Jul 4 10:04:29 2009
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sat, 04 Jul 2009 19:04:29 +0200
Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering?
In-Reply-To: <4A46A03B.3060903@web.de>
References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A40FC9F.9060504@web.de> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de>
Message-ID: <4A4F8B9D.8010005@web.de>

Hi to all.

In the last two weeks I got some first-hand experience with WLAN in some educational setups, and perhaps it's interesting to share this experience.

Perhaps the most astonishing observation for me was packet round trip times of about 80 (literally: eighty!) milliseconds from my PC to a neighbour about 5 meters away. I well remember attendees ridiculing a talk where the speaker reported RTTs of about 20 ms.

Of course, this huge RTT value was a rare, extreme value. However, even a WLAN setup with about 10 people, split for educational purposes into three infrastructure-based networks, causes extreme difficulties in communication. (In Germany we have 13 WLAN channels AFAIK, so we can actually run only three WLANs in parallel in a classroom if the cells shall use disjoint frequency ranges.)

Of course, in some WWAN setups we pursue a better frequency setup and a better antenna setup - but basically the problems are the same.

1. Obviously, depending on the setup we use, intercell interference may cause extreme difficulties in wireless communication.

2. I strongly suspect that this holds true not only for intercell interference but for intracell interference, interference from external sources and other problems as well.

3. I don't expect any realistic possibility to forecast the causes of interference for a "wireless channel".

4. Consequently, models which describe a "wireless channel" as some memoryless, well-behaved and easy-going entity may be a bit questionable; they may serve academic purposes, but they do not really reflect the real world.

Hence, some of my questions are:

- What is a wireless "channel"? What is a wireless "connection"?

I once was asked about the behaviour of some protocol in the presence of "short time disconnections". What is a "short time disconnection"? Or, the other way round, what is a connection all about in a wireless network? In my opinion, there simply are no connections in a packet switching network. Consequently, there are no disconnections in a packet switching network either. (And as always, this rule is strongly confirmed by quite a couple of exceptions. That's exactly why it is a rule.)

I would prefer the question: how likely is a packet to reach the receiver without corruption, once it has been sent? To the best of my knowledge, even adaptation schemes for wireless networks which make use of some kind of "pilot signal" only give a rough estimate of the "channel's quality". And even when a channel with some well-known AWGN in a lab leads to a transport block corruption rate of, say, 10 %, there is absolutely no guarantee that this will hold in the real world. So I think we should abandon the idea of a really useful estimate of something like a channel's block corruption ratio. We may have some reasonable guess - but not really more.

Another question is:

- How do we deal with the aim of resource fairness?

In wireline networks this is simple. On a wireline link, packet service times vary around the same average and are the same for all users.
So, by Little's Law (and IMHO this is its correct place in this discussion) we have that a flow's share of throughput on a wireline link is directly proportional to its share of space / resource consumption. As an immediate consequence, the TCP congestion avoidance algorithm (see the congavoid paper), which targets a fair distribution of resources / buffer space in the first place, leads to a fair distribution of rates as a side effect.

This hardly holds true in a network where Little's Law only applies as a long term average and where quite a number of packets are not successfully received, i.e. a flow's throughput is not necessarily proportional to the number of served packets: some packets are sent once, others twice or more, and some packets are sent up to n times, n depending on the setup, without any successful transmission at all. Hence, the algorithm as stated in the congavoid paper will be of little use here; the same holds true for the fairness considerations in Kelly's paper "Charging and rate control for elastic traffic" from 1997.

From an end to end point of view, I would like to abandon terms like "channel" and "connection" in a wireless environment. A connection could only be defined as a logical association of a pair of peers. The term "channel" may reflect a lab setup with some sender placed in one corner of the lab and a receiver in the opposite corner - and some well-behaved source of noise in between. For the real world, this scenario is of limited use. (Please note: I don't say of no use. It is of limited use - as long as we deal with this model in a reasonable way.) We can well use adaptation mechanisms to choose appropriate line coding and channel coding schemes; however, there is no way to forecast a flow's corruption ratio within narrow limits, nor is there a forecast for a flow's achievable throughput or goodput.

O.k., that's just my experience from the last week. And I would like to hear some comments, particularly as this view is not reflected in the textbooks I know so far.

Detlef

--
Detlef Bosau
Galileistraße 30
70565 Stuttgart
phone: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
http://detlef.bosau at web.de

From lachlan.andrew at gmail.com Sat Jul 4 11:31:22 2009
From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Sun, 5 Jul 2009 04:31:22 +1000
Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering?
In-Reply-To: <4A4F8B9D.8010005@web.de>
References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A40FC9F.9060504@web.de> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de>
Message-ID:

2009/7/5 Detlef Bosau :
>
> Hence, some of my questions are:
>
> - What is a wireless "channel"? What is a wireless "connection"?
>
> I once was asked about the behaviour of some protocol in the presence of
> "short time disconnections". What is a "short time disconnection"? Or, the
> other way round, what is a connection all about in a wireless network?

There are many concepts of "connection" at different layers of the protocol stack which runs on top of a wireless physical layer, and many protocols/layers are not connection oriented. However, some are.

In the e2e context, a "disconnection" of a lower layer roughly means "a period of time over which all packets are lost, which extends for more than a few average RTTs plus a few hundred milliseconds". That is what it means in a phrase like "transport protocols need to handle short time disconnections".
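As a rough sketch of that working definition (the threshold values below are only illustrative assumptions, not part of any standard):

def is_short_time_disconnection(gap_seconds, avg_rtt_seconds,
                                rtt_multiple=4, slack_seconds=0.3):
    """True if no packets got through for longer than a few average
    RTTs plus a few hundred milliseconds of slack."""
    return gap_seconds > rtt_multiple * avg_rtt_seconds + slack_seconds

# An 80 ms outlier on a WLAN with a 5 ms average RTT is not a
# disconnection by this definition, while a 2 s outage is.
print(is_short_time_disconnection(0.080, 0.005))  # False
print(is_short_time_disconnection(2.0, 0.005))    # True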
> From an end to end point of view, I would like to abandon terms like
> "channel" and "connection" in a wireless environment.

We shouldn't throw the baby out with the bathwater. The concept of "a period in which most packets are delivered within a (vague) short period of time" is useful. If we don't call it a "connection", we'd need to come up with another word for it.

Rather than pointing out weaknesses in current terminology, it may be better to propose a concept which better models (dis)connectivity, and then do a useful design/calculation which is possible using the new concept but was impossible with the old concepts. Without that validation, the new concept won't replace the old.

Cheers,
Lachlan

--
Lachlan Andrew
Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
Ph +61 3 9214 4837

From detlef.bosau at web.de Sat Jul 4 13:51:35 2009
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sat, 04 Jul 2009 22:51:35 +0200
Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering?
In-Reply-To: 
References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A40FC9F.9060504@web.de> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de>
Message-ID: <4A4FC0D7.7040207@web.de>

Lachlan Andrew wrote:
> There are many concepts of "connection" at different layers of the
> protocol stack which runs on top of a wireless physical layer, and
> many protocols/layers are not connection oriented. However, some are.
>
> In the e2e context, a "disconnection" of a lower layer roughly means
> "a period of time over which all packets are lost, which extends for
> more than a few average RTTs plus a few hundred milliseconds". That
> is what it means in a phrase like "transport protocols need to handle
> short time disconnections".
>

Isn't this exactly the problem discussed by Raj Jain and Lixia Zhang in the late eighties in their work on the weaknesses of timeouts?

I totally agree with you here in the area of fixed networks; actually, we use hello packets and the like in protocols like OSPF. But what about outliers in the RTT on wireless networks, like my 80 ms example? Most likely, the packet has seen a number of retransmissions, and perhaps one or more of the transmission attempts has seen MAC latencies as well. (I did not have the appropriate instruments to see this.)

Was there a "short time disconnection" then? Certainly not, because the system was busy delivering the packet all the time. So the problem is not a "short time disconnection"; the problem is that timeouts don't work - or at least suffer from some shortcomings. (Which was discussed in the aforementioned work by Lixia Zhang and Raj Jain.) Hence we see a "bogus disconnection", or, to use the wonderful word of Randy Katz, Reiner Ludwig, Andrej Gurtov and many others: "spurious disconnections". Actually, an overly delayed "hello" reply is a typical example of a spurious timeout.

Actually, e.g. in TCP, we don't deal with "short time disconnections" anyway. We use sufficiently large RTO values - and in the rare cases of an "accepted spurious timeout" (if you read the original work by Stephen W. Edge, you'll agree that the RTO is some kind of confidence interval which deliberately accepts a certain residual probability of spurious timeouts - it's a matter of engineering to find a suitable design for this value) we simply do a retransmission. Actually, the number of spurious timeouts is extremely low - if detectable at all.
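To make the "confidence interval" view concrete, here is a minimal Python sketch of the standard smoothed-RTT/variance estimator behind the TCP RTO (Jacobson's algorithm, as later written down in RFC 6298); the sample RTTs are invented for illustration:

class RtoEstimator:
    """RTO = SRTT + 4 * RTTVAR, floored at 1 s: effectively a
    confidence interval that tolerates a small residual probability
    of spurious timeouts."""
    def __init__(self, alpha=1/8, beta=1/4, min_rto=1.0):
        self.alpha, self.beta, self.min_rto = alpha, beta, min_rto
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        if self.srtt is None:               # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:                               # RFC 6298 update order
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return self.rto()

    def rto(self):
        return max(self.min_rto, self.srtt + 4 * self.rttvar)

# A stream of ~5 ms RTTs with one 80 ms outlier: the outlier widens the
# variance term, but (also thanks to the 1 s minimum) no spurious
# timeout fires.
est = RtoEstimator()
for rtt in [0.005, 0.006, 0.005, 0.080, 0.005]:
    print(round(est.sample(rtt), 3))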
So, the basic strategy of "upper layers" to deal with short time disconnections, or above-average latencies, is simply not to deal with them - but to ignore them.

What about a path change? Do we talk about a "short time disconnection" in TCP when a link on the path fails and the flow is redirected? We typically don't worry.

To me, the problem is not the existence - or non-existence - of short time disconnections at all, but the question why we should _explicitly_ deal with a phenomenon that no one worries about.

>> From an end to end point of view, I would like to abandon terms like
>> "channel" and "connection" in a wireless environment.
>
> We shouldn't throw the baby out with the bathwater. The concept of "a
> period in which most packets are delivered within a (vague) short
> period of time" is useful.

Certainly. However, the question is how we should deal with a packet which is not delivered within a certain period of time. I have to reread the "Freeze TCP" approach, which was published a couple of years ago and which, IIRC, attempted to offer an explicit treatment for short time disconnections of links.

The problem is, once more, an extremely basic problem well known to all kinds of science and therefore well discussed in epistemology. It's the basic problem of finding the correct reason for an observed phenomenon. It's the basic problem of reasoning ex post.

(A pretty well-known error in wireless networking, which I made myself, is to conjecture heavy load from large delays. There are quite a few reasons for large delays or large delay variations in wireless networks, e.g. varying noise / disturbance. Of course, varying load may lead to the same phenomenon. But it's simply impossible to tell from a single observation which of the several possible causes applies. Another instance of the same problem is the huge amount of loss differentiation literature, which fails for the same reason: reasoning ex post. Actually, this problem should be overcome in high school; it's a matter of education. We shouldn't have to learn epistemology at university. We should learn it at school - and shouldn't attend a university without a solid education in epistemology at all. However, I'm quite ashamed when I see how often I made the aforementioned mistake myself.)

Actually, the outlier from Wednesday was a minor one. The average RTT was about 5 ms; the outlier was about 80 ms. In HSDPA, a variation of two or three orders of magnitude is by all means possible. So, does an outlier indicate a "disconnection" then?

> If we don't call it a "connection", we'd need to
> come up with another word for it.
>

Isn't it sufficient to describe the corruption probability?

Actually, some network operators (NOs) suspend a line when the SNR becomes too poor. This is a disconnection "by definition" then. Admittedly, I have a problem with this attitude. As long as the user pays for the service, it is up to the user to decide whether a packet should be sent even if the channel is bad. Of course, the bad channel will result in bad throughput. And of course the NO will indicate this to the user. However, the decision whether the line should be suspended or not is not up to the NO.

> Rather than pointing out weaknesses in current terminology, it may be
> better to propose a concept which better models (dis)connectivity, and
> then do a useful design/calculation which is possible using the new
> concept but was impossible with the old concepts. Without that
> validation, the new concept won't replace the old.
>

I totally agree.
However, I did not propose a new concept, but I asked some questions. Detlef -- Detlef Bosau Galileistra?e 30 70565 Stuttgart phone: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 http://detlef.bosau at web.de From lachlan.andrew at gmail.com Sat Jul 4 14:33:43 2009 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 5 Jul 2009 07:33:43 +1000 Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering? In-Reply-To: <4A4FC0D7.7040207@web.de> References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de> <4A4FC0D7.7040207@web.de> Message-ID: Greetings Detlef, 2009/7/5 Detlef Bosau : > Lachlan Andrew wrote: >> >> "a period of time over which all packets are lost, which extends for >> more than a few average RTTs plus a few hundred milliseconds". > > I totally agree with you here in the area of fixed networks, actually we use > hello packets and the like in protocols like OSPF. But what about outliers > in the RTT on wireless networks, like my 80 ms example? That is why I said "plus a few hundred milliseconds". You're right that outliers are common in wireless, which is why protocols to run over wireless need to be able to handle such things. > Was there a "short time disconnection" then? > Certainly not, because the system was busy to deliver the packet all the > time. >From the higher layer's point of view, it doesn't matter much whether the underlying system was working hard or not... If the outlier were more extreme, then I'd happily call it a short term disconnection, and say that the higher layers need to be able to handle it. > So the problem is not a "short time disconnection", the problem is that > timeouts don't work Timeouts are part of the problem. Another problem is reestablishing the ACK clock after the disconnection. > Actually, e.g. in TCP, we don't deal with "short time disconnections" There may not be an explicit mechanism to deal with them. I think that the earlier comment that they are more important than random losses is saying that we *should* perhaps deal with them (somehow), or at least include them in our models. > So, the basic strategy of "upper layers" to deal with short time > disconnections, or latencies more than average, is simply not to deal with > them - but to ignore them. > > What about a path change? Do we talk about a "short time disconnection" in > TCP, when a link on the path fails and the flow is redirected then? We > typically don't worry. Those delays are typically short enough that TCP handles them OK. If we were looking at deploying TCP in an environment with common slow redirections, then we should certainly check that it handles those short time disconnections. > To me, the problem is not the existence ?- or non existence - of short time > disconnections at all but the question why we should _explicitly_ deal with > a phenomenon where no one worries about? The protocol needn't necessarily deal with them explicitly, but we should explicitly make sure that it handles them OK. > Isn't it sufficient to describe the corruption probability? No, because that ignores the temporal correlation. You say that the Gilbert-Elliot model isn't good enough, but an IID model is orders of magnitude worse. 
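To illustrate the difference, here is a small Python sketch, with arbitrary illustrative parameters (not measurements), comparing an i.i.d. loss process with a two-state Gilbert-Elliott style process at roughly the same average loss rate; the correlated process produces long loss bursts that an i.i.d. corruption probability simply cannot describe:

import random

def iid_losses(p_loss, n, rng):
    return [rng.random() < p_loss for _ in range(n)]

def gilbert_elliott_losses(p_good_to_bad, p_bad_to_good, n, rng):
    """Two-state model: no loss in the 'good' state, every packet lost
    in the 'bad' state (a deliberately extreme choice)."""
    bad = False
    losses = []
    for _ in range(n):
        bad = (rng.random() < p_good_to_bad) if not bad else (rng.random() >= p_bad_to_good)
        losses.append(bad)
    return losses

def longest_burst(losses):
    best = run = 0
    for lost in losses:
        run = run + 1 if lost else 0
        best = max(best, run)
    return best

rng = random.Random(1)
n = 100_000
iid = iid_losses(0.02, n, rng)
ge = gilbert_elliott_losses(0.001, 0.05, n, rng)   # stationary P(bad) ~ 2%
for name, trace in (("iid", iid), ("gilbert-elliott", ge)):
    print(name, round(sum(trace) / n, 3), "loss rate,",
          "longest burst:", longest_burst(trace))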
Cheers,
Lachlan

--
Lachlan Andrew
Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
Ph +61 3 9214 4837

From detlef.bosau at web.de Sun Jul 5 05:44:56 2009
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 05 Jul 2009 14:44:56 +0200
Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering?
In-Reply-To: 
References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de> <4A4FC0D7.7040207@web.de>
Message-ID: <4A50A048.6080707@web.de>

Lachlan Andrew wrote:
> Greetings Detlef,
>
> 2009/7/5 Detlef Bosau :
>
>> Lachlan Andrew wrote:
>>
>>> "a period of time over which all packets are lost, which extends for
>>> more than a few average RTTs plus a few hundred milliseconds".
>>>
>> I totally agree with you here in the area of fixed networks, actually we use
>> hello packets and the like in protocols like OSPF. But what about outliers
>> in the RTT on wireless networks, like my 80 ms example?
>>
>
> That is why I said "plus a few hundred milliseconds".

Now, how large is "a few"?

Not to be misunderstood: there certainly are networks where a link state can be determined, e.g.:

- Ethernet, with the Normal Link Pulse,
- ISDN and ATM, where we have a continuous bit flow,
- HSDPA, where we have a continuous symbol flow on the pilot channel in the downlink direction and responses from the mobile stations in the uplink direction.

In all these networks we have continuous or short-period periodic traffic on the link, and this traffic is reflected by responses within a quite well-known period of time. In addition, this hello-response behaviour does not depend on any specific traffic: in Ethernet or ATM, a link, or a link outage respectively, is detected even when no traffic from upper layers exists. In some sense this even holds true for HSDPA, if we define an HSDPA link to be "down" when the base station no longer receives CQI indications.

I'm not quite sure (to be honest: I don't really know) whether similar mechanisms are available e.g. for ad hoc networks, particularly as we well know of hidden terminal / hidden station problems, where stations in a wireless network do not even see each other.

> You're right
> that outliers are common in wireless, which is why protocols to run
> over wireless need to be able to handle such things.
>

Exactly. So we come to an important turn in the discussion. It's not only the question whether we can detect a link outage. The question is: how do we deal with a link outage?

In wireline networks, link outages are supposed to be quite rare. (Nevertheless, the consequences may be painful.) In contrast to that, link outages are extremely common in MANETs. Actually, we have to ask what the terms "link" and "link outage" or "disconnection" shall mean in MANETs.

For example, think of TCP. How does TCP deal with a link outage?

Now, if this were a German mailing list and I came from Cologne, I would write: "Es is wie es is und et kütt wie et kütt." ("It is as it is, and it comes as it comes.") Put more internationally: "Don't worry, be happy."

If the path is broken for good, the TCP flow is broken as well.

If there is an alternative path and the routing is adjusted by some mechanism, the TCP flow will continue.

Of course, there may be packet loss. So TCP will do packet retransmissions.

Of course, the path capacity may change. So TCP will reassess the path capacity.
Either by slow start, or by one or several 3-DupACK / fast retransmit / fast recovery cycles.

Of course, the throughput may change. That's the least problem of all, because it's automatically fixed by the ACK clocking mechanism.

Of course, the RTT may change. So the timers have to converge to a new expectation.

There will be some rumbling, more or less, but afterwards TCP will keep on going.

Either way, there is no smart guy to tell TCP "there is a short time disconnection." Hence, there is no explicit mechanism in TCP to deal with short time disconnections - because the TCP mechanisms as they are work fine, even when short time disconnections and path changes occur. There is no need for some "short time disconnection handling".

Of course, this raises the question whether TCP as is can be suitable for MANETs, because we may well question whether e.g. the RTO estimation and the CWND assessment algorithms in TCP will hold in the presence of volatile paths with volatile characteristics. TCP is supposed to work with a connectionless packet transport mechanism with "reasonably quasistationary characteristics" and a packet loss ratio we can reasonably live with.

Or, for the people in Cologne: "Es is wie es is und et kütt wie et kütt."

>> Was there a "short time disconnection" then?
>> Certainly not, because the system was busy delivering the packet all the
>> time.
>>
>
> From the higher layer's point of view, it doesn't matter much whether
> the underlying system was working hard or not...

Correct. From the higher layer's point of view, the questions are:

- is the packet acknowledged at all?
- is the round trip time "quasistationary" (see Edge's paper)?
- is the packet order maintained, or should we adapt the dupack threshold?
- more TCP specific: is the MSS size appropriate, or should it be changed?

> If the outlier were
> more extreme, then I'd happily call it a short term disconnection, and
> say that the higher layers need to be able to handle it.
>

Question: should we _actively_ _handle_ it (e.g. Freeze TCP?), or should we build protocols sufficiently robust that they can implicitly cope with short time disconnections?

>> So the problem is not a "short time disconnection", the problem is that
>> timeouts don't work
>>
>
> Timeouts are part of the problem. Another problem is reestablishing
> the ACK clock after the disconnection.
>

Hm. Where is the problem with the ACK clock?

If anything, the problem could be (and I'm not quite sure about WLAN here) that a TCP downlink may use more than one path in parallel. Hence, there may be three packets delivered along three different paths - and a sender in the wireline network sees three ACKs and hence sends three packets.... However, in the normal "single path scenario", I don't see a severe problem. Or am I missing something?

>> Actually, e.g. in TCP, we don't deal with "short time disconnections"
>>
>
> There may not be an explicit mechanism to deal with them. I think
> that the earlier comment that they are more important than random
> losses is saying that we *should* perhaps deal with them (somehow), or
> at least include them in our models.
>

I'm actually not convinced that short time disconnections are more important than random losses. If this was the attitude of the reviewers who rejected my papers, I would suppose they were trying to tease me. Of course, I could redefine any random loss to be a short time disconnection - hence there wouldn't be any random loss at all. However, this would be some nasty kind of hair splitting.
I think the perhaps most important lesson from my experience of last week is that we must not assume one wireless problem to be more important than the others.

Of course this mainly calls into question the opportunistic scheduling work, which assumes that there is only Rayleigh fading - useful, well-behaved, periodic and predictable for evenly moving mobiles - and no other disturbance on the wireless channel. Of course, many students earn their "hats" that way, but the more I think about it, the less I believe that this really reflects reality.

Detlef

--
Detlef Bosau
Galileistraße 30
70565 Stuttgart
phone: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
http://detlef.bosau at web.de

From dpreed at reed.com Mon Jul 6 06:55:58 2009
From: dpreed at reed.com (David P. Reed)
Date: Mon, 06 Jul 2009 09:55:58 -0400
Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering?
In-Reply-To: <4A50A048.6080707@web.de>
References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de> <4A4FC0D7.7040207@web.de> <4A50A048.6080707@web.de>
Message-ID: <4A52026E.90507@reed.com>

I think you are confusing link troubles with "network" troubles. (And WLANs are just multi-user links, pretty much.)

Part of the architecture of some link layers is a "feature" that is designed to declare the link "down" by some kind of measure. Now this is clearly a compromise - the link in most cases is only temporarily "down", depending on the tolerance of end-to-end apps for delay.

In an 802.11* LAN (using standard WiFi MAC protocols), there is one of these "declared down" measures, whether you are using APs or Virtual LANs or AdHoc mode (so called) or even 802.11s mesh. Since 802.11 doesn't take input from the IETF, it has no notion of meeting the needs of end-to-end protocols for a *useful* declaration. Instead, by studying their navels, the 802.11 guys wait a really long (and therefore relatively useless in terms of semantics) time before declaring a WLAN "link" down. Of course that is a "win" if your goal is just managing the link layer.
What would be useful to the end-to-end protocol is a meaningful assessment of the likelihood that a packet will be deliverable over that link as a function of time as it goes into the future. This would let the end-to-end protocol decide whether to tear down the TCP circuit and inform the app, or just wait, if the app is not delay sensitive in the time frame of interest.

Unfortunately, TCP's default is typically 30 seconds long - far too long for a typical interactive app. And in some ways that's right: an app can implement a shorter-term "is the link alive" check by merely using an app layer handshake at a relevant rate, and declaring the e2e circuit down if too many handshakes are not delivered. If you think about it, this is probably optimal, because otherwise the end-to-end app will have to have a language to express its desire to every possible link along the way, and also to the "rerouting" algorithms that might preserve end-to-end connectivity by "routing around" the slow or intermittent link.

Recognize the "end to end argument" in that last paragraph? It says: we can't put the function of determining "app layer circuit down" into the different kinds of elements that make up the Internet links. Therefore we need to do an end-to-end link down determination. And in fact, if we have that, we don't need the link layer to tell the ends when they are down. So the function of "app layer circuit down" should NOT be required of the network elements.

What should we put in the network? Well, we can definitely improve matters for lots of protocols by "routing around" crappy wireless connectivity quickly. So the routing algorithms should probably avoid buffering lots of data to be sent over a degraded wireless link, and start routing that traffic over an alternative path, if there is one. This increases the chance that higher levels will experience no interruption. And if buffers are not allowed to build up in the process (which means signalling the endpoints to slow down via head drops or ECN), one can avoid congestion in the network by reflecting it out to the endpoints quickly enough as the overall capacity degrades.

Thus the network shouldn't spend its time holding onto packets in buffers. Instead it should push the problem to the endpoints as quickly as possible. Unfortunately, the link layer designers, whether of DOCSIS modems or 802.11 stacks, have it in their heads that reliable delivery is more important than the cost to endpoints of deep buffering. DOCSIS 2 modems have multiple seconds of buffer, and many WLANs will retransmit a packet up to 255 times before giving up! These are not a useful operational platform for TCP. It's not TCP that's broken, but the attempt to maximize link capacity, rather than letting routers and endpoints work to fix the problem at a higher level.
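As a back-of-the-envelope sketch, using invented example numbers for the link rate, buffer depth and per-attempt airtime, it is easy to see how such buffering and retransmission adds up to seconds of delay:

# Rough arithmetic only; the values below are assumed, illustrative
# numbers, not measurements of any particular modem or WLAN.
link_rate_bps = 10e6        # assume a 10 Mbit/s link
buffer_bytes = 2_000_000    # assume ~2 MB of modem buffering
queue_delay_s = buffer_bytes * 8 / link_rate_bps
print(f"delay of a full buffer: {queue_delay_s:.1f} s")        # 1.6 s

per_attempt_s = 0.002       # assume ~2 ms per 802.11 transmission attempt
max_attempts = 255          # retry limit mentioned above
print(f"worst-case link-layer delivery time: {per_attempt_s * max_attempts:.2f} s")  # 0.51 s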
From perfgeek at mac.com Mon Jul 6 08:21:19 2009
From: perfgeek at mac.com (rick jones)
Date: Mon, 06 Jul 2009 08:21:19 -0700
Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering?
In-Reply-To: <4A52026E.90507@reed.com>
References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de> <4A4FC0D7.7040207@web.de> <4A50A048.6080707@web.de> <4A52026E.90507@reed.com>
Message-ID: <24C22A0B-F0DE-4952-86B6-3FFEF322D61F@mac.com>

On Jul 6, 2009, at 6:55 AM, David P. Reed wrote:

> In an 802.11* LAN (using standard WiFi MAC protocols), there is one
> of these "declared down" measures, whether you are using APs or Virtual LANs
> or AdHoc mode (so called) or even 802.11s mesh. Since 802.11
> doesn't take input from the IETF, it has no notion of meeting the
> needs of end-to-end protocols for a *useful* declaration. Instead,
> by studying their navels, the 802.11 guys wait a really long (and
> therefore relatively useless in terms of semantics) time before declaring
> a WLAN "link" down. Of course that is a "win" if your goal is just
> managing the link layer.
>
> What would be useful to the end-to-end protocol is a meaningful
> assessment of the likelihood that a packet will be deliverable over
> that link as a function of time as it goes into the future. This
> would let the end-to-end protocol decide whether to tear down the
> TCP circuit and inform the app, or just wait, if the app is not
> delay sensitive in the time frame of interest.
>
> Unfortunately, TCP's default is typically 30 seconds long - far too
> long for a typical interactive app. And in some ways that's right:
> an app can implement a shorter-term "is the link alive" by merely
> using an app layer handshake at a relevant rate, and declaring the
> e2e circuit down if too many handshakes are not delivered. If you
> think about it, this is probably optimal, because otherwise the end-
> to-end app will have to have a language to express its desire to
> every possible link along the way, and also to the "rerouting"
> algorithms that might preserve end-to-end connectivity by "routing
> around" the slow or intermittent link.
>
> Recognize the "end to end argument" in that last paragraph? It
> says: we can't put the function of determining "app layer circuit
> down" into the different kinds of elements that make up the Internet
> links. Therefore we need to do an end-to-end link down
> determination.
And in fact, if we have that, we don't need the link > layer to tell the ends when they are down. So the function of "app > layer circuit down" should NOT be required of the network elements. Why is TCP's waiting too long in some ways right, but 802.11's waiting too long relatively useless? Both seem to be letting those above them have time to make their own decisions? rick jones > Wisdom teeth are impacted, people are affected by the effects of > events From dpreed at reed.com Mon Jul 6 09:12:58 2009 From: dpreed at reed.com (David P. Reed) Date: Mon, 06 Jul 2009 12:12:58 -0400 Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering? In-Reply-To: <24C22A0B-F0DE-4952-86B6-3FFEF322D61F@mac.com> References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de> <4A4FC0D7.7040207@web.de> <4A50A048.6080707@web.de> <4A52026E.90507@reed.com> <24C22A0B-F0DE-4952-86B6-3FFEF322D61F@mac.com> Message-ID: <4A52228A.3030402@reed.com> A quick answer: On 07/06/2009 11:21 AM, rick jones wrote: > > Why is TCP's waiting too long in some ways right, but 802.11's waiting > too long relatively useless? Both seem to be letting those above them > have time to make their own decisions? > TCP's waiting too long doesn't hide degradation of the end-to-end channel from the app, *because the app can close the circuit, which halts any further retransmission*. 802.11's trying at the link layer for too long has no way for the higher layer to tell it to stop trying. If the higher layer can change the lower layer's behavior based on its requirements, that's an improvement. But please, please, please don't mistake this observation for a claim that either TCP or 802.11 are designed to take into account the variable needs of end-to-end apps for semantics other than maximal-throughput FTP - that's one reason I fought for UDP (along with the others who did as well), which exposes raw IP through a host-mux/demux interface based on ports. From dpreed at reed.com Mon Jul 6 10:28:09 2009 From: dpreed at reed.com (David P. Reed) Date: Mon, 06 Jul 2009 13:28:09 -0400 Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering? In-Reply-To: <4A52228A.3030402@reed.com> References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de> <4A4FC0D7.7040207@web.de> <4A50A048.6080707@web.de> <4A52026E.90507@reed.com> <24C22A0B-F0DE-4952-86B6-3FFEF322D61F@mac.com> <4A52228A.3030402@reed.com> Message-ID: <4A523429.9000905@reed.com> And just to clarify a too-hasty answer, to stop excess retransmission quickly, the app needs to force an "unclean" TCP close, because the standard TCP CLOSE command gets slowed by any accumulated buffering in the network. This just reminds us all that TCP was designed with the assumption that the network didn't go out of its way to hold stuff in buffers to be "helpful" - the underlying idea was "best efforts"(e.g., drop packets early), not heroic efforts. Meanwhile, the router and link marketing guys beaver away trying to be "helpful" and "add value" by offering offline buffering services that apps don't need. What if we had let them use whole disk drives? On 07/06/2009 12:12 PM, David P. Reed wrote: > A quick answer: > > On 07/06/2009 11:21 AM, rick jones wrote: >> >> Why is TCP's waiting too long in some ways right, but 802.11's >> waiting too long relatively useless? 
Both seem to be letting those >> above them have time to make their own decisions? >> > TCP's waiting too long doesn't hide degradation of the end-to-end > channel from the app, *because the app can close the circuit, which > halts any further retransmission*. > > 802.11's trying at the link layer for too long has no way for the > higher layer to tell it to stop trying. > > If the higher layer can change the lower layer's behavior based on its > requirements, that's an improvement. > > But please, please, please don't mistake this observation for a claim > that either TCP or 802.11 are designed to take into account the > variable needs of end-to-end apps for semantics other than > maximal-throughput FTP - that's one reason I fought for UDP (along > with the others who did as well), which exposes raw IP through a > host-mux/demux interface based on ports. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090706/e99b7902/attachment.html From detlef.bosau at web.de Mon Jul 6 11:54:46 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 06 Jul 2009 20:54:46 +0200 Subject: [e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering? In-Reply-To: <4A52026E.90507@reed.com> References: <2ff1f08a0906191220x4de051d1w18e5285d3ef6abf5@mail.gmail.com> <4A43C7D1.2040905@web.de> <4A466895.4010401@web.de> <4A46A03B.3060903@web.de> <4A4F8B9D.8010005@web.de> <4A4FC0D7.7040207@web.de> <4A50A048.6080707@web.de> <4A52026E.90507@reed.com> Message-ID: <4A524876.5050001@web.de> David P. Reed wrote: > I think you are confusing link troubles with "network" troubles. (and > WLAN's are just multi-user links, pretty much). No way. Did I use the word "trouble" even once? (No kidding.) > Part of the architecture of some link layers is a "feature" that is > designed to declare the link "down" by some kind of measure. O.k., than this feature is, at least in some sense, a "declaration". > Now this is clearly a compromise - the link in most cases is only > temporarily "down", depending on the tolerance of end-to-end apps for > delay. For delay and, of course, for reliability. However, that's a minor point. At least the term "temporarily" is quite sensitive here. When a link is "temporarily down", one would expect the link to be "up again" at some time. Particularly in the literature on opportunistic scheduling you always will find the attitude (if unspoken): "When the link is bad - we'll send it later, when the link is better." _Will_ a bad link ever be better? I'm not thinking of troubles. Actually, this question reflects my personal life experience. How often did I waste years of my life - just waiting, for something to become better. (Perhaps, this kind of thinking is some kind of midlife crisis.... May be. But if the consequence is to stop waiting and to seize the day instead - than this is the right consequence.) When there is evidence, that bad link conditions will be overcome in some realistic period of time, it makes sense to defer sending. No discussion about that. But what should be done in cases where such an evidence doesn't exist? When a wireline link is "down", this may mean different things: 1. The cabling is broken. No discussion, the cabling must be fixed and if possible, an alternative routing should be chosen until the problem is fixed. 2. Power supply, Router, Switch, Hub or any other malware made of steel, copper and plastic is broken. Same discussion as 1. 3. 
Referring to the sense of "temporarily down": Routing protocols may assess load metrics or other cost metrics - and may declare a link "down", i.e. not feasible, when the load becomes to high or the link becomes to expensive. However, I would ask the same question here: Is there any evidence, that the load / costs /... will drop down some time? Of course, these are two different scenarios. i) "Permanently down until some thechnical problem is fixed." ii) "Temporarily down, until the link becomes feasible again." Both in common, I made the quiet assumption, that an alternative route exist. Which may not be the case e.g. in WWWAN scenarios. > In an 802.11* LAN (using standard WiFi MAC protocols), there is one of > these"declared down", whether you are using APs or Virtual LANs or > AdHoc mode (so called) or even 802.11s mesh. Since 802.11 doesn't > take input from the IETF, it has no notion of meeting the needs of > end-to-end protocols for a *useful* declaration. Instead, by studying > their navels, the 802.11 guys wait a really long (and therefore > relatively useless in terms of semantics) before declaring a WLAN > "link" down. Of course that is a "win" if your goal is just managing > the link layer. Hm. One thing, I learned from my CS colleagues is that in CS we have a quite strong "top down view", i.e. it would be helpful to respect the application layers needs for a "link down declaration". And these may well depend on the scenario. I well remember an RFC draft submitted by Martina Zitterbart et al. some years ago which proposed a "limited effort class" in the context of QoS and DiffServ. Just to mention an example that users don't necessarily require a high throughput link. So, in my opinion, a "useful declaration" for "link declared down" should respect the user's requirements. Perhaps it would even be possible to use a wireless link for those applications, for which the minimum requirements for a link are met and not to use the same link for other applications. > > What would be useful to the end-to-end protocol is a meaningful > assessment of the likelihood that a packet will be deliverable over > that link as a function of time as it goes into the future. Indeed. However, I'm not quite sure if this is possible. I'm looking for something like this for some years now, however the more I understand wireless networks, the less optimistic I become in this respect. > This would let the end-to-end protocol decide whether to tear down the > TCP circuit and inform the app, or just wait, if the app is not delay > sensitive in the time frame of interest. > Of course. However, this is one example for common principle "prediction is hard, especially of the future". (Credited to numerous people.) > Unfortunately, TCP's default is typically 30 seconds long - far too > long for a typical interactive app. In some GPRS standard, even latencies of 600 seconds were accepted. This may be quite long for quite a few apps. However: 30 seconds, even 0.30 secodns, are "ages" in the context of wireless networks. I don't know any reasonable CE guy who would make a forecast for a wireless channel's quality, or a packet corruption ratio, for 30 seconds, despite of lab scenarios. > And in some ways that's right: an app can implement a shorter-term "is > the link alive" by merely using an app layer handshake at a relevant > rate, and declaring the e2e circuit down if too many handshakes are > not delivered. 
If you think about it, this is probably optimal, > because otherwise the end-to-end app will have to have a language to > express its desire to every possible link along the way, and also to > the "rerouting" algorithms that might preserve end-to-end connectivity > by "routing around" the slow or intermittent link. > > Recognize the "end to end argument" in that last paragraph? It says: > we can't put the function of determining "app layer circuit down" into > the different kinds of elements that make up the Internet links. Absolutely. However, what I think about is: If there _is_ no alternative path, so rerouting will not solve the problem, how can we make an application _live_ with the situation? Sometimes, we get the advice: "Love it, change it or leave it." Now, wireless channels may only offer the possibility to love them, because we cannot change them and because there is no alternative, we cannot leave them. So, the alternative is only: "take it - or leave it." Basically, that is in short what we talked about before: You mentioned that IEEE 802.11 does not take input from IETF. So, wireless networks will perhaps not obey QoS requirements or the like from upper layers. CE guys tell me, that a forecast for wireless channel properties is more than difficult and perhaps simply not possible - hence application adaptation may stay a dream. > Therefore we need to do an end-to-end link down determination. And ideally some determination when a link will be up again.... > Thus the network shouldn't spend its time holding onto packets in > buffers. Instead it should push the problem to the endpoints as > quickly as possible. Unfortunately, the link layer designers, whether > of DOCSIS modems or 802.11 stacks, have it in their heads that > reliable delivery is more important than the cost to endpoints of deep > buffering. DOCSIS 2 modems have multiple seconds of buffer, and many > WLANs will retransmit a packet up to 255 times before giving up! I well remember your advice: "Not more than three attempts." And I think, I'm with you here, because I think that the overlong latencies accepted by some standards are a consequence of a far too low packet corruption probability. With respect to the GPRS standard mentioned above, the overlong tolerated latencies appear in conjunction with a packet corruption ratio 10^-9, which is simply nonsense for many wireless links. And that's one of the arguments I often have with colleagues, that one does not believe me that wireless links are typically _lossy_. And when wireless links appear to be _not_ _lossy_, this is often a consequence of a far too high number of retransmissions. Hence, a small number of retransmissions (maximum 3) would result exactly in what you propose: On a noisy link, packets will see a high corruption ratio and hence, the end-to-end application sees an important packet loss and has to deal with this. And perhaps that's not so far from my way of thinking. I simply don't want to declare a link "down" because of a high packet corruption ratio. I would like to use the link with the packet corruption ratio it can yield - and then, it's up to the application with this. > These are not a useful operational platform for TCP. It's not TCP > that's broken, but the attempt to maximize link capacity, rather than > letting routers and endpoints work to fix the problem at a higher level. > I did not say that TCP is broken. However, I think some algorithms in TCP could be made more robust against lossy channels in some scenarios. 
Nevertheless, TCP is neither the Torah nor the Holy Bible nor the Quran. Perhaps for some scenarios there may exist reasonable alternatives to TCP for reliable packet transfer.

Detlef

--
Detlef Bosau
Galileistraße 30
70565 Stuttgart
phone: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
http://detlef.bosau at web.de

From matthias.krause at philips.com Fri Jul 31 01:38:58 2009
From: matthias.krause at philips.com (Krause, Matthias)
Date: Fri, 31 Jul 2009 10:38:58 +0200
Subject: [e2e] TCP fairness
Message-ID: <66D4D6C711E011498906F791297A7A063E98E647F7@NLCLUEXM08.connect1.local>

Dear community,

I'm afraid I'm stuck with some artifacts that I have to be able to explain and maybe even control. I appreciate every small hint in a good direction, either towards a paper or to the right existing reference.

While experimenting with video streaming over TCP in equilibrium situations, I found several artifacts. The environment was as follows: 3 video streaming servers (Windows XP) based on TCP were streaming to 3 receivers in a local network over a bottleneck node. Variations on this setup were {wireless | wired}, {receiver Linux | receiver WinXP}, distance from the base station (in wireless), number of network nodes in between, and inclusion/exclusion of a token bucket filter node for bandwidth limitation. In all the experiments I permuted the receivers and ran each permutation 20 to 50 times. The streamed video sequence was 6 Mbit/s and 560 seconds long.

The artifacts I observed were:

* In sum, the bitrates achieved were not fair (as in "close to equal").
* The bit rate ratios stayed stable most of the time.
* Sometimes the ratios switched, turning the best performer into the worst performer.

I wonder if these phenomena are known and studied, or not even worth the effort of thinking about.

Thank you very much for your time and consideration, and I'm looking forward to hearing from you.

Yours sincerely,
Matthias Krause
Philips Research
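One common way to put a number on "not fair (as in close to equal)" is Jain's fairness index; the sketch below uses invented placeholder rates, not the measured ones:

# Jain's fairness index: 1.0 means perfectly equal shares, 1/n is the
# worst case. The example rates are placeholders, not data from the
# experiments described above.
def jain_index(rates):
    return sum(rates) ** 2 / (len(rates) * sum(r * r for r in rates))

print(jain_index([6.0, 6.0, 6.0]))  # 1.0   - equal shares
print(jain_index([9.0, 6.0, 3.0]))  # ~0.86 - persistently unequal shares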