From francesco at net.infocom.uniroma1.it Thu Feb 1 02:24:50 2007 From: francesco at net.infocom.uniroma1.it (Francesco Vacirca) Date: Thu, 01 Feb 2007 11:24:50 +0100 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <1170314055.4775.12.camel@lap10-c703.uibk.ac.at> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <200702010141.BAA15760@cisco.com> <1170314055.4775.12.camel@lap10-c703.uibk.ac.at> Message-ID: <45C1BFF2.9080802@net.infocom.uniroma1.it> Some link layers use a stronger FEC to protect the header. E.g. in some UMTS coding schemes the link layer employs a rate-1/3 code for the RLC header, whereas the payload can use a different rate (e.g. from 4/5 to 1/3)... Maybe this could also be applied to TCP. Note that it can decrease the goodput in the case of non-lossy links... obviously it depends on the ratio of useful bits to transmitted bits. In the 802.11 standard, part of the packet (the MAC header) is sent at a different rate to better protect it against channel impairments and also for compatibility purposes. A cross-layer approach could adopt a low rate for the TCP header as well (and obviously the IP header)... but I do not think that the benefits outweigh the disadvantages. One more thing... in the case of Michael's experiments, are the packet losses on the channel due to SNR fluctuations or due to MAC collisions? In the second case, it is quite normal that the whole packet (header+payload) is corrupted. Francesco Michael Welzl wrote: >>> On 31/01/07, Lloyd Wood wrote: >>>> It's possible for the sender to infer that an ack has been lost, based on subsequent receiver behaviour in sending a cumulative ack including packets received that the sender didn't get individual acks for. >>> No, that was my point. We can't distinguish between ACKs which are >>> lost and those which are never sent in the first place. >> Yes, we can.
If a SACK block is present, it tells you which datagrams were and weren't received. >> >> If a datagram was received, an ack was sent (modulo the delack mechanism), and the datagram will not be called out in the SACK block. >> >> If the datagram wasn't received, this will be reflected in the SACK block. >> >> >>> Also, having a unique identifier (like a timestamp) isn't the same as >>> having sequence numbers which can say "We're (not) consecutive". The >>> latter can detect loss but the former can't. >> If you have timestamps on every ack and packet, what's the difference? > > I think that these methods of ACK loss detection are interesting > ideas, and there might be a way to intelligently combine them > with what's already in > http://www.icir.org/floyd/papers/draft-floyd-tcpm-ackcc-00d.txt > > Cheers, > Michael > From michael.welzl at uibk.ac.at Thu Feb 1 02:33:16 2007 From: michael.welzl at uibk.ac.at (Michael Welzl) Date: 01 Feb 2007 11:33:16 +0100 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <45C1BFF2.9080802@net.infocom.uniroma1.it> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <200702010141.BAA15760@cisco.com> <1170314055.4775.12.camel@lap10-c703.uibk.ac.at> <45C1BFF2.9080802@net.infocom.uniroma1.it> Message-ID: <1170325996.4775.129.camel@lap10-c703.uibk.ac.at> Hi, I think you're referring to a different email that I sent out to ietf tsvwg, dccp, tcpm and irtf iccrg yesterday. I suggest to move this discussion to iccrg, where we have discussed corruption based rate reaction in the past. See below for an answer to your question: On Thu, 2007-02-01 at 11:24, Francesco Vacirca wrote: > Some link layers use a strongest FEC to protect header. E.g. in some > UMTS coding scheme the link layer employs a 1/3 codification for RLC > header, whereas the payload can use a different scheme (e.g. from 4/5 to > 1/3)... 
Maybe this could also be applied to TCP. Note that it can > decrease the goodput in the case of non-lossy links... obviously it depends > on the ratio of useful bits to transmitted bits. > > In the 802.11 standard, part of the packet (the MAC header) is sent at > a different rate to better protect it against channel impairments and > also for compatibility purposes. A cross-layer approach could adopt a low > rate for the TCP header as well (and obviously the IP header)... but I do not think that > the benefits outweigh the disadvantages. > > One more thing... in the case of Michael's experiments, are the packet losses > on the channel due to SNR fluctuations or due to MAC collisions? > In the second case, it is quite normal that the whole packet > (header+payload) is corrupted. They are generally due to SNR fluctuations; we transmitted between two notebooks in ad hoc mode (the only way that disabling the CRC worked for us). I remember that Mattia also did a quick check with one more notebook, to see if the MAC influences the result, but it didn't seem to play a role (I don't remember if this test made it into the document in the end... I think not). It would be interesting to know why errors occur in the fashion that we saw; we measured, but don't really have an explanation. Perhaps it's the PHY coding. Anyway, for those of you who didn't see my previous email and are confused, this link should explain it: http://www.welzl.at/research/projects/corruption/index.html Cheers, Michael From francesco at net.infocom.uniroma1.it Thu Feb 1 02:39:11 2007 From: francesco at net.infocom.uniroma1.it (Francesco Vacirca) Date: Thu, 01 Feb 2007 11:39:11 +0100 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion?
In-Reply-To: <1170325996.4775.129.camel@lap10-c703.uibk.ac.at> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <200702010141.BAA15760@cisco.com> <1170314055.4775.12.camel@lap10-c703.uibk.ac.at> <45C1BFF2.9080802@net.infocom.uniroma1.it> <1170325996.4775.129.camel@lap10-c703.uibk.ac.at> Message-ID: <45C1C34F.6010804@net.infocom.uniroma1.it> I'm sorry, I'm quite confused... ;-) I'll forward to tcpm and iccrg f. Michael Welzl wrote: > Hi, > > I think you're referring to a different email that I sent > out to ietf tsvwg, dccp, tcpm and irtf iccrg yesterday. > I suggest to move this discussion to iccrg, where we have > discussed corruption based rate reaction in the past. > > See below for an answer to your question: > > > On Thu, 2007-02-01 at 11:24, Francesco Vacirca wrote: >> Some link layers use a strongest FEC to protect header. E.g. in some >> UMTS coding scheme the link layer employs a 1/3 codification for RLC >> header, whereas the payload can use a different scheme (e.g. from 4/5 to >> 1/3)... Maybe it could be applied also to TCP. Note that this can >> decrease the goodput in case of non lossy links... obviously it depends >> on the ratio between useful bits and transmitted bits. >> >> In the 802.11 standard some part of the packet (MAC header) is sent with >> a different rate to be more protected against channel impairments and >> also for compatibility purposes. A cross layer approach could adopt low >> rate also for TCP header (also IP obviously)... but I do not think that >> the benefits are more than disadvantages. >> >> One more thing... in case of Michael experiments, are the packet losses >> on the channel due to SNR fluctuations or due to MAC collisions? >> In the second case, it is quite normal that the whole packet >> (header+payload) is corrupted. 
> > They are generally due to SNR fluctuations; we transmitted > between two notebooks in ad hoc mode (the only way that > disabling the CRC worked for us). I remember that Mattia > also did a quick check with one more notebook, to see if > the MAC influences the result, but it didn't seem to play a role > (I don't remember if this test made it into the document in > the end... I think not). > > It would be interesting to know why errors occur in the > fashion that we saw; we measured, but don't really have > an explanation. Perhaps it's the PHY coding. > > Anyway, for those of you who didn't see my previous > email and are confused, this link should explain it: > http://www.welzl.at/research/projects/corruption/index.html > > Cheers, > Michael > From detlef.bosau at web.de Thu Feb 1 03:37:07 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Thu, 01 Feb 2007 12:37:07 +0100 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <200701312243.WAA09233@cisco.com> References: <200701312243.WAA09233@cisco.com> Message-ID: <45C1D0E3.4020206@web.de> Lloyd Wood wrote: > At Wednesday 31/01/2007 20:23 +0000, Jon Crowcroft wrote: > >> its clear we should devise a schmee for disguising data packets as acks >> > > which is what piggybacking acks on data packets already does. > > Huh? To my understanding, Jon proposes piggybacking the other way round :-) Piggybacking data on acks to take full advantage of the ACK transfer :-) Cumulative data transfer.... reminds me of code puncturing used with some convolutional codes ;-) Ok, ok, I'm going to shut up :-) From touch at ISI.EDU Thu Feb 1 06:05:43 2007 From: touch at ISI.EDU (Joe Touch) Date: Thu, 01 Feb 2007 06:05:43 -0800 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion?
In-Reply-To: <200701312241.WAA09104@cisco.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> Message-ID: <45C1F3B7.70001@isi.edu> Lloyd Wood wrote: > At Wednesday 31/01/2007 16:02 -0500, Sushant Rewaskar wrote: >> Hi, >> I agree with Lachlan. In TCP there is no way to know when an ack is lost as >> it carries no "sequence number" of its own. > > It can - timestamps are used for disambiguation, and they > disambiguate the acks. They can act as unique sequence numbers. How so? RFC 1323, Sec 4.2.2: "Based upon these considerations, we choose a timestamp clock frequency in the range 1 ms to 1 sec per tick. " That suggests that timestamps need not be unique by themselves. Joe -- ---------------------------------------- Joe Touch Sr. Network Engineer, USAF TSAT Space Segment -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070201/18f49a8f/signature.bin From L.Wood at surrey.ac.uk Thu Feb 1 07:54:56 2007 From: L.Wood at surrey.ac.uk (Lloyd Wood) Date: Thu, 01 Feb 2007 15:54:56 +0000 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <45C1F3B7.70001@isi.edu> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <45C1F3B7.70001@isi.edu> Message-ID: <200702011555.PAA00102@cisco.com> At Thursday 01/02/2007 06:05 -0800, Joe Touch wrote: >Lloyd Wood wrote: >> At Wednesday 31/01/2007 16:02 -0500, Sushant Rewaskar wrote: >>> Hi, >>> I agree with Lachlan. In TCP there is no way to know when an ack is lost as >>> it carries no "sequence number" of its own. >> >> It can - timestamps are used for disambiguation, and they >> disambiguate the acks. They can act as unique sequence numbers. > >How so? 
RFC 1323, Sec 4.2.2: > >"Based upon these considerations, we choose a timestamp clock > frequency in the range 1 ms to 1 sec per tick. " > >That suggests that timestamps need not be unique by themselves. Any implementer would be stupid to actually follow RFC1323 and send actual timestamp values in the packet - that's a massive DoS hole just waiting to be exploited. Instead (as I mentioned previously in this thread), the sender would have a table of timestamps and associated unique keys for each packet sent out. You'd send the key in a packet, and on receiving the same key reflected back you'd do a match and lookup. This closes the DoS hole - matching a specific key value exactly is much harder than just faking a timestamp value to lead to spurious RTT estimates, and not giving away internal timestamp values also prevents the kind of how-long-has-this-box-been-up profiling done by e.g. Netcraft. The timestamps don't have to be unique. The keys to the timestamp values do. This code is probably more straightforward to implement than, oh, SACK scoreboarding. L. Joe - the Touchidic scholar of RFCs, always ready with chapter and verse. >Joe > >-- >---------------------------------------- >Joe Touch >Sr. Network Engineer, USAF TSAT Space Segment > > > >*** END PGP VERIFIED MESSAGE *** From dpreed at reed.com Thu Feb 1 08:17:19 2007 From: dpreed at reed.com (David P. Reed) Date: Thu, 01 Feb 2007 11:17:19 -0500 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <200701312241.WAA09104@cisco.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> Message-ID: <45C2128F.5020103@reed.com> What standard puts timestamps in each TCP packet? What standard says that they can be viewed as meaningful by a receiver? Or is this TCP as it might have been but isn't?
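Lloyd's table-of-keys scheme above could be sketched roughly as follows. This is a hypothetical illustration only; `TimestampTable` and its method names are invented, and nothing like this is specified in RFC 1323:

```python
import secrets
import time

class TimestampTable:
    """Hypothetical sketch of the table-of-keys idea: put an opaque key in
    the option field instead of a raw timestamp, and look the send time up
    when the key is reflected back."""

    def __init__(self):
        self._sent = {}  # opaque key -> monotonic send time

    def key_for_new_segment(self):
        key = secrets.randbits(32)          # opaque value, hides the clock
        self._sent[key] = time.monotonic()  # remember when it went out
        return key

    def rtt_from_echo(self, echoed_key):
        sent_at = self._sent.pop(echoed_key, None)
        if sent_at is None:
            return None  # no matching key: stale or forged echo, ignore it
        return time.monotonic() - sent_at
```

A forged echo only produces an RTT sample if the attacker guesses the 32-bit key, which is the anti-DoS argument; David Borman's objection later in the thread is that such opaque values are not monotonic and would break PAWS.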
Timestamps are options in IP, but options in IP are not used by TCP that I know of as information bearing elements - they aren't even guaranteed to be preserved end-to-end (they are not part of the "virtual header" that is end-to-end checksummed). Lloyd Wood wrote: > At Wednesday 31/01/2007 16:02 -0500, Sushant Rewaskar wrote: > >> Hi, >> I agree with Lachlan. In TCP there is no way to know when an ack is lost as >> it carries no "sequence number" of its own. >> > > It can - timestamps are used for disambiguation, and they disambiguate the acks. They can act as unique sequence numbers. > > (In fact, you wouldn't naively issue a timestamp, and expect the other end to copy and reflect it in an ack, as that's open to a variety of DoS attacks. The sender would have a table of timestamp times, with unique keys for each timestamp, and the sender would send out and look for the key in the timestamp option field. To get a better understanding of these issues you may want to read RFC1323.) > > It's possible for the sender to infer that an ack has been lost, based on subsequent receiver behaviour in sending a cumulative ack including packets received that the sender didn't get individual acks for. > > Stupid question: why is a missing ack presumed to automatically be due to congestion, rather than link errors along the path? > > L. > > > > > >> (so in fact not only it is not >> done but it cannot be easily done in the current set-up). 
>> >> To get a better understanding of these issues you may want to read the >> string of papers and RFC on Datagram Congestion Control Protocol (DCCP) >> (http://www.read.cs.ucla.edu/dccp/ ) >> >> >> Take care, >> Sushant Rewaskar >> ----------------------------- >> UNC Chapel Hill >> www.cs.unc.edu/~rewaskar >> >> >> -----Original Message----- >> From: end2end-interest-bounces at postel.org >> [mailto:end2end-interest-bounces at postel.org] On Behalf Of Lachlan Andrew >> Sent: Monday, January 29, 2007 5:18 PM >> To: Detlef Bosau >> Cc: end2end-interest at postel.org >> Subject: Re: [e2e] Stupid Question: Why are missing ACKs not considered >> as indicator for congestion? >> >> Greetings Detlef, >> >> On 29/01/07, Detlef Bosau wrote: >> >>> In TCP, lost / dropped packets are recognised as a congestion indicator. >>> We don't do so with missing ACKs. >>> >>> If a TCP packet is dropped, this is recognised as congestion >>> indication. Shouldn't a dropped ACK packet be seen as congestion >>> indication as well? >>> >> Because ACKs are cumulative, we don't know that separate ACKs were >> sent for each packet. >> >> For example, high-end NICs typically have "interrupt coalescence", >> which delivers a large bunch of packets simultaneously to reduce CPU >> overhead. A single "fat ACK" is sent which cumulatively acknowledges >> all of these packets. This happens even when the receiver is not >> congested. >> >> >> Another factor is that ACKs are typically small compared with data >> packets. The total network throughput is much greater if we throttle >> only the sources contributing most to a given link's congestion, >> namely those sending full data packets over the link. >> >> Cheers, >> Lachlan >> >> -- >> Lachlan Andrew Dept of Computer Science, Caltech >> 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >> Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 >> > > > From dpreed at reed.com Thu Feb 1 08:27:19 2007 From: dpreed at reed.com (David P.
Reed) Date: Thu, 01 Feb 2007 11:27:19 -0500 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <200702010141.BAA15760@cisco.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <200702010141.BAA15760@cisco.com> Message-ID: <45C214E7.90500@reed.com> The design rule of *all* Internet protocols from the beginning is that packets can arrive duplicated and out-of-order from the order sent. Inference about the state of the sender must presume that an incoming packet only tells you something about what was true at some point in the past - and lost acks could just be long-delayed acks. ack reordering, duplication, and frank loss can be inferred only at the end of time (or after all potentially misrouted packets would have been lost due to TTL expiry). What can be inferred on occasion is *probable* loss. This requires a probabilistic model of the end-to-end IP channel, which can only be constructed by experiment and bayesian inference from priors. Probable loss can be used to tweak performance, of course. But the quality of such results cannot be handwaved by an analysis that ignores the reliability of the inference mechanism or its underlying assumptions. Lloyd Wood wrote: > At Wednesday 31/01/2007 16:34 -0800, Lachlan Andrew wrote: > >> Greetings Lloyd, >> >> On 31/01/07, Lloyd Wood wrote: >> >>> It's possible for the sender to infer that an ack has been lost, based on subsequent receiver behaviour in sending a cumulative ack including packets received that the sender didn't get individual acks for. >>> >> No, that was my point. We can't distinguish between ACKs which are >> lost and those which are never sent in the first place. >> > > Yes, we can. If a SACK block is present, it tells you which datagrams were and weren't received. 
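Lloyd's SACK-based inference, quoted above, can be sketched with whole-segment numbers instead of byte sequence ranges (a simplified illustration, not real TCP code; the function name is invented):

```python
def received_segments(cum_ack, sack_blocks, highest_sent):
    """Return which segments the receiver got, per the quoted reasoning:
    everything at or below the cumulative ACK arrived, everything inside
    a SACK block arrived, and the holes in between were lost on the
    forward path. Segments are numbered 1..highest_sent for simplicity."""
    got = set(range(1, cum_ack + 1))   # covered by the cumulative ACK
    for lo, hi in sack_blocks:         # each SACK block is inclusive here
        got.update(range(lo, hi + 1))
    lost = set(range(1, highest_sent + 1)) - got
    return got, lost

# Sender sent segments 1..10; cumulative ACK covers 1..4, SACK reports 7..8.
got, lost = received_segments(4, [(7, 8)], 10)
# Every segment in `got` was received, so an ACK was generated for it
# (modulo delayed ACKs): any such ACK the sender never saw was lost.
```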
> > If a datagram was received, an ack was sent (modulo the delack mechanism), and the datagram will not be called out in the SACK block. > > If the datagram wasn't received, this will be reflected in the SACK block. > > > >> Also, having a unique identifier (like a timestamp) isn't the same as >> having sequence numbers which can say "We're (not) consecutive". The >> latter can detect loss but the former can't. >> > > If you have timestamps on every ack and packet, what's the difference? > > > > >> Cheers, >> Lachlan >> >> -- >> Lachlan Andrew Dept of Computer Science, Caltech >> 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >> Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 >> > > > From touch at ISI.EDU Thu Feb 1 09:53:14 2007 From: touch at ISI.EDU (Joe Touch) Date: Thu, 01 Feb 2007 09:53:14 -0800 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <200702011555.PAA00102@cisco.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <45C1F3B7.70001@isi.edu> <200702011555.PAA00102@cisco.com> Message-ID: <45C2290A.1070005@isi.edu> Lloyd Wood wrote: > At Thursday 01/02/2007 06:05 -0800, Joe Touch wrote: > >> Lloyd Wood wrote: >>> At Wednesday 31/01/2007 16:02 -0500, Sushant Rewaskar wrote: >>>> Hi, >>>> I agree with Lachlan. In TCP there is no way to know when an ack is lost as >>>> it carries no "sequence number" of its own. >>> It can - timestamps are used for disambiguation, and they >>> disambiguate the acks. They can act as unique sequence numbers. >> How so? RFC 1323, Sec 4.2.2: >> >> "Based upon these considerations, we choose a timestamp clock >> frequency in the range 1 ms to 1 sec per tick. " >> >> That suggests that timestamps need not be unique by themselves. 
> > Any implementer would be stupid to actually follow RFC1323 and send > actual timestamp values in the packet - that's a massive DoS hole just > waiting to be exploited. > > Instead (as I mentioned previously in this thread), the sender would > have a table of timestamps and associated unique keys for each packet > sent out. You'd send the key in a packet, and on receiving the same key > reflected back you'd do a match and lookup. This closes the DoS hole - > matching a specific key value exactly is much harder than just faking a > timestamp value to lead to spurious RTT estimates, and not giving away > internal timestamp values also prevents the kind of > how-long-as-this-box-been-up profiling done by e.g. Netcraft. Time is NOT a key. If I can spoof ACKs, there are a lot of other DoS holes around. This one is esoteric at best. >> The timestamps don't have to be unique. The keys to the timestamp >> values do. Only if you spec them that way - as keys. >> This code is probably more straightforward to implement than, oh, >> SACK scoreboarding. Can we please stop trying to use everything except authentication for authentication? Joe -- ---------------------------------------- Joe Touch Sr. Network Engineer, USAF TSAT Space Segment -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070201/940f7187/signature-0001.bin From L.Wood at surrey.ac.uk Thu Feb 1 09:52:33 2007 From: L.Wood at surrey.ac.uk (Lloyd Wood) Date: Thu, 01 Feb 2007 17:52:33 +0000 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? 
In-Reply-To: <45C2128F.5020103@reed.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <45C2128F.5020103@reed.com> Message-ID: <200702011753.RAA10069@cisco.com> At Thursday 01/02/2007 11:17 -0500, David P. Reed wrote: >What standard puts timestamps in each TCP packet? RFC1323 does not preclude including the TCP Timestamp option in each packet sent. (And there are cases where disambiguation of segments/ack would be a useful benefit from doing this. How you compute RTO given more sample information and whether that's just diminishing returns for most common use is a separate issue that is open to question.) The TCP Timestamp option is checksummed by the TCP checksum. >What standard says that they can be viewed as meaningful by a receiver? Well, gee, RFC1323 is only a proposed standard. But attackers will take any hints about internal state that they are given, and they don't pay that much attention to standards. >Or is this TCP as it might have been but isn't? Timestamps are options in IP, Is IP option 4 even used by anything? >but options in IP are not used by TCP that I know of as information bearing elements - they aren't even guaranteed to be preserved end-to-end (they are not part of the "virtual header" that is end-to-end checksummed). > >Lloyd Wood wrote: >>At Wednesday 31/01/2007 16:02 -0500, Sushant Rewaskar wrote: >> >>>Hi, >>>I agree with Lachlan. In TCP there is no way to know when an ack is lost as >>>it carries no "sequence number" of its own. >> >>It can - timestamps are used for disambiguation, and they disambiguate the acks. They can act as unique sequence numbers. >> >>(In fact, you wouldn't naively issue a timestamp, and expect the other end to copy and reflect it in an ack, as that's open to a variety of DoS attacks. 
The sender would have a table of timestamp times, with unique keys for each timestamp, and the sender would send out and look for the key in the timestamp option field. To get a better understanding of these issues you may want to read RFC1323.) >> >>It's possible for the sender to infer that an ack has been lost, based on subsequent receiver behaviour in sending a cumulative ack including packets received that the sender didn't get individual acks for. >> >>Stupid question: why is a missing ack presumed to automatically be due to congestion, rather than link errors along the path? >> >>L. >> >> >> >> >> >>>(so in fact not only it is not >>>done but it cannot be easily done in the current set-up). >>>To get a better understanding of these issues you may want to read the >>>string of papers and RFC on Datagram Congestion Control Protocol (DCCP) >>>(http://www.read.cs.ucla.edu/dccp/ ) >>> >>> >>>Take care, >>>Sushant Rewaskar >>>----------------------------- >>>UNC Chapel Hill >>>www.cs.unc.edu/~rewaskar >>> >>>-----Original Message----- >>>From: end2end-interest-bounces at postel.org >>>[mailto:end2end-interest-bounces at postel.org] On Behalf Of Lachlan Andrew >>>Sent: Monday, January 29, 2007 5:18 PM >>>To: Detlef Bosau >>>Cc: end2end-interest at postel.org >>>Subject: Re: [e2e] Stupid Question: Why are missing ACKs not considered >>>as indicator for congestion? >>> >>>Greetings Detlef, >>> >>>On 29/01/07, Detlef Bosau wrote: >>> >>>>In TCP, lost / dropped packets are recognised as a congestion indicator. >>>>We don't do so with missing ACKs. >>>> >>>>If a TCP packet is dropped, this is recognised as congestion >>>>indication. Shouldn't a dropped ACK packet be seen as congestion >>>>indication as well? >>>> >>>Because ACKs are cumulative, we don't know that separate ACKs were >>>sent for each packet. >>> >>>For example, high-end NICs typically have "interrupt coalescence", >>>which delivers a large bunch of packets simultaneously to reduce CPU >>>overhead.
A single "fat ACK" is sent which cumulatively acknowledges >>>all of these packets. This happens even when the receiver is not >>>congested. >>> >>> >>>Another factor is that ACKs are typically small compared with data >>>packets. The total network throughput is much greater if we throttle >>>only the sources contributing most to a given link's congestion, >>>namely those sending full data packets over the link. >>> >>>Cheers, >>>Lachlan >>> >>>-- >>>Lachlan Andrew Dept of Computer Science, Caltech >>>1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >>>Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 >>> >> >> >> From david.borman at windriver.com Thu Feb 1 10:41:40 2007 From: david.borman at windriver.com (David Borman) Date: Thu, 1 Feb 2007 12:41:40 -0600 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <200702011555.PAA00102@cisco.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <45C1F3B7.70001@isi.edu> <200702011555.PAA00102@cisco.com> Message-ID: <1A625E01-1959-4722-88D4-BCDC4452B2A1@windriver.com> On Feb 1, 2007, at 9:54 AM, Lloyd Wood wrote: > At Thursday 01/02/2007 06:05 -0800, Joe Touch wrote: > >> Lloyd Wood wrote: >>> At Wednesday 31/01/2007 16:02 -0500, Sushant Rewaskar wrote: >>>> Hi, >>>> I agree with Lachlan. In TCP there is no way to know when an ack >>>> is lost as >>>> it carries no "sequence number" of its own. >>> >>> It can - timestamps are used for disambiguation, and they >>> disambiguate the acks. They can act as unique sequence numbers. >> >> How so? RFC 1323, Sec 4.2.2: >> >> "Based upon these considerations, we choose a timestamp clock >> frequency in the range 1 ms to 1 sec per tick. " >> >> That suggests that timestamps need not be unique by themselves. 
> > Any implementer would be stupid to actually follow RFC1323 and send > actual timestamp values in the packet - that's a massive DoS hole > just waiting to be exploited. That's a bit harsh. :-) > > Instead (as I mentioned previously in this thread), the sender > would have a table of timestamps and associated unique keys for > each packet sent out. You'd send the key in a packet, and on > receiving the same key reflected back you'd do a match and lookup. > This closes the DoS hole - matching a specific key value exactly is > much harder than just faking a timestamp value to lead to spurious > RTT estimates, and not giving away internal timestamp values also > prevents the kind of how-long-as-this-box-been-up profiling done by > e.g. Netcraft. > > The timestamps don't have to be unique. The keys to the timestamp > values do. > > This code is probably more straightforward to implement than, oh, > SACK scoreboarding. If you are concerned about bogus Timestamps coming back, you don't have to hide them. Just as you suggest using unique keys, you would just keep track of the actual timestamp values that you have sent. It shouldn't be any more work to keep track of and match exact timestamp values than keeping track of and matching your unique key values. Besides, using keys instead of Timestamps will break PAWS and render your connection unusable if the value put into the Timestamps option ever goes backward. The unique key generation would need to follow the requirements in 1323 for filling in the Timestamps option; i.e. the sequence of unique keys would need to look something like a sequence of timestamps. Also, there is no requirement that the Timestamps values reflect actual clock values, or that they be consistent across connections. Each connection can start at a random value for the starting point for ticks on that connection. They can even represent different scales. 
PAWS doesn't need to know what the timestamp values represent, it only depends on the fact that the values are increasing over time. -David Borman From touch at ISI.EDU Thu Feb 1 10:45:58 2007 From: touch at ISI.EDU (Joe Touch) Date: Thu, 01 Feb 2007 10:45:58 -0800 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <1A625E01-1959-4722-88D4-BCDC4452B2A1@windriver.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <45C1F3B7.70001@isi.edu> <200702011555.PAA00102@cisco.com> <1A625E01-1959-4722-88D4-BCDC4452B2A1@windriver.com> Message-ID: <45C23566.1080707@isi.edu> David Borman wrote: ... > Also, there is no requirement that the Timestamps values reflect actual > clock values, or that they be consistent across connections. Each > connection can start at a random value for the starting point for ticks > on that connection. They can even represent different scales. PAWS > doesn't need to know what the timestamp values represent, it only > depends on the fact that the values are increasing over time. They need to represent 'real time'; the offset is irrelevant (except for PAWS, as you note), but it needs to be incremented in a realistic way: The timestamp value to be sent in TSval is to be obtained from a (virtual) clock that we call the "timestamp clock". Its values must be at least approximately proportional to real time, in order to measure actual RTT. I don't see how they could represent different scales and be compliant with current spec, although, as you hint, the scale could be kept on a per connection basis, but that would be off-spec. Joe -- ---------------------------------------- Joe Touch Sr. Network Engineer, USAF TSAT Space Segment -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070201/ab29acc4/signature.bin From david.borman at windriver.com Thu Feb 1 11:03:15 2007 From: david.borman at windriver.com (David Borman) Date: Thu, 1 Feb 2007 13:03:15 -0600 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <45C23566.1080707@isi.edu> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <45C1F3B7.70001@isi.edu> <200702011555.PAA00102@cisco.com> <1A625E01-1959-4722-88D4-BCDC4452B2A1@windriver.com> <45C23566.1080707@isi.edu> Message-ID: <1187CD8F-C37C-40E8-ACFF-FA21D87B5D3B@windriver.com> On Feb 1, 2007, at 12:45 PM, Joe Touch wrote: > > > David Borman wrote: > ... >> Also, there is no requirement that the Timestamps values reflect >> actual >> clock values, or that they be consistent across connections. Each >> connection can start at a random value for the starting point for >> ticks >> on that connection. They can even represent different scales. PAWS >> doesn't need to know what the timestamp values represent, it only >> depends on the fact that the values are increasing over time. > > They need to represent 'real time'; the offset is irrelevant > (except for > PAWS, as you note), but it needs to be incremented in a realistic way: > > The timestamp value to be sent in TSval is to be obtained from a > (virtual) clock that we call the "timestamp clock". Its values > must be at least approximately proportional to real time, in > order > to measure actual RTT. > > I don't see how they could represent different scales and be compliant > with current spec, although, as you hint, the scale could be kept on a > per connection basis, but that would be off-spec. The key point of this part of the RFC is "in order to measure actual RTT". 
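The constraint being debated here (option values roughly proportional to real time, with an origin that is private to the sender and may differ per connection) could be sketched as a virtual timestamp clock. This is an illustrative sketch, not the behavior of any particular stack:

```python
import random
import time

class VirtualTimestampClock:
    """Per-connection timestamp clock: a random starting offset hides the
    host's uptime, but values still tick forward with real time, so PAWS
    (which only needs monotonically increasing values) keeps working."""

    def __init__(self, ticks_per_second=1000):   # 1 ms tick, inside 1323's 1 ms..1 s range
        self._offset = random.getrandbits(31)    # random per-connection origin
        self._rate = ticks_per_second
        self._epoch = time.monotonic()

    def tsval(self):
        ticks = int((time.monotonic() - self._epoch) * self._rate)
        return (self._offset + ticks) & 0xFFFFFFFF  # 32-bit wrap, as in the option

    def rtt_from_echo(self, echoed_tsval):
        # Only the sender knows its offset and rate, so only it can turn the
        # echoed value back into an elapsed time.
        ticks = (self.tsval() - echoed_tsval) & 0xFFFFFFFF
        return ticks / self._rate
```

The offset hides uptime from profiling, but unlike a table of random nonces the values never go backwards, which is the property PAWS depends on.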
Since it is the generator of the timestamp that does the measurement, what is required is that when the timestamp comes back to the sender, it knows what it represents and how to use that to calculate the actual RTT. At a minimum, the sender has to keep state about what scale is being used, since the RFC allows a whole range. There is no requirement that the timestamp value have any meaning to anyone except the sender of the timestamp, with the exception that for PAWS the values being put into the timestamps option have to be increasing over time, and not go backwards. If an implementor chooses to implement a mapping scheme that obfuscates the values that it puts into the Timestamps option, that does not violate the spirit of the RFC as long as the obfuscated values kind of look like timestamps, i.e. they do not go backwards over time and go up by at least 1 for every wrap of the sequence space so that PAWS isn't broken. -David Borman From touch at ISI.EDU Thu Feb 1 11:26:20 2007 From: touch at ISI.EDU (Joe Touch) Date: Thu, 01 Feb 2007 11:26:20 -0800 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <1187CD8F-C37C-40E8-ACFF-FA21D87B5D3B@windriver.com> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <45C1F3B7.70001@isi.edu> <200702011555.PAA00102@cisco.com> <1A625E01-1959-4722-88D4-BCDC4452B2A1@windriver.com> <45C23566.1080707@isi.edu> <1187CD8F-C37C-40E8-ACFF-FA21D87B5D3B@windriver.com> Message-ID: <45C23EDC.4000700@isi.edu> David Borman wrote: > > On Feb 1, 2007, at 12:45 PM, Joe Touch wrote: .... >> They need to represent 'real time'; the offset is irrelevant (except for >> PAWS, as you note), but it needs to be incremented in a realistic way: >> >> The timestamp value to be sent in TSval is to be obtained from a >> (virtual) clock that we call the "timestamp clock". 
Its values >> must be at least approximately proportional to real time, in order >> to measure actual RTT. >> >> I don't see how they could represent different scales and be compliant >> with current spec, although, as you hint, the scale could be kept on a >> per connection basis, but that would be off-spec. > > The key point of this part of the RFC is "in order to measure actual > RTT". ... > If an implementor chooses to implement a mapping scheme that obfuscates > the values that it puts into the Timestamps option, that does not > violate the spirit of the RFC as long as the obfuscated values kind > of look like timestamps, i.e. they do not go backwards over time and go > up by at least 1 for every wrap of the sequence space so that PAWS isn't > broken. AOK - I guess the devil is in "proportional". The actual ratio can be kept private, but the trick of "use a table of nonces" doesn't work for the reasons you note. Joe -- ---------------------------------------- Joe Touch Sr. Network Engineer, USAF TSAT Space Segment -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070201/dfe47e11/signature-0001.bin From detlef.bosau at web.de Thu Feb 1 12:54:29 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Thu, 01 Feb 2007 21:54:29 +0100 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <45C1BFF2.9080802@net.infocom.uniroma1.it> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <200702010141.BAA15760@cisco.com> <1170314055.4775.12.camel@lap10-c703.uibk.ac.at> <45C1BFF2.9080802@net.infocom.uniroma1.it> Message-ID: <45C25385.1060801@web.de> Hi Francesco, Francesco Vacirca wrote: > Some link layers use a strongest FEC to protect header. 
I heard about this and it is somewhat confusing. Particularly as I find it difficult to imagine that a sender switches between different convolutional codes within an IP packet. However, your post perhaps gives me a clue: > E.g. in some UMTS coding scheme the link layer employs a 1/3 > codification for RLC header, whereas the payload can use a different > scheme (e.g. from 4/5 to 1/3)... Are these really different coding schemes? Or is it the same convolutional code but differently punctured? So in fact, you start with a hardly punctured frame, thus a Viterbi decoder would hopefully produce only little bit errors, and afterwards (after the header) your frame is punctured more severely? Or do you, an alternate approach, use differently punctured RLP frames for an IP packet's header and tail? > Maybe it could be applied also to TCP. Note that this can decrease the > goodput in case of non lossy links... obviously it depends on the > ratio between useful bits and transmitted bits. I heard of it in the context of VoIP over mobile wireless networks. (To tell my honest opinion on this one: That's a hoax even not worth wasting a word on it.) > > In the 802.11 standard some part of the packet (MAC header) is sent > with a different rate to be more protected against channel impairments > and also for compatibility purposes. A cross layer approach could > adopt low rate also for TCP header (also IP obviously)... but I do not > think that the benefits are more than disadvantages. I think the question is: What's the problem for this solution? WRT VoIP the mess is clear: For a voice stream, a media stream in general, you need three parts of information. You need to know - what to be played out - where and - when. In TDM, "where" and "when" are cared for by the scheduler and so you're even free to accept errors in the "what". In VoIP over packet switching the "where" and the "when" suffers the same errors like all other data. 
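[Francesco's point that the benefit of protecting only the header "depends on the ratio between useful bits and transmitted bits" can be made concrete with a quick calculation. The sizes and code rates below are illustrative assumptions for a sketch, not taken from any actual UMTS or EDGE transport format:]

```python
# Back-of-the-envelope cost of unequal error protection (UEP):
# protect the TCP/IP header with a strong rate-1/3 code while the
# payload uses a weaker rate-4/5 code, versus coding everything at
# one uniform rate. Sizes/rates are assumptions for illustration.

HEADER_BYTES = 40     # assumed TCP/IP header size
PAYLOAD_BYTES = 1460  # assumed payload size

def transmitted_bits(header_rate, payload_rate):
    """Channel bits needed to carry one packet at the given code rates."""
    return HEADER_BYTES * 8 / header_rate + PAYLOAD_BYTES * 8 / payload_rate

useful = (HEADER_BYTES + PAYLOAD_BYTES) * 8  # information bits per packet

for name, tx in [("UEP header 1/3, payload 4/5", transmitted_bits(1/3, 4/5)),
                 ("uniform rate 4/5", transmitted_bits(4/5, 4/5)),
                 ("uniform rate 1/3", transmitted_bits(1/3, 1/3))]:
    print(f"{name}: {tx:.0f} channel bits, efficiency {useful / tx:.2f}")
```

With these numbers the UEP scheme costs only a few percent of efficiency relative to coding everything at 4/5 (0.77 vs. 0.80), which is the trade-off at issue on a non-lossy link: the stronger header code buys robustness at a modest goodput price.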
From francesco at net.infocom.uniroma1.it Fri Feb 2 02:39:32 2007 From: francesco at net.infocom.uniroma1.it (Francesco Vacirca) Date: Fri, 02 Feb 2007 11:39:32 +0100 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: <45C25385.1060801@web.de> References: <45BE5DAF.5040701@web.de> <002c01c7457b$2580f210$7a850298@cs.unc.edu> <200701312241.WAA09104@cisco.com> <200702010141.BAA15760@cisco.com> <1170314055.4775.12.camel@lap10-c703.uibk.ac.at> <45C1BFF2.9080802@net.infocom.uniroma1.it> <45C25385.1060801@web.de> Message-ID: <45C314E4.10605@net.infocom.uniroma1.it> Detlef Bosau wrote: > Hi Francesco, > > Francesco Vacirca wrote: >> Some link layers use a strongest FEC to protect header. > > I heard about this and it is somewhat confusing. Particularly as I find > it difficult to imagine that a sender switches between different > convolutional codes within an IP packet. However, your post perhaps > gives me a clue: >> E.g. in some UMTS coding scheme the link layer employs a 1/3 >> codification for RLC header, whereas the payload can use a different >> scheme (e.g. from 4/5 to 1/3)... > > Are these really different coding schemes? Or is it the same > convolutional code but differently punctured? So in fact, you start with > a hardly punctured frame, thus a Viterbi decoder would hopefully produce > only little bit errors, and afterwards (after the header) your frame is > punctured more severely? I cannot find the reference to the UMTS specification, but in the EDGE system this is done by using the same convolutional code with different puncturing (see 3GPP TS 43.064). > > Or do you, an alternate approach, use differently punctured RLP frames > for an IP packet's header and tail? >> Maybe it could be applied also to TCP. Note that this can decrease the >> goodput in case of non lossy links... obviously it depends on the >> ratio between useful bits and transmitted bits. 
> > I heard of it in the context of VoIP over mobile wireless networks. > > (To tell my honest opinion on this one: That's a hoax even not worth > wasting a word on it.) > > >> >> In the 802.11 standard some part of the packet (MAC header) is sent >> with a different rate to be more protected against channel impairments >> and also for compatibility purposes. A cross layer approach could >> adopt low rate also for TCP header (also IP obviously)... but I do not >> think that the benefits are more than disadvantages. > > I think the question is: What's the problem for this solution? > > WRT VoIP the mess is clear: For a voice stream, a media stream in > general, you need three parts of information. > You need to know > - what to be played out > - where and > - when. > > In TDM, "where" and "when" are cared for by the scheduler and so you're > even free to accept errors in the "what". > > In VoIP over packet switching the "where" and the "when" suffers the > same errors like all other data. > > > From bala at research.att.com Thu Feb 1 10:26:33 2007 From: bala at research.att.com (Balachander Krishnamurthy) Date: Thu, 01 Feb 2007 13:26:33 -0500 Subject: [e2e] CFP: SRUTI'07 Message-ID: <200702011826.l11IQXA6010076@penguin.research.att.com> 3rd Workshop on Steps to Reducing Unwanted Traffic on the Internet (SRUTI '07) June 18, 2007 Santa Clara CA, USA Sponsored by USENIX Important Dates Submissions due: Tuesday, April 17, 2007, 0400 UTC Notification of acceptance: Saturday, May 5, 2007 Final papers due: Tuesday, May 15, 2007 Workshop Organizers Program Chair Steven M. 
Bellovin, Columbia University Program Committee Paul Barford University of Wisconsin Pei Cao Stanford University Richard Clayton University of Cambridge Bill Cheswick Nick Feamster Georgia Tech Farnam Jahanian University of Michigan Tadayoshi Kohno University of Washington Athina Markopoulou University of California, Irvine Chris Morrow Verizon Business Sean Smith Dartmouth University Oliver Spatscheck AT&T Labs-Research Lakshmi Subramanian New York University Paul van Oorschot Carleton University Yi-Min Wang Microsoft Research Steering Committee Steven M. Bellovin, Columbia University Balachander Krishnamurthy, AT&T Labs--Research Ellie Young, USENIX Overview Attacks on the Internet continue apace, with unwanted traffic such as phishing, spam, and distributed denial-of-service attacks increasing steadily. Such unwanted traffic is seen in many protocols (IP, TCP, DNS, BGP, and HTTP) and applications (e.g., email, Web), with increasing economic motivation behind them. SRUTI seeks research on the unwanted traffic problem that explores the underground economy by examining commonalities in attacks and possible solutions. Original research, promising ideas, and possible solutions at all levels of the protocol stack are sought. We look for ideas in networking and systems, and insights from other areas such as economics. SRUTI seeks to foster better connections between academic and industrial research communities as well as those operating various pieces of the Internet infrastructure. SRUTI session chairs will play the role of a discussant, presenting a summary of the papers in the session and a state-of-the-art synopsis of the topic. The workshop will be interactive with time for questions and answers. Submissions must contribute to improving the current understanding of unwanted traffic and/or suggestions for reducing it. All submissions to SRUTI '07 will be via the Web submission form, which will be available here soon. 
The proceedings of the workshop will be published by Usenix. To ensure a productive workshop environment, attendance will be by invitation and/or acceptance of paper submission. Topics Relevant topics include: * Architectural solutions to the unwanted traffic problem * Scientific assessment of the spread and danger of the attacks * Practical countermeasures to various aspects of unwanted traffic (phishing, spam, DoS, etc.) * Cross-layer solutions and solutions to combination attacks * Attacks on emerging technologies (e.g., sensors, VOIP, PDAs) and possible countermeasures * Privacy and anonymity * Intrusion avoidance, detection, and response * All forms of malicious code * Analysis of protocols and systems vulnerabilities * Attacks on specific network technologies (e.g., wireless networks) * New types of solutions: incentive-based, economic, statistical, collaborative, etc. Paper Submissions All submissions must be in English and must include a title and the authors' names and affiliations. Submissions should be no more than six (6) 8.5" x 11" pages long and must be formatted in 2 columns, using 10 point Times Roman type on 12 point leading, in a text block of 6.5" by 9". Papers should be submitted in PDF or Postscript only. PDF users should use "Type 1" fonts instead of "Type 3," and should embed and subset all fonts. You can find instructions on how to do this at https://www.fastlane.nsf.gov/NSFHelp/Printdocs/FastLane_Help/pd_generate_pdf_files/pd_generate_pdf_files.pdf and http://ismir2005.ismir.net/pdf.html. Each submission should have a contact author who should provide full contact information (email, phone, fax, mailing address). One author of each accepted paper will be required to present the work at the workshop. Authors must submit their papers by 0400 UTC, Tuesday, April 17, 2007. This is a hard deadline---no extensions will be given. Final papers are due on Tuesday, June 6, 2007, to be included in the workshop proceedings. 
Simultaneous submission of the same work to multiple venues, submission of previously published work, and plagiarism constitute dishonesty or fraud. USENIX, like other scientific and technical conferences and journals, prohibits these practices and may, on the recommendation of a program chair, take action against authors who have committed them. In some cases, program committees may share information about submitted papers with other conference chairs and journal editors to ensure the integrity of papers under consideration. If a violation of these principles is found, sanctions may include, but are not limited to, barring the authors from submitting to or participating in USENIX conferences for a set period, contacting the authors' institutions, and publicizing the details of the case. Authors uncertain whether their submission meets USENIX's guidelines should contact the program chair at sruti07chair at usenix.org or the USENIX office, submissionspolicy at usenix.org. Accepted material may not be published in other conferences or journals for one year from the date of acceptance by USENIX. Papers accompanied by nondisclosure agreement forms will not be read or reviewed. All submissions will be held in confidence prior to publication of the technical program, both as a matter of policy and in accordance with the U.S. Copyright Act of 1976. How to Submit Authors are required to submit papers by 0400 UTC, Tuesday, April 17, 2007. This is a hard deadline -- no extensions will be given. All submissions to SRUTI '07 must be electronic, in PDF or PostScript, via a Web form, which will be available here soon. Authors will be notified of acceptance decisions via email by Saturday, May 5, 2007. If you do not receive notification by that date, contact the program chair at sruti07chair at usenix.org. 
From craig at aland.bbn.com Fri Feb 2 10:49:01 2007 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 02 Feb 2007 13:49:01 -0500 Subject: [e2e] network NIC paper in ASPLOS Message-ID: <20070202184901.D863C64@aland.bbn.com> Hi folks: Given that E2E has been interested for decades in NIC design, I thought I'd point out the recent paper at ASPLOS, "Integrated Network Interfaces for High-Bandwidth TCP/IP" by Binkert, Saidi, and Reinhardt. Basically they are doing a WITLESS design where the NIC sits on the CPU chip. All good fun and basically the result you'd expect based on the WITLESS, Afterburner and UPENN NIC experience in the early 1990s. Nice to see good ideas validated in a new context. Craig PS: For folks who don't remember WITLESS, Afterburner and the UPENN work. WITLESS (a Workstation Interface that's Low-Cost, Efficient, Scalable and Stupid) was presented by Van Jacobson in various talks -- probably the most accessible is the tutorial notes from SIGCOMM '90. Afterburner -- Banks and Prudence, "A High Performance Network Architecture for a PA-RISC Workstation" IEEE JSAC, February 1993. UPENN - Traw and Smith, "Hardware/Software Performance Organization of a High-Performance ATM Host Interface," IEEE JSAC, February 1993. [The 1993 JSAC was a special issue on host interface design and has some other nifty papers, such as the Davie paper on OSIRIS]. From detlef.bosau at web.de Sun Feb 4 11:04:37 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 04 Feb 2007 20:04:37 +0100 Subject: [e2e] Stupid Question: Why are missing ACKs not considered as indicator for congestion? In-Reply-To: References: <45BE5DAF.5040701@web.de> Message-ID: <45C62E45.4050607@web.de> Hi Lachlan, Lachlan Andrew wrote: > > Because ACKs are cumulative, we don't know that separate ACKs were > sent for each packet. > it is interesting to see that this discussion goes in the direction "we have no mechanism to do so" or "we have a mechanism to do so". 
I think we have two issues here: 1. Is ACK drop an indicator for something? One contributor in this thread wrote something like: yes it is, but it's not yet clear for what. 2. If ACK drops indicate anything, how can we detect them? However, for my own work the discussion turned into a different direction. At the moment, I'm working on an ACK pacing / ACK spacing approach. And there it is in fact a problem that ACKs are cumulative. When we do ACK pacing / spacing and our ACK packets are nicely spaced on their travel to a TCP sender, this doesn't help when there are large gaps in the ACK flow, i.e. there may be a number of consecutive ACK packets which acknowledge one MSS worth of data each - and then the next ACK packet acknowledges 100*MSS worth of data in one step. This would result in a large bunch of data sent by the TCP sender and most likely in congestion drops. In addition, ACK pacing or spacing respectively has to consider the rate of the ACK _numbers_ and not only the ACK packets without taking into account which sequence numbers are acknowledged. Otherwise an ACK pacing / spacing algorithm could be easily undermined by ACK drops. Basically, this is the rationale I was looking for :-) I found two drafts particularly helpful on that one: @misc{spacingdraft, author = "C.~Partridge", title = "{ACK} Spacing for High Delay-Bandwidth Paths with Insufficient Buffering", year = "1997", month = "July", howpublished = "IRTF End to End Research Group Draft" } and http://www.icir.org/floyd/papers/draft-floyd-tcpm-ackcc-00d.txt (many thanks to Sally for mailing this link to me). Detlef From doug.leith at nuim.ie Mon Feb 5 06:45:35 2007 From: doug.leith at nuim.ie (Douglas Leith) Date: Mon, 05 Feb 2007 14:45:35 +0000 Subject: [e2e] Why are missing ACKs not considered as indicator for congestion? In-Reply-To: References: Message-ID: Perhaps I can add my own question to this. The discussion so far has mostly centered on whether it's possible to detect ack losses. 
But say we could measure these ack congestion/losses - what would the right thing be to do ? Should we treat loss of any packet (ack or data) as congestion, or just consider loss of data packets as being meaningful ? This isn't as abstract a question as it might at first seem. Most delay-based algorithms use two-way delay and so react to queueing of acks as well as data packets. That is, unlike loss based algorithms they *do* treat ack and data packets in similar ways for congestion control purposes. Is this a good thing or not ? Doug On 4 Feb 2007, at 20:00, end2end-interest-request at postel.org wrote: > [...] From detlef.bosau at web.de Mon Feb 5 08:50:23 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 05 Feb 2007 17:50:23 +0100 Subject: [e2e] Why are missing ACKs not considered as indicator for congestion? In-Reply-To: References: Message-ID: <45C7604F.5030100@web.de> Douglas Leith wrote: > Perhaps I can add my own question to this. The discussion so far has > mostly centered on whether it's possible to detect ack losses. But say > we could measure these ack congestion/losses - what would the right > thing be to do ? Should we treat loss of any packet (ack or data) as > congestion, or just consider loss of data packets as being meaningful ? > > This isn't as abstract a question as it might at first seem. Most > delay-based algorithms use two-way delay and so react to queueing of > acks as well as data packets. That is, unlike loss based algorithms > they *do* treat ack and data packets in similar ways for congestion > control purposes. Is this a good thing or not ? > > Doug > There is a very helpful drawing on this one in Reiner Ludwig's PhD dissertation. It is available online at http://iceberg.cs.berkeley.edu/papers/Ludwig-Diss00/index.html And particularly refer to Chapter 2 http://iceberg.cs.berkeley.edu/papers/Ludwig-Diss00/chapter2_background.pdf Figure 2-1 "The Ack Clock" There is a similar drawing in the congavoid paper. However, this drawing here exhibits the relationship between CWND and the number of ACK packets in flight. For simplicity, let's assume that each TCP packet is acknowledged, i.e. we have no delayed ack. Let's assume further, every TCP packet has maximum size, i.e. all TCP segments have length MSS. 
When we further neglect the processing time, the TCP and ACK packets form some kind of paternoster which runs around the path and has exactly CWND/MSS packets. When an ACK packet reaches the sender, a TCP packet is clocked out and vice versa, when a TCP packet reaches the receiver, an ACK packet is clocked out. So, if the semantics of CWND is "packets allowed to be in flight so that there is no problem", we would have to treat ACK losses exactly as TCP losses, i.e. as an indicator of congestion. If the semantics of CWND is "packets allowed to be in flight so that there is no TCP packet dropped" we could tend to ignore ACK losses. Personally, I always have the idea of a drawing like "The Ack Clock" in mind and that's one of the reasons for my original question. Because from that drawing, there is no difference between TCP-drop and ACK drop. From mascolo at poliba.it Mon Feb 5 09:00:07 2007 From: mascolo at poliba.it (Saverio Mascolo) Date: Mon, 5 Feb 2007 18:00:07 +0100 Subject: [e2e] Why are missing ACKs not considered as indicator for congestion? Message-ID: <011001c74947$1b423d20$723bccc1@HPSM> On 2/5/07, Douglas Leith wrote: Perhaps I can add my own question to this. The discussion so far has mostly centered on whether it's possible to detect ack losses. But say we could measure these ack congestion/losses - what would the right thing be to do ? Should we treat loss of any packet (ack or data) as congestion, or just consider loss of data packets as being meaningful ? This isn't as abstract a question as it might at first seem. Most delay-based algorithms use two-way delay and so react to queueing of acks as well as data packets. That is, unlike loss based algorithms they *do* treat ack and data packets in similar ways for congestion control purposes. Is this a good thing or not ? Doug ------------------------------------------------------------------------------ Of course it is not good. 
We had a paper on performance evaluation of NewReno, Vegas and TCP Westwood+ in ACM CCR 2004 showing that reverse traffic (i.e. ack queuing) shut down forward Vegas traffic. The effect of reverse traffic was also shown for Fast TCP in a paper we had at pfldnet 06. To conclude, in case of delay based control you should measure forward delay; RTT is not enough. saverio -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20070205/7e21cd1a/attachment.html From mallman at icir.org Tue Feb 6 09:46:59 2007 From: mallman at icir.org (Mark Allman) Date: Tue, 06 Feb 2007 12:46:59 -0500 Subject: [e2e] Are we doing sliding window in the Internet? In-Reply-To: <20070103214811.GA27322@grc.nasa.gov> Message-ID: <20070206174659.9541C1750E9@lawyers.icir.org> An embedded and charset-unspecified text was scrubbed... Name: not available Url: http://mailman.postel.org/pipermail/end2end-interest/attachments/20070206/acce33f8/attachment.ksh -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 185 bytes Desc: not available Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070206/acce33f8/attachment.bin From garmitage at swin.edu.au Tue Feb 6 16:34:48 2007 From: garmitage at swin.edu.au (grenville armitage) Date: Wed, 07 Feb 2007 11:34:48 +1100 Subject: [e2e] CFP: Netgames 2007 - 6th Annual Workshop on Network and Systems Support for Games Message-ID: <45C91EA8.7080406@swin.edu.au> +++++++++++++++++++++ Netgames 2007 Call for Papers +++++++++++++++++++++++ 6th Annual Workshop on Network and Systems Support for Games: Netgames 2007 September 19th and 20th 2007, Melbourne, Australia http://caia.swin.edu.au/netgames2007/ OVERVIEW ======== The NetGames workshop brings together researchers and developers from academia and industry to present new research in understanding networked games and in enabling the next generation of them. Submissions are sought in any area related to networked games. In particular, topics of interest include (but are not limited to) game-related work in: * Network measurement, usage studies and traffic modeling * System benchmarking, performance evaluation, and provisioning * Latency issues and lag compensation techniques * Cheat detection and prevention * Service platforms, scalable system architectures, and middleware * Network protocol design * Multiplayer mobile and resource-constrained gaming systems * Augmented physical gaming systems * User and usability studies * Quality of service and content adaptation * Artificial intelligence * Security, authentication, accounting and digital rights management * Networks of sensors and actuators for games * Impact of online game growth on network infrastructure * Text and voice messaging in games SUBMISSIONS =========== We solicit submissions of full papers with a limit of 6 pages (inclusive of all figures, references and appendices). 
Authors must submit their paper in PDF and use single-spaced, double column ACM format with 11pt font. Reviews will be single-blind; authors must include their names and affiliations on the first page. Papers will be judged on their relevance, technical content and correctness, and the clarity of presentation of the research. Accepted papers will be published in the workshop proceedings pending the participation of the authors in the workshop. Paper submissions will be done via the web. Detailed paper submission guidelines will be available at http://caia.swin.edu.au/netgames2007/submissions.html by the end of March 2007. COMMITTEE ========= WORKSHOP CHAIR: Grenville Armitage (Swinburne University of Technology, Australia) PROGRAM COMMITTEE: Philip Branch (Swinburne University of Technology, Australia) Adrian Cheok (National University of Singapore) Mark Claypool (Worcester Polytechnic Institute, USA) Jon Crowcroft (University of Cambridge, UK) Wu-chang Feng (Portland State University, USA) Carsten Griwodz (University of Oslo, Norway) Tristan Henderson (Dartmouth College, USA) Yutaka Ishibashi (Nagoya Institute of Technology, Japan) Yoshihiro Kawahara (The University of Tokyo, Japan) Martin Mauve (Heinrich-Heine-Universitat, Germany) John Miller (Microsoft Research, Cambridge, UK) Brian Levine (University of Massachusetts Amherst, USA) Wei Tsang Ooi (National University of Singapore) Lars Wolf (TU Braunschweig, Germany) Hartmut Ritter (Freie Universitat Berlin, Germany) Farzad Safaei (University of Wollongong, Australia) Jochen Schiller (Freie Universitat Berlin, Germany) Sebastian Zander (Swinburne University of Technology, Australia) WEBSITE / PUBLICITY: Lucas Parry (Swinburne University of Technology, Australia) LOCAL ARRANGEMENTS: Warren Harrop (Swinburne University of Technology, Australia) Lawrence Stewart (Swinburne University of Technology, Australia) Netgames 2007 will be held on September 19th and 20th 2007 in Melbourne, Australia. 
KEY DATES ========= Paper registration opens: March 25th, 2007 Paper registration closes: May 13th, 2007 (11:59pm New York) Full-paper submission: May 20th, 2007 (11:59pm New York) Notification to authors: July 6th, 2007 Early-bird and presenter registration opens: July 13th, 2007 Camera ready manuscript: August 10th, 2007 (11:59pm New York) Early-bird and presenter registration closes: August 10th, 2007 Workshop: September 19-20th, 2007 +++++++++++++++++++++ Netgames 2007 Call for Papers +++++++++++++++++++++++ From doug.leith at nuim.ie Sun Feb 11 04:26:06 2007 From: doug.leith at nuim.ie (Douglas Leith) Date: Sun, 11 Feb 2007 12:26:06 +0000 Subject: [e2e] benchmarking new tcp congestion control algorithms Message-ID: <77770EA8-CBA3-4690-A2E8-6AE24EB3089A@nuim.ie> I've put together a set of short ns scripts to carry out the tcp benchmark tests described in Experimental evaluation of high-speed congestion control protocols, Li, Y.T, Leith, D., Shorten, R. Transactions on Networking, 2007 (see http://www.hamilton.ie/net/eval/ToNfinal.pdf). The ns scripts are at http://www.hamilton.ie/net/eval/tcptesting.zip. It's now very easy to rerun these tests against proposed new congestion control algorithms. Baseline tests for standard tcp, high-speed tcp, scalable tcp, bic tcp, fast tcp and htcp are reported in http://www.hamilton.ie/net/eval/ToNfinal.pdf and these experimental measurements can be directly compared against the simulation results generated by the script. Let me know if you have any comments. 
Doug

From francesco at net.infocom.uniroma1.it Sun Feb 11 19:55:48 2007 From: francesco at net.infocom.uniroma1.it (francesco@net.infocom.uniroma1.it) Date: Mon, 12 Feb 2007 04:55:48 +0100 Subject: [e2e] benchmarking new tcp congestion control algorithms In-Reply-To: <77770EA8-CBA3-4690-A2E8-6AE24EB3089A@nuim.ie> References: <77770EA8-CBA3-4690-A2E8-6AE24EB3089A@nuim.ie> Message-ID: <1171252548.45cfe5441b90d@net.infocom.uniroma1.it>

Hi, I read the paper and I think it is a good starting point; however, other metrics should be included. It seems that the focus is only on the capability of the algorithms to achieve full link utilization (while maintaining fairness) and not on the congestion control capabilities of the proposals (i.e., the capability to avoid congestion and not further stress an already stressed network). The performance analysis should focus on the impact of new algorithms on the network and on other flows.

**** BEGIN ADVERTISING ****
For instance, in our paper presented at pfldnet07 (http://wil.cs.caltech.edu/pfldnet2007/paper/YeAH_TCP.pdf), we defined the "Fair-to-Reno" factor as the ratio of the aggregated goodput of n Reno flows competing against a Reno flow to the aggregated goodput of n Reno flows competing against the selected algorithm. This metric indicates the effect that new algorithms have on standard Reno congestion control (persistent connections). We show that some algorithms steal a lot of bandwidth from Reno connections even in scenarios where Reno performs well... Is this what we expect from a new algorithm?
**** END ADVERTISING ****

But I think that this is not enough. We should also have a look at the effect of new proposals on Web traffic (and not only the other way around). E.g., high congestion loss probabilities can block new web connections during the three-way handshake and cause problems for small-window flows.
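The Fair-to-Reno computation is just a ratio of aggregate goodputs; a minimal sketch, with made-up goodput numbers purely for illustration (not measurements from the paper):

```python
def fair_to_reno(goodputs_vs_reno, goodputs_vs_new):
    # Aggregated goodput of n Reno flows when the (n+1)-th flow is also
    # Reno, divided by their aggregated goodput when the (n+1)-th flow
    # runs the algorithm under test. Values > 1 mean the algorithm under
    # test takes bandwidth away from the Reno flows.
    return sum(goodputs_vs_reno) / sum(goodputs_vs_new)

# Illustrative goodputs in Mbit/s: three Reno flows keep half their
# goodput when competing with the new algorithm, so the factor is 2.
print(fair_to_reno([20.0, 20.0, 20.0], [10.0, 10.0, 10.0]))  # 2.0
```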
In this sense, I think it is very important to look at the packet loss probability induced by the congestion control algorithm (both the packet loss probability and the number of losses per congestion event) and at the average and standard deviation of the queue utilization.

As a second issue, to obtain comparable results we should also converge on values for the basic TCP parameters (I have not yet had a look at the ns-2 scripts):

- initial slow start threshold (I suggest using limited slow start);
- TCP internal buffer sizes (I suggest 2*(bandwidth-delay product + bottleneck buffer size));
- etc.

As many of you have experienced, on high-speed links every single parameter can have a strong impact on the overall result.

Francesco

Quoting Douglas Leith : > I've put together a set of short ns scripts to carry out the tcp > benchmark tests described in > > Experimental evaluation of high-speed congestion control protocols, > Li, Y.T, Leith,D., Shorten,R. Transactions on Networking, 2007 (see > http://www.hamilton.ie/net/eval/ToNfinal.pdf). > > The ns scripts are at http://www.hamilton.ie/net/eval/tcptesting.zip. > > Its now very easy to rerun these tests against proposed new > congestion control algorithms. Baseline tests for standard tcp, high- > speed tcp, scalable tcp, bic tcp, fast tcp and htcp are reported in > http://www.hamilton.ie/net/eval/ToNfinal.pdf and these experimental > measurements can be directly compared against the simulation results > generated by the script. > > Let me know if you have any comments. > > Doug > ---------------------------------------------------------------- This message was sent using IMP, the Internet Messaging Program.
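The buffer-sizing suggestion of 2*(bandwidth-delay product + bottleneck buffer size) is simple arithmetic; a sketch with illustrative path parameters (the bandwidth, RTT, and queue size below are assumptions, not values from the scripts):

```python
def tcp_buffer_bytes(bandwidth_bps, rtt_s, bottleneck_buffer_bytes):
    # Suggested TCP internal (socket) buffer:
    # 2 * (bandwidth-delay product + bottleneck buffer size),
    # so the sender's window is never limited by the end host.
    bdp_bytes = bandwidth_bps * rtt_s / 8  # bits -> bytes
    return 2 * (bdp_bytes + bottleneck_buffer_bytes)

# Illustrative path: 100 Mbit/s, 100 ms RTT, 250 kB bottleneck queue.
# BDP = 1.25 MB, so the suggested buffer is 2*(1.25 MB + 0.25 MB) = 3 MB.
print(int(tcp_buffer_bytes(100e6, 0.1, 250_000)))  # 3000000
```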
From hnw at cs.wm.edu Sun Feb 11 18:50:53 2007 From: hnw at cs.wm.edu (Haining Wang) Date: Sun, 11 Feb 2007 21:50:53 -0500 Subject: [e2e] IWQoS'2007 CFP Message-ID: <1EA8ABC9-B97F-4E02-9E14-2B5926447172@cs.wm.edu>

/* Our apologies if you receive multiple copies of this message */

CALL FOR PAPERS

The 15th IEEE International Workshop on Quality of Service (IWQoS)
Chicago, IL, June 20-22, 2007
http://iwqos07.ece.ucdavis.edu/

Quality of Service principles can be applied to a large number of domains, and are particularly relevant to the new NSF initiatives on Future Internet Design (FIND) and Global Environment for Network Innovations (GENI). The scope of the workshop will broadly cover all the important aspects of QoS research: networking (wireline, wireless, and sensor networks), distributed systems, operating systems, servers, and middleware (such as grid computing and peer-to-peer systems). The workshop values both theoretical and practical research contributions and encourages multi-disciplinary approaches to QoS research. Besides traditional QoS topics, we welcome submissions on relevant technical issues, such as availability, reliability, security, pricing, incentives, resource management, and performance guarantees, in the context of networking and distributed systems.
Topics of interest include (but are not limited to):

QoS in the wide-area Internet, peer-to-peer, overlay
QoS in large-scale distributed systems, including grid environments
QoS in wireless, mobile, ad hoc, and sensor networks (including wireless mesh networks)
QoS in intranets and VoIP systems
QoS for web services and storage systems
System dependability, availability, resilience, and robustness
Security and privacy as QoS parameters
QoS specification, translation, and adaptation
QoS evaluation metrics and methodologies (measurements, verification, etc.)
QoS analysis and modeling
QoS pricing and billing
QoS architectures and protocols (including QoS routing)
Programmability and language features supporting QoS
Rationality, incentive, microeconomics, and self-interest in decentralized networks
QoS and new media
QoS and haptics, virtual environments
QoS in business processes, workflows

IWQoS invites submission of manuscripts that present original research results, have not been previously published, and are not currently under review by another conference or journal. Any previous or simultaneous publication of related material should be explicitly noted in the submission. Submissions should be full-length papers no longer than 8 single-spaced, double-column pages with font sizes of 10pt or larger, including all figures and references, and must include an abstract of 100--150 words. We recommend that you use the IEEE Transactions format. All papers must be submitted in Adobe PDF format; no other formats are accepted by the paper submission web site. Submissions will be judged on originality, significance, interest, clarity, relevance, and correctness. At least one of the authors of each accepted paper must present the paper at IWQoS 2007. IWQoS aims to allow rapid dissemination of research results and to provide fast turnaround; the paper deadline is therefore as close to the conference as possible (about 4-5 months before it).
The workshop is a single-track forum spanning two and a half days. An award will be given at the workshop to the best student paper (a paper whose first author is a current student). The best paper award will be chosen in consultation with the TPC members and will be based on the topic, technical contributions, review scores, and comments of the paper.

Important Dates

Paper abstract deadline: February 16, 2007
Paper submission deadline: February 23, 2007
Notification of acceptance: April 6, 2007
Camera-ready papers due: May 4, 2007
Workshop dates: June 20-22, 2007

From rbriscoe at jungle.bt.co.uk Mon Feb 19 00:18:53 2007 From: rbriscoe at jungle.bt.co.uk (Bob Briscoe) Date: Mon, 19 Feb 2007 08:18:53 +0000 Subject: [e2e] why fair sharing? ( Are we doing sliding window in the Internet?) In-Reply-To: <45B7BFBE.7060505@reed.com> References: <5.2.1.1.2.20070124100306.01875a30@pop3.jungle.bt.co.uk> <5.2.1.1.2.20070124100306.01875a30@pop3.jungle.bt.co.uk> Message-ID: <5.2.1.1.2.20070216033408.025a9718@pop3.jungle.bt.co.uk>

David,

At 20:21 24/01/2007, David P. Reed wrote:

>Bob - nice analysis, but beware of simple models being viewed as complete.

Of course - but the goal here is to determine what is necessary at the (generic) network layer to be sufficiently resistant to gaming - i.e. basic microeconomics... in such a way that all the derivative economic tensions (and socially driven forms of fairness) have a chance to resolve themselves at higher layers.

>The end user values more than just transport, which is all that is modeled >in this notion of congestion.

I guess I don't need to preach to you about where generic vs specific functions should be in the layering :)

>"Choices" or "options" also matter to users - whether it is the perception >that there are "500 channels" a la the US cable system vs. the British >broadcasting model of a couple of gov't channels and a few more gov't >granted monopolies called private channels - users will pay for choices >that they may or may not exercise.
> >This provides a value to "switching" functions in networks. The freedom >to channel surf, or the freedom to assemble a web page from many sources, >with a small switching latency matters. But congestion directly blocks >the ability to switch - it kills option value, and if option value is a >large part of customer value, then congestion means that greedy users who >don't value choice can kill value for other users.

The point of Kelly's work was to ensure the weight of each privately held user utility was brought to bear against the social cost of congestion. If someone perceives value in switching, that value will weigh more heavily against the costs the greedy users cause. In the long run the increased demand that both represent would push into more network investment to satisfy them both. So it still seems nothing more is needed in the network layer to cater for options.

>The other point is that network infrastructure is at scale a dynamically >priced thing. If you study the other literature on "real options" >(besides that which applies to R&D and network switching options) you will >find that options or contingent value analysis is crucial to pricing such >infrastructures as refineries, power plants, cable plants, etc. when faced >with variable costs such as tooling, plant construction costs (think >semiconductor fabs and Moore's Law estimates of demand opportunity). > >So equilibrium economic models are helpful, but in fact contingent and >dynamic economic models are far more important than easy analyses like >these would imply.

Certainly. But isn't this a second-order concern that we would expect to be resolved at higher layers? At the moment the Internet is badly disconnected from the primary economic pressure it should be connected to. Isn't waiting for dynamic economic models a bit of fiddling while Rome burns? Or are you saying a dynamic economic model is essential before we can do anything?

Finally, sorry for the huge gap in this thread...
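For readers less familiar with the Kelly framework referenced above: under weighted proportional fairness, each user's willingness to pay weighs against a single congestion price, and the price rises until demand matches capacity. A toy closed-form sketch (the weights and capacity are illustrative, not from any real scenario):

```python
def kelly_rates(weights, capacity):
    # Weighted proportional fairness: user i maximizes
    # w_i * log(x_i) - p * x_i, which gives the rate x_i = w_i / p.
    # The congestion price p rises until total demand sum(w_i / p)
    # equals the capacity C, so in closed form p = sum(w_i) / C.
    p = sum(weights) / capacity
    return [w / p for w in weights]

# Three users sharing an 8-unit link; the second values throughput
# twice as much, so it gets twice the rate of the others.
print(kelly_rates([1.0, 2.0, 1.0], 8.0))  # [2.0, 4.0, 2.0]
```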
Bob

From francis at cs.cornell.edu Sun Feb 18 08:18:40 2007 From: francis at cs.cornell.edu (Paul Francis) Date: Sun, 18 Feb 2007 11:18:40 -0500 Subject: [e2e] Sigcomm Workshops (Deadlines kinda soon...) Message-ID:

We would like to draw your attention to the workshops being hosted by Sigcomm 07 this year, to be held August 27 and 31 in Kyoto, Japan (http://www.sigcomm.org/sigcomm2007/workshops.html). Submission deadlines vary, but at least one of them is as early as March 27. The workshops are:

(W1) Mobility in the Evolving Internet Architecture (MobiArch)
(W2) Large Scale Attack Defense (LSAD)
(W3) Networked Systems for Developing Regions (NSDR)
(W4) Internet Network Management 2007 (INM)
(W5) Peer-to-Peer Streaming and IPTV Systems (P2P-TV)
(W6) IPv6 and the Future of the Internet (IPv6)

All of these are refereed and have published proceedings. They are a great opportunity to get early work published and noticed, to attend Sigcomm, and to enjoy the beautiful city of Kyoto.

Paul Francis
Hiroshi Esaki
Sigcomm 2007 Workshop Chairs

From detlef.bosau at web.de Tue Feb 20 14:45:18 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 20 Feb 2007 23:45:18 +0100 Subject: [e2e] Delays / service times / delivery times in wireless networks. Message-ID: <45DB79FE.2040800@web.de>

Hi to all,

I go in circles here and perhaps someone can help me out.

In networks like GPRS, IP packets can experience extremely large delivery times. The ETSI standard accepts up to 10 minutes.

The effects of large and varying latencies on TCP are widely discussed. However, I still do not really understand where these delays come from. When delivery times vary from 1 ms to 10 minutes, we are talking about six orders of magnitude. My question is: what is the reason for this?

Possible delay sources (besides processing, serialization, queueing, and propagation) are:

- recovery / retransmission,
- roaming,
- scheduling.
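A rough way to see what the 10-minute bound would imply in terms of link-layer retransmission attempts, for a few per-attempt times (the per-attempt times below are illustrative guesses, not measured values):

```python
# How many link-layer (re)transmission attempts would it take to
# accumulate a 10-minute delivery delay? Only with a sub-millisecond
# per-attempt time does the count approach a million; with a 100 ms
# attempt time it is a few thousand.
delay_s = 600.0  # the 10-minute upper bound
for attempt_s in (0.001, 0.1, 1.0):  # illustrative per-attempt times
    attempts = round(delay_s / attempt_s)
    print(f"{attempt_s} s per attempt -> {attempts} attempts")
```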
I hardly believe that recovery latencies are that large, because I can hardly imagine that a packet, in whole or in pieces, is repeated up to a million times. And even if this were the case, it would be interesting to see the variation of such latencies.

In addition, I hardly believe in roaming latencies. Years ago, I was told extremely large latencies would result from roaming. However, if e.g. a mobile roams from one cell to another, to my understanding it keeps its SGSN in many cases and only the antenna station changes. So this situation is basically no different from roaming with a normal voice stream, and that works transparently to the user.

The only source where I ever read of latencies and delay spikes of several seconds is the Globecom 2004 paper by Thierry Klein, which reported delay spikes of up to 2 seconds. And when I think about proportional fair scheduling, I think this _can_ be a source of large delay spikes. However, I do not have access to any reliable material here.

The problem appears to be quite technical and not very much TCP related. However, I'm admittedly somewhat tired of thinking about delay variations, delay spikes, and their possible adverse consequences for TCP without having really understood the reasons for the large delays and delay spikes.

I don't know whether this topic is of interest here, but I would greatly appreciate any discussion of this matter. If someone can help me on this one, he is welcome to contact me off list, if this topic is not of general interest here.

However, I'm somewhat helpless here.

Thanks.

Detlef

From dpreed at reed.com Tue Feb 20 20:10:00 2007 From: dpreed at reed.com (David P. Reed) Date: Tue, 20 Feb 2007 23:10:00 -0500 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DB79FE.2040800@web.de> References: <45DB79FE.2040800@web.de> Message-ID: <45DBC618.6090007@reed.com>

An IP packet should be dropped quickly if it cannot be delivered quickly.
What ETSI standard are you referring to? IETF has nothing to do with ETSI... Detlef Bosau wrote: > Hi to all, > > I go in circles here and perhaps, someone can help me out. > > In networks like GPRS, IP packets can experience extremely large > delivery times. The ETSI standard accepts up to 10 minutes. > > The effects of large and varying latencies on TCP are widely > discussed. However, I still do not really understand where these > delays come from. > When delivery times vary from 1 ms to 10 minutes, we talk about 6 > orders of magnitude. My question is: What is the reason for this? > > Possible delay sources (except processing, serialization, queieng, > propagation) are: > > - recovery / retransmission, > - roaming > - scheduling. > > I hardly belive that recovery latencies are that large because I can > hardly imagine that a packet in the whole or in pieces is repeated up > to 1 million times. And even if this would be the case, it is > interesting to see the variation of such latencies. > > In addition, I hardly believe in roaming latencies. Years ago, I was > told extremely large latencies would result from roaming. However, if > e.g. a mobile roams from one cell to another, to my understanding it > keeps its SGSN in many cases and only the antenna station changes. So, > this situation is basically not different from roaming with a normal > voice stream. And that works transparent to the user. > > The only source where I ever read latencies and delay spikes of > several seconds is the Globecom 2004 paper by Thierry Klein, which > reported delay spikes of up to 2 seconds. And when I think about > proportional fair scheduling, I think this _can_ be a source of large > delay spikes. > However, I do not have access to any reliable material here. > > The problem appears to be quite technical and not very much TCP > related. 
However, I?m admittedly somewhat tired to think about dely > variations and delay spikes and possible adverse consequences upon TCP > without having really understood the reasons for large delays and > delay spikes. > > I don?t know whether this topic is of intereset here, but I would > greatly appreciate any discussion of this matter. If someone can help > me on this one, he is welcome to contact me off list, if this topic is > not of general interest here. > > However, I?m somewhat helpless here. > > Thanks. > > Detlef > > > From andras.veres at ericsson.com Wed Feb 21 00:52:46 2007 From: andras.veres at ericsson.com (=?iso-8859-1?Q?Andr=E1s_Veres_=28IJ/ETH=29?=) Date: Wed, 21 Feb 2007 09:52:46 +0100 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DB79FE.2040800@web.de> Message-ID: Hi, The 10 minutes reference in GPRS never happens in reality. It is similar to RFC 793 where an upper limit of TTL = one minute is set for TCP packets. Neither 1 nor 10 minutes actually happen in real networks. Andras -----Original Message----- From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of Detlef Bosau Sent: Tuesday, February 20, 2007 11:45 PM To: e2e; Michael.kochte at gmx.net; frank.duerr Subject: [e2e] Delays / service times / delivery times in wireless networks. Hi to all, I go in circles here and perhaps, someone can help me out. In networks like GPRS, IP packets can experience extremely large delivery times. The ETSI standard accepts up to 10 minutes. The effects of large and varying latencies on TCP are widely discussed. However, I still do not really understand where these delays come from. When delivery times vary from 1 ms to 10 minutes, we talk about 6 orders of magnitude. My question is: What is the reason for this? Possible delay sources (except processing, serialization, queieng, propagation) are: - recovery / retransmission, - roaming - scheduling. 
I hardly belive that recovery latencies are that large because I can hardly imagine that a packet in the whole or in pieces is repeated up to 1 million times. And even if this would be the case, it is interesting to see the variation of such latencies. In addition, I hardly believe in roaming latencies. Years ago, I was told extremely large latencies would result from roaming. However, if e.g. a mobile roams from one cell to another, to my understanding it keeps its SGSN in many cases and only the antenna station changes. So, this situation is basically not different from roaming with a normal voice stream. And that works transparent to the user. The only source where I ever read latencies and delay spikes of several seconds is the Globecom 2004 paper by Thierry Klein, which reported delay spikes of up to 2 seconds. And when I think about proportional fair scheduling, I think this _can_ be a source of large delay spikes. However, I do not have access to any reliable material here. The problem appears to be quite technical and not very much TCP related. However, I?m admittedly somewhat tired to think about dely variations and delay spikes and possible adverse consequences upon TCP without having really understood the reasons for large delays and delay spikes. I don?t know whether this topic is of intereset here, but I would greatly appreciate any discussion of this matter. If someone can help me on this one, he is welcome to contact me off list, if this topic is not of general interest here. However, I?m somewhat helpless here. Thanks. Detlef From detlef.bosau at web.de Wed Feb 21 03:07:16 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 21 Feb 2007 12:07:16 +0100 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: References: Message-ID: <45DC27E4.4010901@web.de> Andr?s Veres (IJ/ETH) wrote: > Hi, > > The 10 minutes reference in GPRS never happens in reality. 
It is similar to RFC 793 where an upper limit of TTL = one minute is set for TCP packets. Neither 1 nor 10 minutes actually happen in real networks.

O.k. Do 9 minutes 59 seconds ever happen? ;-)

WRT David: You're perfectly right that ETSI has nothing to do with IETF. Similar to the IEEE (industry standards) and similar to DigitalIntelXerox. However, TCP/IP sometimes runs on real networks and therefore must use network technologies, e.g. IEEE 802.3 / Ethernet II aka DIX or GPRS (and therefore about 50 or 100 ETSI standards) ;-)

Back to the subject. The GPRS standard specifies certain quantiles for delays, a 0.5 quantile and a 0.9 quantile. Perfect.

But in my opinion, this is not sufficient for a scientific discussion of the suitability of mobile networks for end-to-end protocols _AND_ applications.

Now, there is a huge number of papers (I won't even list examples; there are literally hundreds of them, even in the proceedings of highly visible IEEE or ACM conferences) which deal with
- delay spikes
- spurious timeouts
- large Bandwidth Delay Products
- adverse effects on TCP caused by proportional fair scheduling
etc. in mobile networks.

In fact, the whole discussion "is there a problem with TCP in mobile networks" deals exactly with these issues.

So, my question is, in other words: are these issues substantiated? Or are we hunting phantoms here?

And because I'm interested in finding an answer to this question, I'm interested in understanding where delays in mobile networks come from ;-)

> Andras > > > -----Original Message----- > From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of Detlef Bosau > Sent: Tuesday, February 20, 2007 11:45 PM > To: e2e; Michael.kochte at gmx.net; frank.duerr > Subject: [e2e] Delays / service times / delivery times in wireless networks. > > Hi to all, > > I go in circles here and perhaps, someone can help me out.
> > In networks like GPRS, IP packets can experience extremely large delivery times. The ETSI standard accepts up to 10 minutes. > > The effects of large and varying latencies on TCP are widely discussed. > However, I still do not really understand where these delays come from. > When delivery times vary from 1 ms to 10 minutes, we talk about 6 orders of magnitude. My question is: What is the reason for this? > > Possible delay sources (except processing, serialization, queieng, > propagation) are: > > - recovery / retransmission, > - roaming > - scheduling. > > I hardly belive that recovery latencies are that large because I can hardly imagine that a packet in the whole or in pieces is repeated up to > 1 million times. And even if this would be the case, it is interesting to see the variation of such latencies. > > In addition, I hardly believe in roaming latencies. Years ago, I was told extremely large latencies would result from roaming. However, if e.g. a mobile roams from one cell to another, to my understanding it keeps its SGSN in many cases and only the antenna station changes. So, this situation is basically not different from roaming with a normal voice stream. And that works transparent to the user. > > The only source where I ever read latencies and delay spikes of several seconds is the Globecom 2004 paper by Thierry Klein, which reported delay spikes of up to 2 seconds. And when I think about proportional fair scheduling, I think this _can_ be a source of large delay spikes. > However, I do not have access to any reliable material here. > > The problem appears to be quite technical and not very much TCP related. > However, I?m admittedly somewhat tired to think about dely variations and delay spikes and possible adverse consequences upon TCP without having really understood the reasons for large delays and delay spikes. > > I don?t know whether this topic is of intereset here, but I would greatly appreciate any discussion of this matter. 
If someone can help me on this one, he is welcome to contact me off list, if this topic is not of general interest here. > > However, I'm somewhat helpless here. > > Thanks. > > Detlef > > >

From dpreed at reed.com Wed Feb 21 05:45:01 2007 From: dpreed at reed.com (David P. Reed) Date: Wed, 21 Feb 2007 08:45:01 -0500 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DC27E4.4010901@web.de> References: <45DC27E4.4010901@web.de> Message-ID: <45DC4CDD.7040500@reed.com>

The delays come from buffering and retrying for long times, on the basic theory that the underlying network is trying to be "smart" and guarantee delivery in the face of congestion or errors. A decade ago this same issue came up running TCP over Frame Relay networks that were willing to buffer a minute or more's worth of traffic when congested, if asked to do "reliable delivery".

Who could say no to such a wonderful feature in the network? *Reliable Delivery*, wow, I want to build that in at the lowest layer, under IP (putting QoS in the lowest network layers is always good, as those who think the end-to-end argument is a religion keep saying). Of course the real meaning of that is "reliable delivery no matter how long it takes", so like certain countries' mail delivery, you could get your piece of mail after 10 years of sitting in a bin in some warehouse, but no one would ever throw out a single piece of mail (other countries' mail services do that).

Once the folks who ran IP networks over frame relay realized that you should never provision reliable delivery if you were running IP, this stopped happening.

So the story is that GPRS can, if it tries to provide QoS in the form of never dropping a frame, screw up TCP.

But this has nothing to do with mobility per se. It has to do with GPRS, just as the old problems had to do with Frame Relay, not with high speed data. The architecture of the GPRS network is too smart.
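The "drop quickly" alternative can be sketched as a bounded-persistence queue that discards packets once they have waited past a deadline, instead of retrying them indefinitely (the class name and parameters are hypothetical, for illustration only):

```python
from collections import deque

class DeadlineQueue:
    """A link buffer that drops any packet that has waited longer than
    a deadline, rather than holding it for eventual 'reliable' delivery."""

    def __init__(self, deadline_s):
        self.deadline_s = deadline_s
        self.q = deque()  # entries are (enqueue_time, packet)

    def push(self, now, pkt):
        self.q.append((now, pkt))

    def pop(self, now):
        # Discard everything that has been queued past the deadline,
        # then deliver the oldest surviving packet (or None).
        while self.q and now - self.q[0][0] > self.deadline_s:
            self.q.popleft()
        return self.q.popleft()[1] if self.q else None

# A packet queued at t=0 is stale by t=2.5 with a 1-second deadline,
# so only the packet queued at t=2.0 is delivered.
q = DeadlineQueue(deadline_s=1.0)
q.push(0.0, "a")
q.push(2.0, "b")
print(q.pop(2.5))  # "b"
```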
Detlef Bosau wrote: > Andr?s Veres (IJ/ETH) wrote: >> Hi, >> >> The 10 minutes reference in GPRS never happens in reality. It is >> similar to RFC 793 where an upper limit of TTL = one minute is set >> for TCP packets. Neither 1 nor 10 minutes actually happen in real >> networks. > > O.k. Do 9 minutes 59 seconds ever happen? ;-) > > WRT David: You?re perfectly right that ETSI has nothing to do with > IETF. Similar to the IEEE (industry standards) and similar to > DigitalIntelXerox. However, TCP/IP sometimes runs on real networks and > therefore must use network technologies, e.g. IEEE 802.3 / Ethernet > VII AKA DIX or GPRS (and therefore about 50 or 100 ETSI standards ) ;-) > > Back to the subject. The GPRS standard specifies certain quantiles, a > 0.5 quantile and a 0.9 quantile for delays. > Perfect. > > But in my opinion, this is not sufficient for a scientific discussion > of the suitability of mobile networks for end to end protocols _AND_ > applications. > > Now, there is a huge amount of papers (I don?t even listen examples, > there are literally hundreds of them, even in proceedings of highly > visible IEEE or ACM conferences) which deal with > - delay spikes > - spurious timeouts > - large Bandwidth Delay Products > - adverse effects on TCP caused by proportional fair scheduling > etc. in mobile networks. > > In fact, the whole discussion "is there a problem with TCP in mobile > networks" deals exactly with these issues. > > So, my question is in other words: Are these issues substantiated? Or > do we hunt phantoms here? 
> > And because I?m interested in finding an answer to this question, I?m > interested to understand, where delay in mobile networks come from ;-) > >> Andras >> >> >> -----Original Message----- >> From: end2end-interest-bounces at postel.org >> [mailto:end2end-interest-bounces at postel.org] On Behalf Of Detlef Bosau >> Sent: Tuesday, February 20, 2007 11:45 PM >> To: e2e; Michael.kochte at gmx.net; frank.duerr >> Subject: [e2e] Delays / service times / delivery times in wireless >> networks. >> >> Hi to all, >> >> I go in circles here and perhaps, someone can help me out. >> >> In networks like GPRS, IP packets can experience extremely large >> delivery times. The ETSI standard accepts up to 10 minutes. >> >> The effects of large and varying latencies on TCP are widely >> discussed. However, I still do not really understand where these >> delays come from. >> When delivery times vary from 1 ms to 10 minutes, we talk about 6 >> orders of magnitude. My question is: What is the reason for this? >> >> Possible delay sources (except processing, serialization, queieng, >> propagation) are: >> >> - recovery / retransmission, >> - roaming >> - scheduling. >> >> I hardly belive that recovery latencies are that large because I can >> hardly imagine that a packet in the whole or in pieces is repeated up to >> 1 million times. And even if this would be the case, it is >> interesting to see the variation of such latencies. >> >> In addition, I hardly believe in roaming latencies. Years ago, I was >> told extremely large latencies would result from roaming. However, if >> e.g. a mobile roams from one cell to another, to my understanding it >> keeps its SGSN in many cases and only the antenna station changes. >> So, this situation is basically not different from roaming with a >> normal voice stream. And that works transparent to the user. 
>> >> The only source where I ever read latencies and delay spikes of >> several seconds is the Globecom 2004 paper by Thierry Klein, which >> reported delay spikes of up to 2 seconds. And when I think about >> proportional fair scheduling, I think this _can_ be a source of large >> delay spikes. >> However, I do not have access to any reliable material here. >> >> The problem appears to be quite technical and not very much TCP >> related. However, I?m admittedly somewhat tired to think about dely >> variations and delay spikes and possible adverse consequences upon >> TCP without having really understood the reasons for large delays and >> delay spikes. >> >> I don?t know whether this topic is of intereset here, but I would >> greatly appreciate any discussion of this matter. If someone can help >> me on this one, he is welcome to contact me off list, if this topic >> is not of general interest here. >> >> However, I?m somewhat helpless here. >> >> Thanks. >> >> Detlef >> >> >> > > > >

From francesco at net.infocom.uniroma1.it Wed Feb 21 08:36:32 2007 From: francesco at net.infocom.uniroma1.it (Francesco Vacirca) Date: Wed, 21 Feb 2007 17:36:32 +0100 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DC4CDD.7040500@reed.com> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> Message-ID: <45DC7510.5060800@net.infocom.uniroma1.it>

I think the comparison between FR and GPRS is not correct... Waiting in the buffer while the system waits for congestion to end is different from waiting in the buffer during a link failure (while packets are retransmitted). In the second case the delay cannot be avoided... if the wireless link is down for packet #1 (very high bit error rate), the link is down for the next packets as well, so the delay cannot be avoided... and limiting the number of link layer retransmissions, or avoiding LL retransmissions altogether, does not benefit TCP.
The difference is between avoiding packet losses due to congestion and packet losses due to link failures. Francesco David P. Reed wrote: > The delays come from buffering and retrying for long times, on the basic > theory that the underlying network is trying to be "smart" and guarantee > delivery in the face of congestion or errors. A decade ago this same > issue came up running TCP over Frame Relay networks that were willing to > buffer a minute or more's worth of traffic when congested if asked to do > "reliable delivery". > > Who could say no to such a wonderful feature in the network? *Reliable > Delivery*, wow, I want to build that in at the lowest layer, under IP > (putting QoS in the lowest network layers is always good, as those who > think the end-to-end argument is a religion keep saying). Of course > the real meaning of that is "reliable delivery no matter how long it > takes" so like certain countries' mail delivery, you could get your > piece of mail after 10 years of sitting in a bin in some warehouse, but > no one ever would throw out a single piece of mail (other countries > mail service does that). > > Once the folks who ran IP networks over frame relay realized that you > should never provision reliable delivery if you were running IP, this > stopped happening. > > So the story is that GPRS can, if it tries to provide QoS in the form of > never dropping a frame, screw up TCP. > > But this has nothing to do with mobility per se. It has to do with > GPRS, just as the old problems had to do with Frame Relay, not with high > speed data. The architecture of the GPRS network is too smart. > > Detlef Bosau wrote: >> Andr?s Veres (IJ/ETH) wrote: >>> Hi, >>> >>> The 10 minutes reference in GPRS never happens in reality. It is >>> similar to RFC 793 where an upper limit of TTL = one minute is set >>> for TCP packets. Neither 1 nor 10 minutes actually happen in real >>> networks. >> >> O.k. Do 9 minutes 59 seconds ever happen? 
;-) >> >> WRT David: You're perfectly right that ETSI has nothing to do with >> IETF. Similar to the IEEE (industry standards) and similar to >> DigitalIntelXerox. However, TCP/IP sometimes runs on real networks and >> therefore must use network technologies, e.g. IEEE 802.3 / Ethernet >> VII AKA DIX or GPRS (and therefore about 50 or 100 ETSI standards ) ;-) >> >> Back to the subject. The GPRS standard specifies certain quantiles, a >> 0.5 quantile and a 0.9 quantile for delays. >> Perfect. >> >> But in my opinion, this is not sufficient for a scientific discussion >> of the suitability of mobile networks for end to end protocols _AND_ >> applications. >> >> Now, there is a huge number of papers (I don't even list examples, >> there are literally hundreds of them, even in proceedings of highly >> visible IEEE or ACM conferences) which deal with >> - delay spikes >> - spurious timeouts >> - large Bandwidth Delay Products >> - adverse effects on TCP caused by proportional fair scheduling >> etc. in mobile networks. >> >> In fact, the whole discussion "is there a problem with TCP in mobile >> networks" deals exactly with these issues. >> >> So, my question is in other words: Are these issues substantiated? Or >> do we hunt phantoms here? >> >> And because I'm interested in finding an answer to this question, I'm >> interested to understand where delays in mobile networks come from ;-) >> >>> Andras >>> >>> >>> -----Original Message----- >>> From: end2end-interest-bounces at postel.org >>> [mailto:end2end-interest-bounces at postel.org] On Behalf Of Detlef Bosau >>> Sent: Tuesday, February 20, 2007 11:45 PM >>> To: e2e; Michael.kochte at gmx.net; frank.duerr >>> Subject: [e2e] Delays / service times / delivery times in wireless >>> networks. >>> >>> Hi to all, >>> >>> I go in circles here and perhaps, someone can help me out. >>> >>> In networks like GPRS, IP packets can experience extremely large >>> delivery times. The ETSI standard accepts up to 10 minutes. 
>>> >>> The effects of large and varying latencies on TCP are widely >>> discussed. However, I still do not really understand where these >>> delays come from. >>> When delivery times vary from 1 ms to 10 minutes, we talk about 6 >>> orders of magnitude. My question is: What is the reason for this? >>> >>> Possible delay sources (except processing, serialization, queueing, >>> propagation) are: >>> >>> - recovery / retransmission, >>> - roaming >>> - scheduling. >>> >>> I hardly believe that recovery latencies are that large because I can >>> hardly imagine that a packet, as a whole or in pieces, is repeated up to >>> 1 million times. And even if this were the case, it would be >>> interesting to see the variation of such latencies. >>> >>> In addition, I hardly believe in roaming latencies. Years ago, I was >>> told extremely large latencies would result from roaming. However, if >>> e.g. a mobile roams from one cell to another, to my understanding it >>> keeps its SGSN in many cases and only the antenna station changes. >>> So, this situation is basically not different from roaming with a >>> normal voice stream. And that works transparently to the user. >>> >>> The only source where I ever read latencies and delay spikes of >>> several seconds is the Globecom 2004 paper by Thierry Klein, which >>> reported delay spikes of up to 2 seconds. And when I think about >>> proportional fair scheduling, I think this _can_ be a source of large >>> delay spikes. >>> However, I do not have access to any reliable material here. >>> >>> The problem appears to be quite technical and not very much TCP >>> related. However, I'm admittedly somewhat tired of thinking about delay >>> variations and delay spikes and possible adverse consequences upon >>> TCP without having really understood the reasons for large delays and >>> delay spikes. >>> >>> I don't know whether this topic is of interest here, but I would >>> greatly appreciate any discussion of this matter. 
If someone can help >>> me on this one, he is welcome to contact me off list, if this topic >>> is not of general interest here. >>> >>> However, I?m somewhat helpless here. >>> >>> Thanks. >>> >>> Detlef >>> >>> >>> >> >> >> >> > From dpreed at reed.com Wed Feb 21 12:14:17 2007 From: dpreed at reed.com (David P. Reed) Date: Wed, 21 Feb 2007 15:14:17 -0500 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DC7510.5060800@net.infocom.uniroma1.it> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> <45DC7510.5060800@net.infocom.uniroma1.it> Message-ID: <45DCA819.1070805@reed.com> You are right that there is a difference between congestion and lossy links, but persisting for a long time to transmit a packet causes congestion back up the path from the source, and that backed up queueing can hurt TCP, because TCP depends on a fast end-to-end control loop to manage window size. So the FR and GPRS cases are different to some extent, I agree. Francesco Vacirca wrote: > I think the comparison between FR and GPRS is not correct... > > It is different to wait in the buffer because the system waits for > congestion ending and to wait in the buffer for link failure > (retransmitting packets).... > > In the second case the delay cannot be avoided... if the wireless link > is down for packet #1 (very high bit error rate), the link is down > also for the next packets, so the delay cannot be avoided... and > limiting the number of link layer retransmissions or avoid LL > retransmissions does not benefit TCP. > > The difference is between avoiding packet losses due to congestion and > packet losses due to link failures. > > Francesco > > > David P. Reed wrote: >> The delays come from buffering and retrying for long times, on the >> basic theory that the underlying network is trying to be "smart" and >> guarantee delivery in the face of congestion or errors. 
A decade >> ago this same issue came up running TCP over Frame Relay networks >> that were willing to buffer a minute or more's worth of traffic when >> congested if asked to do "reliable delivery". >> >> Who could say no to such a wonderful feature in the network? >> *Reliable Delivery*, wow, I want to build that in at the lowest >> layer, under IP (putting QoS in the lowest network layers is always >> good, as those who think the end-to-end argument is a religion keep >> saying). Of course the real meaning of that is "reliable delivery >> no matter how long it takes" so like certain countries' mail >> delivery, you could get your piece of mail after 10 years of sitting >> in a bin in some warehouse, but no one ever would throw out a single >> piece of mail (other countries mail service does that). >> >> Once the folks who ran IP networks over frame relay realized that you >> should never provision reliable delivery if you were running IP, this >> stopped happening. >> >> So the story is that GPRS can, if it tries to provide QoS in the form >> of never dropping a frame, screw up TCP. >> >> But this has nothing to do with mobility per se. It has to do with >> GPRS, just as the old problems had to do with Frame Relay, not with >> high speed data. The architecture of the GPRS network is too smart. >> >> Detlef Bosau wrote: >>> Andr?s Veres (IJ/ETH) wrote: >>>> Hi, >>>> >>>> The 10 minutes reference in GPRS never happens in reality. It is >>>> similar to RFC 793 where an upper limit of TTL = one minute is set >>>> for TCP packets. Neither 1 nor 10 minutes actually happen in real >>>> networks. >>> >>> O.k. Do 9 minutes 59 seconds ever happen? ;-) >>> >>> WRT David: You?re perfectly right that ETSI has nothing to do with >>> IETF. Similar to the IEEE (industry standards) and similar to >>> DigitalIntelXerox. However, TCP/IP sometimes runs on real networks >>> and therefore must use network technologies, e.g. 
IEEE 802.3 / >>> Ethernet VII AKA DIX or GPRS (and therefore about 50 or 100 ETSI >>> standards ) ;-) >>> >>> Back to the subject. The GPRS standard specifies certain quantiles, >>> a 0.5 quantile and a 0.9 quantile for delays. >>> Perfect. >>> >>> But in my opinion, this is not sufficient for a scientific >>> discussion of the suitability of mobile networks for end to end >>> protocols _AND_ applications. >>> >>> Now, there is a huge amount of papers (I don?t even listen examples, >>> there are literally hundreds of them, even in proceedings of highly >>> visible IEEE or ACM conferences) which deal with >>> - delay spikes >>> - spurious timeouts >>> - large Bandwidth Delay Products >>> - adverse effects on TCP caused by proportional fair scheduling >>> etc. in mobile networks. >>> >>> In fact, the whole discussion "is there a problem with TCP in mobile >>> networks" deals exactly with these issues. >>> >>> So, my question is in other words: Are these issues substantiated? >>> Or do we hunt phantoms here? >>> >>> And because I?m interested in finding an answer to this question, >>> I?m interested to understand, where delay in mobile networks come >>> from ;-) >>> >>>> Andras >>>> >>>> >>>> -----Original Message----- >>>> From: end2end-interest-bounces at postel.org >>>> [mailto:end2end-interest-bounces at postel.org] On Behalf Of Detlef Bosau >>>> Sent: Tuesday, February 20, 2007 11:45 PM >>>> To: e2e; Michael.kochte at gmx.net; frank.duerr >>>> Subject: [e2e] Delays / service times / delivery times in wireless >>>> networks. >>>> >>>> Hi to all, >>>> >>>> I go in circles here and perhaps, someone can help me out. >>>> >>>> In networks like GPRS, IP packets can experience extremely large >>>> delivery times. The ETSI standard accepts up to 10 minutes. >>>> >>>> The effects of large and varying latencies on TCP are widely >>>> discussed. However, I still do not really understand where these >>>> delays come from. 
>>>> When delivery times vary from 1 ms to 10 minutes, we talk about 6 >>>> orders of magnitude. My question is: What is the reason for this? >>>> >>>> Possible delay sources (except processing, serialization, queieng, >>>> propagation) are: >>>> >>>> - recovery / retransmission, >>>> - roaming >>>> - scheduling. >>>> >>>> I hardly belive that recovery latencies are that large because I >>>> can hardly imagine that a packet in the whole or in pieces is >>>> repeated up to >>>> 1 million times. And even if this would be the case, it is >>>> interesting to see the variation of such latencies. >>>> >>>> In addition, I hardly believe in roaming latencies. Years ago, I >>>> was told extremely large latencies would result from roaming. >>>> However, if e.g. a mobile roams from one cell to another, to my >>>> understanding it keeps its SGSN in many cases and only the antenna >>>> station changes. So, this situation is basically not different from >>>> roaming with a normal voice stream. And that works transparent to >>>> the user. >>>> >>>> The only source where I ever read latencies and delay spikes of >>>> several seconds is the Globecom 2004 paper by Thierry Klein, which >>>> reported delay spikes of up to 2 seconds. And when I think about >>>> proportional fair scheduling, I think this _can_ be a source of >>>> large delay spikes. >>>> However, I do not have access to any reliable material here. >>>> >>>> The problem appears to be quite technical and not very much TCP >>>> related. However, I?m admittedly somewhat tired to think about dely >>>> variations and delay spikes and possible adverse consequences upon >>>> TCP without having really understood the reasons for large delays >>>> and delay spikes. >>>> >>>> I don?t know whether this topic is of intereset here, but I would >>>> greatly appreciate any discussion of this matter. If someone can >>>> help me on this one, he is welcome to contact me off list, if this >>>> topic is not of general interest here. 
>>>> >>>> However, I'm somewhat helpless here. >>>> >>>> Thanks. >>>> >>>> Detlef >>>> >>>> >>>> >>> >>> >>> >>> >> > From detlef.bosau at web.de Fri Feb 23 03:03:06 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 23 Feb 2007 12:03:06 +0100 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DC4CDD.7040500@reed.com> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> Message-ID: <45DEC9EA.2010202@web.de> David P. Reed wrote: > Once the folks who ran IP networks over frame relay realized that you > should never provision reliable delivery if you were running IP, this > stopped happening. > > So the story is that GPRS can, if it tries to provide QoS in the form > of never dropping a frame, screw up TCP. > > But this has nothing to do with mobility per se. It has to do with > GPRS, just as the old problems had to do with Frame Relay, not with > high speed data. The architecture of the GPRS network is too smart. How smart is "too smart"? And how much smartness is necessary? Some authors note that the IP packet delivery time in mobile networks is in fact a random variable, because the information rate in wireless networks sometimes changes several times _within_ one packet. The reasons are manifold and as a computer scientist, I have only a rough understanding of some of the relevant issues here. To my understanding, the basic question is: Which packet corruption rate can be accepted by an IP network? This is perhaps no fixed number but there is some tolerance in it. However, I think we can agree that a packet corruption rate less than or equal to 10^-3 does not really cause grief. On the other hand, when the rate of successful transmissions is less than or equal to 10^-3, the network is quite unlikely to be used. 
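Detlef's question about acceptable corruption rates has a simple quantitative core: with independent bit errors, the packet corruption probability follows directly from the bit error rate and the packet length. A minimal sketch; the BER values and the 1500-byte packet size are illustrative assumptions, not GPRS measurements:

```python
# Sketch of the relation behind the 10^-3 figure discussed above: with
# independent bit errors at rate BER, a packet of L bytes is corrupted
# with probability 1 - (1 - BER)^(8L).  The BER values and the
# 1500-byte packet size below are illustrative assumptions only.

def packet_corruption_rate(ber, length_bytes):
    """Probability that at least one bit of the packet is in error."""
    return 1.0 - (1.0 - ber) ** (8 * length_bytes)

# A BER of 5e-8 keeps a 1500-byte packet under the 10^-3 "does not
# really cause grief" threshold; a BER of 1e-4 corrupts most packets.
ok = packet_corruption_rate(5e-8, 1500)    # ~6.0e-4
bad = packet_corruption_rate(1e-4, 1500)   # ~0.70
```

This is also why link layers quote packet corruption targets rather than raw BER: the BER a given target tolerates depends on the packet size the IP layer happens to use.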
So the truth is perhaps not out there but somewhere in between ;-) Detlef From detlef.bosau at web.de Fri Feb 23 07:14:16 2007 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 23 Feb 2007 16:14:16 +0100 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DEECCD.3090603@cmu.edu> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> <45DEC9EA.2010202@web.de> <45DEECCD.3090603@cmu.edu> Message-ID: <45DF04C8.3010308@web.de> Srinivasan Seshan wrote: > Actually, how smart is not too hard to estimate - assuming that we are > really just doing this for TCP. > > As far as packet loss rates are concerned, the target probably > shouldn't be something fixed like 10^-3. It really depends on the link > speed. What you probably want to do is ensure that the packet > corruption rate is an order of magnitude less than the drop rate due > to congestion. O.k. So, that would be an API problem / implementation problem: The congestion drop rate may be flow dependent. Hence, the question is whether the link layer should act in a flow-dependent manner here or whether "one size fits all". Particularly for GPRS the user is given the choice between two packet corruption rates IIRC: 10^-3 and 10^-9. And even that must be broken down into parameters for lower layers such as - coding scheme - maximum acceptable number of retransmissions in ARQ - etc. > You can get the congestion drop rate by estimating the average RTT for > flows and the raw speed of the link. Apply some TCP modeling magic and > you should be able to pull out a loss rate. Note that I assumed a > single flow here, which is the worst case. Multiple flows will raise > the congestion loss rate. So, if you can assume that you have more > flows, you can accommodate higher corruption loss rates. > O.k. However, my intention was somewhat different. I'm still with the myth / truth (perhaps, we should combine the words ;-)) that TCP experiences fundamental problems over wireless links. 
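Seshan's recipe above ("estimate the average RTT and the raw speed of the link, apply some TCP modeling magic") can be sketched concretely with the simple Mathis et al. steady-state model, throughput ~ (MSS/RTT) * sqrt(3/(2p)), solved for p. The link speed, RTT, and MSS below are hypothetical GPRS/UMTS-like figures, not measurements:

```python
# Sketch of Seshan's suggestion: pick a target link corruption rate one
# order of magnitude below the loss rate a single TCP flow would see
# from congestion alone.  Uses the simple Mathis et al. steady-state
# model  throughput ~= (MSS/RTT) * sqrt(3/(2p)),  solved for p.
# All numbers below are illustrative assumptions, not measured values.

def congestion_equivalent_loss_rate(link_bps, rtt_s, mss_bytes=1460):
    """Loss rate p at which one Mathis-model TCP flow fills the link."""
    mss_bits = mss_bytes * 8
    # link_bps = (mss_bits / rtt_s) * sqrt(3 / (2p))  =>  solve for p
    return 1.5 * (mss_bits / (link_bps * rtt_s)) ** 2

def target_corruption_rate(link_bps, rtt_s, mss_bytes=1460, margin=10.0):
    """Corruption rate one order of magnitude below the congestion rate."""
    return congestion_equivalent_loss_rate(link_bps, rtt_s, mss_bytes) / margin

# Example: a 384 kbit/s downlink with a 500 ms RTT (hypothetical figures)
p_cong = congestion_equivalent_loss_rate(384e3, 0.5)    # ~5.6e-3
p_target = target_corruption_rate(384e3, 0.5)           # ~5.6e-4
```

As Seshan notes, this is the single-flow worst case; more flows raise the congestion loss rate and so relax the corruption target.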
Just some hours ago I got a paper "Interaction between UMTS MAC Scheduling and TCP Flow Control Mechanisms" by Malkowski and Heier. One claim in this work is that the TCP flow control mechanism is too slow for a highly dynamic link. Hm. So, I learned a wireless link may change its rate perhaps a dozen times during one IP packet transmission or even more. So, what's the problem? I don't mind the link changing its rate a million times during one IP packet transmission, as long as the transmission time for the whole packet shows only little variation about some average value, or some slow drift in one direction or the other. Of course, pure observation of IP packet round trip times is much too coarse to estimate channel properties beginning at the 15th position after the decimal point. However, as long as this observation suffices to provide reasonable TCP throughput, I have no problem with that. So, from that point of view, we should debunk the religion "TCP does not work in mobile networks". There are several other myths from the same religion. Spurious timeouts are one myth - already debunked by Francesco Vacirca, Thomas Ziegler and Eduard Hasenleitner in 2005, "TCP Spurious Timeout estimation in an operational GPRS/UMTS network". It is said: "seek and you will find". So the authors sought - and did not find... ;-) Another myth could be ACK compression. Is it myth? Or truth? I don't know. So, basically I'm interested in IP packet service times in mobile wireless networks as they are perceived by the upper layers. If there is anything that should be hidden from upper layers, one could talk about how this can be achieved. However, if there is no need to hide anything from upper layers - we could even dismiss this poor dead horse and dismount. What makes me mad is: There is a huge number of papers - even the few I made hard copies of no longer fit into my apartment - which claim there would be terrible problems with TCP over mobile networks. 
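The spurious-timeout mechanism that Vacirca, Ziegler and Hasenleitner went looking for is at least easy to state concretely: with the standard retransmission-timer rules (RFC 2988, now RFC 6298), a delivery delay longer than the current RTO fires the timer even though nothing was lost. A small sketch; the RTT samples and the 3-second spike are hypothetical values chosen to mimic a stable 500 ms path:

```python
# Minimal sketch of how a delay spike produces a spurious timeout under
# the standard TCP retransmission-timer rules (RFC 2988 / RFC 6298):
# SRTT and RTTVAR are updated per RTT sample, RTO = SRTT + 4*RTTVAR,
# with a 1 s minimum.  The RTT samples below are hypothetical.

def rto_after_samples(samples, min_rto=1.0):
    """Return the RTO (seconds) after feeding the given RTT samples."""
    srtt = rttvar = rto = None
    for r in samples:
        if srtt is None:                 # first measurement
            srtt, rttvar = r, r / 2
        else:                            # EWMA update, alpha=1/8, beta=1/4
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - r)
            srtt = 0.875 * srtt + 0.125 * r
        rto = max(min_rto, srtt + 4 * rttvar)
    return rto

rto = rto_after_samples([0.5] * 20)   # stable path: RTO sits at the 1 s floor
spike = 3.0                           # sudden 3 s delivery delay
spurious = spike > rto                # timer fires although nothing was lost
```

The sketch also shows why a *stable* high-delay link is harmless: only a spike well beyond the converged RTO, not a large mean delay, produces the spurious retransmission.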
And I'm absolutely not convinced that even one of these claims will hold in reality. Perhaps it's my fault that I don't see it, but it does not matter which problem I have a closer look at - each one I looked at during the last years broke into pieces. It is however always much more difficult to prove the absence of a problem than its existence. And an author would hardly gain anything from it. If there is a problem, one can present a solution. If there is no problem - there is no need for a solution. And who would ever submit a paper: "We did not find a problem, so we don't present a solution"? Detlef From dpreed at reed.com Fri Feb 23 09:02:56 2007 From: dpreed at reed.com (David P. Reed) Date: Fri, 23 Feb 2007 12:02:56 -0500 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DEECCD.3090603@cmu.edu> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> <45DEC9EA.2010202@web.de> <45DEECCD.3090603@cmu.edu> Message-ID: <45DF1E40.9060907@reed.com> It's important to remember that TCP was never intended to be optimal in any scenario. E.g. it wasn't supposed to be a protocol that met the "Shannon Limit" for the weird and wonderful "catenet channel" that is created when one tries to unify all networks on a "best efforts" basis. Of course, there will always be theorists who try to make silk purses out of decomposing sow's ears, and produce some wonderful algorithm that works based on some theoretical assumptions they can write down and convince a whole series of conferences (and DARPA) are the *one and only true way that networks work*. So I wouldn't spend a lot of time trying to "control the packet loss rate" in the GPRS channel to optimize TCP, as you seem to be suggesting here. A simple rule of thumb is that TCP likes small bandwidth x end-to-end delay, and it likes low error-caused packet loss rates. 
Thus, retransmission more than once on a link ( efforts that are >100% coding overhead) is probably too much effort to deal with link errors. It's just as reasonable to use FEC on the link, which has an overhead far less than 100%. It's sad that 802.11 frequently retries up to 256 times, but that's not comparable, exactly since it isn't bit errors, but arbitration errors that cause that. Srinivasan Seshan wrote: > Actually, how smart is not too hard to estimate - assuming that we are > really just doing this for TCP. > > As far as packet loss rates are concerned, the target probably > shouldn't be something fixed like 10^-3. It really depends on the link > speed. What you want to probably do is ensure that the packet > corruption rate is an order of magnitude less than the drop rate due > to congestion. You can get the congestion drop rate by estimating the > average RTT for flows and the raw speed of the link. Apply some TCP > modeling magic and you should be able to pull out a loss rate. Note > that I assumed a single flow here, which is the worst case. Multiple > flows will raise the congestion loss rate. So, if you can assume that > you have more flows, you can accommodate higher corruption loss rates. > > Srini > > Detlef Bosau wrote: >> David P. Reed wrote: >> >>> Once the folks who ran IP networks over frame relay realized that >>> you should never provision reliable delivery if you were running IP, >>> this stopped happening. >>> >>> So the story is that GPRS can, if it tries to provide QoS in the >>> form of never dropping a frame, screw up TCP. >>> >>> But this has nothing to do with mobility per se. It has to do with >>> GPRS, just as the old problems had to do with Frame Relay, not with >>> high speed data. The architecture of the GPRS network is too smart. >> >> >> How smart is "too smart"? >> And how much smartness is necessary? 
>> >> Some authors note that the IP packet delivery time in mobile networks >> is in fact a random variable, because the information rate in >> wireless networs sometimes changes several times _within_ one packet. >> The reasons are manifold and as a computer scientist, I have only a >> rough understanding of some of the relevant issues here. >> >> To my understanding, the basic question is: Which packet corrution >> rate can be accepted by an IP network? >> This is perhaps no fixed number but there is some tolerance in it. >> However, I think we can agree that a packet corruption rate less or >> equal to 10^-3 does not really cause grief. On the other hand, when >> the rate of successful transmussions is less or equal to 10^-3, the >> network is quite unlikely to be used. >> >> So the truth is perhaps not out there but somewhere in between ;-) >> >> Detlef > > From touch at ISI.EDU Fri Feb 23 10:49:20 2007 From: touch at ISI.EDU (Joe Touch) Date: Fri, 23 Feb 2007 10:49:20 -0800 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DF1E40.9060907@reed.com> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> <45DEC9EA.2010202@web.de> <45DEECCD.3090603@cmu.edu> <45DF1E40.9060907@reed.com> Message-ID: <45DF3730.4090709@isi.edu> David P. Reed wrote: > It's important to remember that TCP was never intended to be optimal in > any scenario. Well put. The only thing TCP is designed to do is make progress when progress can be made. Joe -- ---------------------------------------- Joe Touch Sr. Network Engineer, USAF TSAT Space Segment -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070223/cca81aac/signature.bin From L.Wood at surrey.ac.uk Fri Feb 23 12:52:55 2007 From: L.Wood at surrey.ac.uk (Lloyd Wood) Date: Fri, 23 Feb 2007 20:52:55 +0000 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DF1E40.9060907@reed.com> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> <45DEC9EA.2010202@web.de> <45DEECCD.3090603@cmu.edu> <45DF1E40.9060907@reed.com> Message-ID: <200702232053.UAA16069@cisco.com> At Friday 23/02/2007 12:02 -0500, David P. Reed wrote: >It's important to remember that TCP was never intended to be optimal in any scenario. Great. Perhaps we can we stop obsessing over TCP friendliness and 'optimal fairness' now, as unachievable goals. From yau at cs.purdue.edu Fri Feb 23 19:20:31 2007 From: yau at cs.purdue.edu (David K Y Yau) Date: Fri, 23 Feb 2007 22:20:31 -0500 (EST) Subject: [e2e] ICNP 2007 CFP Message-ID: Dear colleague: Our sincere apologies if you receive multiple copies of this announcement. -- ICNP 2007 organizers ICNP 2007 CALL FOR PAPER 15th IEEE International Conference on Network Protocols Beijing, CHINA, October 16-19, 2007 ICNP 2007, the fifteenth IEEE International Conference on Network Protocols, is a conference covering all aspects of network protocols, including design, analysis, specification, verification, implementation, and performance. ICNP 2007 will be held in Beijing, CHINA, October 16-19, 2007. Papers with significant research contributions to the field of network protocols are solicited for submission. Papers cannot be previously published nor under review by another conference or journal. Topics of interest include, but are not limited to: 1. Protocol testing, analysis, design and implementation 2. Measurement and monitoring of protocols 3. 
Protocols designed for specific functions, such as: routing, flow and congestion control, QoS, signaling, security, network management, or resiliency 4. Protocols designed for specific networks, such as: wireless and mobile networks, ad hoc and sensor networks, virtual networks, and ubiquitous networks Quality papers dealing specifically with protocols will be preferred over quality papers of a more general networking nature. ICNP will select an accepted full paper for the best paper award. IMPORTANT DATES Paper registration: April 8, 2007 Paper submission: April 15, 2007 Notification of acceptance: June 29, 2007 Camera ready version: July 20, 2007 For paper submission, please visit http://icnp2007.edu.cn/ ORGANIZING COMMITTEES EXECUTIVE COMMITTEE: David Lee, Ohio State University, USA (chair) Mostafa Ammar, Georgia Tech, USA Ken Calvert, University of Kentucky, USA Teruo Higashino, Osaka University, Japan Raymond Miller, University of Maryland, USA GENERAL CHAIR: Jianping Wu, Tsinghua University, CHINA John Lui, Chinese University of Hong Kong, CHINA VICE GENERAL CHAIR: Xia Yin,Tsinghua University, CHINA PROGRAM CHAIR: Kenneth L. Calvert, University of Kentucky, USA David Yau, Purdue University, USA WORKSHOP CHAIR: TBA LOCAL ARRANGEMENT CHAIR: Youjian Zhao,Tsinghua University, CHINA PUBLICATION CHAIR: Ke Xu,Tsinghua University, CHINA PUBLICITY CHAIR: Mingwei Xu,Tsinghua University, CHINA STEERING COMMITTEE: Simon Lam, University of Texas, USA (co-chair) David Lee, Ohio State University, USA (co-chair) Mostafa Ammar, Georgia Tech, USA Ken Calvert, University of Kentucky, USA Mohamed Gouda, University of Texas, USA Teruo Higashino, Osaka University, Japan Mike T. 
Liu, Ohio State University, USA Raymond Miller, University of Maryland, USA Krishan Sabnani, Bell Labs, USA PROGRAM COMMITTEE: See http://icnp2007.edu.cn/ FURTHER INFORMATION: Web site: http://icnp2007.edu.cn/ http://www.ieee-icnp.org/ E-mail: icnp2007 at tsinghua.edu.cn TEL: +86 10 62788109 FAX: +86 10 62771527 From francesco at net.infocom.uniroma1.it Mon Feb 26 02:04:24 2007 From: francesco at net.infocom.uniroma1.it (Francesco Vacirca) Date: Mon, 26 Feb 2007 11:04:24 +0100 Subject: [e2e] Delays / service times / delivery times in wireless networks. In-Reply-To: <45DF1E40.9060907@reed.com> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> <45DEC9EA.2010202@web.de> <45DEECCD.3090603@cmu.edu> <45DF1E40.9060907@reed.com> Message-ID: <45E2B0A8.1050102@net.infocom.uniroma1.it> Once again I don't agree with your comparison between TCP over GPRS and 802.11: in the first scenario packet losses are due to causes independent of TCP, whereas in the second case, packet losses are mainly due to collisions, i.e. congestion... not retransmitting here (or delaying retransmissions) can benefit other TCP flows. In the GPRS case (I'm thinking about a GPRS dedicated channel), stopping retransmission of a packet does not benefit TCP (I suggest using a large number for the maximum retry limit; that is the best adaptive number you can guess). Adding a fixed FEC adds a fixed overhead that decreases the bandwidth seen by the mobile terminals (often without any good reason); adding an adaptive FEC can be a good solution, but if a packet is lost on the channel (despite the FEC overhead), it has to be retransmitted by the link layer. Francesco David P. Reed wrote: > It's important to remember that TCP was never intended to be optimal in > any scenario. E.g. it wasn't supposed to be a protocol that met the > "Shannon Limit" for the weird and wonderful "catenet channel" that is > created when one tries to unify all networks on a "best efforts" > basis. 
> > Of course, there will always be theorists who try to make silk purses > out of decomposing sow's ears, and produce some wonderful algorithm that > works based on some theoretical assumptions they can write down and > convince a whole series of conferences (and DARPA) are the *one and only > true way that networks work*. > > So I wouldn't spend a lot of time trying to "control the packet loss > rate" in the GPRS channel to optimize TCP, as you seem to be suggesting > here. > > A simple rule of thumb is that TCP likes small bandwidth x end-to-end > delay, and it likes low error-caused packet loss rates. Thus, > retransmission more than once on a link ( efforts that are >100% coding > overhead) is probably too much effort to deal with link errors. It's > just as reasonable to use FEC on the link, which has an overhead far > less than 100%. > > It's sad that 802.11 frequently retries up to 256 times, but that's not > comparable, exactly since it isn't bit errors, but arbitration errors > that cause that. > > Srinivasan Seshan wrote: >> Actually, how smart is not too hard to estimate - assuming that we are >> really just doing this for TCP. >> >> As far as packet loss rates are concerned, the target probably >> shouldn't be something fixed like 10^-3. It really depends on the link >> speed. What you want to probably do is ensure that the packet >> corruption rate is an order of magnitude less than the drop rate due >> to congestion. You can get the congestion drop rate by estimating the >> average RTT for flows and the raw speed of the link. Apply some TCP >> modeling magic and you should be able to pull out a loss rate. Note >> that I assumed a single flow here, which is the worst case. Multiple >> flows will raise the congestion loss rate. So, if you can assume that >> you have more flows, you can accommodate higher corruption loss rates. >> >> Srini >> >> Detlef Bosau wrote: >>> David P. 
Reed wrote: >>> >>>> Once the folks who ran IP networks over frame relay realized that >>>> you should never provision reliable delivery if you were running IP, >>>> this stopped happening. >>>> >>>> So the story is that GPRS can, if it tries to provide QoS in the >>>> form of never dropping a frame, screw up TCP. >>>> >>>> But this has nothing to do with mobility per se. It has to do with >>>> GPRS, just as the old problems had to do with Frame Relay, not with >>>> high speed data. The architecture of the GPRS network is too smart. >>> >>> >>> How smart is "too smart"? >>> And how much smartness is necessary? >>> >>> Some authors note that the IP packet delivery time in mobile networks >>> is in fact a random variable, because the information rate in >>> wireless networs sometimes changes several times _within_ one packet. >>> The reasons are manifold and as a computer scientist, I have only a >>> rough understanding of some of the relevant issues here. >>> >>> To my understanding, the basic question is: Which packet corrution >>> rate can be accepted by an IP network? >>> This is perhaps no fixed number but there is some tolerance in it. >>> However, I think we can agree that a packet corruption rate less or >>> equal to 10^-3 does not really cause grief. On the other hand, when >>> the rate of successful transmussions is less or equal to 10^-3, the >>> network is quite unlikely to be used. >>> >>> So the truth is perhaps not out there but somewhere in between ;-) >>> >>> Detlef >> >> > From dpreed at reed.com Mon Feb 26 09:27:35 2007 From: dpreed at reed.com (David P. Reed) Date: Mon, 26 Feb 2007 12:27:35 -0500 Subject: [e2e] Delays / service times / delivery times in wireless networks. 
In-Reply-To: <45E2B0A8.1050102@net.infocom.uniroma1.it> References: <45DC27E4.4010901@web.de> <45DC4CDD.7040500@reed.com> <45DEC9EA.2010202@web.de> <45DEECCD.3090603@cmu.edu> <45DF1E40.9060907@reed.com> <45E2B0A8.1050102@net.infocom.uniroma1.it> Message-ID: <45E31887.9040402@reed.com> No one can reason, a priori, why packet losses happen in either GPRS or 802.11. In many 802.11 scenarios, packets are lost because of noise, hidden terminal problems, etc. This is not congestion, per se. The only parallel is that if you really slow down all transmissions to very low rates, hidden terminal (arbitration failure) problems do get mitigated (probability of collision eventually drops if packets are really rare), but that is not a Little's Lemma queueing delay of the sort that TCP-in-the-theory-of-VJ manages through its rate control. But in fact, GPRS and 802.11 lose packets for all sorts of reasons, because the medium is not pristine and point-to-point. It is noisy (very much so), variable, and so forth. My point was that adding almost-unbounded queues in the path (which both technologies have tendencies to do) just makes things worse. Francesco Vacirca wrote: > Once more, I don't agree with your comparison between TCP over GPRS and > 802.11: > > in the first scenario packet losses are due to causes independent of > TCP, whereas in the second case, packet losses are mainly due to > collisions, i.e. congestion... not retransmitting here (or delaying > retransmissions) can benefit other TCP flows. > > In the GPRS case (I'm thinking about a GPRS dedicated channel), stopping the > retransmission of a packet does not benefit TCP (I suggest using a > large number for the maximum retry limit; that is the best adaptive > number you can guess).
> Adding a fixed FEC adds a fixed overhead that decreases the bandwidth > seen by the mobile terminals (often without any good reason); > adding an adaptive FEC can be a good solution, but if a packet is lost > on the channel (despite the FEC overhead), it has to be > retransmitted by the link layer. > Francesco > > > > > > > David P. Reed wrote: >> It's important to remember that TCP was never intended to be optimal >> in any scenario. E.g. it wasn't supposed to be a protocol that met >> the "Shannon Limit" for the weird and wonderful "catenet channel" >> that is created when one tries to unify all networks on a "best >> efforts" basis. >> >> Of course, there will always be theorists who try to make silk purses >> out of decomposing sow's ears, and produce some wonderful algorithm >> that works based on some theoretical assumptions they can write down >> and convince a whole series of conferences (and DARPA) are the *one >> and only true way that networks work*. >> >> So I wouldn't spend a lot of time trying to "control the packet loss >> rate" in the GPRS channel to optimize TCP, as you seem to be >> suggesting here. >> >> A simple rule of thumb is that TCP likes small bandwidth x end-to-end >> delay, and it likes low error-caused packet loss rates. Thus, >> retransmission more than once on a link (efforts that are >100% >> coding overhead) is probably too much effort to deal with link >> errors. It's just as reasonable to use FEC on the link, which has >> an overhead far less than 100%. >> >> It's sad that 802.11 frequently retries up to 256 times, but that's >> not exactly comparable, since it isn't bit errors, but arbitration >> errors that cause that. >> >> Srinivasan Seshan wrote: >>> Actually, "how smart" is not too hard to estimate - assuming that we >>> are really just doing this for TCP. >>> >>> As far as packet loss rates are concerned, the target probably >>> shouldn't be something fixed like 10^-3.
It really depends on the >>> link speed. What you probably want to do is ensure that the packet >>> corruption rate is an order of magnitude less than the drop rate due >>> to congestion. You can get the congestion drop rate by estimating >>> the average RTT for flows and the raw speed of the link. Apply some >>> TCP modeling magic and you should be able to pull out a loss rate. >>> Note that I assumed a single flow here, which is the worst case. >>> Multiple flows will raise the congestion loss rate. So, if you can >>> assume that you have more flows, you can accommodate higher >>> corruption loss rates. >>> >>> Srini >>> >>> Detlef Bosau wrote: >>>> David P. Reed wrote: >>>> >>>>> Once the folks who ran IP networks over frame relay realized that >>>>> you should never provision reliable delivery if you were running >>>>> IP, this stopped happening. >>>>> >>>>> So the story is that GPRS can, if it tries to provide QoS in the >>>>> form of never dropping a frame, screw up TCP. >>>>> >>>>> But this has nothing to do with mobility per se. It has to do >>>>> with GPRS, just as the old problems had to do with Frame Relay, >>>>> not with high speed data. The architecture of the GPRS network >>>>> is too smart. >>>> >>>> >>>> How smart is "too smart"? >>>> And how much smartness is necessary? >>>> >>>> Some authors note that the IP packet delivery time in mobile >>>> networks is in fact a random variable, because the information rate >>>> in wireless networks sometimes changes several times _within_ one >>>> packet. The reasons are manifold and as a computer scientist, I >>>> have only a rough understanding of some of the relevant issues here. >>>> >>>> To my understanding, the basic question is: Which packet corruption >>>> rate can be accepted by an IP network? >>>> This is perhaps no fixed number but there is some tolerance in it. >>>> However, I think we can agree that a packet corruption rate less than or >>>> equal to 10^-3 does not really cause grief.
On the other hand, >>>> when the rate of successful transmissions is less than or equal to >>>> 10^-3, the network is quite unlikely to be used. >>>> >>>> So the truth is perhaps not out there but somewhere in between ;-) >>>> >>>> Detlef >>> >>> >> >
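[Editor's note] Seshan's recipe above (estimate the congestion drop rate from the average RTT and the raw link speed, then keep the link corruption rate an order of magnitude below it) can be sketched with the well-known Mathis et al. square-root throughput model, rate ≈ (MSS/RTT)·sqrt(3/2)/sqrt(p). Solving for p at the point where a single flow exactly fills the link gives the congestion loss rate he refers to. The link speeds, RTTs, and MSS below are illustrative assumptions, not numbers from the thread, and the model is only a rough guide at high loss rates.

```python
def congestion_loss_rate(link_bps, rtt_s, mss_bytes=1460):
    """Loss rate at which a single Mathis-model TCP flow exactly fills
    the link.

    Mathis et al. model: rate ~= (MSS/RTT) * sqrt(3/2) / sqrt(p).
    Setting rate = link_bps and solving for p gives
    p = 1.5 * (MSS_bits / (link_bps * rtt_s)) ** 2.
    """
    bdp_bits = link_bps * rtt_s                    # bandwidth-delay product
    return 1.5 * (mss_bytes * 8 / bdp_bits) ** 2

def target_corruption_rate(link_bps, rtt_s, mss_bytes=1460, margin=10.0):
    """Seshan's rule: corruption an order of magnitude below congestion loss."""
    return congestion_loss_rate(link_bps, rtt_s, mss_bytes) / margin

# Illustrative (assumed) numbers: a slow GPRS-like link and a faster
# 802.11-like link.  The slower link tolerates a much higher corruption
# rate, matching Seshan's point that the target depends on link speed.
for name, bps, rtt in (("GPRS-like", 40_000, 0.6), ("802.11-like", 2_000_000, 0.1)):
    print(f"{name}: congestion loss {congestion_loss_rate(bps, rtt):.3g}, "
          f"corruption target {target_corruption_rate(bps, rtt):.3g}")
```

Note that the single-flow assumption is the worst case, exactly as Seshan says: more flows each need a smaller share of the bandwidth-delay product, raising the tolerable loss rate further.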
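[Editor's note] Reed's rule of thumb (retransmitting more than once on a link is >100% coding overhead, whereas link FEC costs far less than 100%) can be made concrete with expected values. This is a sketch under the usual simplifying assumption of independent per-transmission losses; the loss probabilities are illustrative, and only the 1/3 and 4/5 code rates come from the thread.

```python
def arq_overhead(q):
    """Expected redundant transmissions per delivered frame under
    persistent link-layer ARQ with independent per-transmission loss
    probability q: E[transmissions] = 1/(1-q), so overhead = q/(1-q)."""
    if not 0.0 <= q < 1.0:
        raise ValueError("loss probability must be in [0, 1)")
    return q / (1.0 - q)

def fec_overhead(code_rate):
    """Fixed redundancy overhead of an FEC code of the given rate,
    e.g. the rate-1/3 RLC header coding mentioned in the thread costs
    (1 - 1/3) / (1/3) = 200% overhead on the protected bits."""
    return (1.0 - code_rate) / code_rate

# ARQ's *expected* overhead exceeds 100% only once more than half of
# all transmissions are lost (q > 0.5); in that regime a moderate-rate
# FEC code is the cheaper way to handle link errors, which is Reed's point.
for q in (0.01, 0.1, 0.5):
    print(f"q={q}: ARQ overhead {arq_overhead(q):.0%}")
for r in (4 / 5, 1 / 3):
    print(f"rate-{r:.2g} FEC overhead {fec_overhead(r):.0%}")
```

The comparison also shows why a fixed heavy FEC wastes bandwidth on a clean link, as Francesco notes: its overhead is paid whether or not errors occur, while ARQ overhead scales with the actual loss rate.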