From fu at cs.uni-goettingen.de Sun Feb 1 16:45:52 2009 From: fu at cs.uni-goettingen.de (Xiaoming Fu) Date: Mon, 02 Feb 2009 01:45:52 +0100 Subject: [e2e] Call for Papers: ICNP 2009 Message-ID: <49864240.10105@cs.uni-goettingen.de> [Sincere apologies if you receive multiple copies of this CFP.] CALL FOR PAPERS ICNP 2009 17th IEEE International Conference on Network Protocols October 13-16, 2009 Princeton, New Jersey, USA http://www.ieee-icnp.org/2009 ICNP, the IEEE International Conference on Network Protocols is the premier conference covering all aspects of network protocols. ICNP will be held in its seventeenth year at Princeton, New Jersey from October 13-16th, 2009. Papers with significant research contributions on network protocols are solicited for submission. Papers related to all aspects of network protocols including design, implementation, analysis, performance and specification are relevant. Papers must not be previously published nor under review in any other conference or journal. Only original papers that have not been published previously or submitted for publication and under review by another conference or journal elsewhere can be submitted. Papers containing plagiarized material will be subject to the IEEE Plagiarism policy and will be rejected without review. Topics of interest include, but are not limited to: ? Protocol design, implementation, testing and analysis. ? Measurement studies of protocol performance. ? Protocols for wireless ad hoc and sensor networks. ? Protocols for peer-to-peer networks. ? Protocols for specific functions such as routing, flow/congestion control, network management. ? Protocols for security, survivability and fault-tolerance. Papers must be directly relevant to protocols. Papers on general networking where protocols are only a secondary focus will be considered only if they are of exceptionally high quality. Papers will be subject to a double-blind review process. The identity of authors and referees will not be revealed to each other. To ensure blind reviewing, the authors' names and affiliations should not appear in the paper. Bibliographic references should protect the authors' anonymity. Papers should adhere to the IEEE format and should not be more than 10 pages. The font size should not be smaller than 10pt. Further instructions will be posted on the ICNP website (http://www.ieee-icnp.org/2009). Authors of accepted papers will be expected to register for the conference and the paper must be presented at the conference in order for the paper to appear in the conference proceedings. Important Dates: Abstract Submission: April 10, 2009 8:59 P.M. EDT Paper Submission: April 17, 2009 8:59 P.M. EDT (Hard Deadline) Notification of Acceptance: July 28, 2009 Camera Ready Version Due: September 4, 2009 Organizing Committee: Steering Committee: David Lee (Chair) Ohio State University, USA Ken Calvert (Vice Chair) University of Kentucky, USA Kevin Almeroth University of California, Santa Barbara Mostafa Ammar Georgia Tech, USA Sonia Fahmy Purdue University, USA Mohamed Gouda University of Texas, Austin, USA Teruo Higashino Osaka University, Japan Krishan Sabnani Bell Labs Research, USA Advisory Board: Simon Lam University of Texas, Austin, USA Mike T. Liu Ohio State University, USA Raymond Miller University of Maryland, USA General Chairs: K.K. Ramakrishnan AT & T Labs Research, USA Henning Schulzrinne Columbia University, USA Program Chairs: Timothy Griffin University of Cambridge, U.K. Srikanth V. Krishnamurthy University of California, Riverside, USA For more details, visit the conference webpage: http://www.ieee-icnp.org/2009 Regards, Xiaoming Fu IEEE ICNP'09 Publicity Chair From tang at cs.montana.edu Thu Feb 5 20:52:35 2009 From: tang at cs.montana.edu (Jian (Neil) Tang) Date: Thu, 5 Feb 2009 21:52:35 -0700 Subject: [e2e] =?utf-8?q?CFP=3A_IEEE_IWQoS_2009_=28Submission_deadline_ext?= =?utf-8?q?ended_to_Feb_25=2C_2009=29?= Message-ID: <200902052152347964798@cs.montana.edu> We apologize if you receive this announcement multiple times ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ IEEE IWQoS 2009 17th IEEE International Workshop on Quality of Service July 13-15, 2009 Charleston, South Carolina http://iwqos09.cse.sc.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Sixteen years since the inauguration of IWQoS, the workshop has become the premier forum to present original, novel ideas on all research subjects related to quality of service provisioning at end systems or in a networked environment. Recent technological advances in broadband networks, peer-to-peer networks, wireless networks, grid computing, and Internet-based social networks have led to new research challenges, such as providing QoS support for multimedia applications on the future Internet that seamlessly integrates wired, wireless, and overlay networks. Building on the previous success, the objective of this workshop is to bring together researchers, developers, and practitioners working in this area to discuss recent and innovative results, and to identify future directions. The scope of the workshop covers all aspects of QoS research, including related issues such as availability, reliability, security, pricing, resource management, and performance guarantees. Topics of interest include (but are not limited to): * QoS on the Internet * QoS in distributed systems, including grid computing systems * QoS in wireless ad hoc, mesh, and sensor networks * QoS in operating system design * QoS support in middleware * QoS for web services and storage systems * QoS for overlay networks * System dependability, availability, resilience, and robustness * Security and privacy as QoS parameters * Adaptive QoS in a dynamic environment * QoS evaluation metrics and methodologies * QoS analysis and modeling * QoS pricing and billing * QoS architectures and protocols * QoS routing algorithms * Programmability and language features supporting QoS * Rationality, incentive, microeconomics, and self-interest in decentralized networks * QoS in business processes, workflows * Policy-based QoS management * Last-mile QoS at wireless edge * QoS assurance under DoS or DoQ (denial of quality) attacks * QoS design for the future Internet. IWQoS invites submission of manuscripts with original research results that have not been previously published or currently under review by another conference or journal. Submissions will be judged based on originality, significance, interest, clarity, relevance, and correctness. Paper submissions should be no longer than 9 single-spaced, double-column pages with reasonable margins and font sizes of 10 or larger. All accepted papers, will be included in the conference proceedings. At least one of the authors of each accepted paper must present the paper at IWQoS 2009. IWQoS aims at rapid dissemination of research results. For fast turnaround, a short review and publication cycle is designed, with the submission deadline as close to the workshop as the publisher allows. The workshop is a single-track forum spanning two and a half days. Award will be given at the workshop to the best paper. ------------------------------------------------------------------- Important Dates ------------------------------------------------------------------- Paper submission deadline: February 25, 2009 Notification of acceptance: April 6, 2009 Camera-ready papers due: May 4, 2009 Workshop dates: July 13-15, 2009 ------------------------------------------------------------------- Organizing Committees ------------------------------------------------------------------- * Steering Committee: Yan Chen, Northwestern University, USA Chen-Nee Chuah, University of California-Davis, USA Georgios Karagiannis, University of Twente, Netherlands Gunnar Karlsson, Royal Institute of Technology, Sweden Yang Richard Yang, Yale University, USA David Yau, Purdue University, USA * Program Co-Chairs Shigang Chen, University of Florida, Gainesville, FL, USA Srihari Nelakuditi, University of South Carolina, Columbia, SC, USA * Finance Chair Weiyi Zhang, North Dakota State University, Fargo, ND, USA * Publicity Chair Jian Tang, Montana State University, Bozeman, MT, USA * Local Chair Kuang-Ching Wang, Clemson University, Clemson, SC, USA * Webmaster Maliek Mcknight, University of South Carolina, Columbia, SC, USA From guylzodi at gmail.com Mon Feb 9 00:57:20 2009 From: guylzodi at gmail.com (Guy - Alain Lusilao-Zodi) Date: Mon, 9 Feb 2009 10:57:20 +0200 Subject: [e2e] packet pair code Message-ID: <55c1a2ac0902090057j12fb239dyd45fe153bc8a04c6@mail.gmail.com> Hi all, I am looking for the c/c++ code of the packet pair techniques? Is there anyone who can assist? Regards -- ---------------------------------------------------------- G.-A. Lusilao-Zodi voice : 0216554019 PhD in Video streaming cell :082687993 Communication group guylzodi at gmail.com university of cape-Town ----------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090209/2812aa81/attachment.html From detlef.bosau at web.de Mon Feb 9 05:09:54 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 09 Feb 2009 14:09:54 +0100 Subject: [e2e] TCP Loss Differentiation Message-ID: <49902B22.80403@web.de> Hi. Some years ago, the issue of packet loss differentation in TCP was a big issue. Does somebody happen to know the state of the art in this area? I'm particularly interested in those cases were we do _not_ have a reliable knowledge about the loss rate on a link. (So, particularly the CETEN approach by Allman and Eddy cannot be easily applied.) Thanks. Detlef -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090209/a852efb8/smime.bin From dpreed at reed.com Mon Feb 9 08:24:53 2009 From: dpreed at reed.com (David P. Reed) Date: Mon, 09 Feb 2009 11:24:53 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49902B22.80403@web.de> References: <49902B22.80403@web.de> Message-ID: <499058D5.9040807@reed.com> Before going too far in this direction, one should note that unicast traffic on layer 2 transports commonly used in practice for Internet transport has negligible loss rates, even on wireless networks such as 802.11. The problem of differentiation arises when attempting to elide layer 2 functionality and run "TCP/IP on bare PHY". Otherwise "link loss rate" is a concept without much reality at layer 3. We don't run TCP/IP on bare PHY layers. We run it on layer 2 protocol, over PHY layers, which protocols always have high reliability today. Some multicast layer 3 protocols run on unreliable layer 2 multicast protocols (such as 802.11 multicast), but TCP/IP never uses multicast. Layer 3 losses are nearly always the result of *only* 2 very different phenomena: 1) buffer overflow drops due to router/switch congestion queue management or 2) layer 2 breaks in connectivity. Thinking about "link loss rates" is a nice academic math modeling exercise for a world that doesn't exist, but perhaps the practical modeling differentiation should focus on these two phenomena, rather than focusing on "link loss rates". The "connectivity break" case (which shows up in 802.11 when the NIC retransmits some number of times - 255?) doesn't have very good statistical models, certainly not the kind of models that can be baked into TCP's congestion/rate control algorithms. And that model is not likely to be poisson, or any distribution easily characterized by a "rate parameter". Detlef Bosau wrote: > Hi. > > Some years ago, the issue of packet loss differentation in TCP was a > big issue. Does somebody happen to know the state of the art in this > area? > > I'm particularly interested in those cases were we do _not_ have a > reliable knowledge about the loss rate on a link. (So, particularly > the CETEN > approach by Allman and Eddy cannot be easily applied.) > > Thanks. > > Detlef > From fahad.dogar at gmail.com Mon Feb 9 09:43:37 2009 From: fahad.dogar at gmail.com (Fahad Dogar) Date: Mon, 9 Feb 2009 12:43:37 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <499058D5.9040807@reed.com> References: <49902B22.80403@web.de> <499058D5.9040807@reed.com> Message-ID: On Mon, Feb 9, 2009 at 11:24 AM, David P. Reed wrote: > Before going too far in this direction, one should note that unicast > traffic on layer 2 transports commonly used in practice for Internet > transport has negligible loss rates, even on wireless networks such as > 802.11. > I guess you are restricting yourself to 'well behaved' 802.11 settings. Multi-hop networks (with outdoor links) and mobility scenarios (such as wifi from moving cars) do experience losses even with link layer reliability and no loss of connection. > The problem of differentiation arises when attempting to elide layer 2 > functionality and run "TCP/IP on bare PHY". Otherwise "link loss rate" is a > concept without much reality at layer 3. We don't run TCP/IP on bare PHY > layers. We run it on layer 2 protocol, over PHY layers, which protocols > always have high reliability today. Some multicast layer 3 protocols run > on unreliable layer 2 multicast protocols (such as 802.11 multicast), but > TCP/IP never uses multicast. > > > Layer 3 losses are nearly always the result of *only* 2 very different > phenomena: 1) buffer overflow drops due to router/switch congestion queue > management or 2) layer 2 breaks in connectivity. > > Thinking about "link loss rates" is a nice academic math modeling exercise > for a world that doesn't exist, but perhaps the practical modeling > differentiation should focus on these two phenomena, rather than focusing > on "link loss rates". The "connectivity break" case (which shows up in > 802.11 when the NIC retransmits some number of times - 255?) doesn't have > very good statistical models, certainly not the kind of models that can be > baked into TCP's congestion/rate control algorithms. And that model is not > likely to be poisson, or any distribution easily characterized by a "rate > parameter" Fahad > > > Detlef Bosau wrote: > >> Hi. >> >> Some years ago, the issue of packet loss differentation in TCP was a big >> issue. Does somebody happen to know the state of the art in this area? >> >> I'm particularly interested in those cases were we do _not_ have a >> reliable knowledge about the loss rate on a link. (So, particularly the >> CETEN >> approach by Allman and Eddy cannot be easily applied.) >> >> Thanks. >> >> Detlef >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090209/953e264a/attachment.html From dpreed at reed.com Mon Feb 9 11:29:21 2009 From: dpreed at reed.com (David P. Reed) Date: Mon, 09 Feb 2009 14:29:21 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <49902B22.80403@web.de> <499058D5.9040807@reed.com> Message-ID: <49908411.1000800@reed.com> Fahad Dogar wrote: > I guess you are restricting yourself to 'well behaved' 802.11 > settings. Multi-hop networks (with outdoor links) and mobility > scenarios (such as wifi from moving cars) do experience losses even > with link layer reliability and no loss of connection. > I was (perhaps not very clearly) including multihop and mobility in "loss of connection" cases. I meant loss of PHY or layer 2 connectivity - not "session connectivity." Those situations do, as you suggest, drop packets when "link layer reliability" fails - but I would call the cause of that loss process a "loss of connectivity" however transient or healable. My main point was that these loss processes are not characterizable by a "link loss rate". They are not like Poisson losses at all, which are statistically a single parameter (called "rate"), memoryless distribution. They are causal, correlated, memory-full processes. And more importantly, one end or the other of the relevant link experiences a directly sensed "loss of connectivity" event. Thus my point: one SHOULD NOT model practical TCP/IP congestion/flow control based on an assumption of "links" with "loss rates" as if they were Poisson loss processes. One should instead focus on modeling loss processes that come from congestion and from loss of link connectivity or route changes arising from responses to connectivity loss in the appropriate ways that reflect practical reality. From rik at rikwade.com Mon Feb 9 12:57:52 2009 From: rik at rikwade.com (Richard Wade) Date: Tue, 10 Feb 2009 09:57:52 +1300 Subject: [e2e] packet pair code In-Reply-To: <55c1a2ac0902090057j12fb239dyd45fe153bc8a04c6@mail.gmail.com> References: <55c1a2ac0902090057j12fb239dyd45fe153bc8a04c6@mail.gmail.com> Message-ID: On Mon, Feb 9, 2009 at 9:57 PM, Guy - Alain Lusilao-Zodi wrote: > Hi all, > > I am looking for the c/c++ code of the packet pair techniques? Is there > anyone who can assist? > Probably the first port of call when looking at Packet Pair for bottleneck bandwidth probing is Keshav's work ( http://www.cs.cornell.edu/skeshav/papers.html ). This will give you an overview of the technique, its assumptions, and some areas which may require further study. >From memory, there is some code available for the NS2 simulator which implements Keshav's algorithms in a basic form. I don't know whether this is part of the current NS2 distribution, but it's easy enough to search through the distribution. Let me know if you have problems finding it. I also did some work with Packet Pair algorithms and implemented some code in NS2. I'll dig out the code and send it to you. In the meantime, the papers and thesis on this work are available at http://www.rikwade.com. -- r -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090210/33396182/attachment.html From jsj at ieee.org Mon Feb 9 17:56:03 2009 From: jsj at ieee.org (Scott Johnson) Date: Mon, 09 Feb 2009 20:56:03 -0500 Subject: [e2e] packet pair code In-Reply-To: References: <55c1a2ac0902090057j12fb239dyd45fe153bc8a04c6@mail.gmail.com> Message-ID: <4990DEB3.6070009@ieee.org> Perhaps pchar would be of use? http://www.kitchenlab.org/www/bmah/Software/pchar/ Richard Wade wrote: > On Mon, Feb 9, 2009 at 9:57 PM, Guy - Alain Lusilao-Zodi > > wrote: > > Hi all, > > I am looking for the c/c++ code of the packet pair techniques? Is > there anyone who can assist? > > > Probably the first port of call when looking at Packet Pair for > bottleneck bandwidth probing is Keshav's work > ( http://www.cs.cornell.edu/skeshav/papers.html ). This will give you > an overview of the technique, its assumptions, and some areas which > may require further study. > > From memory, there is some code available for the NS2 simulator which > implements Keshav's algorithms in a basic form. I don't know > whether this is part of the current NS2 distribution, but it's easy > enough to search through the distribution. Let me know if you have > problems finding it. > > I also did some work with Packet Pair algorithms and implemented some > code in NS2. I'll dig out the code and send it to you. In the > meantime, the papers and thesis on this work are available at > http://www.rikwade.com. > -- > r -- Regards, Scott Johnson jsj at ieee.org From detlef.bosau at web.de Mon Feb 9 19:30:26 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 10 Feb 2009 04:30:26 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49908411.1000800@reed.com> References: <49902B22.80403@web.de> <499058D5.9040807@reed.com> <49908411.1000800@reed.com> Message-ID: <4990F4D2.6050102@web.de> David P. Reed wrote: > I was (perhaps not very clearly) including multihop and mobility in > "loss of connection" cases. I meant loss of PHY or layer 2 > connectivity - not "session connectivity." Those situations do, as you > suggest, drop packets when "link layer reliability" fails - but I > would call the cause of that loss process a "loss of connectivity" > however transient or healable. So, the question is: What is "loss of connection"? In cellular networks, this could mean a mobile is detached from its base station. It could mean as well: A mobile is not served by its base station. Allegedly, in HSDPA a transport block (L1, L2) is not scheduled if a channel's quality is too bad. So, a "bad channel" can introduce an unpredictable delay, because in general we cannot predict when a channel's quality will become "good" again, if at all. In case a block is scheduled despite the "bad channel", the block may be eventually lost. In both cases, the underlying problem is the "bad channel" which we can neither predict nor prevent. > > My main point was that these loss processes are not characterizable by > a "link loss rate". They are not like Poisson losses at all, which > are statistically a single parameter (called "rate"), memoryless > distribution. They are causal, correlated, memory-full processes. Differently put, perhpas more simple: Our knowledge about a wireless channel is extremely small. And from what I've seen so far even in scientific papers, we tend to use extremely simplified channel models. E.g. _pure_ Rayleigh channels. Or we _only_ consider distance based loss. (of signal strength). > And more importantly, one end or the other of the relevant link > experiences a directly sensed "loss of connectivity" event. In cellular networks, this may cause a mobile to attach to another base station. This process itself may cause random loss. This loss is not due to packet (block) corruption but due to roaming, if the technology in use does not provide a handover process. Hence, there is actually more than one reason for packet loss in mobile networks which is _not_ caused by congestion. > > Thus my point: one SHOULD NOT model practical TCP/IP congestion/flow > control based on an assumption of "links" with "loss rates" as if they > were Poisson loss processes. One should instead focus on modeling > loss processes that come from congestion and from loss of link > connectivity or route changes arising from responses to connectivity > loss in the appropriate ways that reflect practical reality. Where a "route change" (which covers my comment on roaming) may result not only in packet loss but in a sudden change of path capacity as well. BTW: If HSDPA does not support handover, HSDPA is not very well suited for media streaming. This is off topic in this post, but I just think about it because a huge number of papers talks about media streaming in HSDPA. This may work - as long as the mobile is not mobile ;-) However, what are the consequences from an end to end point of view? Do you agree that in cellular mobile networks, there is significant packet loss which is not congestion related? And which may lead to throughput degradation? Thanks Detlef -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 From detlef.bosau at web.de Mon Feb 9 19:33:45 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 10 Feb 2009 04:33:45 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49908411.1000800@reed.com> References: <49902B22.80403@web.de> <499058D5.9040807@reed.com> <49908411.1000800@reed.com> Message-ID: <4990F599.7020501@web.de> (I apologize for sending this twice, I did not pay attention to the receiver addresses.) David P. Reed wrote: > I was (perhaps not very clearly) including multihop and mobility in > "loss of connection" cases. I meant loss of PHY or layer 2 > connectivity - not "session connectivity." Those situations do, as you > suggest, drop packets when "link layer reliability" fails - but I > would call the cause of that loss process a "loss of connectivity" > however transient or healable. So, the question is: What is "loss of connection"? In cellular networks, this could mean a mobile is detached from its base station. It could mean as well: A mobile is not served by its base station. Allegedly, in HSDPA a transport block (L1, L2) is not scheduled if a channel's quality is too bad. So, a "bad channel" can introduce an unpredictable delay, because in general we cannot predict when a channel's quality will become "good" again, if at all. In case a block is scheduled despite the "bad channel", the block may be eventually lost. In both cases, the underlying problem is the "bad channel" which we can neither predict nor prevent. > > My main point was that these loss processes are not characterizable by > a "link loss rate". They are not like Poisson losses at all, which > are statistically a single parameter (called "rate"), memoryless > distribution. They are causal, correlated, memory-full processes. Differently put, perhpas more simple: Our knowledge about a wireless channel is extremely small. And from what I've seen so far even in scientific papers, we tend to use extremely simplified channel models. E.g. _pure_ Rayleigh channels. Or we _only_ consider distance based loss. (of signal strength). > And more importantly, one end or the other of the relevant link > experiences a directly sensed "loss of connectivity" event. In cellular networks, this may cause a mobile to attach to another base station. This process itself may cause random loss. This loss is not due to packet (block) corruption but due to roaming, if the technology in use does not provide a handover process. Hence, there is actually more than one reason for packet loss in mobile networks which is _not_ caused by congestion. > > Thus my point: one SHOULD NOT model practical TCP/IP congestion/flow > control based on an assumption of "links" with "loss rates" as if they > were Poisson loss processes. One should instead focus on modeling > loss processes that come from congestion and from loss of link > connectivity or route changes arising from responses to connectivity > loss in the appropriate ways that reflect practical reality. Where a "route change" (which covers my comment on roaming) may result not only in packet loss but in a sudden change of path capacity as well. BTW: If HSDPA does not support handover, HSDPA is not very well suited for media streaming. This is off topic in this post, but I just think about it because a huge number of papers talks about media streaming in HSDPA. This may work - as long as the mobile is not mobile ;-) However, what are the consequences from an end to end point of view? Do you agree that in cellular mobile networks, there is significant packet loss which is not congestion related? And which may lead to throughput degradation? Thanks Detlef -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 From arnaud.legout at sophia.inria.fr Tue Feb 10 00:25:27 2009 From: arnaud.legout at sophia.inria.fr (Arnaud Legout) Date: Tue, 10 Feb 2009 09:25:27 +0100 Subject: [e2e] packet pair code In-Reply-To: References: <55c1a2ac0902090057j12fb239dyd45fe153bc8a04c6@mail.gmail.com> Message-ID: <499139F7.4020000@sophia.inria.fr> Hi, Richard Wade wrote: > From memory, there is some code available for the NS2 simulator which > implements Keshav's algorithms in a basic form. I don't know > whether this is part of the current NS2 distribution, but it's easy > enough to search through the distribution. Let me know if you have > problems finding it. I did a basic implementation of PP that is still maintained in the NS2 main code. You fill find it if you look for PLM (Packet-pair Layered Multicast congestion control). I don't remember is someone else has a PP code still maintained in NS2, and I am not sure what I did is relevant to you. What I did is very simple: I send packets by pair (modifying a CBR source) and at the receiver I simply compute the inter-packet arrival time to compute the available bandwidth (here I am using a FQ queue). No magic here, and you will not see anything new compared to what was described in the Keshav Ph.D. thesis. Regards, Arnaud. From detlef.bosau at web.de Tue Feb 10 02:46:48 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 10 Feb 2009 11:46:48 +0100 Subject: [e2e] How can we "model" mobile networks? was:Re: TCP Loss Differentiation In-Reply-To: <49908411.1000800@reed.com> References: <49902B22.80403@web.de> <499058D5.9040807@reed.com> <49908411.1000800@reed.com> Message-ID: <49915B18.5090205@web.de> David P. Reed wrote: > > My main point was that these loss processes are not characterizable by > a "link loss rate". They are not like Poisson losses at all, which > are statistically a single parameter (called "rate"), memoryless > distribution. They are causal, correlated, memory-full processes. > And more importantly, one end or the other of the relevant link > experiences a directly sensed "loss of connectivity" event. > > Thus my point: one SHOULD NOT model practical TCP/IP congestion/flow > control based on an assumption of "links" with "loss rates" as if they > were Poisson loss processes. One should instead focus on modeling > loss processes that come from congestion and from loss of link > connectivity or route changes arising from responses to connectivity > loss in the appropriate ways that reflect practical reality. Your statement is somewhat discouraging in the sense that we know, how we SHOULD NOT model a mobile network. And you put in question the _practical_ relevance of quite a lot of work which models mobile networks using gilbert markov models and the like: The results may well hold in theory, however they might not hold in practical applications. This leads to the question: How can we build some kind of a "model"? Perhaps, these are actually two questions: 1.: What do we want to achieve? 2.: How can we assess a given approach, e.g. in a paper? My question well points into the direction whether we deal with a "scientific" question - or a "technological" one. And of course it could be the simple question whether mobile networks are suitable for TCP at all. One could well take the position: TCP is sufficiently generic to run about any packet switching network, including IPoAC (RFC 1149), which has actually seen practical implementations. (Please note the particular meaning of "loss" and "conservation principle" in that context ;-)) Whether or not the system's behaviour is satisfactory to the user, depends on the technology in use and is not subject of a scientific discussion. -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 From dpreed at reed.com Tue Feb 10 04:31:18 2009 From: dpreed at reed.com (David P. Reed) Date: Tue, 10 Feb 2009 07:31:18 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <4990F4D2.6050102@web.de> References: <49902B22.80403@web.de> <499058D5.9040807@reed.com> <49908411.1000800@reed.com> <4990F4D2.6050102@web.de> Message-ID: <49917396.6010503@reed.com> I absolutely agree that there are non-congestion-related sources of packet loss! All I was suggesting is that one should not use the term "loss rate" to characterize them. A "loss process" would be a mathematically more sound term, because it does not confuse the listener into thinking that there is a simplistic, memoryless, one-parameter model that can be "discovered" by TCP's control algorithms. That said, I was encouraging a dichotomy where the world is far more complicated: congestion drops vs. connectivity drops. One *might* be able to make much practical headway by building a model and a theory of "connectivity drops". That's all I'm trying to say. If this clarifies my point, I apologize for communicating it so poorly before. Well, actually there is one other thing I was hoping to suggest: that this second category called "connectivity drops" is an emergent systems property, and is best not thought of as a "link" property. Link measurements cannot be easily used to characterize the "loss process" here - you need *systems-level* measurements (of, for a simple example, the loss events that come when a car using 3G Data is driving down the Autobahn with soft-handoffs between cell sites - we are not talking about "links" here; and the same thing shows up in 802.11s meshes, where I have some significant measurement experience - the OLPC XO computers run into this all the time because of the layer 2 mesh). Detlef Bosau wrote: > David P. Reed wrote: >> I was (perhaps not very clearly) including multihop and mobility in >> "loss of connection" cases. I meant loss of PHY or layer 2 >> connectivity - not "session connectivity." Those situations do, as >> you suggest, drop packets when "link layer reliability" fails - but I >> would call the cause of that loss process a "loss of connectivity" >> however transient or healable. > > So, the question is: What is "loss of connection"? > > In cellular networks, this could mean a mobile is detached from its > base station. It could mean as well: A mobile is not served by its > base station. > Allegedly, in HSDPA a transport block (L1, L2) is not scheduled if a > channel's quality is too bad. > > So, a "bad channel" can introduce an unpredictable delay, because in > general we cannot predict when a channel's quality will become "good" > again, if at all. > In case a block is scheduled despite the "bad channel", the block may > be eventually lost. > > In both cases, the underlying problem is the "bad channel" which we > can neither predict nor prevent. > >> >> My main point was that these loss processes are not characterizable >> by a "link loss rate". They are not like Poisson losses at all, >> which are statistically a single parameter (called "rate"), >> memoryless distribution. They are causal, correlated, memory-full >> processes. > > Differently put, perhpas more simple: Our knowledge about a wireless > channel is extremely small. And from what I've seen so far even in > scientific papers, we tend to use extremely simplified channel models. > E.g. _pure_ Rayleigh channels. Or we _only_ consider distance based > loss. (of signal strength). >> And more importantly, one end or the other of the relevant link >> experiences a directly sensed "loss of connectivity" event. > In cellular networks, this may cause a mobile to attach to another > base station. This process itself may cause random loss. This loss is > not due > to packet (block) corruption but due to roaming, if the technology in > use does not provide a handover process. > > Hence, there is actually more than one reason for packet loss in > mobile networks which is _not_ caused by congestion. > >> >> Thus my point: one SHOULD NOT model practical TCP/IP congestion/flow >> control based on an assumption of "links" with "loss rates" as if >> they were Poisson loss processes. One should instead focus on >> modeling loss processes that come from congestion and from loss of >> link connectivity or route changes arising from responses to >> connectivity loss in the appropriate ways that reflect practical >> reality. > > Where a "route change" (which covers my comment on roaming) may result > not only in packet loss but in a sudden change of path capacity as well. > > BTW: If HSDPA does not support handover, HSDPA is not very well suited > for media streaming. This is off topic in this post, but I just think > about it because a huge number of papers talks about media streaming > in HSDPA. This may work - as long as the mobile is not mobile ;-) > > However, what are the consequences from an end to end point of view? > > Do you agree that in cellular mobile networks, there is significant > packet loss which is not congestion related? And which may lead to > throughput degradation? > > Thanks > > Detlef > From xaixili at live.com Tue Feb 10 12:19:47 2009 From: xaixili at live.com (Xai Xi) Date: Tue, 10 Feb 2009 20:19:47 +0000 Subject: [e2e] TCP Loss Differentiation Message-ID: are you saying that a congestion-based loss process cannot be modeled or predicted? a tool, badabing, from sigcomm'05, claims to be highly accurate in measuring end-to-end loss processes. David wrote: > A "loss process" would be a mathematically more sound term, because it does not confuse> the listener into thinking that there is a simplistic, memoryless, one-parameter model that> can be "discovered" by TCP's control algorithms. > That said, I was encouraging a dichotomy where the world is far more complicated: > congestion drops vs. connectivity drops. One *might* be able to make much practical > headway by building a model and a theory of "connectivity drops". _________________________________________________________________ Drag n? drop?Get easy photo sharing with Windows Live? Photos. http://www.microsoft.com/windows/windowslive/products/photos.aspx From dpreed at reed.com Wed Feb 11 07:20:07 2009 From: dpreed at reed.com (David P. Reed) Date: Wed, 11 Feb 2009 10:20:07 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: Message-ID: <4992ECA7.2020301@reed.com> I don't understand how what I wrote could be interpreted as "a congestion-based loss process cannot be modeled or predicted". I was speaking about *non-congestion-based* "connectivity loss related loss process", and I *said* that it is not a single-parameter, memoryless loss process. I said nothing whatsoever about congestion-based loss processes, having differentiated carefully the two types of loss (which differentiation was what Detlef started this thread with). Clearly I am not communicating, despite using English and common terms from systems modeling mathematics. Xai Xi wrote: > are you saying that a congestion-based loss process cannot be modeled or predicted? a tool, badabing, from sigcomm'05, claims to be highly accurate in measuring end-to-end loss processes. > > David wrote: > >> A "loss process" would be a mathematically more sound term, because it >> > does not confuse> the listener into thinking that there is a simplistic, > memoryless, one-parameter model that> can be "discovered" by TCP's > control algorithms. > >> That said, I was encouraging a dichotomy where the world is far more >> > complicated: > >> congestion drops vs. connectivity drops. One *might* be >> > able to make much practical > >> headway by building a model and a theory of >> > "connectivity drops". > > > _________________________________________________________________ > Drag n? drop?Get easy photo sharing with Windows Live? Photos. > > http://www.microsoft.com/windows/windowslive/products/photos.aspx > > From pganti at gmail.com Wed Feb 11 12:03:13 2009 From: pganti at gmail.com (Paddy Ganti) Date: Wed, 11 Feb 2009 12:03:13 -0800 Subject: [e2e] packet pair code In-Reply-To: <55c1a2ac0902090057j12fb239dyd45fe153bc8a04c6@mail.gmail.com> References: <55c1a2ac0902090057j12fb239dyd45fe153bc8a04c6@mail.gmail.com> Message-ID: <2ff1f08a0902111203x35b020eel1cb05b34cfcd6acd@mail.gmail.com> There was a tool called "tcpanaly" by Vern paxson which had an implementation of the packet pair algorithm. May be Bro script ( http://www.bro-ids.org/download.html) can do that these days. A slightly modified version of packet pair code is also available as a part of a tcp replay tool available here: http://sysnet.ucsd.edu/~ycheng/monkey/ Hope this is what you were looking for. Warm Regards, -Paddy On Mon, Feb 9, 2009 at 12:57 AM, Guy - Alain Lusilao-Zodi < guylzodi at gmail.com> wrote: > Hi all, > > I am looking for the c/c++ code of the packet pair techniques? Is there > anyone who can assist? > > Regards > -- > ---------------------------------------------------------- > G.-A. Lusilao-Zodi voice : 0216554019 > PhD in Video streaming cell :082687993 > > Communication group guylzodi at gmail.com > university of cape-Town > ----------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090211/80bcbb2f/attachment.html From fred at cisco.com Wed Feb 11 14:07:19 2009 From: fred at cisco.com (Fred Baker) Date: Wed, 11 Feb 2009 14:07:19 -0800 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <4992ECA7.2020301@reed.com> References: <4992ECA7.2020301@reed.com> Message-ID: <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> Copying the specific communicants in this thread as my postings to end2end-interest require moderator approval (I guess I'm not an acceptable person for some reason, and the moderator has told me that he will not tell me what rule prevents me from posting without moderation). I think you're communicating just fine. I understood, and agreed with, your comment. I actually think that a more important model is not loss processes, which as you describe are both congestion-related and related to other underlying issues, but a combination of several underlying and fundamentally different kinds of processes. One is perhaps "delay processes" (of which loss is the extreme case and L2 retransmission is a partially-understood and poorly modeled contributor to). Another might be interference processes (such as radio interference in 802.11/802.16 networks) that cause end to end packet loss for other reasons. In mobile networks, it might be worthwhile to distinguish the processes of network change - from the perspective of an endpoint that is in motion, its route, and therefore its next hop, is constantly changing and might at times not exist. Looking at it from a TCP/SCTP perspective, we can only really discuss it as how we can best manage to use a certain share of the capacity the network provides, how much use is counterproductive, when to retransmit, and all that. But understanding the underlying issues will contribute heavily to that model. On Feb 11, 2009, at 7:20 AM, David P. Reed wrote: > I don't understand how what I wrote could be interpreted as "a > congestion-based loss process cannot be modeled or predicted". > > I was speaking about *non-congestion-based* "connectivity loss > related loss process", and I *said* that it is not a single- > parameter, memoryless loss process. > > I said nothing whatsoever about congestion-based loss processes, > having differentiated carefully the two types of loss (which > differentiation was what Detlef started this thread with). > > Clearly I am not communicating, despite using English and common > terms from systems modeling mathematics. > > Xai Xi wrote: >> are you saying that a congestion-based loss process cannot be >> modeled or predicted? a tool, badabing, from sigcomm'05, claims to >> be highly accurate in measuring end-to-end loss processes. >> >> David wrote: >> >>> A "loss process" would be a mathematically more sound term, >>> because it >> does not confuse> the listener into thinking that there is a >> simplistic, memoryless, one-parameter model that> can be >> "discovered" by TCP's control algorithms. >> >>> That said, I was encouraging a dichotomy where the world is far more >> complicated: >>> congestion drops vs. connectivity drops. One *might* be >> able to make much practical >>> headway by building a model and a theory of >> "connectivity drops". >> >> >> _________________________________________________________________ >> Drag n? drop?Get easy photo sharing with Windows Live? Photos. >> >> http://www.microsoft.com/windows/windowslive/products/photos.aspx >> >> From fred at cisco.com Fri Feb 13 23:31:07 2009 From: fred at cisco.com (Fred Baker) Date: Fri, 13 Feb 2009 23:31:07 -0800 Subject: [e2e] test Message-ID: <76DADFC5-E101-4C63-8795-90C28C6DAEBF@cisco.com> Please discard. From touch at ISI.EDU Sun Feb 15 12:35:37 2009 From: touch at ISI.EDU (Joe Touch) Date: Sun, 15 Feb 2009 12:35:37 -0800 Subject: [e2e] Updated mailman filter rules now in place Message-ID: <49987C99.2060606@isi.edu> Hi, all, The mailman filter rules on this list have been updated. In the past, some members' posts were "held for approval" because their headers matched a filter rule, even though similar posts from others went directly to the list un-moderated. That happened because of a interaction between some optional email headers and mailman's python-based filter regular expressions, which have been updated to avoid the problem. Note that the filters still trigger moderation for the following reasons: - posts of moderated material (CFPs, book and software announcements) - posts of prohibited material (job posts, spam, non-text posts) - posts with too many recipients (direct or cc'd) If you have had posts held for unknown reasons in the past, please send a short test post to the list and verify that the current system works. If you continue to have problems, please let me know. Many thanks to Fred Baker for helping debug this issue with me. Joe (your list admin) From guylzodi at gmail.com Mon Feb 16 23:47:04 2009 From: guylzodi at gmail.com (Guy - Alain Lusilao-Zodi) Date: Tue, 17 Feb 2009 09:47:04 +0200 Subject: [e2e] Data flow models for real-time applications Message-ID: <55c1a2ac0902162347l69180e93uf79acf89207ca0e4@mail.gmail.com> Hi All, I am working on a project which consist of developing an end-to-end quality of service for real-time applications. The novelty of the project is to develop an instantaneous model of flow for real-time applications or for streaming video. I would like to known if no one have never try to develop a model of flow for real-time applications that can server as a basis for my project. Also as PhD degree, this is suppose to be my contribution, therefore if the work have already been done I will appreciate if you can provide me the necessary informations otherwise I will be wasting my time. Regards -- ---------------------------------------------------------- G.-A. Lusilao-Zodi voice : 0216554019 PhD in Video streaming cell :082687993 Communication group guylzodi at gmail.com university of cape-Town ----------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090217/3e4d7e31/attachment.html From craig at aland.bbn.com Tue Feb 17 05:06:44 2009 From: craig at aland.bbn.com (Craig Partridge) Date: Tue, 17 Feb 2009 08:06:44 -0500 Subject: [e2e] Data flow models for real-time applications In-Reply-To: Your message of "Tue, 17 Feb 2009 09:47:04 +0200." <55c1a2ac0902162347l69180e93uf79acf89207ca0e4@mail.gmail.com> Message-ID: <20090217130644.0108628E161@aland.bbn.com> There have certainly been attempts to develop data structures intended to model a real-time flow -- e.g. the ST-2 flow specs and token buckets come to mind. But to better answer your question, I think you need to clarify what you mean by "instantaneous model of a flow for real-time applications". What makes the model instantaneous? (Do you mean you adjust it in real-time? Or that you can specify a sufficiently good model of the flow before it starts?). What are you trying to model about the flow? (The delay constrained traffic within the flow? the total traffic load? a layered traffic/delay model?). What is your definition of real-time? (Sufficient for playback? Sufficient for two-way conversation?). Thanks! Craig In message <55c1a2ac0902162347l69180e93uf79acf89207ca0e4 at mail.gmail.com>, Guy - Alain Lusilao-Zodi writes: >--001636c5a72e60d25e04631882b7 >Content-Type: text/plain; charset=ISO-8859-1 >Content-Transfer-Encoding: 7bit > >Hi All, > >I am working on a project which consist of developing an end-to-end quality >of service for real-time applications. The novelty of the project is to >develop an instantaneous model of flow for real-time applications or for >streaming video. >I would like to known if no one have never try to develop a model of flow >for real-time applications that can server as a basis for my project. >Also as PhD degree, this is suppose to be my contribution, therefore if the >work have already been done I will appreciate if you can provide me the >necessary informations otherwise I will be wasting my time. > >Regards > >-- >---------------------------------------------------------- >G.-A. Lusilao-Zodi voice : 0216554019 >PhD in Video streaming cell :082687993 > >Communication group guylzodi at gmail.com >university of cape-Town >----------------------------------------------------------- > >--001636c5a72e60d25e04631882b7 >Content-Type: text/html; charset=ISO-8859-1 >Content-Transfer-Encoding: quoted-printable > >Hi All,

I am working on a project which consist of developing an end= >-to-end quality of service for real-time applications. The novelty of the p= >roject is to develop an instantaneous model of flow for real-time applicati= >ons or for streaming video.
>I would like to known if no one have never try to develop a model of flow f= >or real-time applications that can server as a basis for my project.
Al= >so as PhD degree, this is suppose to be my contribution, therefore if the w= >ork have already been done I will appreciate if you can provide me the nece= >ssary informations otherwise I will be wasting my time.
>
Regards

--
-----------------------------------= >-----------------------
G.-A. Lusilao-Zodi         &= >nbsp; voice : 0216554019
PhD in Video streaming  cell    = >:082687993                   &= >nbsp;              
>Communication group    guyl= >zodi at gmail.com    
university of cape-Town
-----------= >------------------------------------------------
> >--001636c5a72e60d25e04631882b7-- From fred at cisco.com Tue Feb 17 07:21:09 2009 From: fred at cisco.com (Fred Baker) Date: Tue, 17 Feb 2009 07:21:09 -0800 Subject: [e2e] Data flow models for real-time applications In-Reply-To: <20090217130644.0108628E161@aland.bbn.com> References: <20090217130644.0108628E161@aland.bbn.com> Message-ID: I think you also need to define "real time". The term derives from the historical milleiu around 1990. It is specifically defined in http://www.ietf.org/rfc/rfc1633.txt to refer to communications that have a stated end to end delay requirement, which turns out to be more practically addressed as a bandwidth requirement - a voice or video codec will generally require a rate within a range or within one of several ranges. Fundamentally, the difference is by comparison to elastic applications such as those that run over TCP; they will adjust themselves infinitely up to a point derived from the retransmission or drop-dead configuration of the communicants, while real-time applications break in some way. But there are a number of other applications that have real-time characteristics that are not play-back applications. Sensor traffic comes to mind; certain classes of sensors generate traffic at a certain rate, either regardless of circumstances or under certain circumstances, and if it doesn't get through there is no retransmission on the theory that it wasn't all that important. An example that has been proposed is with the CalTech Earthquake lab; if they had cheap seismometers built into set-top boxes in people's homes (say), whenever there is a jiggle above some threshold they could get an arbitrary number of UDP reports with a GPS location and a seismic reading. Now, that would have the characteristics of a DDOS attack (when there is a jiggle near CalTech, a *lot* of homes might be affected), so they had jolly well better have enough bandwidth in place. So when you say "real time", what do you mean - voice/video playback applications? Sensor traffic, and if so of what types? Is there something else (I'll bet there is but haven't spent a lot of time thinking about it)? On Feb 17, 2009, at 5:06 AM, Craig Partridge wrote: > > There have certainly been attempts to develop data structures > intended to > model a real-time flow -- e.g. the ST-2 flow specs and token buckets > come > to mind. > > But to better answer your question, I think you need to clarify what > you > mean by "instantaneous model of a flow for real-time applications". > What makes the model instantaneous? (Do you mean you adjust it in > real-time? Or that you can specify a sufficiently good model of the > flow > before it starts?). What are you trying to model about the flow? > (The delay > constrained traffic within the flow? the total traffic load? a > layered > traffic/delay model?). What is your definition of real-time? > (Sufficient > for playback? Sufficient for two-way conversation?). > > Thanks! > > Craig > > In message <55c1a2ac0902162347l69180e93uf79acf89207ca0e4 at mail.gmail.com > >, Guy - > Alain Lusilao-Zodi writes: > >> --001636c5a72e60d25e04631882b7 >> Content-Type: text/plain; charset=ISO-8859-1 >> Content-Transfer-Encoding: 7bit >> >> Hi All, >> >> I am working on a project which consist of developing an end-to-end >> quality >> of service for real-time applications. The novelty of the project >> is to >> develop an instantaneous model of flow for real-time applications >> or for >> streaming video. >> I would like to known if no one have never try to develop a model >> of flow >> for real-time applications that can server as a basis for my project. >> Also as PhD degree, this is suppose to be my contribution, >> therefore if the >> work have already been done I will appreciate if you can provide me >> the >> necessary informations otherwise I will be wasting my time. >> >> Regards >> >> -- >> ---------------------------------------------------------- >> G.-A. Lusilao-Zodi voice : 0216554019 >> PhD in Video streaming cell :082687993 >> >> Communication group guylzodi at gmail.com >> university of cape-Town >> ----------------------------------------------------------- >> >> --001636c5a72e60d25e04631882b7 >> Content-Type: text/html; charset=ISO-8859-1 >> Content-Transfer-Encoding: quoted-printable >> >> Hi All,

I am working on a project which consist of >> developing an end= >> -to-end quality of service for real-time applications. The novelty >> of the p= >> roject is to develop an instantaneous model of flow for real-time >> applicati= >> ons or for streaming video.
>> I would like to known if no one have never try to develop a model >> of flow f= >> or real-time applications that can server as a basis for my >> project.
Al= >> so as PhD degree, this is suppose to be my contribution, therefore >> if the w= >> ork have already been done I will appreciate if you can provide me >> the nece= >> ssary informations otherwise I will be wasting my time.
>>
Regards

-- >>
-----------------------------------= >> -----------------------
G.-A. Lusilao-Zodi       >>   &= >> nbsp; voice : 0216554019
PhD in Video streaming  cell >>    = >> :082687993                 >>   &= >> nbsp;              
>> Communication group    > href=3D"mailto:guylzodi at gmail.com">guyl= >> zodi at gmail.com    
university of cape- >> Town
-----------= >> ------------------------------------------------
>> >> --001636c5a72e60d25e04631882b7-- From huitema at windows.microsoft.com Tue Feb 17 09:13:27 2009 From: huitema at windows.microsoft.com (Christian Huitema) Date: Tue, 17 Feb 2009 09:13:27 -0800 Subject: [e2e] Data flow models for real-time applications In-Reply-To: References: <20090217130644.0108628E161@aland.bbn.com> Message-ID: <8EFB68EAE061884A8517F2A755E8B60A19DAEF0DFD@NA-EXMSG-W601.wingroup.windeploy.ntdev.microsoft.com> > But there are a number of other applications that have real-time > characteristics that are not play-back applications. Sensor traffic > comes to mind; certain classes of sensors generate traffic at a > certain rate, either regardless of circumstances or under certain > circumstances, and if it doesn't get through there is no > retransmission on the theory that it wasn't all that important. An > example that has been proposed is with the CalTech Earthquake lab; if > they had cheap seismometers built into set-top boxes in people's homes > (say), whenever there is a jiggle above some threshold they could get > an arbitrary number of UDP reports with a GPS location and a seismic > reading. Now, that would have the characteristics of a DDOS attack > (when there is a jiggle near CalTech, a *lot* of homes might be > affected), so they had jolly well better have enough bandwidth in > place. Either that, or make the sensors a tiny bit smarter. I am sure there are lots of publish papers on how to make distributed systems like these more efficient, e.g. have a varying fraction of the sensors activated at some time, with the fraction varying based on activity level... -- Christian Huitema From laurent at comp.lancs.ac.uk Tue Feb 17 12:06:36 2009 From: laurent at comp.lancs.ac.uk (Laurent Mathy) Date: Tue, 17 Feb 2009 20:06:36 GMT Subject: [e2e] CFP ACM SIGCOMM 2009 workshops, posters and demos Message-ID: <200902172006.n1HK6aMr020378@gateway.comp.lancs.ac.uk> [Apologies for multiple copies] CALL FOR PAPERS ACM SIGCOMM 2009 August 17-21, Barcelona, Spain Associated Workshops: -------------------- WOSN 2009 - Workshop on Online Social Networks http://conferences.sigcomm.org/sigcomm/2009/workshops/wosn/ Paper submissions: March 6, 2009 VISA 2009 - Virtualized Infrastructure Systems and Architectures http://conferences.sigcomm.org/sigcomm/2009/workshops/visa/ Paper submissions: March 6, 2009 WREN 2009 - Workshop: Research on Enterprise Networking http://conferences.sigcomm.org/sigcomm/2009/workshops/wren/ Abstract registrations: March 11, 2009 Paper submissions: March 18, 2009 PRESTO 2009 - Programmable Routers for Extensible Services of Tomorrow http://conferences.sigcomm.org/sigcomm/2009/workshops/presto/ Abstract registrations: March 6, 2009 Paper submissions: March 13, 2009 MobiHeld 2009 - Networking, Systems, Applications on Mobile Handhelds (Previously referred to as Mobihand) http://conferences.sigcomm.org/sigcomm/2009/workshops/mobiheld/ Abstract registration: March 17, 2009 Paper submissions: March 24, 2009 Posters: ------- http://conferences.sigcomm.org/sigcomm/2009/posters.php Submission deadline: May 4, 2009 Demos: ----- http://conferences.sigcomm.org/sigcomm/2009/demos.php Submission deadline: May 4, 2009 From mayer at tm.uka.de Wed Feb 18 00:20:48 2009 From: mayer at tm.uka.de (Christoph Mayer) Date: Wed, 18 Feb 2009 09:20:48 +0100 Subject: [e2e] Data flow models for real-time applications In-Reply-To: <8EFB68EAE061884A8517F2A755E8B60A19DAEF0DFD@NA-EXMSG-W601.wingroup.windeploy.ntdev.microsoft.com> References: <20090217130644.0108628E161@aland.bbn.com> <8EFB68EAE061884A8517F2A755E8B60A19DAEF0DFD@NA-EXMSG-W601.wingroup.windeploy.ntdev.microsoft.com> Message-ID: <499BC4E0.7050202@tm.uka.de> Hi, >> reading. Now, that would have the characteristics of a DDOS attack >> (when there is a jiggle near CalTech, a *lot* of homes might be >> affected), so they had jolly well better have enough bandwidth in >> place. > > Either that, or make the sensors a tiny bit smarter. I am sure there are lots of publish papers on how to make distributed systems like these more efficient, e.g. have a varying fraction of the sensors activated at some time, with the fraction varying based on activity level... if anyone is aware of papers that go into this direction I would be very interested! I currently start looking into what network traffic will look like in the "Internet of Things". Imagining sensors, tags, etc. everywhere in our environment and all of them communicating ... this will maybe change the traffic characteristics in Core/Edge networks dramatically. I am happy on any ideas into this direction. Best Regards, Christoph Mayer From detlef.bosau at web.de Wed Feb 18 01:24:50 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 18 Feb 2009 10:24:50 +0100 Subject: [e2e] Data flow models for real-time applications In-Reply-To: <55c1a2ac0902162347l69180e93uf79acf89207ca0e4@mail.gmail.com> References: <55c1a2ac0902162347l69180e93uf79acf89207ca0e4@mail.gmail.com> Message-ID: <499BD3E2.7060105@web.de> I have to admit, that I'm somewhat confused by your question. On the one hand, there is an incredibly huge amount of work which deals with realtime applications for streaming video. Actually, the amount of work is that huge that I'm really surprised that you want to start a PhD project in this area. There might be a gap left - but it would perhaps require 10 PhD projects to find it. On the other hand, and I tried to work in that area myself some years ago, this kind of work is somewhat frustrating. The point is that the Internet is 1. a packet switching network and 2. should be able to work without central ressource management. (Precisely: "must permit distributed management of its ressources".) (D. D. Clark, "The Design Philosophiy of the DARPA Internet Protocolls", SIGCOMM 88) Actually, I don't see that real time streaming will easily fit into this philosphy. Of course, there _are_ attempts of fitting streaming into IP. The Streamint Protocol ST II was already mentioned. http://www.isi.edu/in-notes/rfc1819.txt Surely, you want to look at the Tenet suite by Ferrari et al. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.4645 (As you easily see, this work dates bake into the early 90s. That was the time of the "Multimedia Hype" in the Internet.) And of couse, you will have a look at the "Intserv" work. And of course one of the most important works in this area is the PhD dissertation by Srinivasan Keshav, 1991 http://www.cs.cornell.edu/skeshav/doc/keshav.th.tar.Z At least the multimedia streaming approaches build a circuit switching upon IP, which basically requires negotation and reservation of ressources from the sender to the receiver. So basically, these works provide a careful and convincing proof, that IP packet switching can be used in the same way as ATM cell switching or other approaches from the telephony world. My first concern is: This is nothing new. We all know, that ATM works. My second concern is: Is this really, what we want? Particularly, Keshav's work points out that there _is_ some kind of ressource management necessary, which can be implemented in a distributed manner of course, but what was Clark's intention? A hierarchical approach? Or a heterarchical approach? In the case of a heterarchical intention, the whole thing appears to me like some kind of "abuse" of the Internet: Why shall we use a packet switching service, which is intended to be a best effort, heterarchical, asynchronous service as a replacement for a TDMA service? And what is the purpose to prove that peaches can be taken as oranges? For me, another concern is important. When I worked in this area, I dealt with mobile networks. And I was expected to build an "architecture" for "QoS" etc. with mobile networks. To make a long story short: QoS parameters like "bit rate", "bit error ratio" and the like cannot be mapped onto physical parameters in a UMTS link or the like, even if you consider only one link. Period. Actually, media streaming / voice transfer in mobile networks, particularly with QoS support like UMTS, is basically line switched /TDMA switched and follows different paradigms than packet switching. And this makes sense: As we all know, we _can_ talk to each other with cell phones. And there is absolutely no reason to reinvent the wheel here with packet switching for the one and only reason that we are not willing to see that different technologies are well suitable for different purposes. So, if my post sounds a bit frustrated, the reason is: I _am_ frustrated an this matter, and I really wonder why you really start a work which has been done 15 years ago and which lead to quite a few "demo implementations" in universities where it was carefully proven that we can achieve "QoS by underutilization" and that we can do video streaming from one PC to another - iff the GE back to back cabling between them was "big and fat enough", while in the commercial industrie, there was a certain episode with "speeach conference systems" and "multimedia audiovisual cnference systems" (namely by Intel and Sony) and the like which lasted IIRC less than five years. In the late 90s, these systems were some kind of status symbol for "managers" and "decision makers", but I always saw them unused, because no decision maker really wants to abstain from the inevitable "meetings" and "buisness trips". So, the argument was to buy a conference system in order to spare costs - and the reality was that the bill was to be paid for both: The conference system and the buisness trip as well. (We all know the fortune cookie: Do not hesitate any costs to spare on this one!) Detlef Guy - Alain Lusilao-Zodi wrote: > Hi All, > > I am working on a project which consist of developing an end-to-end > quality of service for real-time applications. The novelty of the > project is to develop an instantaneous model of flow for real-time > applications or for streaming video. > I would like to known if no one have never try to develop a model of > flow for real-time applications that can server as a basis for my > project. > Also as PhD degree, this is suppose to be my contribution, therefore > if the work have already been done I will appreciate if you can > provide me the necessary informations otherwise I will be wasting my time. > > Regards > > -- > ---------------------------------------------------------- > G.-A. Lusilao-Zodi voice : 0216554019 > PhD in Video streaming cell :082687993 > > Communication group guylzodi at gmail.com > university of cape-Town > ----------------------------------------------------------- -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090218/16e7d1ca/smime.bin From craig at aland.bbn.com Wed Feb 18 02:03:28 2009 From: craig at aland.bbn.com (Craig Partridge) Date: Wed, 18 Feb 2009 05:03:28 -0500 Subject: [e2e] Data flow models for real-time applications In-Reply-To: Your message of "Wed, 18 Feb 2009 09:20:48 +0100." <499BC4E0.7050202@tm.uka.de> Message-ID: <20090218100328.B9CCE28E155@aland.bbn.com> In message <499BC4E0.7050202 at tm.uka.de>, Christoph Mayer writes: >> Either that, or make the sensors a tiny bit smarter. I am sure there are lot >s of publish papers on how to make distributed systems like these more effici >ent, e.g. have a varying fraction of the sensors activated at some time, with > the fraction varying based on activity level... > >if anyone is aware of papers that go into this direction I would be very >interested! I currently start looking into what network traffic will >look like in the "Internet of Things". Imagining sensors, tags, etc. >everywhere in our environment and all of them communicating ... this >will maybe change the traffic characteristics in Core/Edge networks >dramatically. I am happy on any ideas into this direction. I can't seem to call up the papers off hand but let me suggest some directions. There are papers on how to make sensor networks more effective at getting information to a collection point. The usual hard problem is that you assume a field of sensors, and someone/something passes by and wants to be told what the sensor net has learned. The simple version is the collector send out a query, which gets flooded (easy, a tree out from the sensor node the collector is closest to), and the answers flood back (up the tree to the root) and overwhelm the sensor network near the collector. Various papers on summarizing reports along the way back, etc. I think you may find the Sensys proceedings a good place to dig on this topic. I don't know if he's published it, but John Doyle of CalTech pointed out to me last fall that if you look at biological networks for inspiration, there's an alternative approach. In bio networks (such as how the body gets inputs from your eyes), there's apparently often an order of magnitude more bandwidth from collector to sensor than from sensor to collector. So imagine that the collector instead of saying "what do you know?" says "have you observed events of the following types in the following time ranges?" and only hears from sensors for which the answer is "yes". Hope this is useful! Craig From calvert at netlab.uky.edu Wed Feb 18 12:17:35 2009 From: calvert at netlab.uky.edu (Ken Calvert) Date: Wed, 18 Feb 2009 15:17:35 -0500 (EST) Subject: [e2e] NetArch 2009 Message-ID: <20090218201735.C63761362EA@ozark.netlab.uky.edu> Call for Position Papers and Participation NETARCH 2009 Symposium March 16-20, 2009 Monte Verita, Ascona, Switzerland http://www.netarch2009.net/ This year marks the 40th anniversary of RFC 1, the first of the famous line of documents that today represents the specification of the Internet. The phenomenal success of the Internet has been concurrent with many other architectural proposals made during this time. Not all of them could be sufficiently explored, for various pragmatic, technological, or economic reasons. But many of the problems they were intended to address persist today, and many believe that a "clean slate" approach is necessary. A prerequisite, however, is to consider carefully the experience of the last 40 years and sort out the timeless principles from the historical accidents. NetArch 2009 brings together experts and researchers for that specific purpose. The event is built around keynote presentations by experts on key topics such as naming, virtualization, and why technologies fail; they will be interleaved with discussion sessions and counterpoint talks, both forward- and backward-looking. The symposium will take place at Monte Verita (literally, "Mountain of Truth") in Ascona, Switzerland. Insights from the symposium will be collected and published in book form as a lasting record. Keynote speakers include Jon Crowcroft, Van Jacobson, Raj Jain, Paul Mockapetris, Craig Partridge, Guru Parulkar, Jonathan M. Smith, and Martha Steenstrup. Space is limited. However, we have room for additional participants. Interested researchers are asked to submit a brief (max two pages) position paper to , stating their viewpoint or relevant research result to be contributed. Submissions will be evaluated as they are received, so apply early. Important dates: * Submission deadline: 25 February 2009 * Notification: 1 March 2009 Organizing Committee: C. Jelger (UniBasel), M.May (Thomson), B. Plattner (ETH Zurich), C. Tschudin (UniBasel), K. Calvert (U. Kentucky) From rhee at ncsu.edu Thu Feb 19 19:07:55 2009 From: rhee at ncsu.edu (Injong Rhee) Date: Thu, 19 Feb 2009 22:07:55 -0500 Subject: [e2e] TCP Loss Differentiation References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> Message-ID: <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> Perhaps I might add on this thread. Yes. I agree that it is not so clear that we have a model for non-congestion related losses. The motivation for this differentiation is, I guess, to disregard non-congestion related losses for TCP window control. So the motivation is valid. But maybe we should look at the problem from a different perspective. Instead of trying to detect non-congestion losses, why not try to detect congestion losses? Well..congestion signals are definitely easy to detect because losses are typically associated with some patterns of delays. So the scheme would be "reduce the congestion window ONLY when it is certain with high probability that losses are from congestion". This scheme would be different from "reduce whenever any indication of congestion occurs". Well my view could be too dangerous. But given that there are protocols out there, e.g., DCCP, that react to congestion much more slowly than TCP, this type of protocols may not be so bad... ----- Original Message ----- From: "Fred Baker" To: "David P. Reed" Cc: "end2end-interest list" Sent: Wednesday, February 11, 2009 5:07 PM Subject: Re: [e2e] TCP Loss Differentiation > Copying the specific communicants in this thread as my postings to > end2end-interest require moderator approval (I guess I'm not an > acceptable person for some reason, and the moderator has told me that he > will not tell me what rule prevents me from posting without moderation). > > I think you're communicating just fine. I understood, and agreed with, > your comment. > > I actually think that a more important model is not loss processes, which > as you describe are both congestion-related and related to other > underlying issues, but a combination of several underlying and > fundamentally different kinds of processes. One is perhaps "delay > processes" (of which loss is the extreme case and L2 retransmission is a > partially-understood and poorly modeled contributor to). Another might be > interference processes (such as radio interference in 802.11/802.16 > networks) that cause end to end packet loss for other reasons. In mobile > networks, it might be worthwhile to distinguish the processes of network > change - from the perspective of an endpoint that is in motion, its > route, and therefore its next hop, is constantly changing and might at > times not exist. > > Looking at it from a TCP/SCTP perspective, we can only really discuss it > as how we can best manage to use a certain share of the capacity the > network provides, how much use is counterproductive, when to retransmit, > and all that. But understanding the underlying issues will contribute > heavily to that model. > > On Feb 11, 2009, at 7:20 AM, David P. Reed wrote: > >> I don't understand how what I wrote could be interpreted as "a >> congestion-based loss process cannot be modeled or predicted". >> >> I was speaking about *non-congestion-based* "connectivity loss related >> loss process", and I *said* that it is not a single- parameter, >> memoryless loss process. >> >> I said nothing whatsoever about congestion-based loss processes, having >> differentiated carefully the two types of loss (which differentiation >> was what Detlef started this thread with). >> >> Clearly I am not communicating, despite using English and common terms >> from systems modeling mathematics. >> >> Xai Xi wrote: >>> are you saying that a congestion-based loss process cannot be modeled >>> or predicted? a tool, badabing, from sigcomm'05, claims to be highly >>> accurate in measuring end-to-end loss processes. >>> >>> David wrote: >>> >>>> A "loss process" would be a mathematically more sound term, because it >>> does not confuse> the listener into thinking that there is a >>> simplistic, memoryless, one-parameter model that> can be "discovered" >>> by TCP's control algorithms. >>> >>>> That said, I was encouraging a dichotomy where the world is far more >>> complicated: >>>> congestion drops vs. connectivity drops. One *might* be >>> able to make much practical >>>> headway by building a model and a theory of >>> "connectivity drops". >>> >>> >>> _________________________________________________________________ >>> Drag n? drop?Get easy photo sharing with Windows Live? Photos. >>> >>> http://www.microsoft.com/windows/windowslive/products/photos.aspx >>> >>> > > > From fred at cisco.com Thu Feb 19 20:32:03 2009 From: fred at cisco.com (Fred Baker) Date: Thu, 19 Feb 2009 20:32:03 -0800 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> Message-ID: <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> Which begs the question - why are we tuning to loss in the first place? Once you have filled the data path enough to achieve your "fair share" of the capacity, filling the queue more doesn't improve your speed and it hurts everyone around you. As your cwnd grows, your mean RTT grows with it so that the ratio of cwnd/rtt remains equal to the capacity of the bottleneck. Seems pointless and selfish, the kind of thing we discipline our children if they do. On Feb 19, 2009, at 7:07 PM, Injong Rhee wrote: > Perhaps I might add on this thread. Yes. I agree that it is not so > clear that we have a model for non-congestion related losses. The > motivation for this differentiation is, I guess, to disregard non- > congestion related losses for TCP window control. So the motivation > is valid. But maybe we should look at the problem from a different > perspective. Instead of trying to detect non-congestion losses, why > not try to detect congestion losses? Well..congestion signals are > definitely easy to detect because losses are typically associated > with some patterns of delays. So the scheme would be "reduce the > congestion window ONLY when it is certain with high probability that > losses are from congestion". This scheme would be different from > "reduce whenever any indication of congestion occurs". Well my view > could be too dangerous. But given that there are protocols out > there, e.g., DCCP, that react to congestion much more slowly than > TCP, this type of protocols may not be so bad... > > > ----- Original Message ----- From: "Fred Baker" > To: "David P. Reed" > Cc: "end2end-interest list" > Sent: Wednesday, February 11, 2009 5:07 PM > Subject: Re: [e2e] TCP Loss Differentiation > > >> Copying the specific communicants in this thread as my postings to >> end2end-interest require moderator approval (I guess I'm not an >> acceptable person for some reason, and the moderator has told me >> that he will not tell me what rule prevents me from posting >> without moderation). >> >> I think you're communicating just fine. I understood, and agreed >> with, your comment. >> >> I actually think that a more important model is not loss >> processes, which as you describe are both congestion-related and >> related to other underlying issues, but a combination of several >> underlying and fundamentally different kinds of processes. One is >> perhaps "delay processes" (of which loss is the extreme case and L2 >> retransmission is a partially-understood and poorly modeled >> contributor to). Another might be interference processes (such as >> radio interference in 802.11/802.16 networks) that cause end to >> end packet loss for other reasons. In mobile networks, it might be >> worthwhile to distinguish the processes of network change - from >> the perspective of an endpoint that is in motion, its route, and >> therefore its next hop, is constantly changing and might at times >> not exist. >> >> Looking at it from a TCP/SCTP perspective, we can only really >> discuss it as how we can best manage to use a certain share of the >> capacity the network provides, how much use is counterproductive, >> when to retransmit, and all that. But understanding the underlying >> issues will contribute heavily to that model. >> >> On Feb 11, 2009, at 7:20 AM, David P. Reed wrote: >> >>> I don't understand how what I wrote could be interpreted as "a >>> congestion-based loss process cannot be modeled or predicted". >>> >>> I was speaking about *non-congestion-based* "connectivity loss >>> related loss process", and I *said* that it is not a single- >>> parameter, memoryless loss process. >>> >>> I said nothing whatsoever about congestion-based loss processes, >>> having differentiated carefully the two types of loss (which >>> differentiation was what Detlef started this thread with). >>> >>> Clearly I am not communicating, despite using English and common >>> terms from systems modeling mathematics. >>> >>> Xai Xi wrote: >>>> are you saying that a congestion-based loss process cannot be >>>> modeled or predicted? a tool, badabing, from sigcomm'05, claims >>>> to be highly accurate in measuring end-to-end loss processes. >>>> >>>> David wrote: >>>> >>>>> A "loss process" would be a mathematically more sound term, >>>>> because it >>>> does not confuse> the listener into thinking that there is a >>>> simplistic, memoryless, one-parameter model that> can be >>>> "discovered" by TCP's control algorithms. >>>> >>>>> That said, I was encouraging a dichotomy where the world is far >>>>> more >>>> complicated: >>>>> congestion drops vs. connectivity drops. One *might* be >>>> able to make much practical >>>>> headway by building a model and a theory of >>>> "connectivity drops". >>>> >>>> >>>> _________________________________________________________________ >>>> Drag n? drop?Get easy photo sharing with Windows Live? Photos. >>>> >>>> http://www.microsoft.com/windows/windowslive/products/photos.aspx >>>> >>>> >> >> > From dpreed at reed.com Thu Feb 19 21:55:05 2009 From: dpreed at reed.com (David P. Reed) Date: Fri, 20 Feb 2009 00:55:05 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> Message-ID: <499E45B9.7030101@reed.com> Fred, you are right. Let's get ECN done. Get your company to take the lead. The ideal state in a steady state (if there is an ideal) would be, that along any path, there would be essentially a single packet waiting on each "bottleneck" link between the source and the destination. Any more packets in queues along the way would be (as you say, Fred) harmful, because the end-to-end latency would be bigger than needed for full utilization. And latency matters a lot. In contrast, if there are fewer packets in flight, there would be underutilization, and adding a packet enqueued along path would make all users happier, until latency gets above that minimum. So the control loop in each TCP sharing a path tries to "lock into" that optimal state (or it should), using AIMD, triggerered by the best congestion signals it can get. Prefer non-loss congestion signalling such as ECN over RED over Queue overflow triggered packet dropping. Shortening signaling delay would suggest (and literature bears out) that "head drops" or "head marking" is better than "tail drops" for minimizing latency, but desire to eke out a few percent improved throughput for FTPs has argued for tail drops and long queues on all output links. (bias of theory community toward throughput measures rather than latency measures is wrong, IMO). What makes it complex is that during a flow, many contentious flows may arise and die on "cross traffic" that makes any path unstable. Increased utilization under such probabilistic transients requires longer queues. But longer queues lead to more latency and increased jitter (higher moments of delay statistics). Good control response and stability is best achieved by minimizing queueing in any path, so that control is more responsive to transient queue buildup. Most traffic in Internet apps (that require QoS to make users happier) care about end-to-end latency or jitter or both, not maximal throughput. Maximal throughput is what the operator cares about if their users don't care about QoS, only bulk FTP users care about the last few percent of optimal throughput vs. minimizing latency/delay. Fred Baker wrote: > Which begs the question - why are we tuning to loss in the first > place? Once you have filled the data path enough to achieve your "fair > share" of the capacity, filling the queue more doesn't improve your > speed and it hurts everyone around you. As your cwnd grows, your mean > RTT grows with it so that the ratio of cwnd/rtt remains equal to the > capacity of the bottleneck. > > Seems pointless and selfish, the kind of thing we discipline our > children if they do. > > On Feb 19, 2009, at 7:07 PM, Injong Rhee wrote: > >> Perhaps I might add on this thread. Yes. I agree that it is not so >> clear that we have a model for non-congestion related losses. The >> motivation for this differentiation is, I guess, to disregard >> non-congestion related losses for TCP window control. So the >> motivation is valid. But maybe we should look at the problem from a >> different perspective. Instead of trying to detect non-congestion >> losses, why not try to detect congestion losses? Well..congestion >> signals are definitely easy to detect because losses are typically >> associated with some patterns of delays. So the scheme would be >> "reduce the congestion window ONLY when it is certain with high >> probability that losses are from congestion". This scheme would be >> different from "reduce whenever any indication of congestion occurs". >> Well my view could be too dangerous. But given that there are >> protocols out there, e.g., DCCP, that react to congestion much more >> slowly than TCP, this type of protocols may not be so bad... >> >> >> ----- Original Message ----- From: "Fred Baker" >> To: "David P. Reed" >> Cc: "end2end-interest list" >> Sent: Wednesday, February 11, 2009 5:07 PM >> Subject: Re: [e2e] TCP Loss Differentiation >> >> >>> Copying the specific communicants in this thread as my postings to >>> end2end-interest require moderator approval (I guess I'm not an >>> acceptable person for some reason, and the moderator has told me >>> that he will not tell me what rule prevents me from posting >>> without moderation). >>> >>> I think you're communicating just fine. I understood, and agreed >>> with, your comment. >>> >>> I actually think that a more important model is not loss processes, >>> which as you describe are both congestion-related and related to >>> other underlying issues, but a combination of several underlying and >>> fundamentally different kinds of processes. One is perhaps "delay >>> processes" (of which loss is the extreme case and L2 retransmission >>> is a partially-understood and poorly modeled contributor to). >>> Another might be interference processes (such as radio interference >>> in 802.11/802.16 networks) that cause end to end packet loss for >>> other reasons. In mobile networks, it might be worthwhile to >>> distinguish the processes of network change - from the perspective >>> of an endpoint that is in motion, its route, and therefore its next >>> hop, is constantly changing and might at times not exist. >>> >>> Looking at it from a TCP/SCTP perspective, we can only really >>> discuss it as how we can best manage to use a certain share of the >>> capacity the network provides, how much use is counterproductive, >>> when to retransmit, and all that. But understanding the underlying >>> issues will contribute heavily to that model. >>> >>> On Feb 11, 2009, at 7:20 AM, David P. Reed wrote: >>> >>>> I don't understand how what I wrote could be interpreted as "a >>>> congestion-based loss process cannot be modeled or predicted". >>>> >>>> I was speaking about *non-congestion-based* "connectivity loss >>>> related loss process", and I *said* that it is not a single- >>>> parameter, memoryless loss process. >>>> >>>> I said nothing whatsoever about congestion-based loss processes, >>>> having differentiated carefully the two types of loss (which >>>> differentiation was what Detlef started this thread with). >>>> >>>> Clearly I am not communicating, despite using English and common >>>> terms from systems modeling mathematics. >>>> >>>> Xai Xi wrote: >>>>> are you saying that a congestion-based loss process cannot be >>>>> modeled or predicted? a tool, badabing, from sigcomm'05, claims >>>>> to be highly accurate in measuring end-to-end loss processes. >>>>> >>>>> David wrote: >>>>> >>>>>> A "loss process" would be a mathematically more sound term, >>>>>> because it >>>>> does not confuse> the listener into thinking that there is a >>>>> simplistic, memoryless, one-parameter model that> can be >>>>> "discovered" by TCP's control algorithms. >>>>> >>>>>> That said, I was encouraging a dichotomy where the world is far more >>>>> complicated: >>>>>> congestion drops vs. connectivity drops. One *might* be >>>>> able to make much practical >>>>>> headway by building a model and a theory of >>>>> "connectivity drops". >>>>> >>>>> >>>>> _________________________________________________________________ >>>>> Drag n? drop?Get easy photo sharing with Windows Live? Photos. >>>>> >>>>> http://www.microsoft.com/windows/windowslive/products/photos.aspx >>>>> >>>>> >>> >>> >> > > From rhee at ncsu.edu Fri Feb 20 04:38:40 2009 From: rhee at ncsu.edu (Injong Rhee) Date: Fri, 20 Feb 2009 07:38:40 -0500 Subject: [e2e] TCP Loss Differentiation References: <4992ECA7.2020301@reed.com><69720037-E589-4B51-937E-0757FE5A3D17@cisco.com><006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> Message-ID: <000501c99358$296e7640$7001a8c0@RHEELAPTOP> Agreed. It is rather silly to raise your window at the time of full utilization; it ends up filling up the buffer and increase delays. But unless we have some mechanism that tells explicitly how much utilization a flow has in all the links that flow goes through (e.g., ECN on all links), loss based congestion control will stay, i think. Going back to the original question, differentiating congestion related losses from other losses is important and I believe it can be done effectively. The point is that we don't have to differentiate all the losses -- the congestion related losses seem to be rather easier to distinguish from the other types of losses whose models or patterns are not well understood. ----- Original Message ----- From: "Fred Baker" To: "Injong Rhee" Cc: "David P. Reed" ; "end2end-interest list" Sent: Thursday, February 19, 2009 11:32 PM Subject: Re: [e2e] TCP Loss Differentiation > Which begs the question - why are we tuning to loss in the first place? > Once you have filled the data path enough to achieve your "fair share" of > the capacity, filling the queue more doesn't improve your speed and it > hurts everyone around you. As your cwnd grows, your mean RTT grows with > it so that the ratio of cwnd/rtt remains equal to the capacity of the > bottleneck. > > Seems pointless and selfish, the kind of thing we discipline our children > if they do. > > On Feb 19, 2009, at 7:07 PM, Injong Rhee wrote: > >> Perhaps I might add on this thread. Yes. I agree that it is not so clear >> that we have a model for non-congestion related losses. The motivation >> for this differentiation is, I guess, to disregard non- congestion >> related losses for TCP window control. So the motivation is valid. But >> maybe we should look at the problem from a different perspective. >> Instead of trying to detect non-congestion losses, why not try to detect >> congestion losses? Well..congestion signals are definitely easy to >> detect because losses are typically associated with some patterns of >> delays. So the scheme would be "reduce the congestion window ONLY when >> it is certain with high probability that losses are from congestion". >> This scheme would be different from "reduce whenever any indication of >> congestion occurs". Well my view could be too dangerous. But given that >> there are protocols out there, e.g., DCCP, that react to congestion much >> more slowly than TCP, this type of protocols may not be so bad... >> >> >> ----- Original Message ----- From: "Fred Baker" >> To: "David P. Reed" >> Cc: "end2end-interest list" >> Sent: Wednesday, February 11, 2009 5:07 PM >> Subject: Re: [e2e] TCP Loss Differentiation >> >> >>> Copying the specific communicants in this thread as my postings to >>> end2end-interest require moderator approval (I guess I'm not an >>> acceptable person for some reason, and the moderator has told me that >>> he will not tell me what rule prevents me from posting without >>> moderation). >>> >>> I think you're communicating just fine. I understood, and agreed with, >>> your comment. >>> >>> I actually think that a more important model is not loss processes, >>> which as you describe are both congestion-related and related to other >>> underlying issues, but a combination of several underlying and >>> fundamentally different kinds of processes. One is perhaps "delay >>> processes" (of which loss is the extreme case and L2 retransmission is >>> a partially-understood and poorly modeled contributor to). Another >>> might be interference processes (such as radio interference in >>> 802.11/802.16 networks) that cause end to end packet loss for other >>> reasons. In mobile networks, it might be worthwhile to distinguish the >>> processes of network change - from the perspective of an endpoint that >>> is in motion, its route, and therefore its next hop, is constantly >>> changing and might at times not exist. >>> >>> Looking at it from a TCP/SCTP perspective, we can only really discuss >>> it as how we can best manage to use a certain share of the capacity >>> the network provides, how much use is counterproductive, when to >>> retransmit, and all that. But understanding the underlying issues will >>> contribute heavily to that model. >>> >>> On Feb 11, 2009, at 7:20 AM, David P. Reed wrote: >>> >>>> I don't understand how what I wrote could be interpreted as "a >>>> congestion-based loss process cannot be modeled or predicted". >>>> >>>> I was speaking about *non-congestion-based* "connectivity loss >>>> related loss process", and I *said* that it is not a single- >>>> parameter, memoryless loss process. >>>> >>>> I said nothing whatsoever about congestion-based loss processes, >>>> having differentiated carefully the two types of loss (which >>>> differentiation was what Detlef started this thread with). >>>> >>>> Clearly I am not communicating, despite using English and common >>>> terms from systems modeling mathematics. >>>> >>>> Xai Xi wrote: >>>>> are you saying that a congestion-based loss process cannot be >>>>> modeled or predicted? a tool, badabing, from sigcomm'05, claims to >>>>> be highly accurate in measuring end-to-end loss processes. >>>>> >>>>> David wrote: >>>>> >>>>>> A "loss process" would be a mathematically more sound term, because >>>>>> it >>>>> does not confuse> the listener into thinking that there is a >>>>> simplistic, memoryless, one-parameter model that> can be >>>>> "discovered" by TCP's control algorithms. >>>>> >>>>>> That said, I was encouraging a dichotomy where the world is far more >>>>> complicated: >>>>>> congestion drops vs. connectivity drops. One *might* be >>>>> able to make much practical >>>>>> headway by building a model and a theory of >>>>> "connectivity drops". >>>>> >>>>> >>>>> _________________________________________________________________ >>>>> Drag n? drop?Get easy photo sharing with Windows Live? Photos. >>>>> >>>>> http://www.microsoft.com/windows/windowslive/products/photos.aspx >>>>> >>>>> >>> >>> >> > > > From kkrama at research.att.com Fri Feb 20 09:29:29 2009 From: kkrama at research.att.com (RAMAKRISHNAN, KADANGODE K (K. K.)) Date: Fri, 20 Feb 2009 12:29:29 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> References: <4992ECA7.2020301@reed.com><69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> Message-ID: Injong's note prompted me to write this quick note: We have worked on LT-TCP, an enhancement to TCP to deal with lossy end-end paths (the topic of discussion here, I guess). LT-TCP uses ECN as an "unambiguous" indication of congestion, and treats loss as being primarily non-congestion related loss. Of course, this is not always true, even if the end-end path adopted ECN (because of the path operating at a load that is beyond the dynamic range of the ECN based feedback congestion avoidance). In addition, ECN itself may not be enabled on all the hops in the end-end path (even if we enable the use of ECN in the network). Therefore, to allow for backwards compatibility, we had proposed using the correlation between loss with observed end-end delay in the absence of ECN indications to cause the end-systems to fall back out of LT-TCP and use TCP's loss-based congestion response. We had talked about this at discussions we have had in the ICCRG meetings (2006 etc). We think for such a coarse use of delay as a guide to determine if the losses we are encountering are unrelated to congestion or if they are due to both congestion and other sources of loss is possible. Whether we can do better than that of course, is a separate question... Thanks, -- K. K. Ramakrishnan???????????? ??? ?Email: kkrama at research.att.com AT&T Labs-Research, Rm. A161??? ??? Tel: (973)360-8764 180 Park Ave, Florham Park, NJ 07932?? ?Fax: (973) 360-8871 ????? URL: http://www.research.att.com/info/kkrama -----Original Message----- From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of Injong Rhee Sent: Thursday, February 19, 2009 10:08 PM To: Fred Baker; David P. Reed Cc: end2end-interest list Subject: Re: [e2e] TCP Loss Differentiation Perhaps I might add on this thread. Yes. I agree that it is not so clear that we have a model for non-congestion related losses. The motivation for this differentiation is, I guess, to disregard non-congestion related losses for TCP window control. So the motivation is valid. But maybe we should look at the problem from a different perspective. Instead of trying to detect non-congestion losses, why not try to detect congestion losses? Well..congestion signals are definitely easy to detect because losses are typically associated with some patterns of delays. So the scheme would be "reduce the congestion window ONLY when it is certain with high probability that losses are from congestion". This scheme would be different from "reduce whenever any indication of congestion occurs". Well my view could be too dangerous. But given that there are protocols out there, e.g., DCCP, that react to congestion much more slowly than TCP, this type of protocols may not be so bad... From fred at cisco.com Fri Feb 20 10:10:34 2009 From: fred at cisco.com (Fred Baker) Date: Fri, 20 Feb 2009 10:10:34 -0800 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <499E45B9.7030101@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> Message-ID: <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> On Feb 19, 2009, at 9:55 PM, David P. Reed wrote: > Fred, you are right. Let's get ECN done. Get your company to take > the lead. ECN has been in the field, in some products, for the better part of a decade. Next step; get ISPs to turn it on. The products that don't support it don't because our customers tell us they don't need it (nobody is paying them to turn it on) or are simply not asking for it. That said, I'm not at all convinced that the end system can't do this effectively for itself. Setting Vegas and CalTech FAST aside (Vegas has problems and FAST has IPR that gets in the way, and neither actually tunes to the knee, they try to keep alpha in the bottleneck queue for some definition of alpha), there are some reasonably good delay-based algorithms around. The guys at Hamilton Institute have one (not HSTCP, their other one) that actually tunes to Jain's "knee" and appears to be fairly effective in preliminary work. > The ideal state in a steady state (if there is an ideal) would be, > that along any path, there would be essentially a single packet > waiting on each "bottleneck" link between the source and the > destination. I'll dispute that a little; the ideal state is that the amount of traffic that the end system is keeping in the network is the minimum that will maintain its maximum throughput rate, what Jain would call the "knee". That might mean several or even many segments in the same lambda, as one can often maintain a number of packets in a lambda due to speed of light issues. And on access interfaces, it can mean three or four packets in the same queue at some times. On access interfaces, it's not uncommon to see a burst of lets-say-three packets arrive and play out, and while the third packet is playing out, get the Ack back that triggers the next couple of packets. In such a case, "the minimum that will maintain the maximum rate" turns out to be a cwnd of 3-4 packets. I have a great capture of an upload to Picasa that would demonstrate this; between my Mac (BSD) and Picassa's Linux system, what we actually see is queues being built up to 1130 ms RTT when 3 packets (92 ms RTT when Linux is acking every other packet) would do the job. And as a result, I have to get in the queues in the router and play QoS games to make my VoIP at all useful. > Any more packets in queues along the way would be (as you say, Fred) > harmful, because the end-to-end latency would be bigger than needed > for full utilization. And latency matters a lot. > > In contrast, if there are fewer packets in flight, there would be > underutilization, and adding a packet enqueued along path would make > all users happier, until latency gets above that minimum. > > So the control loop in each TCP sharing a path tries to "lock into" > that optimal state (or it should), using AIMD, triggerered by the > best congestion signals it can get. Prefer non-loss congestion > signalling such as ECN over RED over Queue overflow triggered packet > dropping. Shortening signaling delay would suggest (and literature > bears out) that "head drops" or "head marking" is better than "tail > drops" for minimizing latency, but desire to eke out a few percent > improved throughput for FTPs has argued for tail drops and long > queues on all output links. (bias of theory community toward > throughput measures rather than latency measures is wrong, IMO). > > What makes it complex is that during a flow, many contentious flows > may arise and die on "cross traffic" that makes any path unstable. > Increased utilization under such probabilistic transients requires > longer queues. But longer queues lead to more latency and increased > jitter (higher moments of delay statistics). Well, yes and no. The bottleneck link is almost invariably the access link at one end or the other; in the core of the network the ISPs try pretty hard to stay ahead of the curve. Cross traffic happens, but I think the case is less obvious than it might appear. > Good control response and stability is best achieved by minimizing > queueing in any path, so that control is more responsive to > transient queue buildup. > > Most traffic in Internet apps (that require QoS to make users > happier) care about end-to-end latency or jitter or both, not > maximal throughput. Maximal throughput is what the operator cares > about if their users don't care about QoS, only bulk FTP users care > about the last few percent of optimal throughput vs. minimizing > latency/delay. > > > Fred Baker wrote: >> Which begs the question - why are we tuning to loss in the first >> place? Once you have filled the data path enough to achieve your >> "fair share" of the capacity, filling the queue more doesn't >> improve your speed and it hurts everyone around you. As your cwnd >> grows, your mean RTT grows with it so that the ratio of cwnd/rtt >> remains equal to the capacity of the bottleneck. >> >> Seems pointless and selfish, the kind of thing we discipline our >> children if they do. >> >> On Feb 19, 2009, at 7:07 PM, Injong Rhee wrote: >> >>> Perhaps I might add on this thread. Yes. I agree that it is not so >>> clear that we have a model for non-congestion related losses. The >>> motivation for this differentiation is, I guess, to disregard non- >>> congestion related losses for TCP window control. So the >>> motivation is valid. But maybe we should look at the problem from >>> a different perspective. Instead of trying to detect non- >>> congestion losses, why not try to detect congestion losses? >>> Well..congestion signals are definitely easy to detect because >>> losses are typically associated with some patterns of delays. So >>> the scheme would be "reduce the congestion window ONLY when it is >>> certain with high probability that losses are from congestion". >>> This scheme would be different from "reduce whenever any >>> indication of congestion occurs". Well my view could be too >>> dangerous. But given that there are protocols out there, e.g., >>> DCCP, that react to congestion much more slowly than TCP, this >>> type of protocols may not be so bad... >>> >>> >>> ----- Original Message ----- From: "Fred Baker" >>> To: "David P. Reed" >>> Cc: "end2end-interest list" >>> Sent: Wednesday, February 11, 2009 5:07 PM >>> Subject: Re: [e2e] TCP Loss Differentiation >>> >>> >>>> Copying the specific communicants in this thread as my postings >>>> to end2end-interest require moderator approval (I guess I'm not >>>> an acceptable person for some reason, and the moderator has told >>>> me that he will not tell me what rule prevents me from posting >>>> without moderation). >>>> >>>> I think you're communicating just fine. I understood, and agreed >>>> with, your comment. >>>> >>>> I actually think that a more important model is not loss >>>> processes, which as you describe are both congestion-related and >>>> related to other underlying issues, but a combination of several >>>> underlying and fundamentally different kinds of processes. One is >>>> perhaps "delay processes" (of which loss is the extreme case and >>>> L2 retransmission is a partially-understood and poorly modeled >>>> contributor to). Another might be interference processes (such >>>> as radio interference in 802.11/802.16 networks) that cause end >>>> to end packet loss for other reasons. In mobile networks, it >>>> might be worthwhile to distinguish the processes of network >>>> change - from the perspective of an endpoint that is in motion, >>>> its route, and therefore its next hop, is constantly changing >>>> and might at times not exist. >>>> >>>> Looking at it from a TCP/SCTP perspective, we can only really >>>> discuss it as how we can best manage to use a certain share of >>>> the capacity the network provides, how much use is >>>> counterproductive, when to retransmit, and all that. But >>>> understanding the underlying issues will contribute heavily to >>>> that model. >>>> >>>> On Feb 11, 2009, at 7:20 AM, David P. Reed wrote: >>>> >>>>> I don't understand how what I wrote could be interpreted as "a >>>>> congestion-based loss process cannot be modeled or predicted". >>>>> >>>>> I was speaking about *non-congestion-based* "connectivity loss >>>>> related loss process", and I *said* that it is not a single- >>>>> parameter, memoryless loss process. >>>>> >>>>> I said nothing whatsoever about congestion-based loss >>>>> processes, having differentiated carefully the two types of >>>>> loss (which differentiation was what Detlef started this thread >>>>> with). >>>>> >>>>> Clearly I am not communicating, despite using English and >>>>> common terms from systems modeling mathematics. >>>>> >>>>> Xai Xi wrote: >>>>>> are you saying that a congestion-based loss process cannot be >>>>>> modeled or predicted? a tool, badabing, from sigcomm'05, claims >>>>>> to be highly accurate in measuring end-to-end loss processes. >>>>>> >>>>>> David wrote: >>>>>> >>>>>>> A "loss process" would be a mathematically more sound term, >>>>>>> because it >>>>>> does not confuse> the listener into thinking that there is a >>>>>> simplistic, memoryless, one-parameter model that> can be >>>>>> "discovered" by TCP's control algorithms. >>>>>> >>>>>>> That said, I was encouraging a dichotomy where the world is >>>>>>> far more >>>>>> complicated: >>>>>>> congestion drops vs. connectivity drops. One *might* be >>>>>> able to make much practical >>>>>>> headway by building a model and a theory of >>>>>> "connectivity drops". >>>>>> >>>>>> >>>>>> _________________________________________________________________ >>>>>> Drag n? drop?Get easy photo sharing with Windows Live? Photos. >>>>>> >>>>>> http://www.microsoft.com/windows/windowslive/products/photos.aspx >>>>>> >>>>>> >>>> >>>> >>> >> >> From L.Wood at surrey.ac.uk Fri Feb 20 13:48:09 2009 From: L.Wood at surrey.ac.uk (Lloyd Wood) Date: Fri, 20 Feb 2009 21:48:09 +0000 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> Message-ID: On 20 Feb 2009, at 18:10, Fred Baker wrote: > On Feb 19, 2009, at 9:55 PM, David P. Reed wrote: >> Fred, you are right. Let's get ECN done. Get your company to take >> the lead. > > ECN has been in the field, in some products, for the better part of > a decade. Next step; get ISPs to turn it on. The products that don't > support it don't because our customers tell us they don't need it > (nobody is paying them to turn it on) or are simply not asking for it. It's like RED - turn it on by default, see what happens. L. DTN work: http://info.ee.surrey.ac.uk/Personal/L.Wood/saratoga/ From fred at cisco.com Fri Feb 20 14:14:59 2009 From: fred at cisco.com (Fred Baker) Date: Fri, 20 Feb 2009 14:14:59 -0800 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> Message-ID: <4E8EC531-3E05-42C0-BC5E-9905331AB1EB@cisco.com> The problem is that "elevated RTT" doesn't necessarily mean a lot. Consider ftp://ftpeng.cisco.com/fred/RTT/index.html. ftp://ftpeng.cisco.com/fred/RTT/Pages/2.html shows the picture you are expecting. Delay is temporarily increased because there is a file transfer happening. In such a case it might make sense. ftp://ftpeng.cisco.com/fred/RTT/Pages/3.html and ftp://ftpeng.cisco.com/fred/RTT/Pages/6.html show cases of a sustained change in delay due to some shift in underlying routing. In such cases, or at least the former, dropping because delay appears to be high will have to reducing your window ad infinitum with no valuable effect. On Feb 20, 2009, at 10:53 AM, Saverio Mascolo wrote: > the idea of consider a packet loss as due to congestion IF and ONLY > IF it happens in the presence of inflated rtt is always appealing > even though not new anymore. > i think tcp santa cruz was the first to propose it. > > does someone know experimental results of using a modified TCP > based on this idea? > > -saverio > > On Fri, Feb 20, 2009 at 6:29 PM, RAMAKRISHNAN, KADANGODE K (K. K.) > wrote: > Injong's note prompted me to write this quick note: > We have worked on LT-TCP, an enhancement to TCP to deal with lossy > end-end paths (the topic of discussion here, I guess). > LT-TCP uses ECN as an "unambiguous" indication of congestion, and > treats loss as being primarily non-congestion related loss. Of > course, this is not always true, even if the end-end path adopted > ECN (because of the path operating at a load that is beyond the > dynamic range of the ECN based feedback congestion avoidance). In > addition, ECN itself may not be enabled on all the hops in the end- > end path (even if we enable the use of ECN in the network). > Therefore, to allow for backwards compatibility, we had proposed > using the correlation between loss with observed end-end delay in > the absence of ECN indications to cause the end-systems to fall back > out of LT-TCP and use TCP's loss-based congestion response. We had > talked about this at discussions we have had in the ICCRG meetings > (2006 etc). We think for such a coarse use of delay as a guide to > determine if the losses we are encountering are unrelated to > congestion or if they are due to both congestion and other sources > of loss i! > s possible. Whether we can do better than that of course, is a > separate question... > Thanks, > -- > K. K. Ramakrishnan Email: kkrama at research.att.com > AT&T Labs-Research, Rm. A161 Tel: (973)360-8764 > 180 Park Ave, Florham Park, NJ 07932 Fax: (973) 360-8871 > URL: http://www.research.att.com/info/kkrama > > > -----Original Message----- > From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org > ] On Behalf Of Injong Rhee > Sent: Thursday, February 19, 2009 10:08 PM > To: Fred Baker; David P. Reed > Cc: end2end-interest list > Subject: Re: [e2e] TCP Loss Differentiation > > Perhaps I might add on this thread. Yes. I agree that it is not so > clear > that we have a model for non-congestion related losses. The > motivation for > this differentiation is, I guess, to disregard non-congestion > related losses > for TCP window control. So the motivation is valid. But maybe we > should look > at the problem from a different perspective. Instead of trying to > detect > non-congestion losses, why not try to detect congestion losses? > Well..congestion signals are definitely easy to detect because > losses are > typically associated with some patterns of delays. So the scheme > would be > "reduce the congestion window ONLY when it is certain with high > probability > that losses are from congestion". This scheme would be different from > "reduce whenever any indication of congestion occurs". Well my view > could be > too dangerous. But given that there are protocols out there, e.g., > DCCP, > that react to congestion much more slowly than TCP, this type of > protocols > may not be so bad... > > > > > > > > -- > Prof. Saverio Mascolo > Dipartimento di Elettrotecnica ed Elettronica > Politecnico di Bari > Via Orabona 4 > 70125 Bari > Italy > Tel. +39 080 5963621 > Fax. +39 080 5963410 > email:mascolo at poliba.it > > http://c3lab.poliba.it > > > ================================= > This message may contain confidential and/or legally privileged > information. > If you are not the intended recipient of the message, please > destroy it. > Any unauthorized dissemination, distribution, or copying of the > material in > this message, and any attachments to the message, is strictly > forbidden. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090220/832d5387/attachment-0001.html From dpreed at reed.com Fri Feb 20 18:46:55 2009 From: dpreed at reed.com (David P. Reed) Date: Fri, 20 Feb 2009 21:46:55 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> Message-ID: <499F6B1F.3050009@reed.com> I think we have no serious debate on any of these things. I know Cisco's products support ECN, it's really an endpoint stack problem, and the word "lead" was meant to suggest use of Cisco's bully pulpit (white papers, etc.). And real practical "optimality" is hard to define. I strongly emphasize the role of latency and jitter. You would de-emphasize it a bit. This is nuance. Fred Baker wrote: > On Feb 19, 2009, at 9:55 PM, David P. Reed wrote: >> Fred, you are right. Let's get ECN done. Get your company to take >> the lead. > > ECN has been in the field, in some products, for the better part of a > decade. Next step; get ISPs to turn it on. The products that don't > support it don't because our customers tell us they don't need it > (nobody is paying them to turn it on) or are simply not asking for it. > > That said, I'm not at all convinced that the end system can't do this > effectively for itself. Setting Vegas and CalTech FAST aside (Vegas > has problems and FAST has IPR that gets in the way, and neither > actually tunes to the knee, they try to keep alpha in the bottleneck > queue for some definition of alpha), there are some reasonably good > delay-based algorithms around. The guys at Hamilton Institute have one > (not HSTCP, their other one) that actually tunes to Jain's "knee" and > appears to be fairly effective in preliminary work. > >> The ideal state in a steady state (if there is an ideal) would be, >> that along any path, there would be essentially a single packet >> waiting on each "bottleneck" link between the source and the >> destination. > > I'll dispute that a little; the ideal state is that the amount of > traffic that the end system is keeping in the network is the minimum > that will maintain its maximum throughput rate, what Jain would call > the "knee". That might mean several or even many segments in the same > lambda, as one can often maintain a number of packets in a lambda due > to speed of light issues. And on access interfaces, it can mean three > or four packets in the same queue at some times. On access interfaces, > it's not uncommon to see a burst of lets-say-three packets arrive and > play out, and while the third packet is playing out, get the Ack back > that triggers the next couple of packets. In such a case, "the minimum > that will maintain the maximum rate" turns out to be a cwnd of 3-4 > packets. I have a great capture of an upload to Picasa that would > demonstrate this; between my Mac (BSD) and Picassa's Linux system, > what we actually see is queues being built up to 1130 ms RTT when 3 > packets (92 ms RTT when Linux is acking every other packet) would do > the job. And as a result, I have to get in the queues in the router > and play QoS games to make my VoIP at all useful. > >> Any more packets in queues along the way would be (as you say, Fred) >> harmful, because the end-to-end latency would be bigger than needed >> for full utilization. And latency matters a lot. >> >> In contrast, if there are fewer packets in flight, there would be >> underutilization, and adding a packet enqueued along path would make >> all users happier, until latency gets above that minimum. >> >> So the control loop in each TCP sharing a path tries to "lock into" >> that optimal state (or it should), using AIMD, triggerered by the >> best congestion signals it can get. Prefer non-loss congestion >> signalling such as ECN over RED over Queue overflow triggered packet >> dropping. Shortening signaling delay would suggest (and literature >> bears out) that "head drops" or "head marking" is better than "tail >> drops" for minimizing latency, but desire to eke out a few percent >> improved throughput for FTPs has argued for tail drops and long >> queues on all output links. (bias of theory community toward >> throughput measures rather than latency measures is wrong, IMO). >> >> What makes it complex is that during a flow, many contentious flows >> may arise and die on "cross traffic" that makes any path unstable. >> Increased utilization under such probabilistic transients requires >> longer queues. But longer queues lead to more latency and increased >> jitter (higher moments of delay statistics). > > Well, yes and no. The bottleneck link is almost invariably the access > link at one end or the other; in the core of the network the ISPs try > pretty hard to stay ahead of the curve. Cross traffic happens, but I > think the case is less obvious than it might appear. > >> Good control response and stability is best achieved by minimizing >> queueing in any path, so that control is more responsive to transient >> queue buildup. >> >> Most traffic in Internet apps (that require QoS to make users >> happier) care about end-to-end latency or jitter or both, not maximal >> throughput. Maximal throughput is what the operator cares about if >> their users don't care about QoS, only bulk FTP users care about the >> last few percent of optimal throughput vs. minimizing latency/delay. >> >> >> Fred Baker wrote: >>> Which begs the question - why are we tuning to loss in the first >>> place? Once you have filled the data path enough to achieve your >>> "fair share" of the capacity, filling the queue more doesn't improve >>> your speed and it hurts everyone around you. As your cwnd grows, >>> your mean RTT grows with it so that the ratio of cwnd/rtt remains >>> equal to the capacity of the bottleneck. >>> >>> Seems pointless and selfish, the kind of thing we discipline our >>> children if they do. >>> >>> On Feb 19, 2009, at 7:07 PM, Injong Rhee wrote: >>> >>>> Perhaps I might add on this thread. Yes. I agree that it is not so >>>> clear that we have a model for non-congestion related losses. The >>>> motivation for this differentiation is, I guess, to disregard >>>> non-congestion related losses for TCP window control. So the >>>> motivation is valid. But maybe we should look at the problem from a >>>> different perspective. Instead of trying to detect non-congestion >>>> losses, why not try to detect congestion losses? Well..congestion >>>> signals are definitely easy to detect because losses are typically >>>> associated with some patterns of delays. So the scheme would be >>>> "reduce the congestion window ONLY when it is certain with high >>>> probability that losses are from congestion". This scheme would be >>>> different from "reduce whenever any indication of congestion >>>> occurs". Well my view could be too dangerous. But given that there >>>> are protocols out there, e.g., DCCP, that react to congestion much >>>> more slowly than TCP, this type of protocols may not be so bad... >>>> >>>> >>>> ----- Original Message ----- From: "Fred Baker" >>>> To: "David P. Reed" >>>> Cc: "end2end-interest list" >>>> Sent: Wednesday, February 11, 2009 5:07 PM >>>> Subject: Re: [e2e] TCP Loss Differentiation >>>> >>>> >>>>> Copying the specific communicants in this thread as my postings to >>>>> end2end-interest require moderator approval (I guess I'm not an >>>>> acceptable person for some reason, and the moderator has told me >>>>> that he will not tell me what rule prevents me from posting >>>>> without moderation). >>>>> >>>>> I think you're communicating just fine. I understood, and agreed >>>>> with, your comment. >>>>> >>>>> I actually think that a more important model is not loss >>>>> processes, which as you describe are both congestion-related and >>>>> related to other underlying issues, but a combination of several >>>>> underlying and fundamentally different kinds of processes. One is >>>>> perhaps "delay processes" (of which loss is the extreme case and >>>>> L2 retransmission is a partially-understood and poorly modeled >>>>> contributor to). Another might be interference processes (such as >>>>> radio interference in 802.11/802.16 networks) that cause end to >>>>> end packet loss for other reasons. In mobile networks, it might >>>>> be worthwhile to distinguish the processes of network change - >>>>> from the perspective of an endpoint that is in motion, its route, >>>>> and therefore its next hop, is constantly changing and might at >>>>> times not exist. >>>>> >>>>> Looking at it from a TCP/SCTP perspective, we can only really >>>>> discuss it as how we can best manage to use a certain share of >>>>> the capacity the network provides, how much use is >>>>> counterproductive, when to retransmit, and all that. But >>>>> understanding the underlying issues will contribute heavily to >>>>> that model. >>>>> >>>>> On Feb 11, 2009, at 7:20 AM, David P. Reed wrote: >>>>> >>>>>> I don't understand how what I wrote could be interpreted as "a >>>>>> congestion-based loss process cannot be modeled or predicted". >>>>>> >>>>>> I was speaking about *non-congestion-based* "connectivity loss >>>>>> related loss process", and I *said* that it is not a single- >>>>>> parameter, memoryless loss process. >>>>>> >>>>>> I said nothing whatsoever about congestion-based loss processes, >>>>>> having differentiated carefully the two types of loss (which >>>>>> differentiation was what Detlef started this thread with). >>>>>> >>>>>> Clearly I am not communicating, despite using English and common >>>>>> terms from systems modeling mathematics. >>>>>> >>>>>> Xai Xi wrote: >>>>>>> are you saying that a congestion-based loss process cannot be >>>>>>> modeled or predicted? a tool, badabing, from sigcomm'05, claims >>>>>>> to be highly accurate in measuring end-to-end loss processes. >>>>>>> >>>>>>> David wrote: >>>>>>> >>>>>>>> A "loss process" would be a mathematically more sound term, >>>>>>>> because it >>>>>>> does not confuse> the listener into thinking that there is a >>>>>>> simplistic, memoryless, one-parameter model that> can be >>>>>>> "discovered" by TCP's control algorithms. >>>>>>> >>>>>>>> That said, I was encouraging a dichotomy where the world is far >>>>>>>> more >>>>>>> complicated: >>>>>>>> congestion drops vs. connectivity drops. One *might* be >>>>>>> able to make much practical >>>>>>>> headway by building a model and a theory of >>>>>>> "connectivity drops". >>>>>>> >>>>>>> >>>>>>> _________________________________________________________________ >>>>>>> Drag n? drop?Get easy photo sharing with Windows Live? Photos. >>>>>>> >>>>>>> http://www.microsoft.com/windows/windowslive/products/photos.aspx >>>>>>> >>>>>>> >>>>> >>>>> >>>> >>> >>> > > From gorinsky at arl.wustl.edu Sat Feb 21 07:08:27 2009 From: gorinsky at arl.wustl.edu (Sergey Gorinsky) Date: Sat, 21 Feb 2009 09:08:27 -0600 (CST) Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> Message-ID: Dear David and Fred, A small buffer offers small delay to delay-sensitive traffic. Throughput-greedy traffic is served through another, bigger buffer at a higher per-flow rate. The two-buffer forwarding gives the senders complete freedom in choosing between the two best-effort services of larger throughput versus low queuing delay: "RD Network Services: Differentiation through Performance Incentives" by Maxim Podlesny and Sergey Gorinsky, Proceedings of ACM SIGCOMM 2008, pp. 255-266, August 2008, http://www.arl.wustl.edu/~gorinsky/pdf/RD_Services_SIGCOMM_2008.pdf The architecture is an attempt at throughput-delay differentiation designed explicitly for incremental deployment in the multi-provider Internet. Thank you, Sergey > On 20 Feb 2009, at 18:10, Fred Baker wrote: > >> On Feb 19, 2009, at 9:55 PM, David P. Reed wrote: >>> Fred, you are right. Let's get ECN done. Get your company to take the >>> lead. >> >> ECN has been in the field, in some products, for the better part of a >> decade. Next step; get ISPs to turn it on. The products that don't support >> it don't because our customers tell us they don't need it (nobody is paying >> them to turn it on) or are simply not asking for it. From dpreed at reed.com Sat Feb 21 07:27:25 2009 From: dpreed at reed.com (David P. Reed) Date: Sat, 21 Feb 2009 10:27:25 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> Message-ID: <49A01D5D.3050804@reed.com> Sergey - I understand the motivation and approach. The problem today is that few applications are deployed that would exercise that freedom. There is not strong (in the sense of importance to users) tradeoff between throughput and micro/per-packet delays today in the Internet. Most of the Internet gives aggregate throughput that is better than 1/2 the potential throughput, and if the improvement is less than a factor of 2 for big FTPs, it's not really that important except to tweakers. Sergey Gorinsky wrote: > > Dear David and Fred, > > A small buffer offers small delay to delay-sensitive traffic. > Throughput-greedy traffic is served through another, bigger buffer > at a higher per-flow rate. The two-buffer forwarding gives > the senders complete freedom in choosing between > the two best-effort services of larger throughput versus > low queuing delay: > > "RD Network Services: Differentiation through Performance Incentives" > by Maxim Podlesny and Sergey Gorinsky, > Proceedings of ACM SIGCOMM 2008, pp. 255-266, August 2008, > http://www.arl.wustl.edu/~gorinsky/pdf/RD_Services_SIGCOMM_2008.pdf > > The architecture is an attempt at throughput-delay differentiation > designed explicitly for incremental deployment in the multi-provider > Internet. > > Thank you, > > Sergey > >> On 20 Feb 2009, at 18:10, Fred Baker wrote: >> >>> On Feb 19, 2009, at 9:55 PM, David P. Reed wrote: >>>> Fred, you are right. Let's get ECN done. Get your company to take >>>> the lead. >>> >>> ECN has been in the field, in some products, for the better part of >>> a decade. Next step; get ISPs to turn it on. The products that don't >>> support it don't because our customers tell us they don't need it >>> (nobody is paying them to turn it on) or are simply not asking for it. > From pekka.nikander at nomadiclab.com Sat Feb 21 07:43:06 2009 From: pekka.nikander at nomadiclab.com (Pekka Nikander) Date: Sat, 21 Feb 2009 17:43:06 +0200 Subject: [e2e] Changing dynamics In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> Message-ID: <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> Given that the memory prices have been plummeting about 100x every decade for the last two or three decades while the long-haul communication prices only maybe 5-20x every decade, why do we still consider the memory in the forwarding boxes as ordered queues? If the network knew a little bit more about what it handles, instead of queues we could have opportunistic caches worth for several seconds or even minutes of traffic, couldn't we?. No longer need to wait for a full RTT to get a missed packet? No longer necessary to send packets out in the order received but better classified by latency requirements? Ability to wait for better radio conditions before bursting the next bucket of spam? Instead of trying to optimise some queues in an end-to-end fashion and fighting of whether the optimal queue size is 1 or 4, perhaps we should aim to keep the fibers bitted up all the time, and all of the memories filled with usable data? Isn't lit but idle fiber or powered but unfilled storage essentially waste? Or, when in 2018 I will drive to Fry's to buy my 100TB disk at $100, should I pick it pre-filled with a web cache, my favourite movies, or what? Or will still I wait for it the three months it takes to be filled at constant 100 Mb/s, my upstream Tier-2 apparently still paying maybe $1/Mb/month to its Tier-1 for transit (instead of the present $20/Mb/month)? Are we seeing a Content Wall joining the well-known Memory Wall? --Pekka From dpreed at reed.com Sat Feb 21 08:57:01 2009 From: dpreed at reed.com (David P. Reed) Date: Sat, 21 Feb 2009 11:57:01 -0500 Subject: [e2e] Changing dynamics In-Reply-To: <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> Message-ID: <49A0325D.3060208@reed.com> Most (not all) of these ideas seem to reflect the idea that we should operate the net with a lot of internal buffering. For example, if it were actually a frequent benefit to search *partway back towards the source* rather than all the way back, to find a packet that was dropped because of a transient queue growing because of a burst of cross traffic, then what would that mean? It would mean that the packets are not transiting the network in the US with little or no delay other than hops*packet size*line rate (which could be helped by lower hops or smaller packets, or upping line rate, no memory required). I do think there is a virtue to moving replicated content closer to the endpoints. But that is a different thing, and has nothing to do with routers and e2e protocols. That thing has to do with what we were debating a few weeks ago: what Van Jacobson calls "content centric networks" or what Akamai does at the app layer, or my point about communications not having to be about information that begins with the assumption that information is in "one place". - David Pekka Nikander wrote: > Given that the memory prices have been plummeting about 100x every > decade for the last two or three decades while the long-haul > communication prices only maybe 5-20x every decade, why do we still > consider the memory in the forwarding boxes as ordered queues? > > If the network knew a little bit more about what it handles, instead > of queues we could have opportunistic caches worth for several seconds > or even minutes of traffic, couldn't we?. No longer need to wait for > a full RTT to get a missed packet? No longer necessary to send > packets out in the order received but better classified by latency > requirements? Ability to wait for better radio conditions before > bursting the next bucket of spam? > > Instead of trying to optimise some queues in an end-to-end fashion and > fighting of whether the optimal queue size is 1 or 4, perhaps we > should aim to keep the fibers bitted up all the time, and all of the > memories filled with usable data? Isn't lit but idle fiber or powered > but unfilled storage essentially waste? > > Or, when in 2018 I will drive to Fry's to buy my 100TB disk at $100, > should I pick it pre-filled with a web cache, my favourite movies, or > what? Or will still I wait for it the three months it takes to be > filled at constant 100 Mb/s, my upstream Tier-2 apparently still > paying maybe $1/Mb/month to its Tier-1 for transit (instead of the > present $20/Mb/month)? > > Are we seeing a Content Wall joining the well-known Memory Wall? > > --Pekka > > From gorinsky at arl.wustl.edu Sat Feb 21 09:04:14 2009 From: gorinsky at arl.wustl.edu (Sergey Gorinsky) Date: Sat, 21 Feb 2009 11:04:14 -0600 (CST) Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49A01D5D.3050804@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <49A01D5D.3050804@reed.com> Message-ID: Dear David, Freedom is exercised when available. Turning on the low-delay service in some providers creates incentives for interactive gaming and telephony applications to opt in, and puts pressure on other providers to follow suit. By default, a throughput-greedy flow gets a twice higher rate in the RD Network Services. The throughput ratio can be increased for stronger incentives in selecting between the low-delay and larger-throughput services. The benefits for throughput-greedy traffic from such forwarding differentiation are in addition to those from potential improvements in routing (to which you attribute the 50% underutilization of the Internet aggregate throughput, I presume). Best regards, Sergey On Sat, 21 Feb 2009, David P. Reed wrote: > Sergey - I understand the motivation and approach. The problem today is that > few applications are deployed that would exercise that freedom. > > There is not strong (in the sense of importance to users) tradeoff between > throughput and micro/per-packet delays today in the Internet. Most of the > Internet gives aggregate throughput that is better than 1/2 the potential > throughput, and if the improvement is less than a factor of 2 for big FTPs, > it's not really that important except to tweakers. > > Sergey Gorinsky wrote: >> >> Dear David and Fred, >> >> A small buffer offers small delay to delay-sensitive traffic. >> Throughput-greedy traffic is served through another, bigger buffer >> at a higher per-flow rate. The two-buffer forwarding gives >> the senders complete freedom in choosing between >> the two best-effort services of larger throughput versus >> low queuing delay: >> >> "RD Network Services: Differentiation through Performance Incentives" >> by Maxim Podlesny and Sergey Gorinsky, >> Proceedings of ACM SIGCOMM 2008, pp. 255-266, August 2008, >> http://www.arl.wustl.edu/~gorinsky/pdf/RD_Services_SIGCOMM_2008.pdf >> >> The architecture is an attempt at throughput-delay differentiation >> designed explicitly for incremental deployment in the multi-provider >> Internet. >> >> Thank you, >> >> Sergey >> >>> On 20 Feb 2009, at 18:10, Fred Baker wrote: >>> >>>> On Feb 19, 2009, at 9:55 PM, David P. Reed wrote: >>>>> Fred, you are right. Let's get ECN done. Get your company to take the >>>>> lead. >>>> >>>> ECN has been in the field, in some products, for the better part of a >>>> decade. Next step; get ISPs to turn it on. The products that don't >>>> support it don't because our customers tell us they don't need it (nobody >>>> is paying them to turn it on) or are simply not asking for it. >> > From perfgeek at mac.com Sat Feb 21 09:29:29 2009 From: perfgeek at mac.com (rick jones) Date: Sat, 21 Feb 2009 09:29:29 -0800 Subject: [e2e] Changing dynamics In-Reply-To: <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> Message-ID: <7AE5AECD-52C2-42D7-A175-6E74B2093EE5@mac.com> On Feb 21, 2009, at 7:43 AM, Pekka Nikander wrote: > Or, when in 2018 I will drive to Fry's to buy my 100TB disk at $100, > should I pick it pre-filled with a web cache, my favourite movies, > or what? Or will still I wait for it the three months it takes to > be filled at constant 100 Mb/s, my upstream Tier-2 apparently still > paying maybe $1/Mb/month to its Tier-1 for transit (instead of the > present $20/Mb/month)? I'm not sure that is a purely technical question, but rather one directed at layers 8 and 9. The MPAA will see 100TB of movies as being worth rather more than a fraction of $100. They will see a disc filled with ~2000 HD movies as being worth more like $40000. And unless they can be convinced that they will see that revenue later as you spend the $$ to unlock the content stored on the disc they will likely make it (legally) impossible to do anything but dribble them onto the disc in a pay-as- you-fill model. My picking the MPAA was simply the most convenient example, the same idea would hold, perhaps with different constants for just about any content on that disc drive - what guarantee would be in place that those advertising on the pages in the web cache will see their (collective) much, Much, MUCH more than $100 in advertising revenue as you browse the disc, etc etc etc rick jones there is no rest for the wicked, yet the virtuous have no pillows -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090221/a69a8a3c/attachment.html From fred at cisco.com Sat Feb 21 10:01:25 2009 From: fred at cisco.com (Fred Baker) Date: Sat, 21 Feb 2009 10:01:25 -0800 Subject: [e2e] Changing dynamics In-Reply-To: <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> Message-ID: <7070394F-B9B7-40B9-B13B-4AAFBE3980A7@cisco.com> On Feb 21, 2009, at 7:43 AM, Pekka Nikander wrote: > Instead of trying to optimise some queues in an end-to-end fashion > and fighting of whether the optimal queue size is 1 or 4, perhaps we > should aim to keep the fibers bitted up all the time, and all of the > memories filled with usable data? Isn't lit but idle fiber or > powered but unfilled storage essentially waste? Two answers. One is in RFC 970. The other is that they *do* run bitted up all the time and nobody is saying they shouldn't. I'm suggesting that one could simultaneously do that *and* run other applications without forcing other things out of one's way. One might, for example, take a look at the BitTorrent vs ISP issue. Stanislaus has decided that he would rather have his application run than have it shut down when ISPs want to deliver services to all of their customers, not just the ones running BitTorrent. He is figuring out how to accomplish his goals of moving data quickly - not too hard - without trashing everything in his path. I'm suggesting that we as a community take that approach as well. From huitema at windows.microsoft.com Sat Feb 21 11:13:37 2009 From: huitema at windows.microsoft.com (Christian Huitema) Date: Sat, 21 Feb 2009 11:13:37 -0800 Subject: [e2e] Changing dynamics In-Reply-To: <49A0325D.3060208@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> <49A0325D.3060208@reed.com> Message-ID: <8EFB68EAE061884A8517F2A755E8B60A19DAFCAFE9@NA-EXMSG-W601.wingroup.windeploy.ntdev.microsoft.com> > Most (not all) of these ideas seem to reflect the idea that we should > operate the net with a lot of internal buffering. This is more or less required if we expect transport to use congestion control based on windows sizes. By definition, such controls can lead to great swings. Each transport connection will be a window that matches its share of the transmission resource and one or 2 RTT. In theory, they are more or less spaced over time, and the law of big numbers apply. But if for any reason they fire all at once, then the routers will see quite an incoming wave of traffic. Window based control also has a spiraling effect on RTT. As more stations use the link, queues will build up. In the absence of losses, the stations will react by increasing the windows size, which will tend to further increase the queues. I believe that if we want to get small queues, we have to implement some form of rate-based flow control, so we tame the peaks of traffic that any given station can generate. -- Christian Huitema From L.Wood at surrey.ac.uk Sat Feb 21 12:36:07 2009 From: L.Wood at surrey.ac.uk (Lloyd Wood) Date: Sat, 21 Feb 2009 20:36:07 +0000 Subject: [e2e] Changing dynamics In-Reply-To: <49A0325D.3060208@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> <49A0325D.3060208@reed.com> Message-ID: On 21 Feb 2009, at 16:57, David P. Reed wrote: > Most (not all) of these ideas seem to reflect the idea that we > should operate the net with a lot of internal buffering. That depends on where the control loops are. Adding buffering within a tight control loop is not always helpful. Adding buffering between tight control loops is not always unhelpful. (Delay-tolerant networking can make every hop a separate control loop, rather than relying on an end-to-end control loop and benefits from buffering.) L. DTN work: http://info.ee.surrey.ac.uk/Personal/L.Wood/saratoga/ From pekka.nikander at nomadiclab.com Sat Feb 21 23:31:03 2009 From: pekka.nikander at nomadiclab.com (Pekka Nikander) Date: Sun, 22 Feb 2009 09:31:03 +0200 Subject: [e2e] Changing dynamics In-Reply-To: <49A0325D.3060208@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> <49A0325D.3060208@reed.com> Message-ID: <6AD397D3-D167-4237-8A6C-E027E71ECDCF@nomadiclab.com> > Most (not all) of these ideas seem to reflect the idea that we > should operate the net with a lot of internal buffering. I would rather say the idea is to change how we view the buffering in the network. > For example, if it were actually a frequent benefit to search > partway back ... then what would that mean? It would mean that the > packets are not transiting the network in the US with little or no > delay ... But then we would not be considering the fact that not all traffic requires short delay. AFAICS, the reason why TCP requires short delay is built-in to TCP; given a different control loop structure (see Lloyd's and Christian's messages) there could easily be transport protocols that do not require such short e2e delays. Taking a higher-layer view, only a small fraction of traffic requires 50..200 ms e2e delay, basically games and "interactive multimedia" (aka voice :-). Most transaction-like apps (IM, web apps etc) could tolerate 2000..4000 ms delays, not even speaking about bulk. > I do think there is a virtue to moving replicated content closer to > the endpoints. But that is a different thing, and has nothing to do > with routers and e2e protocols. I sort-of thought so, too, in the beginning. Then I realised that changing the way the network handles information will change the system dynamics, affecting transport and e2e protocols. > That thing has to do with what we were debating a few weeks ago: > what Van Jacobson calls "content centric networks" or what Akamai > does at the app layer, or my point about communications not having > to be about information that begins with the assumption that > information is in "one place". Indeed. That was the starting point. But then, adding a look at the tends in the price/performance rate changes of the components, the landscape starts to change. As a Gedankenexperiment, what if the probability of the data still being opportunistically cached in the forwarding nodes would be higher than the "sending" end-node still being up? (Also consider a "stateless" data source, or a source that has no direct material interest of knowing if the sink has got the data or not.) Or another example, I think which you brought forward a few weeks ago, what if the information in question doesn't have any well defined "location" but is "smeared" over the network, e.g. due to coding or being an answer to a question. I presume that it would also have quite a large effects on how to build routers or transport protocols. In such a world it is hard to speak about e2e; where is the other end? Taking a shorter-term time perspective, haven't we learned anything from the so-called wireless TCP accelerators? There we saw the effects of the interaction of two (or three) control loops. Now, if the development of technology makes it a much stronger requirement than today to prefer local communications to long haul communications, presumably changing the dynamics (number of interacting control loops), I surmise that it will have its effects on transport protocols, too. --Pekka From pekka.nikander at nomadiclab.com Sat Feb 21 23:53:23 2009 From: pekka.nikander at nomadiclab.com (Pekka Nikander) Date: Sun, 22 Feb 2009 09:53:23 +0200 Subject: [e2e] Changing dynamics In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <1D3E525E-D6B7-4A6C-89E2-3F1DF5E114F3@nomadiclab.com> Message-ID: <5A3DD8F3-A07F-4713-83B0-07634BDDEA82@nomadiclab.com> >> Instead of trying to optimise some queues in an end-to-end fashion >> and fighting of whether the optimal queue size is 1 or 4, perhaps >> we should aim to keep the fibers bitted up all the time, and all of >> the memories filled with usable data? Isn't lit but idle fiber or >> powered but unfilled storage essentially waste? > > You cheated and said "powered" - but in general, no, there are cost/ > green reasons to not use things when you don't need to. Oh, I may be wrong, but my understanding is that you cannot easily power up and down parts of RAM, parts of a hard disk, or fast enough the sending LED on a fiber. It may also turn out that the opportunities to cut user-perceived latencies (such as in downloading) may have greater benefits than what the marginal energy savings would be from powering down partial components. >> Are we seeing a Content Wall joining the well-known Memory Wall? > > It's not a content wall - it's the speed of light. As the saying > goes, "2.99 x 10^8 m/s: It's not just a good idea - it's the law." You are missing the point. It is not about the speed of light, it is about the changing rates. For memory wall, it's about CPU cycle vs. DRAM random access latency. For the presumed content wall, it might be long-haul transmission cost vs. storage cost. > As bandwidth and storage -> infinity, we'll want ways to abuse that > bandwidth and storage to get low latency access to content. If I > were making a blatant plug, I'd say that after we get everyone using > ECN, we then should get all the app writers to use DOT [1] for their > applications. Scott might say to deploy DONA [2], and Van might say > to use his (afaik unnamed) system [3]. Of course, something like that is the broad picture. I'm trying to look a little bit closer. --Pekka From detlef.bosau at web.de Sun Feb 22 03:08:44 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 22 Feb 2009 12:08:44 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> Message-ID: <49A1323C.3040705@web.de> Injong Rhee wrote: > Perhaps I might add on this thread. Yes. I agree that it is not so > clear that we have a model for non-congestion related losses. The > motivation for this differentiation is, I guess, to disregard > non-congestion related losses for TCP window control. So the > motivation is valid. But maybe we should look at the problem from a > different perspective. Instead of trying to detect non-congestion > losses, why not try to detect congestion losses? I did not follow the entire thread yet, but I think you miss the problem. The problem is not how to detect congestion losses. In the thread, ECN is mentioned, perhaps other mechanisms as well. The problem is how to react properly upon packet loss? Particularly: how do we react properly upon an _individual_ packet loss? When should a packet be disregarded for congeston control? On a statistical basis, there is an approach: the CETEN work by Alman and Eddy. But can a reliable loss differentiation on an e-2-e basis be achieved for individual packet? -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090222/90f86e4c/smime.bin From rhee at ncsu.edu Sun Feb 22 04:24:52 2009 From: rhee at ncsu.edu (Injong Rhee) Date: Sun, 22 Feb 2009 07:24:52 -0500 Subject: [e2e] TCP Loss Differentiation References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com><006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> Message-ID: <001901c994e8$909e8080$7001a8c0@RHEELAPTOP> You missed my point. I am commenting on *when we should disregard a packet loss for congestion control* in particular. My speculaiton is that it might be ok to react to losses only when they are from congestion. I am suggesting one way to differentiate congestion losses from the other losses (which we don't have clear model for). Therefore, instead of trying to explicitly model non-congestion losses, just model congestion losses which we understand it a bit better and react to them. ----- Original Message ----- From: "Detlef Bosau" To: "end2end-interest list" Sent: Sunday, February 22, 2009 6:08 AM Subject: Re: [e2e] TCP Loss Differentiation > I did not follow the entire thread yet, but I think you miss the problem. > > The problem is not how to detect congestion losses. In the thread, ECN is > mentioned, perhaps other mechanisms as well. > > The problem is how to react properly upon packet loss? > > Particularly: how do we react properly upon an _individual_ packet loss? > When should a packet be disregarded for congeston control? > On a statistical basis, there is an approach: the CETEN work by Alman and > Eddy. But can a reliable loss differentiation on an e-2-e basis be > achieved for individual packet? > > -- > Detlef Bosau Mail: detlef.bosau at web.de > Galileistrasse 30 Web: http://www.detlef-bosau.de > 70565 Stuttgart Skype: detlef.bosau > Mobile: +49 172 681 9937 > > From Jon.Crowcroft at cl.cam.ac.uk Sun Feb 22 04:33:17 2009 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sun, 22 Feb 2009 12:33:17 +0000 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49A1323C.3040705@web.de> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> Message-ID: hmmm - do you want the CS answer or the EE answer i think the CS answer is that you can't differentiate congestion from failure causes for loss on a per packet basis thought experiment - what if a packet that is ECN marked gets lost? i think the EE answer is that you could infer the likely cause from the statistics of the channel over the recent past. of course, it may be easier to do this at a receiver than a sender. I am not sure if this is relevant, but it looks like we ought to understand it anyhow: A Survey of Results for Deletion Channels and Related Synchronization Channels, by Michael Mitzenmacher http://portal.acm.org/citation.cfm?id=1427856 I suppose if the channel is delay bounded, and doesn't re-route without telling you, then there might be some expensive way to put in zillions of bits in the packet (or have each hop look at each packet and see if it "should have seen" a packet before but hadn't, and set a excplicit loss notification bit + seqno - maybe we could send an ICMP unreachable-from-previous hop with a copy of the next packet (better still, with a timestamp of prev to one, and curent packet) in multihop radio networks (especially if you have multipath routing), there is even a problem statistically dis-ambiguating packet loss due to congestion and interference, since some intererence is due to other packets on parallel flows that go over "links" that are near enough (even on "link disjoint" routes) to cause interference...(think "rubbernecking") In missive <49A1323C.3040705 at web.de>, Detlef Bosau typed: >>This is a cryptographically signed message in MIME format. >> >>--------------ms060804080801040005020107 >>Content-Type: text/plain; charset=windows-1252; format=flowed >>Content-Transfer-Encoding: 7bit >> >>Injong Rhee wrote: >>> Perhaps I might add on this thread. Yes. I agree that it is not so >>> clear that we have a model for non-congestion related losses. The >>> motivation for this differentiation is, I guess, to disregard >>> non-congestion related losses for TCP window control. So the >>> motivation is valid. But maybe we should look at the problem from a >>> different perspective. Instead of trying to detect non-congestion >>> losses, why not try to detect congestion losses? >> >>I did not follow the entire thread yet, but I think you miss the problem. >> >>The problem is not how to detect congestion losses. In the thread, ECN >>is mentioned, perhaps other mechanisms as well. >> >>The problem is how to react properly upon packet loss? >> >>Particularly: how do we react properly upon an _individual_ packet loss? >>When should a packet be disregarded for congeston control? >>On a statistical basis, there is an approach: the CETEN work by Alman >>and Eddy. But can a reliable loss differentiation on an e-2-e basis be >>achieved for individual packet? >> >>-- >>Detlef Bosau Mail: detlef.bosau at web.de >>Galileistrasse 30 Web: http://www.detlef-bosau.de >>70565 Stuttgart Skype: detlef.bosau >>Mobile: +49 172 681 9937 >> >> >>--------------ms060804080801040005020107 >>Content-Type: application/x-pkcs7-signature; name="smime.p7s" >>Content-Transfer-Encoding: base64 >>Content-Disposition: attachment; filename="smime.p7s" >>Content-Description: S/MIME Cryptographic Signature >> >>MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIJdTCC >>AxUwggJ+oAMCAQICEErz7fLQ+lB9XkxJSlRsPJ0wDQYJKoZIhvcNAQEFBQAwYjELMAkGA1UE >>BhMCWkExJTAjBgNVBAoTHFRoYXd0ZSBDb25zdWx0aW5nIChQdHkpIEx0ZC4xLDAqBgNVBAMT >>I1RoYXd0ZSBQZXJzb25hbCBGcmVlbWFpbCBJc3N1aW5nIENBMB4XDTA4MDkyMzEyMzQzNFoX >>DTA5MDkyMzEyMzQzNFowejEOMAwGA1UEBBMFQm9zYXUxHjAcBgNVBCoTFURldGxlZiBIYXJh >>bGQgQW5kcmVhczEkMCIGA1UEAxMbRGV0bGVmIEhhcmFsZCBBbmRyZWFzIEJvc2F1MSIwIAYJ >>KoZIhvcNAQkBFhNkZXRsZWYuYm9zYXVAd2ViLmRlMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A >>MIIBCgKCAQEA89wxDqb1nR6gDj7+sqdBTkbg5MrNzOVP8Ox4WFUYfpavrXVQEimbGeXlJSAV >>GgAzn6Ip1gu7+qJsPRlFe3cPiTs82btqja6/+galJYsXXJSfJBnKBPWoNespcg5hSQnsBtn7 >>ZSJqfhBoS6xBgpK1rlwQ5F6vMUmp1z+50k4a9katBKuj1kE+sfRQX2rCm6uRKYo1hKpmHRZd >>72TYW7NlyjjT6nL6Z5bK3wMP7F7DoSpOFLmJsE323Mn1aUBSt6va8oL2GmP8md5BXmkKyjSV >>tPrsXjWd8U+C+dD+E/XdxtRCH35XkIgRY/8Bho/VFpPjtE6M160Td0KE95fQ2u2erQIDAQAB >>ozAwLjAeBgNVHREEFzAVgRNkZXRsZWYuYm9zYXVAd2ViLmRlMAwGA1UdEwEB/wQCMAAwDQYJ >>KoZIhvcNAQEFBQADgYEABiiGWyxlhmRwhz8huaDIwJmzR4hy8fYiwdCWLffyEVhf3ZRtiajp >>GabbOIK82ORvaPOhZTkeojY+ALJ6h41Id3I4B4+8IvnwaqJ7TQre8GraBpd48FJbZigqZ35U >>DHf8WkybF548CbMgANOzNgGwJEnMaeu2mkpJBIk64DOZBQAwggMVMIICfqADAgECAhBK8+3y >>0PpQfV5MSUpUbDydMA0GCSqGSIb3DQEBBQUAMGIxCzAJBgNVBAYTAlpBMSUwIwYDVQQKExxU >>aGF3dGUgQ29uc3VsdGluZyAoUHR5KSBMdGQuMSwwKgYDVQQDEyNUaGF3dGUgUGVyc29uYWwg >>RnJlZW1haWwgSXNzdWluZyBDQTAeFw0wODA5MjMxMjM0MzRaFw0wOTA5MjMxMjM0MzRaMHox >>DjAMBgNVBAQTBUJvc2F1MR4wHAYDVQQqExVEZXRsZWYgSGFyYWxkIEFuZHJlYXMxJDAiBgNV >>BAMTG0RldGxlZiBIYXJhbGQgQW5kcmVhcyBCb3NhdTEiMCAGCSqGSIb3DQEJARYTZGV0bGVm >>LmJvc2F1QHdlYi5kZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPPcMQ6m9Z0e >>oA4+/rKnQU5G4OTKzczlT/DseFhVGH6Wr611UBIpmxnl5SUgFRoAM5+iKdYLu/qibD0ZRXt3 >>D4k7PNm7ao2uv/oGpSWLF1yUnyQZygT1qDXrKXIOYUkJ7AbZ+2Uian4QaEusQYKSta5cEORe >>rzFJqdc/udJOGvZGrQSro9ZBPrH0UF9qwpurkSmKNYSqZh0WXe9k2FuzZco40+py+meWyt8D >>D+xew6EqThS5ibBN9tzJ9WlAUrer2vKC9hpj/JneQV5pCso0lbT67F41nfFPgvnQ/hP13cbU >>Qh9+V5CIEWP/AYaP1RaT47ROjNetE3dChPeX0Nrtnq0CAwEAAaMwMC4wHgYDVR0RBBcwFYET >>ZGV0bGVmLmJvc2F1QHdlYi5kZTAMBgNVHRMBAf8EAjAAMA0GCSqGSIb3DQEBBQUAA4GBAAYo >>hlssZYZkcIc/IbmgyMCZs0eIcvH2IsHQli338hFYX92UbYmo6Rmm2ziCvNjkb2jzoWU5HqI2 >>PgCyeoeNSHdyOAePvCL58Gqie00K3vBq2gaXePBSW2YoKmd+VAx3/FpMmxeePAmzIADTszYB >>sCRJzGnrtppKSQSJOuAzmQUAMIIDPzCCAqigAwIBAgIBDTANBgkqhkiG9w0BAQUFADCB0TEL >>MAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3du >>MRowGAYDVQQKExFUaGF3dGUgQ29uc3VsdGluZzEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBT >>ZXJ2aWNlcyBEaXZpc2lvbjEkMCIGA1UEAxMbVGhhd3RlIFBlcnNvbmFsIEZyZWVtYWlsIENB >>MSswKQYJKoZIhvcNAQkBFhxwZXJzb25hbC1mcmVlbWFpbEB0aGF3dGUuY29tMB4XDTAzMDcx >>NzAwMDAwMFoXDTEzMDcxNjIzNTk1OVowYjELMAkGA1UEBhMCWkExJTAjBgNVBAoTHFRoYXd0 >>ZSBDb25zdWx0aW5nIChQdHkpIEx0ZC4xLDAqBgNVBAMTI1RoYXd0ZSBQZXJzb25hbCBGcmVl >>bWFpbCBJc3N1aW5nIENBMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEpjxVc1X7TrnK >>mVoeaMB1BHCd3+n/ox7svc31W/Iadr1/DDph8r9RzgHU5VAKMNcCY1osiRVwjt3J8CuFWqo/ >>cVbLrzwLB+fxH5E2JCoTzyvV84J3PQO+K/67GD4Hv0CAAmTXp6a7n2XRxSpUhQ9IBH+nttE8 >>YQRAHmQZcmC3+wIDAQABo4GUMIGRMBIGA1UdEwEB/wQIMAYBAf8CAQAwQwYDVR0fBDwwOjA4 >>oDagNIYyaHR0cDovL2NybC50aGF3dGUuY29tL1RoYXd0ZVBlcnNvbmFsRnJlZW1haWxDQS5j >>cmwwCwYDVR0PBAQDAgEGMCkGA1UdEQQiMCCkHjAcMRowGAYDVQQDExFQcml2YXRlTGFiZWwy >>LTEzODANBgkqhkiG9w0BAQUFAAOBgQBIjNFQg+oLLswNo2asZw9/r6y+whehQ5aUnX9MIbj4 >>Nh+qLZ82L8D0HFAgk3A8/a3hYWLD2ToZfoSxmRsAxRoLgnSeJVCUYsfbJ3FXJY3dqZw5jowg >>T2Vfldr394fWxghOrvbqNOUQGls1TXfjViF4gtwhGTXeJLHTHUb/XV9lTzGCA2QwggNgAgEB >>MHYwYjELMAkGA1UEBhMCWkExJTAjBgNVBAoTHFRoYXd0ZSBDb25zdWx0aW5nIChQdHkpIEx0 >>ZC4xLDAqBgNVBAMTI1RoYXd0ZSBQZXJzb25hbCBGcmVlbWFpbCBJc3N1aW5nIENBAhBK8+3y >>0PpQfV5MSUpUbDydMAkGBSsOAwIaBQCgggHDMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEw >>HAYJKoZIhvcNAQkFMQ8XDTA5MDIyMjExMDg0NFowIwYJKoZIhvcNAQkEMRYEFAFDTSIu9hZz >>BxBj6zKA77bT0YLCMFIGCSqGSIb3DQEJDzFFMEMwCgYIKoZIhvcNAwcwDgYIKoZIhvcNAwIC >>AgCAMA0GCCqGSIb3DQMCAgFAMAcGBSsOAwIHMA0GCCqGSIb3DQMCAgEoMIGFBgkrBgEEAYI3 >>EAQxeDB2MGIxCzAJBgNVBAYTAlpBMSUwIwYDVQQKExxUaGF3dGUgQ29uc3VsdGluZyAoUHR5 >>KSBMdGQuMSwwKgYDVQQDEyNUaGF3dGUgUGVyc29uYWwgRnJlZW1haWwgSXNzdWluZyBDQQIQ >>SvPt8tD6UH1eTElKVGw8nTCBhwYLKoZIhvcNAQkQAgsxeKB2MGIxCzAJBgNVBAYTAlpBMSUw >>IwYDVQQKExxUaGF3dGUgQ29uc3VsdGluZyAoUHR5KSBMdGQuMSwwKgYDVQQDEyNUaGF3dGUg >>UGVyc29uYWwgRnJlZW1haWwgSXNzdWluZyBDQQIQSvPt8tD6UH1eTElKVGw8nTANBgkqhkiG >>9w0BAQEFAASCAQB53ULM94R7ekULizl6eIwKCX/vtseOIbLAolf/54BZMnOhoTQgdaOyUfUf >>aofT3qoFQCUG9g18ei5MJsrON6MoaUXa3ZS4eCx99X0i0gl1NAoSxcCcSEnwzpKqpxM4gXMo >>cLduU4eHlzQ8mvCmfoz0B8mpy9HFyAaBUzXgDG+IbKVEjD3pLNeNRNiQ9X42EgNt5UAz9dwB >>rudYakjBie7ZlAxSejE4j8Lbv0bKQAqnsfczdsf9z4FunLk/PYER3sUPhG8xWcenzCBZximS >>ULmpH2Yl28vPUcrJf7qTPfGd4tdwYI9MgVG3X0bs3bqmHNF0U24r9BLtRZGjvIydydJlAAAA >>AAAA >>--------------ms060804080801040005020107-- cheers jon From detlef.bosau at web.de Sun Feb 22 09:19:26 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 22 Feb 2009 18:19:26 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <001901c994e8$909e8080$7001a8c0@RHEELAPTOP> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com><006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> <001901c994e8$909e8080$7001a8c0@RHEELAPTOP> Message-ID: <49A1891E.5080903@web.de> Injong Rhee wrote: > You missed my point. I am commenting on *when we should disregard a > packet loss for congestion control* in particular. My speculaiton is > that it might be ok to react to losses only when they are from > congestion. I am suggesting one way to differentiate congestion losses > from the other losses (which we don't have clear model for). > Therefore, instead of trying to explicitly model non-congestion > losses, just model congestion losses which we understand it a bit > better and react to them. > This would make sense to me if the proposal were to disregard losses for congestion control anyway (to put it in a very sharpened form) and initiate congestion action by valid congestion detection mechanisms, e.g. ECN. Does this match your idea in a better way? Detlef -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090222/993a7529/smime.bin From detlef.bosau at web.de Sun Feb 22 09:38:20 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 22 Feb 2009 18:38:20 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> Message-ID: <49A18D8C.90309@web.de> Jon Crowcroft wrote: > hmmm - do you want the CS answer or the EE answer > > i think the CS answer is that you can't differentiate > congestion from failure causes for loss on a per packet basis > > thought experiment - what if a packet that is ECN marked gets lost? > ...then the ECN mark is lost as well :-) That's the very strenght of implicitely signalling congestion by packet loss: loss cannot get lost :-) > i think the EE answer is that you could infer the likely cause > from the statistics of the channel over the recent past. of course, it > may be easier to do this at a receiver than a sender. > So, you make a maximum likelihood decision, correct? Of course, such a decision may be wrong. Hence, I personally tend to the CS answer ;-) > > in multihop radio networks (especially if you have multipath routing), > there is even a problem statistically > dis-ambiguating packet loss due to congestion and interference, since > some intererence is due to other packets on parallel flows that go > over "links" that are near enough (even on "link disjoint" routes) > to cause interference...(think "rubbernecking") > > When I wrote my first post in this thread, I intentionally did not consider multipath routing on network layer. I well consider multipath routing on physical layer. More precisely, I thought of networks like HSDPA, which don't support soft handover to my knowledge. In UMTS networks with soft handover, the situation may be different of course. And not only different, but interesting as well, because multipath routing on network layer may result in different routes with different capacities as well.... Now, I have written some brainless stuff on this matter and submitted it somewhere.... And thinking about this, I'm curious whether I will get a remark on this one by the reviewers ;-) -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 From rhee at ncsu.edu Sun Feb 22 10:52:39 2009 From: rhee at ncsu.edu (Injong Rhee) Date: Sun, 22 Feb 2009 13:52:39 -0500 Subject: [e2e] TCP Loss Differentiation References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com><006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> <001901c994e8$909e8080$7001a8c0@RHEELAPTOP> <49A1891E.5080903@web.de> Message-ID: <007301c9951e$bcbd2640$7001a8c0@RHEELAPTOP> With ECN, of course. But without it, the best thing we can do, i guess, is not to trust losses unassociaited with purterbation in delays (.e.g., increased RTTs, ack or packet train intervals -- of course with some intelligent filtering such as ignoring delays in burst, and so on). ----- Original Message ----- From: "Detlef Bosau" To: "end2end-interest list" Cc: "Injong Rhee" Sent: Sunday, February 22, 2009 12:19 PM Subject: Re: [e2e] TCP Loss Differentiation > Injong Rhee wrote: >> You missed my point. I am commenting on *when we should disregard a >> packet loss for congestion control* in particular. My speculaiton is >> that it might be ok to react to losses only when they are from >> congestion. I am suggesting one way to differentiate congestion losses >> from the other losses (which we don't have clear model for). >> Therefore, instead of trying to explicitly model non-congestion >> losses, just model congestion losses which we understand it a bit >> better and react to them. >> > > This would make sense to me if the proposal were to disregard losses for > congestion control anyway (to put it in a very sharpened form) and > initiate congestion action by valid congestion detection mechanisms, > e.g. ECN. Does this match your idea in a better way? > > Detlef > > > -- > Detlef Bosau Mail: detlef.bosau at web.de > Galileistrasse 30 Web: http://www.detlef-bosau.de > 70565 Stuttgart Skype: detlef.bosau > Mobile: +49 172 681 9937 > > From lars.eggert at nokia.com Mon Feb 23 08:12:06 2009 From: lars.eggert at nokia.com (Lars Eggert) Date: Mon, 23 Feb 2009 17:12:06 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <499F6B1F.3050009@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> Message-ID: Hi, On 2009-2-21, at 3:46, David P. Reed wrote: > I think we have no serious debate on any of these things. I know > Cisco's products support ECN, it's really an endpoint stack problem, > and > the word "lead" was meant to suggest use of Cisco's bully pulpit > (white > papers, etc.). FYI - and this may be clear to you but maybe not to everyone - it's not so much an endpoint problem in the sense that ECN wasn't implemented, it's an endpoint problem in the sense that turning it on makes broken middleboxes misbehave, to a degree where Microsoft for example has decided to leave it off in Vista. See Dave Thaler's slides from a recent IETF meeting: http://www.ietf.org/proceedings/07mar/slides/tsvarea-3/sld6.htm (and the one following) So we're realistically looking at a deployment delay of max(OS upgrade cycle, NAT upgrade cycle). That same slide deck has interesting information on the deployment feasibility of several other TCP features (window scaling, DSACK, F- RTO). Lars -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2446 bytes Desc: not available Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090223/fa9c2304/smime.bin From dpreed at reed.com Mon Feb 23 09:01:03 2009 From: dpreed at reed.com (David P. Reed) Date: Mon, 23 Feb 2009 12:01:03 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> Message-ID: <49A2D64F.7020209@reed.com> Broken middleboxes need to be shamed and blamed. Can't get there if you don't decide to move firmly in a good direction. Is the Internet ecology so broken that good things that are pretty simple just cannot be deployed at all? What does that imply for any hope at all for a "clean slate" other than omphalocentric research by hypercautious academics who will never have an impact (other than sucking money from NSF and DARPA)? :-) Lars Eggert wrote: > Hi, > > On 2009-2-21, at 3:46, David P. Reed wrote: >> I think we have no serious debate on any of these things. I know >> Cisco's products support ECN, it's really an endpoint stack problem, and >> the word "lead" was meant to suggest use of Cisco's bully pulpit (white >> papers, etc.). > > FYI - and this may be clear to you but maybe not to everyone - it's > not so much an endpoint problem in the sense that ECN wasn't > implemented, it's an endpoint problem in the sense that turning it on > makes broken middleboxes misbehave, to a degree where Microsoft for > example has decided to leave it off in Vista. See Dave Thaler's slides > from a recent IETF meeting: > > http://www.ietf.org/proceedings/07mar/slides/tsvarea-3/sld6.htm (and > the one following) > > So we're realistically looking at a deployment delay of max(OS upgrade > cycle, NAT upgrade cycle). > > That same slide deck has interesting information on the deployment > feasibility of several other TCP features (window scaling, DSACK, F-RTO). > > Lars From lars.eggert at nokia.com Mon Feb 23 09:16:42 2009 From: lars.eggert at nokia.com (Lars Eggert) Date: Mon, 23 Feb 2009 18:16:42 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49A2D64F.7020209@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> <49A2D64F.7020209@reed.com> Message-ID: <788D6C2D-CC27-4E0D-AA81-F0FF697BE0AF@nokia.com> On 2009-2-23, at 18:01, David P. Reed wrote: > Is the Internet ecology so broken that good things that are pretty > simple just cannot be deployed at all? Yes, pretty much, if they could end up causing a significant number of support hotline calls ("Vista broke my router"). There is a bit of hope, because the same slide deck shows that things like F-RTO and DSACK were OK to turn on. But they're algorithmic changes to the end system stack and don't cause bits to go "1" that didn't use to go "1". And the problem isn't only broken NATs, there's also often overzealous firewalls that drop packets that have bits set by protocol extensions that are newer than the RFC the firewall vendor chose to scrub the dataflow against. > What does that imply for any > hope at all for a "clean slate" other than omphalocentric research by > hypercautious academics who will never have an impact (other than > sucking money from NSF and DARPA)? :-) Realizing that we're getting pretty far off topic here, I believe clean-slate research to be important in establishing upper bounds on what is doable in layer 3/4 internetworking, but I'm not holding my breath for actual commercial deployment. Then again, if the benefits are convincing enough, maybe people are willing to take the leap. Lars -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2446 bytes Desc: not available Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090223/e1befc5a/smime.bin From detlef.bosau at web.de Mon Feb 23 09:42:30 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 23 Feb 2009 18:42:30 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> Message-ID: <49A2E006.9020904@web.de> Lars Eggert wrote: > Hi, > > On 2009-2-21, at 3:46, David P. Reed wrote: >> I think we have no serious debate on any of these things. I know >> Cisco's products support ECN, it's really an endpoint stack problem, and >> the word "lead" was meant to suggest use of Cisco's bully pulpit (white >> papers, etc.). > > FYI - and this may be clear to you but maybe not to everyone - it's > not so much an endpoint problem in the sense that ECN wasn't > implemented, it's an endpoint problem in the sense that turning it on > makes broken middleboxes misbehave, Thank you for this strong argument in favour of the end to end principle. It's always a good idea to assume nothing about middleboxes, it's a better idea to assume less. > o a degree where Microsoft for example has decided to leave it off in > Vista. See Dave Thaler's slides from a recent IETF meeting: > > http://www.ietf.org/proceedings/07mar/slides/tsvarea-3/sld6.htm (and > the one following) > I'm surprised that Microsoft is involved in the development of middleboxes..... ;-) Don't they provide these funny graphical programs for home computers? *SCNR* > So we're realistically looking at a deployment delay of max(OS upgrade > cycle, NAT upgrade cycle). I was already surprised to find the term "NAT" mentioned here in the group. I don't know the attitude of the community. But as far as the colleagues are concerned I know of, NAT is considered to be simply an absolute NoNo. To the best of my knowledge, its a strong consensus to overcome NAT as soon as possible. (However, there's some romour, that some folks want it for IPv6. But we all know Einsteins famous quote about the infinity of the universe ;-)) I'm not quite sure whether we should pursue keepalives for things we want to get rid of. In this sense, I'm a strong advocate of the end to end principle. There may be of course valid reasons for adding functions to middleboxes. But we never should assume more about a middlebox' behaviour than it is absolutely necessary. And this is not only a philsophical view. The less functions a unit has, the less functions can be out of order. It's simply a matter of robustness to keep things small and simple. And it would be near to commit suicide for the Internet, if we would try to follow the version numbers and flavours of MS windows. -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090223/e40bcd98/smime.bin From craig at aland.bbn.com Mon Feb 23 11:41:20 2009 From: craig at aland.bbn.com (Craig Partridge) Date: Mon, 23 Feb 2009 14:41:20 -0500 Subject: [e2e] Van Jacobson interview on content networks Message-ID: <20090223194120.60B6E28E155@aland.bbn.com> Hi folks: ACM Queue has just published an interview with Van about content networks. I'm biased, as I had the fun of being the guy asking questions, but I think many people on this list will find it of interest. http://mags.acm.org/queue/200901/?pg=3D8 Craig From fred at cisco.com Mon Feb 23 11:43:58 2009 From: fred at cisco.com (Fred Baker) Date: Mon, 23 Feb 2009 11:43:58 -0800 Subject: [e2e] IPv6 NAT In-Reply-To: <49A2E006.9020904@web.de> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> <49A2E006.9020904@web.de> Message-ID: <7935D38E-A75B-45FF-9F6D-01DD5CCDA674@cisco.com> On Feb 23, 2009, at 9:42 AM, Detlef Bosau wrote: > (However, there's some romour, that some folks want it for IPv6. But > we all know Einsteins famous quote about the infinity of the > universe ;-)) There are two discussions of NAT/IPv6 going on. One is a tool for assisting IPv4/IPv6 conversion for networks (mostly ISP) that have not been smart enough to be ready for the present state, and as such is intended to be temporary. The other surrounds GSE and is looking at routing scalability, and btw could be used for certain kinds of making- things-a-little-less-obviously-insecure (which is to say that it's a security-related question but not the asymptotic approach to insanity that some folks seem to enjoy). From fred at cisco.com Mon Feb 23 12:34:46 2009 From: fred at cisco.com (Fred Baker) Date: Mon, 23 Feb 2009 12:34:46 -0800 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> Message-ID: <3D86E9E6-8FB4-458C-9DC2-8883F8805161@cisco.com> On Feb 22, 2009, at 4:33 AM, Jon Crowcroft wrote: > thought experiment - what if a packet that is ECN marked gets lost? The structure of ECN is such that it really shouldn't matter any more than dropping a TCP Ack matters, or not dropping some data segment and dropping a different one instead. Yes, there is an effect. The ECN loss, or not dropping one data segment and instead dropping a different one in Reno/etc, means that *that* packet doesn't trigger a cwnd change. If there is one ECN mark/Reno packet loss there is likely to be another as the same conditions hold for some period of time, so the session that *other* packet is in might adjust cwnd. If there isn't an ECN mark on some other packet (if the duration of the congestion event was really so short that no other packet was marked) one might be justified in asking what the fuss is about. Dropping a TCP Ack means that you will skip a small burst (2-3 packets sent in response to the Ack) and send one or two larger bursts a little later. There is an isolated effect, but I don't think it is a huge one. From detlef.bosau at web.de Mon Feb 23 12:54:37 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 23 Feb 2009 21:54:37 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <788D6C2D-CC27-4E0D-AA81-F0FF697BE0AF@nokia.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> <49A2D64F.7020209@reed.com> <788D6C2D-CC27-4E0D-AA81-F0FF697BE0AF@nokia.com> Message-ID: <49A30D0D.1030003@web.de> Lars Eggert wrote: > On 2009-2-23, at 18:01, David P. Reed wrote: >> Is the Internet ecology so broken that good things that are pretty >> simple just cannot be deployed at all? > > Yes, pretty much, if they could end up causing a significant number of > support hotline calls ("Vista broke my router"). > I can't help to quote a common signature in German news: "User help desk, good morning, how may I help you?" "I installed ***** on my computer." "And what's your problem?" "My computer doesn't work any more." "Thank you, but you already told so...." Sorry, but a end point hardly can break a router or "the Internet". (Hopefully....) When did Cerf and Kahn deploy IP and TCP? And how many computers are obviously able to work with these? 500 millions? One billion? Or even more? So, I tend to say: When some device cannot work with the Internet and appears to "break a router", the device is somewhat flawed... -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090223/27bb84e4/smime.bin From Jon.Crowcroft at cl.cam.ac.uk Mon Feb 23 15:01:56 2009 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Mon, 23 Feb 2009 23:01:56 +0000 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <3D86E9E6-8FB4-458C-9DC2-8883F8805161@cisco.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> <3D86E9E6-8FB4-458C-9DC2-8883F8805161@cisco.com> Message-ID: um - i am not talking about whether ECN works right in the presence of lost packets - i am talking about what you need to differentiate non-congestive loss. you need to be somewhat more ingeneious - if each router were to record in every packet in a flow, all the packets it had seen, the order it had seen them, and the routers own address, then when any packet arrives, you'd have a cmplete history of packets predeceessors and successors and gaps, and where gaps are caused, and so a receiver can disambiguate on a _per packet_ base when a loss was not congestive... In missive <3D86E9E6-8FB4-458C-9DC2-8883F8805161 at cisco.com>, Fred Baker typed: >> >>On Feb 22, 2009, at 4:33 AM, Jon Crowcroft wrote: >> >>> thought experiment - what if a packet that is ECN marked gets lost? >> >>The structure of ECN is such that it really shouldn't matter any more >>than dropping a TCP Ack matters, or not dropping some data segment and >>dropping a different one instead. Yes, there is an effect. The ECN >>loss, or not dropping one data segment and instead dropping a >>different one in Reno/etc, means that *that* packet doesn't trigger a >>cwnd change. If there is one ECN mark/Reno packet loss there is likely >>to be another as the same conditions hold for some period of time, so >>the session that *other* packet is in might adjust cwnd. If there >>isn't an ECN mark on some other packet (if the duration of the >>congestion event was really so short that no other packet was marked) >>one might be justified in asking what the fuss is about. Dropping a >>TCP Ack means that you will skip a small burst (2-3 packets sent in >>response to the Ack) and send one or two larger bursts a little later. >> >>There is an isolated effect, but I don't think it is a huge one. cheers jon From fred at cisco.com Mon Feb 23 15:10:53 2009 From: fred at cisco.com (Fred Baker) Date: Mon, 23 Feb 2009 15:10:53 -0800 Subject: [e2e] TCP Loss Differentiation In-Reply-To: References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> <3D86E9E6-8FB4-458C-9DC2-8883F8805161@cisco.com> Message-ID: <8D67D867-C4F4-43CC-B76A-795F78D72FDD@cisco.com> On Feb 23, 2009, at 3:01 PM, Jon Crowcroft wrote: > if each router were to record in every packet in a flow, all the > packets it had seen, the order it had seen them, and the routers own > address, then when any packet arrives, you'd have a cmplete history > of packets predeceessors and successors and gaps, and where gaps are > caused, and so a receiver can disambiguate on a _per packet_ base > when a loss was not congestive... Great! I'm really excited to hear that every router has now sprouted a reason to add a few more gigahunks of memory, and that service providers are willing to pay the operational expense in heat and power! My product managers will be truly dumbfounded that they had not tumbled on that source of revenue in the past. And I'm sure that the ISPs will be all too eager to pass the upgrade cost along to their customers. From Jon.Crowcroft at cl.cam.ac.uk Mon Feb 23 15:37:08 2009 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Mon, 23 Feb 2009 23:37:08 +0000 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <8D67D867-C4F4-43CC-B76A-795F78D72FDD@cisco.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <49A1323C.3040705@web.de> <3D86E9E6-8FB4-458C-9DC2-8883F8805161@cisco.com> <8D67D867-C4F4-43CC-B76A-795F78D72FDD@cisco.com> Message-ID: its a thought experiment - you don't implement thoguht experiments - the point is to illustrate why not. i'm not attacking ECN _ what I am pointing out is purely a response to the idea that you cannot tell, on a per packet basis, in any affordable way, that a loss was caused by congestion or otherwise - there's an information theoertic argument fighting to get out here... ( I suppose i could start to quote your last governments leaders "known unknowns and unknown unknowns" if I wanted to be annoying, but I wont) I'm not being sarcastic so dontbe so to me. In missive <8D67D867-C4F4-43CC-B76A-795F78D72FDD at cisco.com>, Fred Baker typed: >> >>On Feb 23, 2009, at 3:01 PM, Jon Crowcroft wrote: >> >>> if each router were to record in every packet in a flow, all the >>> packets it had seen, the order it had seen them, and the routers own >>> address, then when any packet arrives, you'd have a cmplete history >>> of packets predeceessors and successors and gaps, and where gaps are >>> caused, and so a receiver can disambiguate on a _per packet_ base >>> when a loss was not congestive... >> >>Great! I'm really excited to hear that every router has now sprouted a >>reason to add a few more gigahunks of memory, and that service >>providers are willing to pay the operational expense in heat and >>power! My product managers will be truly dumbfounded that they had not >>tumbled on that source of revenue in the past. And I'm sure that the >>ISPs will be all too eager to pass the upgrade cost along to their >>customers. cheers jon From dpreed at reed.com Mon Feb 23 17:34:54 2009 From: dpreed at reed.com (David P. Reed) Date: Mon, 23 Feb 2009 20:34:54 -0500 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49A30D0D.1030003@web.de> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> <49A2D64F.7020209@reed.com> <788D6C2D-CC27-4E0D-AA81-F0FF697BE0AF@nokia.com> <49A30D0D.1030003@web.de> Message-ID: <49A34EBE.5070401@reed.com> The routers don't break. The routers and middleboxes (by interfering with delivery) break communications between endpoints. They are not supposed to be blocking packets with IP and TCP options, for example. When they do, the rationale is some kind of silly paternalistic argument. But the IP and TCP options are there for a reason. If they are bad ideas, don't block them unilaterally, get the IETF to decide that they should be removed from the standard. The original "firewall" idea was introduced by Bellovin and Cheswick, etc. because Unix folks had a bad attitude about security (accept traffic on as many ports as possible and don't use secure root passwords). Detlef Bosau wrote: > Lars Eggert wrote: >> On 2009-2-23, at 18:01, David P. Reed wrote: >>> Is the Internet ecology so broken that good things that are pretty >>> simple just cannot be deployed at all? >> >> Yes, pretty much, if they could end up causing a significant number >> of support hotline calls ("Vista broke my router"). >> > > I can't help to quote a common signature in German news: > > "User help desk, good morning, how may I help you?" > "I installed ***** on my computer." > "And what's your problem?" > "My computer doesn't work any more." > "Thank you, but you already told so...." > > Sorry, but a end point hardly can break a router or "the Internet". > (Hopefully....) > > When did Cerf and Kahn deploy IP and TCP? And how many computers are > obviously able to work with these? 500 millions? One billion? > Or even more? > > So, I tend to say: When some device cannot work with the Internet and > appears to "break a router", the device is somewhat flawed... > > From detlef.bosau at web.de Tue Feb 24 03:03:09 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 24 Feb 2009 12:03:09 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <49A34EBE.5070401@reed.com> References: <4992ECA7.2020301@reed.com> <69720037-E589-4B51-937E-0757FE5A3D17@cisco.com> <006901c99308$728d3cd0$7001a8c0@RHEELAPTOP> <8F9B0D9F-F7B6-410A-AE56-F5EF35E2665C@cisco.com> <499E45B9.7030101@reed.com> <7F3F44B6-36DC-4FFB-AE4C-CEC94CDED82D@cisco.com> <499F6B1F.3050009@reed.com> <49A2D64F.7020209@reed.com> <788D6C2D-CC27-4E0D-AA81-F0FF697BE0AF@nokia.com> <49A30D0D.1030003@web.de> <49A34EBE.5070401@reed.com> Message-ID: <49A3D3ED.2020502@web.de> David P. Reed wrote: > The routers don't break. The routers and middleboxes (by interfering > with delivery) break communications between endpoints. And from an end user's perspective, the consequences are pretty much the same ;-) -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090224/c383be30/smime-0001.bin From L.Wood at surrey.ac.uk Tue Feb 24 05:31:37 2009 From: L.Wood at surrey.ac.uk (L.Wood@surrey.ac.uk) Date: Tue, 24 Feb 2009 13:31:37 -0000 Subject: [e2e] TCP Loss Differentiation References: <4992ECA7.2020301@reed.com><69720037-E589-4B51-937E-0757FE5A3D17@cisco.com><006901c99308$728d3cd0$7001a8c0@RHEELAPTOP><49A1323C.3040705@web.de> <3D86E9E6-8FB4-458C-9DC2-8883F8805161@cisco.com> <8D67D867-C4F4-43CC-B76A-795F78D72FDD@cisco.com> Message-ID: <4835AFD53A246A40A3B8DA85D658C4BE7B0E86@EVS-EC1-NODE4.surrey.ac.uk> Fred, Actually, it should be possible to combine existing per-flow statistics (e.g. Netflow) with the kind of existing radio-aware statistics generated by e.g. RFC4938 (also implemented by Cisco), to give a good idea of when loss on a (noisy radio) channel is correlated to packet loss, and isn't congestion. A complete history isn't needed. The last couple of hundred ms should do it. All that's being done is combining observations about the channel, and about the packets flowing through the link, available in existing code. L. -----Original Message----- From: end2end-interest-bounces at postel.org on behalf of Fred Baker Sent: Mon 2009-02-23 23:10 To: Jon Crowcroft Cc: end2end-interest list Subject: Re: [e2e] TCP Loss Differentiation On Feb 23, 2009, at 3:01 PM, Jon Crowcroft wrote: > if each router were to record in every packet in a flow, all the > packets it had seen, the order it had seen them, and the routers own > address, then when any packet arrives, you'd have a cmplete history > of packets predeceessors and successors and gaps, and where gaps are > caused, and so a receiver can disambiguate on a _per packet_ base > when a loss was not congestive... Great! I'm really excited to hear that every router has now sprouted a reason to add a few more gigahunks of memory, and that service providers are willing to pay the operational expense in heat and power! My product managers will be truly dumbfounded that they had not tumbled on that source of revenue in the past. And I'm sure that the ISPs will be all too eager to pass the upgrade cost along to their customers. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090224/521c990a/attachment.html From detlef.bosau at web.de Tue Feb 24 10:58:04 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 24 Feb 2009 19:58:04 +0100 Subject: [e2e] TCP Loss Differentiation In-Reply-To: <4835AFD53A246A40A3B8DA85D658C4BE7B0E86@EVS-EC1-NODE4.surrey.ac.uk> References: <4992ECA7.2020301@reed.com><69720037-E589-4B51-937E-0757FE5A3D17@cisco.com><006901c99308$728d3cd0$7001a8c0@RHEELAPTOP><49A1323C.3040705@web.de> <3D86E9E6-8FB4-458C-9DC2-8883F8805161@cisco.com> <8D67D867-C4F4-43CC-B76A-795F78D72FDD@cisco.com> <4835AFD53A246A40A3B8DA85D658C4BE7B0E86@EVS-EC1-NODE4.surrey.ac.uk> Message-ID: <49A4433C.8020101@web.de> L.Wood at surrey.ac.uk wrote: > > > A complete history isn't needed. The last couple of hundred > ms should do it. > Depending on the technology in use, hundred ms may cover perhaps one or two rounds. Perhaps even less. I'm not quite sure whether a statistical approach will be sufficient here, particularly in networks with varying packet corruption rates. > -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090224/5b8de7e1/smime.bin From vitacaishun at gmail.com Tue Feb 24 19:38:00 2009 From: vitacaishun at gmail.com (CaiShun) Date: Wed, 25 Feb 2009 11:38:00 +0800 Subject: [e2e] Throughput and delay in a theoretical model of network system Message-ID: <200902251137576564571@gmail.com> Hello, everyone I have a problem on the" throughput and delay " when view ing a network in a theoretical way, e.g. multi-commodity flow model, or Queueing Network(Stochastic Optimization) . Suppose a simple wirless network with 6 nodes. A traffic flow from node 1 to node 6, and the rate of each link is shown in the following Fig. Intuitively, the throughput is 6 = 4+2. Although the last hop link(4,6) and link(5,6) can not be active simultaneously, the rate of the two links is much higher than link24,and link 35. But we observe that each node can not receive and transmit simultaneously. So it takes 3 slots for a packet to travel from node 1 to node 6. Suppose at some slot 6 packets arrives at node 1, then the throughput is 6/3 = 2 packets per slot, right? I believe there's something wrong to define throughput and delay in the way above. But I am really getting confused . why most theoretical models concern on the throughput of the system without taking DEDLAY into account? Maybe it is about different layers -Network layer and MAC layer£¿ Regards, -Kara- CaiShun 2009-02-25 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20090225/7f15effa/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/octet-stream Size: 219990 bytes Desc: not available Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090225/7f15effa/attachment-0001.obj From detlef.bosau at web.de Wed Feb 25 05:20:45 2009 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 25 Feb 2009 14:20:45 +0100 Subject: [e2e] Throughput and delay in a theoretical model of network system In-Reply-To: <200902251137576564571@gmail.com> References: <200902251137576564571@gmail.com> Message-ID: <49A545AD.4020901@web.de> CaiShun wrote: > Hello, everyone > I have a problem on the" throughput and delay " when view ing a > network in a theoretical way, e.g. multi-commodity flow model, or > Queueing Network(Stochastic Optimization) . > Suppose a simple wirless network with 6 nodes. A traffic flow from > node 1 to node 6, and the rate of each link is shown in the following Fig. I'm sorry, but I cannot see anything but a checkerboard here... > Intuitively, the throughput is 6 = 4+2. Although the last hop > link(4,6) and link(5,6) can not be active simultaneously, the rate of > the two links is much higher than link24,and link 35. > But we observe that each node can not receive and transmit simultaneously. Hm. To be honest, I have a fundamental problem with fluid flow models and the like. I don't see a convincing way to build an anylitical, typically _continous_, model for a packet switching network which is really convincing. Although there is a huge amount of work in this area. > So it takes 3 slots for a packet to travel from node 1 to node 6. > Suppose at some slot 6 packets arrives at node 1, then the throughput > is 6/3 = 2 packets per slot, right? > I believe there's something wrong to define throughput and delay in > the way above. But I am really getting confused . why most theoretical > models concern on the throughput of the system without taking DEDLAY > into account? Maybe it is about different layers -Network layer and > MAC layer£¿ The fundamental relationship between delay, throughput and capacity is given by Little's Law. In a settled / stable system we have average number of jobs in a system = average time a job spends in a system * average birth rate In computer networks, the birth rate is your throughput, the service time is your delay and the average number of jobs in a system - this is the congestion window in TCP ;-) -- Detlef Bosau Mail: detlef.bosau at web.de Galileistrasse 30 Web: http://www.detlef-bosau.de 70565 Stuttgart Skype: detlef.bosau Mobile: +49 172 681 9937 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3351 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20090225/845f53a3/smime.bin From Peter.Sewell at cl.cam.ac.uk Fri Feb 27 06:20:51 2009 From: Peter.Sewell at cl.cam.ac.uk (Peter Sewell) Date: Fri, 27 Feb 2009 14:20:51 +0000 Subject: [e2e] on protocol specification and TCP Message-ID: Some readers of this list may recall our work on rigorous specification of TCP and Sockets, from SIGCOMM 05 and POPL 06. That was a low-level specification of the protocol. We've recently produced a technical report containing a high-level specification, of the service provided by TCP as seen by Sockets API clients, that abstracts from those protocol details (an end-to-end specification in a broad sense, if you will). While surely not definitive, this may be of some interest and use, in itself and as an example. The urls and abstract are below; comments would be welcome. Peter (and Tom Ridge and Michael Norrish) The technical report: http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-742.html The project page: http://www.cl.cam.ac.uk/~pes20/Netsem/index.html Abstract: Despite more than 30 years of research on protocol specification, the major protocols deployed in the Internet, such as TCP, are described only in informal prose RFCs and executable code. In part this is because the scale and complexity of these protocols makes them challenging targets for formal descriptions, and because techniques for mathematically rigorous (but appropriately loose) specification are not in common use. In this work we show how these difficulties can be addressed. We develop a high-level specification for TCP and the Sockets API, describing the byte-stream service that TCP provides to users, expressed in the formalised mathematics of the HOL proof assistant. This complements our previous low-level specification of the protocol internals, and makes it possible for the first time to state what it means for TCP to be correct: that the protocol implements the service. We define a precise abstraction function between the models and validate it by testing, using verified testing infrastructure within HOL. Some errors may remain, of course, especially as our resources for testing were limited, but it would be straightforward to use the method on a larger scale. This is a pragmatic alternative to full proof, providing reasonable confidence at a relatively low entry cost. Together with our previous validation of the low-level model, this shows how one can rigorously tie together concrete implementations, low-level protocol models, and specifications of the services they claim to provide, dealing with the complexity of real-world protocols throughout. Similar techniques should be applicable, and even more valuable, in the design of new protocols (as we illustrated elsewhere, for a MAC protocol for the SWIFT optically switched network). For TCP and Sockets, our specifications had to capture the historical complexities, whereas for a new protocol design, such specification and testing can identify unintended complexities at an early point in the design.