From detlef.bosau at web.de Fri Feb 8 16:35:32 2013
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sat, 09 Feb 2013 01:35:32 +0100
Subject: [e2e] Comparing Linux qdiscs in lab conditions (paper)
In-Reply-To: <87bocnfjy9.fsf@toke.dk>
References: <87bocnfjy9.fsf@toke.dk>
Message-ID: <511599D4.60808@web.de>

Admittedly, I did not follow the whole discussion in all its
ramifications. What I still wonder about, not only in the context of
buffer bloat, is the claim I frequently encounter that the "bottleneck"
of a TCP flow is located on edge routers and not on core routers. And
that is the central point I simply don't buy.

The question may be asked often, but do we really know where packets
pile up and which routers suffer from buffer bloat? Particularly in the
paper by Van Jacobson and Kathleen Nichols, I find quite a few remarks
on CoDel on edge routers. I simply see no sense in placing AQM
algorithms on edge routers. When I, living in Stuttgart, download a
huge file from some server in Boston, my ADSL router is almost
certainly not the bottleneck.

Generally, when queues grow too large on core routers, why don't we
manage queues there?

Perhaps this is a general misconception of mine, but: why don't we
fight buffer bloat where it occurs?

Detlef

--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                  Tel.:   +49 711 5208031
                                 mobile: +49 172 6819937
                                 skype:  detlef.bosau
                                 ICQ:    566129673
detlef.bosau at web.de          http://www.detlef-bosau.de
------------------------------------------------------------------
The nonsense that passes for knowledge around wireless networking,
even taught by "professors of networking" is appalling. It's the
blind leading the blind. (D.P. Reed, 2012/12/25)
------------------------------------------------------------------


From touch at isi.edu Wed Feb 13 12:39:42 2013
From: touch at isi.edu (Joe Touch)
Date: Wed, 13 Feb 2013 12:39:42 -0800
Subject: [e2e] Comparing Linux qdiscs in lab conditions (paper)
In-Reply-To: <511599D4.60808@web.de>
References: <87bocnfjy9.fsf@toke.dk> <511599D4.60808@web.de>
Message-ID: <511BFA0E.9090303@isi.edu>

On 2/8/2013 4:35 PM, Detlef Bosau wrote:
> Admittedly, I did not follow the whole discussion in all its
> ramifications. What I still wonder about, not only in the context of
> buffer bloat, is the claim I frequently encounter that the
> "bottleneck" of a TCP flow is located on edge routers and not on core
> routers. And that is the central point I simply don't buy.
>
> The question may be asked often, but do we really know where packets
> pile up and which routers suffer from buffer bloat? Particularly in
> the paper by Van Jacobson and Kathleen Nichols, I find quite a few
> remarks on CoDel on edge routers. I simply see no sense in placing
> AQM algorithms on edge routers. When I, living in Stuttgart, download
> a huge file from some server in Boston, my ADSL router is almost
> certainly not the bottleneck.
>
> Generally, when queues grow too large on core routers, why don't we
> manage queues there?
>
> Perhaps this is a general misconception of mine, but: why don't we
> fight buffer bloat where it occurs?

FWIW, bufferbloat examples I've seen tend to focus on *upload* out of
the home. As you note, download isn't the issue as much.
Try this:
- upload a video to youtube (or anywhere)
- ping somewhere else

That can happen within a single machine (that was the first case I
heard about, and as anticipated the issue was in-kernel buffering) or
between two machines (if the home router has too much buffering and
too little brains to do proportional sharing).

Less buffering - either in the OS or the router - sometimes helps
reduce delay because the greedy application backs off (due to losses),
is scheduled to transmit less often (when on the same machine), or
simply loses more packets.

Joe


From detlef.bosau at web.de Wed Feb 13 14:55:43 2013
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 13 Feb 2013 23:55:43 +0100
Subject: [e2e] Comparing Linux qdiscs in lab conditions (paper)
In-Reply-To: <511BFA0E.9090303@isi.edu>
References: <87bocnfjy9.fsf@toke.dk> <511599D4.60808@web.de> <511BFA0E.9090303@isi.edu>
Message-ID: <511C19EF.90901@web.de>

On 13.02.2013 21:39, Joe Touch wrote:
>
> FWIW, bufferbloat examples I've seen tend to focus on *upload* out of
> the home. As you note, download isn't the issue as much.
>
> Try this:
> - upload a video to youtube (or anywhere)
> - ping somewhere else

Now, what will happen? The TCP socket will fully utilize the outgoing
interface's buffer, and the ICMP packet has to enqueue itself at the
end of the queue.

However, this is not a problem of too much buffering. This is a
scheduling problem. Actually, VJCC extends TCP's self-*clocking* into a
self-*scheduling*. As far as this works at all, it will achieve some
kind of "statistically similar throughput" for greedy sources and
long-term flows. From the perspective of the ICMP echo request,
however, the whole issue appears as some kind of head-of-line blocking.
(With numerous heads...)

> That can happen within a single machine (that was the first case I
> heard about, and as anticipated the issue was in-kernel buffering) or
> between two machines (if the home router has too much buffering and
> too little brains to do proportional sharing).

Absolutely. However, this is not an issue in itself: what you describe
is an IP stack that "works as designed".

> Less buffering - either in the OS or the router - sometimes helps
> reduce delay because the greedy application backs off (due to
> losses), is scheduled to transmit less often (when on the same
> machine), or simply loses more packets.

I don't think that the described effect is a buffering issue. The
problem is that we have intertwined

- congestion control,
- resource sharing, and
- scheduling

into one complex algorithm - and now we wonder why the whole thing
works as designed.

And BTW, the congavoid paper only talks about TCP. I don't remember Van
Jacobson or Michael Karels talking about "ping fairness" in that
paper... ;-)

--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                  Tel.:   +49 711 5208031
                                 mobile: +49 172 6819937
                                 skype:  detlef.bosau
                                 ICQ:    566129673
detlef.bosau at web.de          http://www.detlef-bosau.de
------------------------------------------------------------------
The nonsense that passes for knowledge around wireless networking,
even taught by "professors of networking" is appalling. It's the
blind leading the blind. (D.P. Reed, 2012/12/25)
------------------------------------------------------------------
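
The head-of-line delay described in the exchange above is easy to put
rough numbers on. The following is a minimal illustrative sketch; the
uplink rate and buffer size are assumed values chosen for illustration,
not measurements from any real system:

```python
# Rough estimate of the extra ping latency when an ICMP echo request is
# queued behind a saturated upload buffer (the head-of-line situation
# described above).  All figures are assumptions: a 1 Mbit/s ADSL uplink
# and 256 KiB of buffering in front of it.

UPLINK_BPS = 1_000_000           # assumed uplink capacity, bits per second
BUFFER_BYTES = 256 * 1024        # assumed buffering ahead of the link

def drain_time(buffer_bytes: int, link_bps: float) -> float:
    """Seconds needed to drain a full FIFO buffer at the given link rate."""
    return buffer_bytes * 8 / link_bps

extra_delay = drain_time(BUFFER_BYTES, UPLINK_BPS)
print(f"extra ping delay while uploading: ~{extra_delay * 1000:.0f} ms")

# With these (made-up) numbers the echo request waits roughly two seconds
# behind queued upload data, although neither flow is misbehaving: the
# FIFO simply serves whatever is already queued first, which is the
# scheduling point made in the message above.
```

With a scheduler that keeps the echo request in its own queue (e.g. some
form of fair queueing), the same amount of buffering adds almost none of
that delay to the ping, which is why the messages above distinguish a
buffering problem from a scheduling problem.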
From detlef.bosau at web.de Thu Feb 14 13:45:44 2013
From: detlef.bosau at web.de (Detlef Bosau)
Date: Thu, 14 Feb 2013 22:45:44 +0100
Subject: [e2e] and in addition: Re: Comparing Linux qdiscs in lab conditions (paper)
In-Reply-To: <511C19EF.90901@web.de>
References: <87bocnfjy9.fsf@toke.dk> <511599D4.60808@web.de> <511BFA0E.9090303@isi.edu> <511C19EF.90901@web.de>
Message-ID: <511D5B08.9090606@web.de>

How will, e.g., CoDel change this situation, particularly on routers or
computers of end users? Apart from making TCP sockets retransmit the
dropped packets?

--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                  Tel.:   +49 711 5208031
                                 mobile: +49 172 6819937
                                 skype:  detlef.bosau
                                 ICQ:    566129673
detlef.bosau at web.de          http://www.detlef-bosau.de
------------------------------------------------------------------
The nonsense that passes for knowledge around wireless networking,
even taught by "professors of networking" is appalling. It's the
blind leading the blind. (D.P. Reed, 2012/12/25)
------------------------------------------------------------------


From touch at isi.edu Thu Feb 14 14:38:49 2013
From: touch at isi.edu (Joe Touch)
Date: Thu, 14 Feb 2013 14:38:49 -0800
Subject: [e2e] back by popular demand - a DNS calculator
Message-ID: <511D6779.1000508@isi.edu>

Hi, all,

By popular request, I've restored the DNS calculator function as an
operational service. See:

http://www.isi.edu/touch/tools/dns-calc.html

(this was designed for a Sigcomm OO session, but it's been used in
several places as a good example of why the DNS should NOT be anything
more than a mapping service)

Joe

PS - if you do happen to use it as an example, please do drop me a
note; I'd be very glad to track that info and/or include a pointer to
the classes that use it.


From detlef.bosau at web.de Sun Feb 24 07:08:36 2013
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 24 Feb 2013 16:08:36 +0100
Subject: [e2e] How do we deal with mobile networks?
Message-ID: <512A2CF4.6070703@web.de>

After some recent off-list discussions, I think we have some issue
here.

First of all: Where is the right place for loss recovery?

Up to now, we discuss mainly two alternatives: end to end and local.

When a packet cannot be successfully transmitted via some mobile link
due to noise, it makes no sense to repeat it over and over locally, nor
does it make sense to repeat it over and over end to end - although the
latter alternative is worse than the first, because *absurd
retransmissions end to end do harm to competing flows*. The same holds
true when we replace retransmissions by some sophisticated FEC scheme,
as was proposed by Van Jacobson in "A Rant on Queues" in 2006. Apart
from all other difficulties, avoiding local ARQ by adding redundancy,
e.g. as in the TETRYS work by Lochin et al., uses resources which are
no longer available for competing flows.

Now, in his talk, VJ proposes even to avoid local Reed-Solomon coding
and the like, and I completely disagree. I think we are in the
well-proven tradition of Saltzer's end-to-end paper when we carefully
consider where packet recovery is done best: as locally as possible.

First, there is nothing like "the universal error-free FEC"; any kind
of FEC will always be chosen to suit the link. If a FEC scheme is
chosen too strong, the resource consumption is too expensive and the
throughput as seen by upper layers is too small.
If a FEC scheme is chosen too weak, a flow (and hence competing flows
as well) will suffer from lots of retransmissions.

Second, when we want to find FEC schemes appropriate for a channel, we
must a) have information about the noisy channel in order to find
appropriate schemes and b) react in a timely manner. When we think of,
e.g., HSDPA, where the coding scheme is adapted every 2 milliseconds,
it is obvious that we cannot adapt to an HSDPA link quickly enough on
an end-to-end basis.

A basic insight, which does not seem to be commonly accepted, is that a
link's throughput - more precisely: the time needed to successfully
transfer a packet on the link - does not mainly depend on the
technology in use but on the channel's properties. I was a bit confused
here by RFC 3819, where the authors write the following:

> 8.5.3. Analysis of Link-Layer Effects on TCP Performance
>
>    Consider the following example:
>
>    A designer invents a new wireless link layer which, on average,
>    loses 1% of IP packets. The link layer supports packets of up to
>    1040 bytes, and has a one-way delay of 20 msec.

First, such a link layer inevitably needs some local FEC/ARQ mechanism.
Second, the loss probability does not depend on the link layer but on
the channel. Of course, a link layer may abstract this away from upper
layers - at the cost of unbounded transmission times. Particularly for
HSDPA, the typical selection schemes for coding schemes include "out of
range" areas, i.e. when a channel is too bad it is simply not used.

What makes me curious in the context of the aforementioned RFC is that
the authors in the following text use well-known TCP formulae, e.g. by
Mathis or Padhye, to do throughput estimations and simply assume that
parameters like "RTT" or "RTO" would be available in the context of
mobile networks. Or, when it comes to the dimensioning of queueing
memory, it is assumed that we had something like a "bottleneck
bandwidth", i.e. the least throughput along the path. What is the
throughput of a mobile wireless interface, particularly in the context
of packet switching, where we expect packets to be delivered correctly?
*Simply spoken: in the general case, it is unknown.*

*What are my conclusions?*

1. As soon as TCP paths include one or more mobile links, TCP RTT
   estimators, and hence derived confidence intervals like the RTO, do
   not really hold.

2. The same holds true for formulae based upon those statistics.

3. Error recovery should be done as locally as possible. In that
   particular respect we should change our attitude from that outlined
   in RFC 791,

> 1.2. Scope
>
>    The internet protocol is specifically limited in scope to provide
>    the functions necessary to deliver a package of bits (an internet
>    datagram) from a source to a destination over an interconnected
>    system of networks. There are no mechanisms to augment end-to-end
>    data reliability, flow control, sequencing, or other services
>    commonly found in host-to-host protocols. The internet protocol
>    can capitalize on the services of its supporting networks to
>    provide various types and qualities of service.

   to an attitude where we request a sufficiently small packet loss
   probability, e.g. 1 percent, and request an appropriate signalling
   mechanism when this cannot be achieved, in order to look for a
   possibility to overcome the problem, e.g. by some route change, or
   to inform the application layer of the problem to enable appropriate
   actions.
   It may even be appropriate to define some technology-specific
   maximum transmission time for a packet, so that we have a certain
   limit: "SDU transmission time < 0.1 seconds, SDU loss probability
   <= 0.01" - and when this cannot be achieved, upper layers are
   notified accordingly.

4. TCP is an asynchronous protocol. We should not expect TCP flows to
   run "smoothly" with "the same rate" end to end throughout the whole
   path. TCP packets may clump together in parts of the path and may be
   sparsely distributed in others. Hence, we should discuss whether it
   makes sense to do congestion control, resource management and
   scheduling on a strict end-to-end basis, as is done today, or
   whether we should discuss possible alternatives. (This discussion is
   not only motivated by mobile networks - I could mention other
   reasons as well - however, in this post I would like to restrict
   myself to mobile networks.)

--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                  Tel.:   +49 711 5208031
                                 mobile: +49 172 6819937
                                 skype:  detlef.bosau
                                 ICQ:    566129673
detlef.bosau at web.de          http://www.detlef-bosau.de
------------------------------------------------------------------


From fahad.dogar at gmail.com Sun Feb 24 07:51:45 2013
From: fahad.dogar at gmail.com (Fahad Dogar)
Date: Sun, 24 Feb 2013 15:51:45 +0000
Subject: [e2e] How do we deal with mobile networks?
In-Reply-To: <512A2CF4.6070703@web.de>
References: <512A2CF4.6070703@web.de>
Message-ID:

Hi Detlef,

You may find our work on segment-based transport relevant to this
discussion:

http://conferences.sigcomm.org/co-next/2012/eproceedings/conext/p13.pdf

cheers,
fahad

On Sun, Feb 24, 2013 at 3:08 PM, Detlef Bosau wrote:

> After some recent off-list discussions, I think we have some issue
> here.
>
> First of all: Where is the right place for loss recovery?
>
> Up to now, we discuss mainly two alternatives: end to end and local.
>
> [...]
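
The post that opened this thread mentions the well-known TCP throughput
approximations (Mathis et al., Padhye et al.) and RFC 3819's example of
a link that loses 1% of packets. Purely as an illustration of what the
simpler of those formulae predicts, a minimal sketch of the Mathis
approximation follows; the 200 ms RTT is an assumed value, and whether a
meaningful RTT exists at all is exactly what the post calls into
question:

```python
# Minimal sketch of the Mathis et al. TCP throughput approximation,
#   rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p),
# with purely illustrative numbers.  It assumes a roughly constant RTT
# and independent losses, neither of which is guaranteed on a mobile hop.

from math import sqrt

def mathis_throughput(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate single-flow TCP throughput in bit/s."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss_rate)

# RFC 3819's example link uses 1040-byte packets and 1% loss; the 200 ms
# round-trip time is an assumption made here for the sake of the example.
for p in (0.0001, 0.001, 0.01):
    bps = mathis_throughput(1040, 0.2, p)
    print(f"loss {p:>7.2%}: ~{bps / 1e6:.2f} Mbit/s")

# At 1% loss the formula caps a single flow near 0.5 Mbit/s regardless of
# the link's raw capacity - one reason residual (non-congestion) loss on a
# wireless hop is usually expected to be kept well below that level.
```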
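Conclusion 1 of that post concerns the standard RTT/RTO estimator. For
reference, here is a minimal sketch of the RFC 6298 computation, fed
with made-up samples, to show what the estimator actually produces when
the samples jitter:

```python
# Minimal sketch of the RFC 6298 RTT/RTO estimator.  The sample sequence
# is invented: a steady ~100 ms path whose tail jitters between 100 and
# 300 ms, e.g. because a local link occasionally retransmits.

ALPHA, BETA = 1 / 8, 1 / 4          # gains from RFC 6298

def rto_trace(samples_s):
    """Yield (sample, srtt, rto) for a sequence of RTT samples in seconds."""
    srtt = rttvar = None
    for r in samples_s:
        if srtt is None:            # first measurement initialises the state
            srtt, rttvar = r, r / 2
        else:
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - r)
            srtt = (1 - ALPHA) * srtt + ALPHA * r
        rto = max(srtt + 4 * rttvar, 1.0)   # RFC 6298 lower bound of 1 second
        yield r, srtt, rto

samples = [0.1] * 10 + [0.2, 0.1, 0.3, 0.1, 0.25]   # hypothetical trace
for r, srtt, rto in rto_trace(samples):
    print(f"sample {r*1000:6.0f} ms   srtt {srtt*1000:6.1f} ms   rto {rto*1000:6.0f} ms")

# For these (made-up) values the computed RTO never leaves the 1-second
# lower bound, so it says little about the path itself; a hop whose delay
# jumps by whole seconds because of local recovery stresses the estimator
# far more, which is the kind of behaviour conclusion 1 worries about.
```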
From detlef.bosau at web.de Mon Feb 25 02:40:34 2013
From: detlef.bosau at web.de (Detlef Bosau)
Date: Mon, 25 Feb 2013 11:40:34 +0100
Subject: [e2e] How do we deal with mobile networks?
In-Reply-To:
References: <512A2CF4.6070703@web.de>
Message-ID: <512B3FA2.2060703@web.de>

On 24.02.2013 18:36, John Day wrote:
> I don't know if this helps, but the fundamental premise of this form
> of architecture (and understood from the beginning in the early 70s)
> is that the purpose of the data link layer is to provide sufficient
> error control to ensure that end-to-end reliability at the Transport
> Layer is cost-effective.

Let me quote VJ's "Rant on Queues" from 2006 here:

> Suggestions (cont.)
>
> * Never introduce additional delay (apps will just try to fill it
>   with packets):
>   *Let apps do their own FEC; avoid link layer Reed-Solomon and ARQ.*
> * Use smooth, simple downlink schedulers
> * Use predictive and anticipatory uplink schedulers

(Emphasis added by me.)

And RFC 791:

> 1.2. Scope
>
>    The internet protocol is specifically limited in scope to provide
>    the functions necessary to deliver a package of bits (an internet
>    datagram) from a source to a destination over an interconnected
>    system of networks. *There are no mechanisms to augment end-to-end
>    data reliability*, flow control, sequencing, or other services
>    commonly found in host-to-host protocols. The internet protocol
>    can capitalize on the services of its supporting networks to
>    provide various types and qualities of service.

(Emphasis added by me.)

And please note that the assumption "no flow control" should be
understood as "we must not do flow control on the link layer", e.g. in
Ethernet. Please note that enabling flow control on the link layer may
lead to communication deadlocks which are, without appropriate means,
neither detected nor handled by IP.

So, it is a valid position to request that any retransmission of lost
packets should be done end to end, as required by VJ. However, I think
this position does not really hold.
> IOW, since most loss above the Data Link Layer is due to congestion,
> the Data Link Layer should provide enough error recovery to stay well
> within acceptable losses due to congestion in the layers above.

That should be the correct position; however, to my understanding it is
not fully supported in the literature, and apparently it is not the
common view at the moment.

--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                  Tel.:   +49 711 5208031
                                 mobile: +49 172 6819937
                                 skype:  detlef.bosau
                                 ICQ:    566129673
detlef.bosau at web.de          http://www.detlef-bosau.de
------------------------------------------------------------------


From detlef.bosau at web.de Mon Feb 25 07:13:40 2013
From: detlef.bosau at web.de (Detlef Bosau)
Date: Mon, 25 Feb 2013 16:13:40 +0100
Subject: [e2e] How do we deal with mobile networks?
In-Reply-To: <512B3FA2.2060703@web.de>
References: <512A2CF4.6070703@web.de> <512B3FA2.2060703@web.de>
Message-ID: <512B7FA4.5030408@web.de>

I received some pointers to RS codes and similar schemes on the
application layer.

First of all: Adding redundancy to a flow on the application layer
increases the amount of data which has to be transferred and hence the
flow's resource consumption.

Second: In the particular case of TCP, packet corruption is taken as an
indication of congestion, so we want to avoid packet corruption even if
the lost packet could be recovered from surrounding packets. Hence, for
TCP, a low packet corruption rate should be provided by the link layer.

Am I missing something?

--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                  Tel.:   +49 711 5208031
                                 mobile: +49 172 6819937
                                 skype:  detlef.bosau
                                 ICQ:    566129673
detlef.bosau at web.de          http://www.detlef-bosau.de
------------------------------------------------------------------


From detlef.bosau at web.de Mon Feb 25 08:32:28 2013
From: detlef.bosau at web.de (Detlef Bosau)
Date: Mon, 25 Feb 2013 17:32:28 +0100
Subject: [e2e] How do we deal with mobile networks?
In-Reply-To: <1361808414.620213565@apps.rackspace.com>
References: <512A2CF4.6070703@web.de> <512B3FA2.2060703@web.de> <512B7FA4.5030408@web.de> <1361808414.620213565@apps.rackspace.com>
Message-ID: <512B921C.3050908@web.de>

On 25.02.2013 17:06, dpreed at reed.com wrote:
>
> As I responded to Martin Geddes, I would suggest that you abandon
> these silly debates without quantification.
>
> Get data from real systems, in real use.

I don't see any reason for any personal insults. And calling questions
"silly" without any reason is a personal insult.

--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                  Tel.:   +49 711 5208031
                                 mobile: +49 172 6819937
                                 skype:  detlef.bosau
                                 ICQ:    566129673
detlef.bosau at web.de          http://www.detlef-bosau.de
------------------------------------------------------------------
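
Returning to the technical question in the 16:13 message above
(application-layer Reed-Solomon-style coding versus link-layer
recovery): a small illustrative calculation of the trade-off, under the
simplifying - and for radio channels unrealistic - assumption of
independent packet erasures with probability 1%. The (n, k) block code
is hypothetical and stands for any scheme that recovers a block whenever
at least k of its n packets arrive:

```python
# Illustrative trade-off between added redundancy and residual loss for a
# hypothetical (n, k) packet-level erasure code, assuming independent
# packet erasures with probability p.  Numbers are for illustration only.

from math import comb

def residual_block_loss(n: int, k: int, p: float) -> float:
    """Probability that fewer than k of n packets arrive (block unrecoverable)."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k))

for n, k in ((10, 10), (11, 10), (12, 10)):
    overhead = n / k - 1
    loss = residual_block_loss(n, k, 0.01)
    print(f"(n={n}, k={k})  overhead {overhead:5.1%}  residual block loss {loss:.2e}")

# One parity packet per block of ten cuts the uncoded block-loss figure
# by more than an order of magnitude, two parity packets by more than two
# orders - but those 10-20% of extra packets are exactly the capacity
# taken away from competing flows, which is the cost the message above
# points out for doing this end to end rather than on the lossy link.
```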