From detlef.bosau at web.de Mon Nov 11 04:23:57 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 11 Nov 2013 13:23:57 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? Message-ID: <5280CC5D.70401@web.de> I know this question is as heterodox as could be. Nevertheless. A TCP packet on its way from source to sink will typically travel quite some packet switching nodes, each of which introduces a potential flow control problem. A switching node typically does "store and forward" - and whenever a queue on the node cannot be drained sufficiently fast - be it due to a throughput shortage on the outgoing link, be it due to a large amount of incoming traffic - the switching node has to throttle down incoming traffic or it must discard packets. Both scenarios are possible with TCP: An unexpected throughput shortage may occur on wireless networks, unexpected traffic may be caused by any competing sender. I'm - after years - still not comfortable with the concept of "storage on the line", which is basically the motivation for our sliding window model. A line (fibre, copper, air interface) may or may not offer a certain amount of transient storage capacity, depending on quite some factors, one of which is the MAC scheme. E.g. in CSMA/CD nets, there is no more then ONE packet on the line. Hence, the concept of a "Latency-Throughput-Product" must be used with extreme caution. I'm sometimes not quite sure whether particularly VJCC actually works around a "concatenated flow control problem", which in the late 80s really WAS a concatenated flow control problem because the vast majority of a paths "capacity" was located on the switches - and not on the lines. (At least to my understanding.) And because we did not want to touch the switches, in other words we wanted to keep thins small and simple, we worked around this flow control problem using an end to end congestion control. Detlef (expecting flames...) -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From richard at bennett.com Mon Nov 11 13:59:31 2013 From: richard at bennett.com (Richard Bennett) Date: Mon, 11 Nov 2013 13:59:31 -0800 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <5280CC5D.70401@web.de> References: <5280CC5D.70401@web.de> Message-ID: <52815343.6030803@bennett.com> It's interesting that so many people still talk about CSMA/CD as if it were still real. IEEE 802.3i added a full duplex mode to Ethernet (known as 10BASE-T) in 1990, and 802.3u (100 BASE-TX) effectively eliminated the half duplex, CSMA/CD mode for speeds of 100 Mbps and higher on Ethernet. The full duplex modes of Ethernet have a flow control signal that moderates congestion, and on-switch buffers for storage of frames (either full or partial) in-flight. Nowdays, you only see CSMA on unlicensed broadcast networks, e. g. Wi-Fi. You can't effectively manage congestion solely with the knowledge end points have, but any congestion management scheme needs end point cooperation since end points are the sources of congestion. The Internet's congestion problem is insufficient signalling from the elements that can immediately detect congestion - the switches - back to the elements that create it. There have been and are efforts to improve the signalling - ECN and PCN - and there was a botched attempt to deal with it in the original versions of IP with source quench. The longevity of the Jacobson CC and efforts to extend its life like AQM are quite unfortunate. The problem with end-to-end is the lack of overall system knowledge in any given end point. On 11/11/2013 4:23 AM, Detlef Bosau wrote: > I know this question is as heterodox as could be. > > Nevertheless. A TCP packet on its way from source to sink will typically > travel quite some packet switching nodes, each of which > introduces a potential flow control problem. A switching node typically > does "store and forward" - and whenever a queue on the node cannot be > drained sufficiently fast - be it due to a throughput shortage on the > outgoing link, be it due to a large amount of incoming traffic - the > switching node has to throttle down incoming traffic or it must discard > packets. > > Both scenarios are possible with TCP: An unexpected throughput shortage > may occur on wireless networks, unexpected traffic may be caused > by any competing sender. > > I'm - after years - still not comfortable with the concept of "storage > on the line", which is basically the motivation for our sliding window > model. > A line (fibre, copper, air interface) may or may not offer a certain > amount of transient storage capacity, depending on quite some factors, > one of which is the MAC scheme. E.g. in CSMA/CD nets, there is no more > then ONE packet on the line. Hence, the concept of a > "Latency-Throughput-Product" must be used with extreme caution. > > I'm sometimes not quite sure whether particularly VJCC actually works > around a "concatenated flow control problem", which in the late 80s > really WAS a concatenated flow control problem because the vast majority > of a paths "capacity" was located on the switches - and not on the lines. > (At least to my understanding.) And because we did not want to touch the > switches, in other words we wanted to keep thins small and simple, we > worked around this flow control problem using an end to end congestion > control. > > Detlef > > (expecting flames...) > -- Richard Bennett Visiting Fellow, American Enterprise Institute Center for Internet, Communications, and Technology Policy Editor, High Tech Forum (408) 829-4944 (mobile) (415) 967-2900 (office) From detlef.bosau at web.de Mon Nov 11 15:35:01 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 12 Nov 2013 00:35:01 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <52815343.6030803@bennett.com> References: <5280CC5D.70401@web.de> <52815343.6030803@bennett.com> Message-ID: <528169A5.8080201@web.de> Am 11.11.2013 22:59, schrieb Richard Bennett: > It's interesting that so many people still talk about CSMA/CD as if it > were still real. I beg you pardon? > IEEE 802.3i added a full duplex mode to Ethernet (known as 10BASE-T) > in 1990, and 802.3u (100 BASE-TX) effectively eliminated the half > duplex, CSMA/CD mode for speeds of 100 Mbps and higher on Ethernet. > The full duplex modes of Ethernet have a flow control signal that You certainly know that RFC 791 practically excludes L2 flow control for IP networks and that L2 flow control may cause deadlock problems is known. > > @inproceedings{reinemo, > author = "Sven-Arne Reinemo and Tor Skeie", > title = "Ethernet as a lossless deadlock free system area network", > booktitle= "Parallel and Distributed Processing and Applications", > publisher ="Springer Berlin Heidelberg", > year="2005", > pages = "901--914"} > moderates congestion, According to the accepted definition of congestion control and flow control you mix up these two. > and on-switch buffers for storage of frames (either full or partial) > in-flight. Nowdays, you only see CSMA on unlicensed broadcast > networks, e. g. Wi-Fi. Which are certainly not very wide spread ;-) (I'm actually using one and can listen to six others which can be reached from my apartment. And I know for sure, that there are additional ones which I cannot reach because I'm living on the 2nd floor, the other WLAN is on the 7th floor, I set up the network myself.) > > You can't effectively manage congestion solely with the knowledge end > points have, This is what I guessed. And what I'm more and more convinced of. > but any congestion management scheme needs end point cooperation agreed. > since end points are the sources of congestion. Nonsense. When a wireless line on the path suffers from throughput shortage and a Wi-Fi router cannot drain a buffer sufficiently fast, you have dropped packets. This is typically taken as "congestion" - however, the problem results from noise along the path. > The Internet's congestion problem is insufficient signalling from the > elements that can immediately detect congestion - the switches - back > to the elements that create it. Agreed. Where I don't agree is that congestion is caused solely by the end points. We sometimes made ourselves think in the way a "TCP connection" would be Endpoint 1 --------------P---------A------------T------------H-------------Endpoint 2 and the PATH is some extremely well behaved, homogeneous miracle - whereas the PATH is actually a mess. > There have been and are efforts to improve the signalling - ECN and > PCN - and there was a botched attempt to deal with it in the original > versions of IP with source quench. The longevity of the Jacobson CC > and efforts to extend its life like AQM are quite unfortunate. Although some guys say, I'm often quite arrogant in my attitude, I will make one point very clear. Actually, VJCC is the very core of TCP congestion control and it survived not only for more than twenty years now but it survived an Internet growth by six orders of magnitude. First of all, I owe respect to this achivevement. What, however, encourages me to ask even critical questions are VJ's own words, that the old solutions aren't bad - however the problems have changed. So I have no lack of respect towards VJCC, but I realize that VJCC is sometimes facilitated by more (Coddle) or less (Splitting) sophisticated mechanisms which intentionally circumvent problems which may cause grief to the VJCC world. (And which make the mess of the actually heterogeneous nature of the aforementioned path even worse.) > > The problem with end-to-end is the lack of overall system knowledge in > any given end point. Agreed. Detlef -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From jnc at mercury.lcs.mit.edu Mon Nov 11 15:31:18 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 11 Nov 2013 18:31:18 -0500 (EST) Subject: [e2e] Is the end to end paradigm appropriate for congestion control? Message-ID: <20131111233118.D60D018C106@mercury.lcs.mit.edu> > From: Detlef Bosau > And because we did not want to touch the switches I think that's a bit excessive - I think we'd have been OK with making some changes to the switches to produce a viable systemic congestion control system. But I do think we would have (rightly IMO) resisted imposition of e.g. a full hop-by-hop congestion control system (i.e. on path A->B->C->D->E, if E experiences congestion it tells D, which then tells C, etc). Remember the times, though: We tried SQ, that 'didn't work' (although I'm personally still not sure we really understood the fundamental limitations of direct source congestion notification - I think I've written about this before here, too lazy to look in the archives to find it); VJCC 'worked', OK, there are 17 other huge alligators biting at my ankles, time to move on to one of them... > (expecting flames...) Why? I didn't see anything particularly objectionable in your note? Noel From Jon.Crowcroft at cl.cam.ac.uk Mon Nov 11 23:19:39 2013 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Tue, 12 Nov 2013 07:19:39 +0000 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <5280CC5D.70401@web.de> References: <5280CC5D.70401@web.de> Message-ID: don't see much to object to in your post - good question - the "in flight" or "outstanding packets" or "in the pipe" or whichever phrase you use for packets that aren't at least still mostly being serialised or de-serialised in a nic/transceiver at one or other end of a transmit/receive pair on a link.... yes, i suspect these are a rare case in practice nowadays... back in the day, when testing VJCC on the internet in 87/88, one of the "interesting" cases was the SATNET which used geostationary satellites - probasbly one of the very few links for which you could actually lauch quite a few packets into orbit back then (or now, due to horrendous RTT) - being 35000km up (in geosync orbit) and same down, and then back again) even though only 64kbps per sec, you could get a few packets out there before the first bit traveling at the speed of light (so much faster than fiber:) hit the far end downlink over the (bent stovepipe) satellite .36 secs later... ...and as satellitle transponder speed/capacity went up and got cheaper over time, this became more true... but its not a common case... most of the "pipe" is not made of packets "in flight" but more accurately, just in buffers... and of course, any LAN tech has to start receiveing bits before you finish sending them.... (unless you choose the cambridge ring model with 16 bit minipackets - a precursor to atm) on the related topic... layer 2 flow control in switches is being mucked about with as we speak for various special case data center net magics to avoid tcp incast hassles but its a nich use afaik... (as discussed before on this list)... In missive <5280CC5D.70401 at web.de>, Detlef Bosau typed: >>I know this question is as heterodox as could be. >> >>Nevertheless. A TCP packet on its way from source to sink will typically >>travel quite some packet switching nodes, each of which >>introduces a potential flow control problem. A switching node typically >>does "store and forward" - and whenever a queue on the node cannot be >>drained sufficiently fast - be it due to a throughput shortage on the >>outgoing link, be it due to a large amount of incoming traffic - the >>switching node has to throttle down incoming traffic or it must discard >>packets. >> >>Both scenarios are possible with TCP: An unexpected throughput shortage >>may occur on wireless networks, unexpected traffic may be caused >>by any competing sender. >> >>I'm - after years - still not comfortable with the concept of "storage >>on the line", which is basically the motivation for our sliding window >>model. >>A line (fibre, copper, air interface) may or may not offer a certain >>amount of transient storage capacity, depending on quite some factors, >>one of which is the MAC scheme. E.g. in CSMA/CD nets, there is no more >>then ONE packet on the line. Hence, the concept of a >>"Latency-Throughput-Product" must be used with extreme caution. >> >>I'm sometimes not quite sure whether particularly VJCC actually works >>around a "concatenated flow control problem", which in the late 80s >>really WAS a concatenated flow control problem because the vast majority >>of a paths "capacity" was located on the switches - and not on the lines. >>(At least to my understanding.) And because we did not want to touch the >>switches, in other words we wanted to keep thins small and simple, we >>worked around this flow control problem using an end to end congestion >>control. >> >>Detlef >> >>(expecting flames...) >> >>-- >>------------------------------------------------------------------ >>Detlef Bosau >>Galileistra?e 30 >>70565 Stuttgart Tel.: +49 711 5208031 >> mobile: +49 172 6819937 >> skype: detlef.bosau >> ICQ: 566129673 >>detlef.bosau at web.de http://www.detlef-bosau.de >> cheers jon From Martin.Heusse at imag.fr Tue Nov 12 04:08:44 2013 From: Martin.Heusse at imag.fr (Martin Heusse) Date: Tue, 12 Nov 2013 13:08:44 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: References: <5280CC5D.70401@web.de> Message-ID: <26103FD4-5F03-4822-B5A2-8ADA20013B43@imag.fr> This is so true. Along those lines, we need to remember why, back in the days, they decided to increment the TTL of the packets at the moment they were going through a router and not at the physical layer. Because, you know, the packets are traveling at the speed of light, or very close. So as soon as they are on the line, time pretty much stops for them, according to the laws of relativity. And the speed of light has never been higher than today. Martin Le 12 nov. 2013 ? 08:19, Jon Crowcroft a ?crit : > but its not a common case... most of the "pipe" is not made of packets > "in flight" but more accurately, just in buffers... From detlef.bosau at web.de Tue Nov 12 04:40:24 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 12 Nov 2013 13:40:24 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: References: <5280CC5D.70401@web.de> Message-ID: <528221B8.2060309@web.de> Am 12.11.2013 08:19, schrieb Jon Crowcroft: > > but its not a common case... most of the "pipe" is not made of packets > "in flight" but more accurately, just in buffers... However, it depends. Years ago, I was told here on the list (IIRC bei Steinar Haug?) that particularly extremely long fibres with high data rates may keep quite some data "in flight", however to my knowledge, you are physicist, hence I assume that your knowledge here is far more sound than mine. > and of course, any LAN tech has to start receiveing bits before you > finish sending them.... (unless you choose the cambridge ring model > with 16 bit minipackets - a precursor to atm) Unfortunately, I don't know the cambridge ring (I always was bad in history ;-)) - however, some fundamental principles in networking have hardly changed during the last 40 years. One of these is the use of some kind of "minipackets" which still exists in various flavours from cell relay (wireline) to HSDPA (wireless). Actually, the traditional packets in the ARPAnet were quite the same. (Why not? A car has four wheels, a steering wheel, a brake. Only some flavours of black have been added since Henry Ford.) Together with Little's Law, which in general does not apply in computer networks!, we often take an "abstract view" where a path is some kind of bit pipe with some (ideally quite constant) capacity. (Even the term "serialization" is to be used with care here, particularly a "serialization latency", which depends on the line coding and channel coding scheme being in use and which may change quite often.) The term "serialization latency" becomes highly complicated when it comes to networks with a local recovery scheme. E.g. in WLAN, "packets in flight" may be "packets in progress". You issue an ICMP echo request - and go for a cup of tea while your notebook and your WLAN AP have a seminal conversation conveying your packet over the air interface.... Aterwards, anything is put into a black box where a packet is inserted at time Ti and delivered at time Td and from the temporal differences between Td and Ti together with the packet lengths, we derive a "throughput" and a "propagation latency" so that Td - Ti = propagation latency + serializiation latency = propagation latency + packet lengh / throughput. (And that's the way you find it in discrete event simulators.) There is no packet loss, no retransmission. And no varying throughput. (You talked about satellite networks, the channel coding of which may change depending on the line conditions.) Anything is hidden behind the (to my understanding highly questionable) term "latency bandwidth product" - so the whole link, or even more: the path (be it a bridged Ethernet or carrier pigeons with recovery extension) appears as a "black box" with a certain "bandwidth" *ouch* and a certain "LBP". And it is exactly this LBP which is shared among competing flows by VJCC. > > on the related topic... > layer 2 flow control in switches is being mucked about with > as we speak for various special case data center net > magics to avoid tcp incast hassles but its a nich use afaik... > (as discussed before on this list)... When I think about it, L2 flow control was dropped in RFC 791 to avoid head of line blocking, is this correct? Packets from one conversation must be allowed to "overtake" packets from another conversation. I think, that's one issue in L2 flow control in conjunction with TCP/IP. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From richard at bennett.com Tue Nov 12 14:44:17 2013 From: richard at bennett.com (Richard Bennett) Date: Tue, 12 Nov 2013 14:44:17 -0800 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: References: <5280CC5D.70401@web.de> Message-ID: <5282AF41.2030107@bennett.com> The arithmetic on the size of bits on a network is pretty interesting. At 1 Mpbs, a bit is 300m long, at 1 Gbps it's .3m, etc. So those old ring systems never had more than one token circulating at a time, but I saw 100 presentations that showed several in the network at the same time. On 11/11/2013 11:19 PM, Jon Crowcroft wrote: > and of course, any LAN tech has to start receiveing bits before you > finish sending them.... (unless you choose the cambridge ring model > with 16 bit minipackets - a precursor to atm) -- Richard Bennett Visiting Fellow, American Enterprise Institute Center for Internet, Communications, and Technology Policy Editor, High Tech Forum (408) 829-4944 (mobile) (415) 967-2900 (office) From detlef.bosau at web.de Wed Nov 13 03:18:51 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 13 Nov 2013 12:18:51 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <5282AF41.2030107@bennett.com> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> Message-ID: <5283601B.9070800@web.de> Nevertheless, depending on the MAC scheme, in some cases no more than one packet may reside on the line. (On ring systems, of course, there may be several tokens/frames in flight.) However. As long as a receiver cannot accept the data, putting it on the line does not make any sense. (As in TCP: Flow control dominates congestion control.) More drastically spoken: The line capacity is of secondary interest, the primary interest is the next hop's capacity. Wouldn't it make sense to - at least experimental - consider a concatenated hop by hop flow control system? What would be the major problems in such a design? Detlef Am 12.11.2013 23:44, schrieb Richard Bennett: > The arithmetic on the size of bits on a network is pretty interesting. > At 1 Mpbs, a bit is 300m long, at 1 Gbps it's .3m, etc. So those old > ring systems never had more than one token circulating at a time, but > I saw 100 presentations that showed several in the network at the same > time. > > On 11/11/2013 11:19 PM, Jon Crowcroft wrote: >> and of course, any LAN tech has to start receiveing bits before you >> finish sending them.... (unless you choose the cambridge ring model >> with 16 bit minipackets - a precursor to atm) > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From emmanuel.lochin at isae.fr Wed Nov 13 04:58:19 2013 From: emmanuel.lochin at isae.fr (Emmanuel Lochin) Date: Wed, 13 Nov 2013 13:58:19 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <5283601B.9070800@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> Message-ID: <5283776B.60209@isae.fr> On 13/11/2013 12:18, Detlef Bosau wrote: > Nevertheless, depending on the MAC scheme, in some cases no more than > one packet may reside on the line. (On ring systems, of course, there > may be several tokens/frames in flight.) > > However. As long as a receiver cannot accept the data, putting it on the > line does not make any sense. (As in TCP: Flow control dominates > congestion control.) More drastically spoken: The line capacity is of > secondary interest, the primary interest is the next hop's capacity. > > Wouldn't it make sense to - at least experimental - consider a > concatenated hop by hop flow control system? > > What would be the major problems in such a design? Hi Detlef, Such idea has already been proposed at the IETF'73 see http://www.brynosaurus.com/pub/net/logjam-slides-ietf73.pdf and I found the meeting minutes here: *http://www.ietf.org/proceedings/73/minutes/tsvarea.txt* It seems that this might introduce problem for secure communications in particular when using IPSec. Emmanuel > > Detlef > > > Am 12.11.2013 23:44, schrieb Richard Bennett: >> The arithmetic on the size of bits on a network is pretty interesting. >> At 1 Mpbs, a bit is 300m long, at 1 Gbps it's .3m, etc. So those old >> ring systems never had more than one token circulating at a time, but >> I saw 100 presentations that showed several in the network at the same >> time. >> >> On 11/11/2013 11:19 PM, Jon Crowcroft wrote: >>> and of course, any LAN tech has to start receiveing bits before you >>> finish sending them.... (unless you choose the cambridge ring model >>> with 16 bit minipackets - a precursor to atm) > -- Emmanuel Lochin Professeur ISAE - OSSI Institut Sup?rieur de l'A?ronautique et de l'Espace (ISAE) Issu du rapprochement SUPAERO et ENSICA 10 avenue Edouard Belin - BP 54032 - 31055 Toulouse cedex 4 Tel : 05 61 33 91 85 - Fax : 05 61 33 91 88 Web : http://personnel.isae.fr/emmanuel-lochin/ --- "This email and any attachments are confidential. They may contain legally privileged information or copyright material. You should not read, copy, use or disclose them without authorisation. If you are not an intended recipient, please contact us at once by return email and then delete both messages. We do not accept liability in connection with computer virus, data corruption, delay, interruption, unauthorised access or unauthorised amendment. This notice should not be removed" From detlef.bosau at web.de Wed Nov 13 07:38:02 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 13 Nov 2013 16:38:02 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <5283776B.60209@isae.fr> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> Message-ID: <52839CDA.9010703@web.de> Hi Manu, Am 13.11.2013 13:58, schrieb Emmanuel Lochin: > > Hi Detlef, > > Such idea has already been proposed at the IETF'73 see > http://www.brynosaurus.com/pub/net/logjam-slides-ietf73.pdf > and I found the meeting minutes here: > *http://www.ietf.org/proceedings/73/minutes/tsvarea.txt* > It seems that this might introduce problem for secure communications > in particular when using IPSec. > Actually my thoughts are quite similar - you pointed me to this work quite some years ago. One problem in this approach is the buffer sharing concept at switching nodes - I'm not quite sure whether flow- and congestion-control of adjacent segments are fully decoupled here, another problem is the use of splitting PEP here. We can work around the semantics problem of splitting PEP by introducing an additional ACK in TCP, where the "normal" ACK works for clocking and a second "End to End ACK" conveys the receiver's ACK to the sender. A really tough problem however would be caused by the extensive use of splitting gatways and hence flow related state information on each node. Does this really scale with a huge number of flows? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From richard at bennett.com Wed Nov 13 13:50:52 2013 From: richard at bennett.com (Richard Bennett) Date: Wed, 13 Nov 2013 13:50:52 -0800 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <52839CDA.9010703@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> Message-ID: <5283F43C.9000008@bennett.com> PCN is working on an experimental flow control system for the Internet, https://www.ietf.org/mailman/listinfo/pcn It's a problem many people are aware of, and many people would like to solve. RB On 11/13/2013 7:38 AM, Detlef Bosau wrote: > Hi Manu, > > Am 13.11.2013 13:58, schrieb Emmanuel Lochin: >> Hi Detlef, >> >> Such idea has already been proposed at the IETF'73 see >> http://www.brynosaurus.com/pub/net/logjam-slides-ietf73.pdf >> and I found the meeting minutes here: >> *http://www.ietf.org/proceedings/73/minutes/tsvarea.txt* >> It seems that this might introduce problem for secure communications >> in particular when using IPSec. >> > > Actually my thoughts are quite similar - you pointed me to this work > quite some years ago. > > One problem in this approach is the buffer sharing concept at switching > nodes - I'm not quite sure whether flow- and congestion-control of > adjacent segments are fully decoupled here, another problem is the use > of splitting PEP here. > > We can work around the semantics problem of splitting PEP by introducing > an additional ACK in TCP, where the "normal" ACK works > for clocking and a second "End to End ACK" conveys the receiver's ACK to > the sender. > > A really tough problem however would be caused by the extensive use of > splitting gatways and hence flow related state information on each node. > Does this really scale with a huge number of flows? > > -- Richard Bennett Visiting Fellow, American Enterprise Institute Center for Internet, Communications, and Technology Policy Editor, High Tech Forum (408) 829-4944 (mobile) (415) 967-2900 (office) From detlef.bosau at web.de Fri Nov 15 05:36:59 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 15 Nov 2013 14:36:59 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <5283F43C.9000008@bennett.com> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> Message-ID: <5286237B.8000101@web.de> isn't it the same old story? We (however) detect congestion and make the sender shrink his window or slow down its rate? Am 13.11.2013 22:50, schrieb Richard Bennett: > PCN is working on an experimental flow control system for the Internet, > > https://www.ietf.org/mailman/listinfo/pcn > > It's a problem many people are aware of, and many people would like to > solve. > > RB -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Fri Nov 15 07:06:22 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 15 Nov 2013 16:06:22 +0100 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: <5286237B.8000101@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <5286237B.8000101@web.de> Message-ID: <5286386E.9020408@web.de> In other words: First cause congestion, afterwards try to get rid of it? Am 15.11.2013 14:36, schrieb Detlef Bosau: > isn't it the same old story? We (however) detect congestion and make the > sender shrink his window or slow down its rate? > > > > > > Am 13.11.2013 22:50, schrieb Richard Bennett: >> PCN is working on an experimental flow control system for the Internet, >> >> https://www.ietf.org/mailman/listinfo/pcn >> >> It's a problem many people are aware of, and many people would like to >> solve. >> >> RB > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Fri Nov 15 13:42:59 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 15 Nov 2013 22:42:59 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <5283F43C.9000008@bennett.com> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> Message-ID: <52869563.4010201@web.de> Why do we need congestion at all? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From j.vimal at gmail.com Fri Nov 15 16:12:38 2013 From: j.vimal at gmail.com (Vimal) Date: Fri, 15 Nov 2013 16:12:38 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <52869563.4010201@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> Message-ID: I think it depends on how you define congestion. For a simple single-link resource, if you define congestion as "long term utilisation of a link >= capacity" then in some sense we "need" congestion as we would like to fully utilise the network (i.e., we want utilisation = capacity). If you define congestion as "queue build up at all timescales" then congestion is inevitable at any load > 0 if arrivals are random. There has been some work on defining the right operating point of the network for a certain metric. For instance, it is known (due to Leonard Kleinrock as far as I can recall) that the "right" operating point for an M/M/1 queue to optimise the average throughput/delay^r for flows is to operate it at a point where the expected queue occupancy is exactly r. I am not a super expert on this topic, so I am attaching references here: References: Kleinrock, L., "On Flow Control in Computer Networks", Conference Record, Proceedings of the International Conference on Communications, Vol. II, Toronto, Ontario, pp. 27.2.1 to 27.2.5, June 1978. Kleinrock, L., "Power and Deterministic Rules of Thumb for Probabilistic Problems in Computer Communications", Conference Record, International Conference on Communications, Boston, Massachusetts, pp. 43.1.1 to 43.1.10, June 1979 Gail, R. and L. Kleinrock, "An Invariant Property of Computer Network Power", Proceedings of the International Conference on Communications, Denver, Colorado, June 14-18, 1981, pp. 63.1.1-63.1.5, 1981. On 15 November 2013 13:42, Detlef Bosau wrote: > Why do we need congestion at all? > > -- > ------------------------------------------------------------------ > Detlef Bosau > Galileistra?e 30 > 70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 > detlef.bosau at web.de http://www.detlef-bosau.de > > -- Vimal From detlef.bosau at web.de Sat Nov 16 02:34:53 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 16 Nov 2013 11:34:53 +0100 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> Message-ID: <52874A4D.2020403@web.de> Am 16.11.2013 01:12, schrieb Vimal: > I think it depends on how you define congestion. I don't think so. We don't need congestion - and we don't want congestion. > > For a simple single-link resource, if you define congestion as "long > term utilisation of a link >= capacity" then in some sense we "need" > congestion as we would like to fully utilise the network (i.e., we > want utilisation = capacity). > So for a single link we need a mechanism which ensures full link utilization. Correct? That's a possible motivation for the sliding window scheme. However, do we really need sliding window for fully utilizing the link? > If you define congestion as "queue build up at all timescales" then > congestion is inevitable at any load > 0 if arrivals are random. Forget about queueing theory for the moment, I want to talk about real systems. A few days ago, JC and me discussed where the workload actually resides. As long as the workload resides on links, there is hardly any problem. At least in wired networks. A wired link has a certain throughput and a certain propagation delay, neither of which depends on the actual load. Hence the only problem you may have is that you pay for an underutilized link and hence the argument is work conservation. As long as the workload resides in buffers, the situation becomes a bit more complicated. At the latest when buffers are too large (whatever that will mean, please have a look at the most recent "best current practice RFC" which is presumably going to change on a monthly basis ;-)) buffers introduce both, latency and costs. Both is bad. You can hardly decrease propagation latency because you cannot speed up the light. But you should be careful not to introduce too much queueing latency. > > There has been some work on defining the right operating point of the > network for a certain metric. For instance, it is known (due to > Leonard Kleinrock as far as I can recall) that the "right" operating > point for an M/M/1 queue to optimise the average throughput/delay^r > for flows is to operate it at a point where the expected queue > occupancy is exactly r. > That's the queueing theory stuff - which is, in my opinion (and I'm ready to take flames ;-) http://www.youtube.com/watch?v=H5yUnnH9nu8) simply misleading here. In some special cases, particularly in wireless networks, buffers may enhance average throughput. And in case a huge average throughput is your objective you may consider using them. In switching nodes you encounter asynchronism which must be handled and which may requires a certain amount of buffer. (Actually, dealing with variable throughput in mobile links is nothing else than dealing with asynchronism, it is only more asynchronous than perhaps in an Ethernet switch.) However, sometimes my impression is that we use "intended congestion" in order to assess a system's capacity. And by and by I doubt, whether this is really the right way to go. Back to Kleinrock, the problem is to assess the right workload for the famous "knee". And as we cannot assess this we offer that much load to a network that it starts squirming, crying and vomiting - and eventually we try to ease the situation. I'm not fully convinced that this is the ideal way to go. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Sat Nov 16 04:26:22 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 16 Nov 2013 13:26:22 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <52874A4D.2020403@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> Message-ID: <5287646E.7090909@web.de> One quick remark: As Vimal said, Len Kleinrock assessed his queueing systems with ONE metric. It may well be, that different flows have different objectives. One flow may prefer a short RTT, another one high throughput. Both is lumped together by using "the" one and only metric. Perhaps, a buffer for a wireless link will be made larger, when we want a high average throughput, and smaller when we want a smal RTT. With VJCC we first feed the net. Nearly regardless of the consequences. And afterwards, we try to escape the home made disaster. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From j.vimal at gmail.com Sat Nov 16 08:43:56 2013 From: j.vimal at gmail.com (Vimal) Date: Sat, 16 Nov 2013 08:43:56 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <5287646E.7090909@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> Message-ID: I am not sure I entirely followed your previous email, but you seem to point out that buffers are inevitable. Yes, I think buffers are necessary for good utilisation as arrivals are hard to predict. In an ideal world the buffer would be sized just about right to fully utilise link (assuming that is something we care about). As you pointed out rightly -- so far, again as far as I am aware -- we have designed congestion control algorithms for a specific objective (which seems hard enough). From an optimisation perspective, the moment you have two objectives, it is not clear, and not meaningful to talk about "optimising" anything. It exposes a tradeoff -- I don't think there is hope of finding a universal scheduling algorithm that works best for all objectives. What is the 'right' tradeoff? I have no idea. You mentioned buffer sizing for low RTT and high throughput. I think achieving a particular objective might also need cooperation from end-hosts. Also, instead of one size fits all, you can have a hierarchical scheduler setup: - At the top level, divide bandwidth in some fashion between class A and B (say equally) - Class A has small buffers. - Class B has large buffers. - Flows that need low delay are directed to class A's queues. - Flows that need high throughput are directed to class B's queues. This way you can get a "bit" of both objectives while ensuring each class gets a certain bandwidth guarantee. On 16 November 2013 04:26, Detlef Bosau wrote: > One quick remark: As Vimal said, Len Kleinrock assessed his queueing > systems with ONE metric. > > It may well be, that different flows have different objectives. One flow > may prefer a short RTT, another one high throughput. Both is lumped > together by using "the" one and only metric. Perhaps, a buffer for a > wireless link will be made larger, when we want a high average > throughput, and smaller when we want a smal RTT. > > With VJCC we first feed the net. Nearly regardless of the consequences. > > And afterwards, we try to escape the home made disaster. > > > -- > ------------------------------------------------------------------ > Detlef Bosau > Galileistra?e 30 > 70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 > detlef.bosau at web.de http://www.detlef-bosau.de > > -- Vimal From detlef.bosau at web.de Sat Nov 16 09:28:01 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 16 Nov 2013 18:28:01 +0100 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> Message-ID: <5287AB21.10301@web.de> Am 16.11.2013 17:43, schrieb Vimal: > I am not sure I entirely followed your previous email, but you seem to > point out that buffers are inevitable. Yes, I think buffers are > necessary for good utilisation as arrivals are hard to predict. In an > ideal world the buffer would be sized just about right to fully > utilise link (assuming that is something we care about). Basically: Yes, buffers are nearly inevitable in an asynchronous system. However, I think we should not focus too much on link utilization. Sender -----(1 G link) ---- Switch with buffer -------------------(100 MBit/s long distance link) -----------------------Receiver In a scenario like that, if we added 16 GByte memory to the switch, a "greedy source" (which is rate, thanks to god) would blast 16 MByte data into the net - to utilize the buffer. And if the long distance link is long enough and we would use BIC, we would even care for blasting the data into the net extremely fast. And afterwards, we would complain about buffer bloat problems and unsatisfactory RTT. Yes, we would utilize the buffer then ;-) > > As you pointed out rightly -- so far, again as far as I am aware -- we > have designed congestion control algorithms for a specific objective > (which seems hard enough). From an optimisation perspective, the > moment you have two objectives, it is not clear, and not meaningful to > talk about "optimising" anything. It exposes a tradeoff -- I don't > think there is hope of finding a universal scheduling algorithm that > works best for all objectives. What is the 'right' tradeoff? I have > no idea. Me neither, so that's another concern (I'm still expecting flames) in a pure end to end way of thinking, that we use THE ONE scheduling algorithm, wich is mainly the self clocking / self scheduling algorithm used by VJCC. Although alternatives could make sense in some cases. > > You mentioned buffer sizing for low RTT and high throughput. I think > achieving a particular objective might also need cooperation from > end-hosts. May be. > Also, instead of one size fits all, you can have a hierarchical > scheduler setup: !!!!! (looking for the thumb up emoticon :-)) > > - At the top level, divide bandwidth in some fashion between class A > and B (say equally) or defined appropriately. > - Class A has small buffers. > - Class B has large buffers. and now upon something completey different. *eg*. How is that achieved in the approach by Ford and Iyengar, Manu Lochin pointed to? (nasty question, I know.) > - Flows that need low delay are directed to class A's queues. > - Flows that need high throughput are directed to class B's queues. > Absolutely. The good old DiffServ ID. Or (some people claim, I would have the first grey hairs on my head) the good ol' TOS bits. > This way you can get a "bit" of both objectives while ensuring each > class gets a certain bandwidth guarantee. > I'm not thinking in guarantees here. In my opinion, the success of the Internet is mainly due to the best effort concept. However, what is "best effort" all about? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From sergey.gorinsky at imdea.org Sat Nov 16 12:21:57 2013 From: sergey.gorinsky at imdea.org (Sergey Gorinsky) Date: Sat, 16 Nov 2013 21:21:57 +0100 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> Message-ID: <000001cee309$805b7010$81125030$@gorinsky@imdea.org> Dear Vimal, > Also, instead of one size fits all, you can have a hierarchical scheduler setup: > > - At the top level, divide bandwidth in some fashion between class A and B > - Class A has small buffers. > - Class B has large buffers. > - Flows that need low delay are directed to class A's queues. > - Flows that need high throughput are directed to class B's queues. > > This way you can get a "bit" of both objectives while ensuring each class > gets a certain bandwidth guarantee. The RD (Rate-Delay) design gives a choice between two best-effort services, one with low delay and the other with higher throughput: "Leveraging the Rate-Delay Trade-off for Service Differentiation in Multi-Provider Networks" by M. Podlesny and S. Gorinsky, IEEE Journal on Selected Areas in Communications, 29(5), pp. 997-1008, May 2011, http://fourier.networks.imdea.org/~sergey_gorinsky/pdf/JSAC_Leveraging_Rate- Delay.pdf Thank you, Sergey From detlef.bosau at web.de Sat Nov 16 14:09:14 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 16 Nov 2013 23:09:14 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <000001cee309$805b7010$81125030$@gorinsky@imdea.org> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> Message-ID: <5287ED0A.1060604@web.de> Am 16.11.2013 21:21, schrieb Sergey Gorinsky: > http://fourier.networks.imdea.org/~sergey_gorinsky/pdf/JSAC_Leveraging_Rate- > Delay.pdf At a very first glance, I see a bunch of formulae. I dealt with mobile networks for about 14 years now - and the major lesson learned was: I don't believe in formulae. (Or at least: Not too much.) When I consider providing large buffers to enhance average throughput, my focus are "last mile routers" for wireless access networks. I.e. a very particular situation. My general attitude is: The major purpose of networks is forwarding data from the sender to the receiver. Not buffering it. And with respect to links: The purpose of links and networks is to provide service for the application. It's not the application's job to utilize the network. That's why I put our probing and congestion reaction in question because my impression is, that in these strategies sometimes the tail wags the dog. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From sergey.gorinsky at imdea.org Sun Nov 17 02:20:57 2013 From: sergey.gorinsky at imdea.org (Sergey Gorinsky) Date: Sun, 17 Nov 2013 11:20:57 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <5287ED0A.1060604@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> Message-ID: <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> Dear Detlef, >> http://fourier.networks.imdea.org/~sergey_gorinsky/pdf/JSAC_Leveraging >> _Rate-Delay.pdf > > At a very first glance, I see a bunch of formulae. > > I dealt with mobile networks for about 14 years now - and > the major lesson learned was: I don't believe in formulae. Actually, the formulae are contained within a page or so. The main idea of the RD network services is quite simple: instead of differentiated pricing or admission control (which seem difficult in the multi-provider Internet), the rate-delay trade-off can serve as a natural basis for performance differentiation. The design comes with built-in incentives for delay-sensitive apps to use the low-delay D service and for throughput-sensitive apps to communicate over the higher-throughput R service. > a very particular situation. My general attitude is Making general conclusions based on very particular situations? Is there an experimental evidence against existence of rate-delay trade-offs, or rate-loss trade-offs in networks without buffers, in wireless settings? > I put our probing and congestion reaction in question Available network capacity is not fully predictable, especially in mobile networks. While the uncertainty cannot be eliminated, it can be reduced. Probing is a fundamental way of doing so. Without being too ambitions and risking losses or delays (with buffers), how can one discover the full transmission potential? Do you have in mind an alternative way of determining an appropriate transmission rate? Thank you, Sergey From detlef.bosau at web.de Sun Nov 17 05:23:39 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 17 Nov 2013 14:23:39 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> Message-ID: <5288C35B.7020508@web.de> Am 17.11.2013 11:20, schrieb Sergey Gorinsky: > Actually, the formulae are contained within a page or so. The main idea of > the RD network services is quite simple: instead of differentiated pricing > or admission control (which seem difficult in the multi-provider Internet), > the rate-delay trade-off can serve as a natural basis for performance > differentiation. The design comes with built-in incentives for > delay-sensitive apps to use the low-delay D service and for > throughput-sensitive apps to communicate over the higher-throughput R > service. The good ol' "Metro Pricing" :-) (Rumour says that the "Metro" in Paris introduced higher prices for first class travellers. So, only persons who could afford this price travelled first class. A certain "entre nous" feeling for the rich. I never checked whether or not this is an urban legend.) >> a very particular situation. My general attitude is > Making general conclusions based on very particular situations? No. And I did not even consider a rate-delay trade off in this attitude. Of course, we can conduct a discussion which buffer design will cause which delay and which loss. Too small buffers causes loss, too large buffer cause delay. At least in today's Internet. > Is there > an experimental evidence against existence of rate-delay trade-offs, or > rate-loss trade-offs in networks without buffers, in wireless settings? Hardly. And beware of experimental results in wireless settings. Wireless experiments are hardly reproducible. Remember the "spurious timeout" discussion in the late 90s. For about five years, a couple of PhD students "rocked the conferences" with "spurious timeout" results and approaches - at the end of the day, Hasenleither et al. made huge efforts for finding those in UMTS networks. With negative result. Many doctoral hats, some of which perhaps with honours, gifted away for solving a non problem. We should learn the lesson, that loss delay considerations are extremely difficult in wireless networks. (In the sense of: practically impossible.) But this is an old debate, Hasenleithner et al. published their results, the hats were given, the issue moved out of sight and we turned to the next story. > > Available network capacity is not fully predictable, especially in mobile What is "network capacity" all about? I often read: "Bandwidth Delay Product." Bandwidth (interestingly given in Bit/s) times delay. Actually, this is taken from Little's Law: Average number of nobs ("Bytes") in the system = Average arrival rate * Average sojourn time. Think of Gig Ethernet: rate = 10^9 Bit/s, example: sojourn time = 100 ms, => Number of bits in the system = 10^9 bit/s * 100 * 10^-3 s = 10^6 * 100 bit = 10^8 bit. No one gives a thought, were first the bits reside and second whether Little's Law applies at all. Nevertheless, the network capacity is given in Bit - and we are comfortable. With respect to VJCC, we have a capacity which is stable and where we can apply the conservation principle, because the capacity is stable and the network must reach an equilibrium where no bit is offered to the net until another one has been taken from the net. And because the network capacity is that nice stable - we share it among competing flows. Although we don't have the least idea, what we are sharing here at all. Not to be misumderstood, I did so myself for years. (Funny remark: We put buffers into the system in order to increase a system's capacity, and afterwards we complain about buffer bloat and increasing sojourn times. Didn't I write already that the sojourn time is the quotient "number of bytes in the sytem / average arrival rate"? So what's going to happen when I keep my rate constant and increase the capacity by adding buffers? That's really a 1 million dollar question.) However, the basic assumption for Little's Law is: The mentioned averages do at least exist - hence a sojourn time has a finite expectation, spoken more drastically: There is no loss of jobs! As soon as dropped packets come into play, Little's Law doesn't apply and we can forget about all the nice formulae. (Do you know the song Kodachrome by Simon and Garfunkel? http://www.youtube.com/watch?v=QXZTBu_3ioI > Mama don't take my Kodachrome away > Mama don't take my Kodachrome away > Mama don't take my Kodachrome away ) May be, I'm bitter here. However, sometimes I think that all this Little's Law stuff - and hence much of the formulae derived from that - is simply Kodachrome. Which gives us that nice bright colours, which gives us the greens of summers and makes us think all the Net's a sunny day. And the larger the network grows, the worse becomes the situation. Basically, formulae aren't bad. No way. But even Sir Peter, JC will certainly agree here as one of the physicists on the list, wasn't satisfied until the Higgs Boson was actually found. That's the difference between a PhD thesis, which claims something which is perhaps never found ("spurious timeous") and the Nobel price. And as far as I know, Sir Peter put his theory in question until the final proof. The problem is that we deal with "capacity" in the same way with any "real" resource and don't think about its physical realization. And we don't think about the resource which is actually shared. Which may be (Ethernet) transmission time. Or (UMTS) transmission power. > networks. While the uncertainty cannot be eliminated, it can be reduced. > Probing is a fundamental way of doing so. Without being too ambitions and > risking losses or delays (with buffers), how can one discover the full > transmission potential? Do you have in mind an alternative way of > determining an appropriate transmission rate? Not only in mind. You're sitting in front of it. How does your computer share computing time - although it doesn't know the time needed by each Job? It does time slicing. More general: There is a scheduler who makes a choice to whom (of the feasible jobs) computation time should be granted. I would like to consider this in the network as well. IP scheduling is done "implicitly". Of course, there are schedulers. Any switching node has schedulers. And these cooperate (or interfere) with (e.g.) TCP's self scheduling. However: A process scheduler knows about available resources. A TCP sender "probes" for "capacity" - whatever that may be. (And very drastically put: This capacity is basically a mathematical artifact.) Detlef > Thank you, > > Sergey > > > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From richard at richardclegg.org Sun Nov 17 09:08:30 2013 From: richard at richardclegg.org (Richard G. Clegg) Date: Sun, 17 Nov 2013 17:08:30 +0000 Subject: [e2e] Question the other way round: In-Reply-To: <5288C35B.7020508@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> <5288C35B.7020508@web.de> Message-ID: <5288F80E.2040908@richardclegg.org> On 17/11/13 13:23, Detlef Bosau wrote: > The good ol' "Metro Pricing" :-) (Rumour says that the "Metro" in > Paris introduced higher prices for first class travellers. So, only > persons who could afford this price travelled first class. A certain > "entre nous" feeling for the rich. I never checked whether or not this > is an urban legend.) It was always a classic example when I studied transport systems. The idea that some people pay more for "the same" to ensure a relatively quieter carriage (as fewer people buy that). Abolished in 1991. http://www.senat.fr/questions/base/1991/qSEQ910816839.html -- Richard G. Clegg, Dept of Elec. Eng., University College London http://www.richardclegg.org/ From sergey.gorinsky at imdea.org Sun Nov 17 11:13:19 2013 From: sergey.gorinsky at imdea.org (Sergey Gorinsky) Date: Sun, 17 Nov 2013 20:13:19 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <5288C35B.7020508@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> <5288C35B.7020508@web.de> Message-ID: <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> Dear Detlef, > The good ol' "Metro Pricing" :-) No. PMP (Paris Metro Pricing) by A. Odlyzko charges different prices for the same service. The RD design charges the same price for different services. > How does your computer share computing time ... > It does time slicing... > A process scheduler knows about available resources. In networks, the resources of bottlenecks are shared by remote senders. Besides, the bottlenecks migrate as the distributed load changes. Not knowing which resources to slice makes it difficult to do the slicing. Whereas global scheduling of Internet transmission resources is an appealing idea advocated by a number of (excellent) researchers over a number of years, there has not been much concrete progress in this direction. Schemes like XCP come the closest, while still staying in the probing congestion-control camp. Best regards, Sergey From detlef.bosau at web.de Sun Nov 17 12:22:17 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 17 Nov 2013 21:22:17 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> <5288C35B.7020508@web.de> <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> Message-ID: <52892579.5010703@web.de> Am 17.11.2013 20:13, schrieb Sergey Gorinsky: > Dear Detlef, > >> The good ol' "Metro Pricing" :-) > No. PMP (Paris Metro Pricing) by A. Odlyzko charges different prices for > the same service. The RD design charges the same price for different > services. Oh my goodness! "The same service!" Don't ever tell this to the first class passengers ;-) > >> How does your computer share computing time ... >> It does time slicing... >> A process scheduler knows about available resources. > In networks, the resources of bottlenecks are shared by remote senders. That's a matter of perspective. They are shared by flows / remote senders / packets respectively. You could even choose "packets with the same TOS" or "packets belonging to the same QoS class". In VJCC, you don't even ask that sophisticated questions, in VJCC, the resources of bottlenecks is shared among those packets which aren't dropped. > Besides, the bottlenecks migrate as the distributed load changes. Right your are - however: VJCC (and flavours based upon this) don't ask for a bottleneck. VJCC shares a "capacity". > Not > knowing which resources to slice makes it difficult to do the slicing. There is even no need for slicing. In operating systems, you have to suspend tasks, take up tasks etc. In packet switched networks you have packets and queues. And the decision to take is to choose a queue which gets service. No more no less. > Whereas global scheduling of Internet transmission resources is an appealing > idea advocated by a number of (excellent) researchers over a number of > years, there has not been much concrete progress in this direction. Hm. Perhaps the problem is that a scheduler has only a very limited view of "the Internet"? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From jon.crowcroft at cl.cam.ac.uk Sun Nov 17 12:41:07 2013 From: jon.crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sun, 17 Nov 2013 20:41:07 +0000 Subject: [e2e] Question the other way round: In-Reply-To: <52891d8f.09d4b40a.3bec.ffffdaaaSMTPIN_ADDED_BROKEN@mx.google.com> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <5287ED0A.1060604@web.de> <5288C35B.7020508@web.de> <52891d8f.09d4b40a.3bec.ffffdaaaSMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: indeed.... perfect schedules would work for fixed rate traffic...but in the age of data nets (eve with packet video and audio) we decided to go with statistical multiplexing and work conserving, so we keep links busy whenever packets arrive but we don't know when packets will arrive so we have to approximate as best we can to having packets arrive when there's just enough capacity on an output link for the current input flow rate at any switch so in some sense there's either 0 or 1 packet ahead - in the fluid flow approximation world, you get infinitesimal slices of packets, so you can adjust your flow rate to be nearer a perfect fit - in the discrete packet world, you are off by edge-of-packet's worth at least - but worse, the traffic matrix variation over time (and in the presence of wireless and mobile wibbly wobbly links, the actual output link capacity) vary unpredictably with time, so your estimates of what you can do are off by as much as an RTT's worth of lumpy squeezy stuff (going for R rather than D)- of course if a lot of sources last a really long time, and the number of them doesn't vary, and they are all application limited,you might do very well (i.e. go for D rather than R) seems ok to me On Sun, Nov 17, 2013 at 7:13 PM, Sergey Gorinsky wrote: > Dear Detlef, > > > The good ol' "Metro Pricing" :-) > > No. PMP (Paris Metro Pricing) by A. Odlyzko charges different prices for > the same service. The RD design charges the same price for different > services. > > > How does your computer share computing time ... > > It does time slicing... > > A process scheduler knows about available resources. > > In networks, the resources of bottlenecks are shared by remote senders. > Besides, the bottlenecks migrate as the distributed load changes. Not > knowing which resources to slice makes it difficult to do the slicing. > Whereas global scheduling of Internet transmission resources is an > appealing > idea advocated by a number of (excellent) researchers over a number of > years, there has not been much concrete progress in this direction. Schemes > like XCP come the closest, while still staying in the probing > congestion-control camp. > > Best regards, > > Sergey > > > From detlef.bosau at web.de Sun Nov 17 23:11:14 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 18 Nov 2013 08:11:14 +0100 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <5287ED0A.1060604@web.de> <5288C35B.7020508@web.de> <52891d8f.09d4b40a.3bec.ffffdaaaSMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: <5289BD92.1090102@web.de> Jon, I'm with you in most points, particularly with respect to wireless networks ware a "rate" is hardly predictable. Particularly in wireless networks, we are interested in high link utilization. (The German translation of Ephesians, 5, 16 would fit in here, for German speaking readers: "Kaufet die Zeit aus, denn es ist b?se Zeit.") So we often use opportunistic scheduling: The channel is granted to sources which are likely to see a good throughput, while others are postponed. Actually, this makes wireless networks affordable at all. On the other hand, a strict work conservation may lead to unwanted effects in large BDP networks - particularly when the tail wags the dog and it is up to the application to keep the line busy. I had a discussion with Martin Geddes about this matter where Martin put a strong emphasis on the avoidance of work in progress, so buffers should stay empty under all circumstances. This is exactly an uneconomic way for lines with variable and unpredictable throughput (or more generally spoken for expensive machinery particularly with expensive set-up times, e.g. a furnace or a production line for cars) (I once attended a talk by Hans Reiser, who then talked about Adam Smith, I shouldn't, so I leave economics here). On the other hand I think of large BDP networks, where the capacity (and hence the sojourn time) is DOUBLED by buffers, for the only reason that the line is fully utilized by a flow's actual workload even when the workload reaches its lower limit in the "congestion sawtooth". And I don't see a compelling reason for doing so. When the line is utilized that's fine. And it is (again economic thinking) up to the provider to find an appropriate match of offer and demand here. (Underutilization is expensive, overload causes waiting times.) In my opinion, this should be a reason for well designed scheduling - which may well pursue different objectives for different lines. Simply doubling a lines "capacity" by buffers (large BDP lines) is questionable. A simple round robin scheduler would do better. More than that: It would spare us these annoying probing activities which we tried to overcome with BIC and the like. And which lead to the mice elephant problem where elephants outperform mice. That doesn't mean that self clocking and self scheduling, as done in TCP, is generally bad. But that should say that, depending on the scenario, there may be alternatives which are more appropriate. And with particular respect to the e2e consideration in Salzer's paper: I think indeed that the actual scheduling strategy for a link should be chosen locally and not on the end points. Am 17.11.2013 21:41, schrieb Jon Crowcroft: > indeed.... > > perfect schedules would work for fixed rate traffic...but in the age of > data nets (eve with packet video and audio) we decided to go with > statistical multiplexing and work conserving, so we keep links busy > whenever packets arrive but we don't know when packets will arrive so we > have to approximate as best we can to having packets arrive when there's > just enough capacity on an output link for the current input flow rate at > any switch so in some sense there's either 0 or 1 packet ahead - in the > fluid flow approximation world, you get infinitesimal slices of packets, so > you can adjust your flow rate to be nearer a perfect fit - in the discrete > packet world, you are off by edge-of-packet's worth at least - but worse, > the traffic matrix variation over time (and in the presence of wireless and > mobile wibbly wobbly links, the actual output link capacity) vary > unpredictably with time, so your estimates of what you can do are off by as > much as an RTT's worth of lumpy squeezy stuff (going for R rather than D)- > of course if a lot of sources last a really long time, and the number of > them doesn't vary, and they are all application limited,you might do very > well (i.e. go for D rather than R) > > seems ok to me -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Tue Nov 19 02:54:38 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 19 Nov 2013 11:54:38 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> <5288C35B.7020508@web.de> <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> Message-ID: <528B436E.30809@web.de> Sergey, what happen in your approach, when a sender chooses class "D" for all pakets and flows? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Tue Nov 19 03:05:17 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 19 Nov 2013 12:05:17 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <528B436E.30809@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> <5288C35B.7020508@web.de> <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> <528B436E.30809@web.de> Message-ID: <528B45ED.60101@web.de> > Sergey, > > what happen in your approach, when a sender chooses class "D" for all > pakets and flows? > Don't you achieve QoS by underutilization in your approach? So, while with metro pricing you make the crowds stay away from the upper class carriage by the ticket price - while in your approach the sender has to care for a quiet carriage by appropriate pre selection - hence, the tickets can be offered for the same price? So, the selection is the same, only the bad guy who does the selection changed the location? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From touch at isi.edu Tue Nov 19 09:02:40 2013 From: touch at isi.edu (Joe Touch) Date: Tue, 19 Nov 2013 09:02:40 -0800 Subject: [e2e] Is the end to end paradigm appropriate for congestion control? In-Reply-To: References: <5280CC5D.70401@web.de> Message-ID: <528B99B0.5020306@isi.edu> On 11/11/2013 11:19 PM, Jon Crowcroft wrote: > don't see much to object to in your post - good question - > > the "in flight" or "outstanding packets" or "in the pipe" > or whichever phrase you use for packets that > aren't at least still mostly being serialised or de-serialised > in a nic/transceiver at one or other end > of a transmit/receive pair on a link.... > > yes, i suspect these are a rare case in practice nowadays... > > back in the day, when testing VJCC on the internet in 87/88, > one of the "interesting" cases was the > SATNET which used geostationary satellites - And Internet service is still provided over such satellites today (DirectPC in the US). I'm not so sure I would write off in-flight packets; at common home speeds (5Mbps), a single packet is only 2.5ms, which is one packet ever 700 miles in fiber. When I get stuff from the UK, that's roughly 15 packets in flight - and that's over the ground (I picked the great-circle route, which is the shortest possible; actual fiber is probably at least 20%-50% longer). Joe Joe From touch at isi.edu Tue Nov 19 09:07:17 2013 From: touch at isi.edu (Joe Touch) Date: Tue, 19 Nov 2013 09:07:17 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <52869563.4010201@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> Message-ID: <528B9AC5.3010608@isi.edu> On 11/15/2013 1:42 PM, Detlef Bosau wrote: > Why do we need congestion at all? No, but only if you use circuits. But then you've pushed the "congestion" overload situation to the circuit setup time, and not all circuits will succeed. Or you could be omniscient ;-) Otherwise, you need to deal with the fact that sometimes two packets want to go to the same output port and can't, and you didn't find that out until they collided. Joe From dhc2 at dcrocker.net Tue Nov 19 10:10:11 2013 From: dhc2 at dcrocker.net (Dave Crocker) Date: Tue, 19 Nov 2013 10:10:11 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528B9AC5.3010608@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> Message-ID: <528BA983.9060208@dcrocker.net> On 11/19/2013 9:07 AM, Joe Touch wrote: > > > On 11/15/2013 1:42 PM, Detlef Bosau wrote: >> Why do we need congestion at all? > > No, but only if you use circuits. But then you've pushed the > "congestion" overload situation to the circuit setup time, and not all > circuits will succeed. > > Or you could be omniscient ;-) > > Otherwise, you need to deal with the fact that sometimes two packets > want to go to the same output port and can't, and you didn't find that > out until they collided. Given the complete generality of the question that was asked, is there something fundamentally deficient in the answer in: http://en.wikipedia.org/wiki/Congestion_control#Congestion_control ? In particular, I think it's opening sentence is quite reasonable. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From touch at isi.edu Tue Nov 19 10:15:23 2013 From: touch at isi.edu (Joe Touch) Date: Tue, 19 Nov 2013 10:15:23 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528BA958.9050203@bbiw.net> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> Message-ID: <528BAABB.90004@isi.edu> On 11/19/2013 10:09 AM, Dave Crocker wrote: > On 11/19/2013 9:07 AM, Joe Touch wrote: >> >> >> On 11/15/2013 1:42 PM, Detlef Bosau wrote: >>> Why do we need congestion at all? >> >> No, but only if you use circuits. But then you've pushed the >> "congestion" overload situation to the circuit setup time, and not all >> circuits will succeed. >> >> Or you could be omniscient ;-) >> >> Otherwise, you need to deal with the fact that sometimes two packets >> want to go to the same output port and can't, and you didn't find that >> out until they collided. > > > > Given the complete generality of the question that was asked, is there > something fundamentally deficient in the answer in: > > http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > > ? > > In particular, I think it's opening sentence is quite reasonable. I agree, but it jumps in assuming packets. Given packets, it's easy to assume that oversubscription is the natural consequence of avoiding congestion. But it isn't if you have a-priori known traffic patterns - as are increasingly common inside data centers, as well as for some past circuit use cases. I.e., the opening sentence assumes that all congestion control is reactive. It can be proactive given the right information. Joe From dhc2 at dcrocker.net Tue Nov 19 10:23:54 2013 From: dhc2 at dcrocker.net (Dave Crocker) Date: Tue, 19 Nov 2013 10:23:54 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528BAABB.90004@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> Message-ID: <528BACBA.7090909@dcrocker.net> On 11/19/2013 10:15 AM, Joe Touch wrote: > I.e., the opening sentence assumes that all congestion control is > reactive. It can be proactive given the right information. Proactive would be nice. We should put more effort into /creating/ congestion. Oh wait... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From touch at isi.edu Tue Nov 19 10:34:50 2013 From: touch at isi.edu (Joe Touch) Date: Tue, 19 Nov 2013 10:34:50 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528BACBA.7090909@dcrocker.net> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528BACBA.7090909@dcrocker.net> Message-ID: <528BAF4A.1030301@isi.edu> On 11/19/2013 10:23 AM, Dave Crocker wrote: > On 11/19/2013 10:15 AM, Joe Touch wrote: >> I.e., the opening sentence assumes that all congestion control is >> reactive. It can be proactive given the right information. > > > Proactive would be nice. We should put more effort into /creating/ > congestion. Oh wait... Circuits are proactive (when everyone uses them) - you indicate your needs in advance, and you use / don't use them based on the advance reservation. Joe From richard at bennett.com Tue Nov 19 11:59:06 2013 From: richard at bennett.com (Richard Bennett) Date: Tue, 19 Nov 2013 11:59:06 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528BAF4A.1030301@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528BACBA.7090909@dcrocker.net> <528BAF4A.1030301@isi.edu> Message-ID: <528BC30A.8030106@bennett.com> Admission Control is pro-active in the same way that circuits are, as is differential provisioning. Last mile broadband connections in the US have an average capacity of 30 Mbps while aggregation and transit links are much fatter, but the differential isn't enough to prevent congestion. On 11/19/2013 10:34 AM, Joe Touch wrote: > > > On 11/19/2013 10:23 AM, Dave Crocker wrote: >> On 11/19/2013 10:15 AM, Joe Touch wrote: >>> I.e., the opening sentence assumes that all congestion control is >>> reactive. It can be proactive given the right information. >> >> >> Proactive would be nice. We should put more effort into /creating/ >> congestion. Oh wait... > > Circuits are proactive (when everyone uses them) - you indicate your > needs in advance, and you use / don't use them based on the advance > reservation. > > Joe > -- Richard Bennett Visiting Fellow, American Enterprise Institute Center for Internet, Communications, and Technology Policy Editor, High Tech Forum (408) 829-4944 (mobile) (415) 967-2900 (office) From faber at isi.edu Tue Nov 19 19:14:41 2013 From: faber at isi.edu (Ted Faber) Date: Tue, 19 Nov 2013 19:14:41 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528BAABB.90004@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> Message-ID: <528C2921.6030002@isi.edu> On 11/19/2013 10:15, Joe Touch wrote: > On 11/19/2013 10:09 AM, Dave Crocker wrote: >> Given the complete generality of the question that was asked, is there >> something fundamentally deficient in the answer in: >> >> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control >> >> ? >> >> In particular, I think it's opening sentence is quite reasonable. > > I agree, but it jumps in assuming packets. Given packets, it's easy to > assume that oversubscription is the natural consequence of avoiding > congestion. Unless someone's edited it, you should read the first sentence again. I see: > Congestion control concerns controlling traffic entry into a telecommunications network, so as to avoid congestive collapse by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks and taking resource reducing steps, such as reducing the rate of sending packets. I read the reference to packets as an example. And I would end the sentence with "if necessary" to indicate that reducing resource utilization is done only when needed (which it wouldn't be in a non-oversubscribed system). Overall I think it reasonably encompasses proactive congestion control. > > But it isn't if you have a-priori known traffic patterns - as are > increasingly common inside data centers, as well as for some past > circuit use cases. There's a lot to explore, of course. When I was in graduate school some nut was doing congestion control keeping average sending rates constant and modulating the burstiness of the traffic by adjusting the time over which the average rate was policed. It's a fascinating field, in which one can find the trees attractive to the point of beaching the ship. The forest is at least as interesting, especially if one is interested in economics. Alas, similar to economics, it's also hard to do a real investigation at any scale anymore. -- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG From sergey.gorinsky at imdea.org Wed Nov 20 06:21:58 2013 From: sergey.gorinsky at imdea.org (Sergey Gorinsky) Date: Wed, 20 Nov 2013 15:21:58 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <528B45ED.60101@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> <5288C35B.7020508@web.de> <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> <528B436E.30809@web.de> <528B45ED.60101@web.de> Message-ID: Dear Detlef, If a sender chooses the D class for all its flows, each of its flows is served with small queuing delay at the bottleneck link (but with a smaller forwarding rate than the rate given to a throughput-greedy flow of another sender that chooses the R class). The RD design is work-conserving and does not cause underutilization. The partition of the bottleneck-link capacity between the two classes is dynamic and depends on the numbers of R and D flows. Trying to fit this into your metro-carriage analogy, one can think of the D class as quiet (low-delay) carriages and the R class as noisier (higher-throughput) carriages. The point is that some apps naturally prefer noisier carriages, i.e., a higher forwarding rate regardless of queuing delay. Charging differently for the R and D services would only distort the natural preferences of the customers. Thus, the RD service differentiation is not an issue of goodness vs. badness (or wealth vs. poverty) - it is just that some customers prefer noise and the others like quiet. Best regards, Sergey On 11/19/13 12:05 PM, "Detlef Bosau" wrote: > >> Sergey, >> >> what happen in your approach, when a sender chooses class "D" for all >> pakets and flows? >> > >Don't you achieve QoS by underutilization in your approach? > >So, while with metro pricing you make the crowds stay away from the >upper class carriage by the ticket price - while in your approach >the sender has to care for a quiet carriage by appropriate pre selection >- hence, the tickets can be offered for the same price? > >So, the selection is the same, only the bad guy who does the selection >changed the location? > >-- >------------------------------------------------------------------ >Detlef Bosau >Galileistra?e 30 >70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 >detlef.bosau at web.de http://www.detlef-bosau.de > From detlef.bosau at web.de Wed Nov 20 06:36:20 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 20 Nov 2013 15:36:20 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <528BA958.9050203@bbiw.net> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> Message-ID: <528CC8E4.4030100@web.de> Am 19.11.2013 19:09, schrieb Dave Crocker: > > Given the complete generality of the question that was asked, is there > something fundamentally deficient in the answer in: > > http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > > ? Yes. Particularly the separation of flow control and congestion control does not come naturally but is a definition. In a packet switching network a packet travels through a number of switching nodes. So you have a sequence of flow control problems. And not "a sender", some "nebulous path which may be congested", "a receiver with a flow control problem". Detlef -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Wed Nov 20 09:18:01 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 20 Nov 2013 18:18:01 +0100 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <52874A4D.2020403@web.de> <5287646E.7090909@web.de> <000001cee309$805b7010$81125030$@gorinsky@imdea.org> <5287ED0A.1060604@web.de> <000001cee37e$b4f67c00$1ee37400$@gorinsky@imdea.org> <5288C35B.7020508@web.de> <000e01cee3c9$141d76d0$3c586470$@gorinsky@imdea.org> <528B436E.30809@web.de> <528B45ED.60101@web.de> Message-ID: <528CEEC9.9020901@web.de> Am 20.11.2013 15:21, schrieb Sergey Gorinsky: > Dear Detlef, > > If a sender chooses the D class for all its flows, each of its flows is > served with small queuing delay at the bottleneck link (but with a smaller > forwarding rate than the rate given to a throughput-greedy flow of another > sender that chooses the R class). Do you really serve the two queues with different rates? Or am I confused between rate and throughput? > > The RD design is work-conserving and does not cause underutilization. Not physically. But the sender delivers the D class from some load by putting it into the R class. > The partition of the bottleneck-link capacity between the two classes is > dynamic and depends on the numbers of R and D flows. And when the D class alone does not utilize the link you have some kind of unterutilization ;-) > > Trying to fit this into your metro-carriage analogy, one can think of > the D class as quiet (low-delay) carriages and the R class as noisier > (higher-throughput) carriages. Yes, I thought so. > The point is that some apps naturally > prefer noisier carriages, i.e., a higher forwarding rate regardless of > queuing delay. Charging differently for the R and D services would only > distort the natural preferences of the customers. Thus, the RD service > differentiation is not an issue of goodness vs. badness (or wealth vs. > poverty) - it is just that some customers prefer noise and the others like > quiet. Hm. I don't think that customers _like_ noise. Nevertheless, they may distinguish their flows in ones which are more delay sensitive and others which are less. And you provide an opportunity to reflect this in your system. Is this correct? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From touch at isi.edu Wed Nov 20 10:15:41 2013 From: touch at isi.edu (Joe Touch) Date: Wed, 20 Nov 2013 10:15:41 -0800 Subject: [e2e] Fwd: Re: Question the other way round: In-Reply-To: <1384959448.479631159@apps.rackspace.com> References: <1384959448.479631159@apps.rackspace.com> Message-ID: <528CFC4D.4070709@isi.edu> Forwarded for David Reed. Joe (as list admin) -------- Original Message -------- Subject: Re: [e2e] Question the other way round: Date: Wed, 20 Nov 2013 09:57:28 -0500 (EST) From: dpreed at reed.com To: Ted Faber CC: end2end-interest at postel.org, "Joe Touch" (@Joe - admit if you like) In information theory, one characterizes multi-channel systems (such as networks) using a framing called "achievable rate region" (ARR). This is convex polytope in a space of P dimensions where P is the number of distinct pairs who might communicate. (The assumption that all dimensions are "independent" channels is a strong one - in real networks, traffic is either causally or probabilistically correlated by protocols at higher layers and events outside the system). Thus P grows as N**2, if N is the number of terminals of the network. The architecture of the network affects the ARR, whether it is a wireless network or a wired one. What's useful about this framing is that it points out (because the ARR is a convex polytope) that there is no single optimum, but there are many, many reasonable operating points inside the polytope. What is achievable may not be achieved. Since it is a "rate" region, it doesn't give you any information about the evolution of the optimal operating point over time - how fast can the system migrate from one point to another in the achievable space. (control latencies are crucial here). The other useful thing about this framing is that given a set of desired messages with latency requirements, one could in principle develop an algorithm that uses the polytope (which can be described by its vertices) to calculate a set of "operating points" that merely need be selected based on the traffic presented. As in the standard "linear programming" algorithm, the solution is always choose a vertex. The list of vertices is small compared to the entire ARR, and no other operating points are better, so one merely does a lookup of a vertex that satisfies all requirements, and configure the network to use that routing setup. But there is a catch. You have to have some algorithm that knows the acceptable rates to deliver to each pair. Unless you look at the forest, rather than the little tiny trees (the switches) you can't do very much. In particular, if you have recalculated routing tables, your ARR shrinks from a polytope to a much smaller one. If you have buffering (especially large buffering), you can't move from one vertex in the ARR to another vertex until the buffered packets have made it out of the system (which means that there is an Achievable Slew Rate between operating configurations of the system). In practice, what this all means is that while we can fully describe congestion control, anyone who claims they have a practical, optimal algorithm has either tailored the assumptions so that their algorithm has an optimum in that special case, or is claiming that "all uses" of the network share a common characteristic. The latter claim is a complete hand-wave without data. The idea that a datacenter (like the AWS) have "predictable" traffic patterns is truly weird. At the end of the day, AWS serves all comers, and deals with incredible (and fractal) demands based on end user interactions. And of course, they have computers, storage, and application programs that behave unpredictably. *Even in datacenters* - which is what I've spent the last few years working to improve, so I have a lot of experience - there is little or no predictability, and huge potential for congestion. We see it every day in typical data centers on the fabrics that are shared and multiplexed. There are a few principles that seem to work: 1) *Don't allow buffering in the network*. Buffer at the edges. Otherwise the ability to shift to a new operating point becomes so slow that the system is always operating in a pessimal state. 2) Keep number of hops short. 3) Try to keep the rates "balanced", which in practice means that all links of the network should all be capable of operating within a few dB of magnitude of the same rate. (e.g. don't mix 1 Mb links with 40Gb links on a switch). 4) Don't expect to operate at anything like the performance at a corner of the ARR. Because you can't change the operating point to another corner quickly, and the intermediate operating points build up buffering in the network, and force it to operate far from the edges of the ARR. On Tuesday, November 19, 2013 10:14pm, "Ted Faber" said: > On 11/19/2013 10:15, Joe Touch wrote: > > On 11/19/2013 10:09 AM, Dave Crocker wrote: > >> Given the complete generality of the question that was asked, is there > >> something fundamentally deficient in the answer in: > >> > >> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > >> > >> ? > >> > >> In particular, I think it's opening sentence is quite reasonable. > > > > I agree, but it jumps in assuming packets. Given packets, it's easy to > > assume that oversubscription is the natural consequence of avoiding > > congestion. > > Unless someone's edited it, you should read the first sentence again. I > see: > > > Congestion control concerns controlling traffic entry into a > telecommunications network, so as to avoid congestive collapse by attempting to > avoid oversubscription of any of the processing or link capabilities of the > intermediate nodes and networks and taking resource reducing steps, such as > reducing the rate of sending packets. > > I read the reference to packets as an example. And I would end the > sentence with "if necessary" to indicate that reducing resource > utilization is done only when needed (which it wouldn't be in a > non-oversubscribed system). Overall I think it reasonably encompasses > proactive congestion control. > > > > > But it isn't if you have a-priori known traffic patterns - as are > > increasingly common inside data centers, as well as for some past > > circuit use cases. > > There's a lot to explore, of course. When I was in graduate school some > nut was doing congestion control keeping average sending rates constant > and modulating the burstiness of the traffic by adjusting the time over > which the average rate was policed. > > It's a fascinating field, in which one can find the trees attractive to > the point of beaching the ship. The forest is at least as interesting, > especially if one is interested in economics. Alas, similar to > economics, it's also hard to do a real investigation at any scale anymore. > > -- > Ted Faber > http://www.isi.edu/~faber PGP: > http://www.isi.edu/~faber/pubkeys.asc > Unexpected attachment on this mail? See > http://www.isi.edu/~faber/FAQ.html#SIG > > From touch at isi.edu Wed Nov 20 10:23:17 2013 From: touch at isi.edu (Joe Touch) Date: Wed, 20 Nov 2013 10:23:17 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528C2921.6030002@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> Message-ID: <528CFE15.7070808@isi.edu> On 11/19/2013 7:14 PM, Ted Faber wrote: > On 11/19/2013 10:15, Joe Touch wrote: >> On 11/19/2013 10:09 AM, Dave Crocker wrote: >>> Given the complete generality of the question that was asked, is there >>> something fundamentally deficient in the answer in: >>> >>> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control >>> >>> ? >>> >>> In particular, I think it's opening sentence is quite reasonable. >> >> I agree, but it jumps in assuming packets. Given packets, it's easy to >> assume that oversubscription is the natural consequence of avoiding >> congestion. > > Unless someone's edited it, you should read the first sentence again. I > see: > >> Congestion control concerns controlling traffic entry into a >> telecommunications network, so as to avoid congestive collapse by >> attempting to avoid oversubscription of any of the processing or link >> capabilities of the intermediate nodes and networks and taking resource >> reducing steps, such as reducing the rate of sending packets. > > I read the reference to packets as an example. Me too. But circuits don't have a collapse or oversubscription. They simply reject calls that aren't compatible with available capacity. I'm not disagreeing with the definition; I'm disagreeing with the assumption that having a network implies congestion and thus the need for congestion control. There are a variety of mechanisms that avoid congestion, typically by a-priori reservation (circuits), or by limiting resource use implicitly (e.g., ischemic control). These are a kind of proactive control that avoid congestion in the first place. That's not to say whether these mechanisms are scalable or efficient compared to the resource sharing afforded by packet multiplexing. Joe From touch at isi.edu Wed Nov 20 10:25:05 2013 From: touch at isi.edu (Joe Touch) Date: Wed, 20 Nov 2013 10:25:05 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528BC30A.8030106@bennett.com> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528BACBA.7090909@dcrocker.net> <528BAF4A.1030301@isi.edu> <528BC30A.8030106@bennett.com> Message-ID: <528CFE81.5090406@isi.edu> On 11/19/2013 11:59 AM, Richard Bennett wrote: > Admission Control is pro-active in the same way that circuits are, as is > differential provisioning. I'm not sure I agree; packet-based admission control is statistically similar to circuit switching, but can still statistically fail. > Last mile broadband connections in the US > have an average capacity of 30 Mbps while aggregation and transit links > are much fatter, but the differential isn't enough to prevent congestion. That's why it's not the same as circuits. Joe > > On 11/19/2013 10:34 AM, Joe Touch wrote: >> >> >> On 11/19/2013 10:23 AM, Dave Crocker wrote: >>> On 11/19/2013 10:15 AM, Joe Touch wrote: >>>> I.e., the opening sentence assumes that all congestion control is >>>> reactive. It can be proactive given the right information. >>> >>> >>> Proactive would be nice. We should put more effort into /creating/ >>> congestion. Oh wait... >> >> Circuits are proactive (when everyone uses them) - you indicate your >> needs in advance, and you use / don't use them based on the advance >> reservation. >> >> Joe >> > From faber at isi.edu Wed Nov 20 11:13:40 2013 From: faber at isi.edu (Ted Faber) Date: Wed, 20 Nov 2013 11:13:40 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528CFE15.7070808@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> Message-ID: <528D09E4.5020103@isi.edu> On 11/20/2013 10:23, Joe Touch wrote: > > > On 11/19/2013 7:14 PM, Ted Faber wrote: >> Unless someone's edited it, you should read the first sentence again. I >> see: >> >>> Congestion control concerns controlling traffic entry into a >>> telecommunications network, so as to avoid congestive collapse by >>> attempting to avoid oversubscription of any of the processing or link >>> capabilities of the intermediate nodes and networks and taking resource >>> reducing steps, such as reducing the rate of sending packets. >> >> I read the reference to packets as an example. > > Me too. > > But circuits don't have a collapse or oversubscription. They simply > reject calls that aren't compatible with available capacity. > > I'm not disagreeing with the definition; I'm disagreeing with the > assumption that having a network implies congestion and thus the need > for congestion control. > > There are a variety of mechanisms that avoid congestion, typically by > a-priori reservation (circuits), or by limiting resource use implicitly > (e.g., ischemic control). These are a kind of proactive control that > avoid congestion in the first place. A agree with those facts. A purist that wants to assert that all networks have congestion control would say that your admission control or ischemic control is a proactive centralized congestion control. I'm not of a purist to have that fight. I do think that it's worth noting that even the admission control problem gets difficult with scale quickly. An admission control that beings calls in slowly enough looks a lot like congestion collapse to the person trying to make a call. And there are very few pure reservation systems of any scale in the world. As you know, phone system circuits are heavily multiplexed and violating the assumptions that underlie those systems can congest them, circuits or no. > > That's not to say whether these mechanisms are scalable or efficient > compared to the resource sharing afforded by packet multiplexing. It's the same old thing. Pre-book your resources and underuse them or overbook and deal with contention. What makes congestion control an interesting endeavour from my perspective is doing that job in the presence of large scale and imperfect information. Economists will say that I've made the problem too easy by only dealing with info transfer which allows some simplifying assumptions. -- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG From detlef.bosau at web.de Wed Nov 20 14:04:00 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 20 Nov 2013 23:04:00 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <528BAABB.90004@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> Message-ID: <528D31D0.80205@web.de> Am 19.11.2013 19:15, schrieb Joe Touch: > > > On 11/19/2013 10:09 AM, Dave Crocker wrote: >> On 11/19/2013 9:07 AM, Joe Touch wrote: >>> >>> >>> On 11/15/2013 1:42 PM, Detlef Bosau wrote: >>>> Why do we need congestion at all? >>> >>> No, but only if you use circuits. But then you've pushed the >>> "congestion" overload situation to the circuit setup time, and not all >>> circuits will succeed. >>> >>> Or you could be omniscient ;-) >>> >>> Otherwise, you need to deal with the fact that sometimes two packets >>> want to go to the same output port and can't, and you didn't find that >>> out until they collided. >> >> >> >> Given the complete generality of the question that was asked, is there >> something fundamentally deficient in the answer in: >> >> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control >> >> ? >> >> In particular, I think it's opening sentence is quite reasonable. > > I agree, but it jumps in assuming packets. Given packets, it's easy to > assume that oversubscription is the natural consequence of avoiding > congestion. > What is "oversubscription" all about in a - connectionless - reservationless network? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Wed Nov 20 14:22:42 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 20 Nov 2013 23:22:42 +0100 Subject: [e2e] Fwd: Re: Question the other way round: In-Reply-To: <528CFC4D.4070709@isi.edu> References: <1384959448.479631159@apps.rackspace.com> <528CFC4D.4070709@isi.edu> Message-ID: <528D3632.5010801@web.de> Am 20.11.2013 19:15, schrieb Joe Touch: > Forwarded for David Reed. > > ..... > > There are a few principles that seem to work: > > 1) *Don't allow buffering in the network*. Buffer at the edges. > Otherwise the ability to shift to a new operating point becomes so > slow that the system is always operating in a pessimal state. > We hat quite some discussions on this one off list. What's the purpose of buffering at all? - allowing for asynchronism (that even covers varying data rates on the link.) Hence, in my opinion it is a local, link dependent decision whether to buffer or not. > 2) Keep number of hops short. > Obvious. > 3) Try to keep the rates "balanced", which in practice means that all > links of the network should all be capable of operating within a few dB > of magnitude of the same rate. (e.g. don't mix 1 Mb links with 40Gb > links on a switch). > You're kidding. AFAIK, net data rates along arbitrary Internet paths vary by up to 9 or 10 orders of magnitude. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Wed Nov 20 14:44:24 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 20 Nov 2013 23:44:24 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <528CFE15.7070808@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> Message-ID: <528D3B48.7010803@web.de> Am 20.11.2013 19:23, schrieb Joe Touch: >> I read the reference to packets as an example. > > Me too. > > But circuits don't have a collapse or oversubscription. They simply > reject calls that aren't compatible with available capacity. > > I'm not disagreeing with the definition; I'm disagreeing with the > assumption that having a network implies congestion and thus the need > for congestion control. And I'm agreeing with you here. I'm convinced that a proper resource management could simply spare us the whole (as you rightly say: reactive) congestion control. > > There are a variety of mechanisms that avoid congestion, typically by > a-priori reservation (circuits), or by limiting resource use > implicitly (e.g., ischemic control). These are a kind of proactive > control that avoid congestion in the first place. > > That's not to say whether these mechanisms are scalable or efficient > compared to the resource sharing afforded by packet multiplexing. > However, it is worthwhile considering these approaches. In a sense, VJCC (and derivatives) is a kludge to work around the missing resource management in TCP in RFC 793. Resource management does not necessarily mean "resource reservation" or "admission control" - but in best effort networks it should provide for resource sharing / distribution and proper scheduling. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Wed Nov 20 14:52:21 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 20 Nov 2013 23:52:21 +0100 Subject: [e2e] Question the other way round: In-Reply-To: <528D09E4.5020103@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> <528D09E4.5020103@isi.edu> Message-ID: <528D3D25.90509@web.de> Am 20.11.2013 20:13, schrieb Ted Faber: > > > I'm not disagreeing with the definition; I'm disagreeing with the > assumption that having a network implies congestion and thus the need > for congestion control. Me too. And I'm not comfortable with approaches like BIC and the need of huge buffers for the ONLY purpose of providing sufficiently large sending windows to make a TCP stream clock itself according to a long, fat network like a Gbit/s link on a 1000 km fibre. Providing buffer and using probing to assess a KNOWN throughput on a line, please excuse me, but this is simply foolish. That can be done in a better way. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From faber at isi.edu Wed Nov 20 15:04:30 2013 From: faber at isi.edu (Ted Faber) Date: Wed, 20 Nov 2013 15:04:30 -0800 Subject: [e2e] Question the other way round: In-Reply-To: References: Message-ID: <528D3FFE.7060106@isi.edu> On 11/20/2013 14:50, Ivancic, William D. (GRC-RHN0) wrote: >> It's the same old thing. Pre-book your resources and underuse them or >> overbook and deal with contention. > > The Airlines overbook all the time. Hopefully I am not the one dealing > with the contention. Usually someone else is willing to get paid off - > their time value is apparently less then mine. So here is an economics > example. Exactly so. It can be illuminating to apply networking solutions to those kinds of resources. If the airlines used leaky buckets to decide which flyers to bump, bursty flyers would be discriminated against. Thinking about airlines is nice in that it does give the flavor of some network congestion issues. For example, an airline might choose to offer people with more connections more money to drop out of the system early in the hopes of reducing overall contention. I'm sure you can think of more. The Internet is more interesting in that there are many more legs and passengers and much less information at a given airport about where the passengers are going. And that's just drop policy, which is a corner of the congestion control problem. -- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG From richard at bennett.com Wed Nov 20 15:58:07 2013 From: richard at bennett.com (Richard Bennett) Date: Wed, 20 Nov 2013 15:58:07 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528D3FFE.7060106@isi.edu> References: <528D3FFE.7060106@isi.edu> Message-ID: <528D4C8F.7060703@bennett.com> Before the IP hegemony took hold, corporate and other private networks used to deal with congested interior links by sending low priority traffic over less optimal paths that had some excess capacity rather than by dropping packets. We called this system "dynamic load balancing". I don't think it's ever been popular with the Internet crowd. On 11/20/2013 3:04 PM, Ted Faber wrote: > On 11/20/2013 14:50, Ivancic, William D. (GRC-RHN0) wrote: >>> It's the same old thing. Pre-book your resources and underuse them or >>> overbook and deal with contention. >> The Airlines overbook all the time. Hopefully I am not the one dealing >> with the contention. Usually someone else is willing to get paid off - >> their time value is apparently less then mine. So here is an economics >> example. > Exactly so. It can be illuminating to apply networking solutions to > those kinds of resources. If the airlines used leaky buckets to decide > which flyers to bump, bursty flyers would be discriminated against. > > Thinking about airlines is nice in that it does give the flavor of some > network congestion issues. For example, an airline might choose to > offer people with more connections more money to drop out of the system > early in the hopes of reducing overall contention. I'm sure you can > think of more. > > The Internet is more interesting in that there are many more legs and > passengers and much less information at a given airport about where the > passengers are going. And that's just drop policy, which is a corner of > the congestion control problem. > -- Richard Bennett Visiting Fellow, American Enterprise Institute Center for Internet, Communications, and Technology Policy Editor, High Tech Forum (408) 829-4944 (mobile) (415) 967-2900 (office) From touch at isi.edu Wed Nov 20 16:12:02 2013 From: touch at isi.edu (Joe Touch) Date: Wed, 20 Nov 2013 16:12:02 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528D31D0.80205@web.de> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528D31D0.80205@web.de> Message-ID: <528D4FD2.2070809@isi.edu> On 11/20/2013 2:04 PM, Detlef Bosau wrote: > Am 19.11.2013 19:15, schrieb Joe Touch: ... >> I agree, but it jumps in assuming packets. Given packets, it's easy to >> assume that oversubscription is the natural consequence of avoiding >> congestion. >> > > What is "oversubscription" all about in a > - connectionless > - reservationless > network? Oversubscription is when you have more requests than you can handle. A packet in a connectionless, reservationless system is its own subscription request. Joe From fred at cisco.com Wed Nov 20 18:00:46 2013 From: fred at cisco.com (Fred Baker (fred)) Date: Thu, 21 Nov 2013 02:00:46 +0000 Subject: [e2e] Question the other way round: In-Reply-To: <528D4C8F.7060703@bennett.com> References: <528D3FFE.7060106@isi.edu> <528D4C8F.7060703@bennett.com> Message-ID: On Nov 20, 2013, at 3:58 PM, Richard Bennett wrote: > Before the IP hegemony took hold, corporate and other private networks used to deal with congested interior links by sending low priority traffic over less optimal paths that had some excess capacity rather than by dropping packets. We called this system "dynamic load balancing". I don't think it's ever been popular with the Internet crowd. I wouldn't bet on that. In this community it's often called the "fish problem", in the sense that there may be multiple paths from A to B, and the reasons that traffic takes one path or another are matters of policy. Look through the RFC series for specifications for Multi-Topology Routing. It's also done using MPLS in service provider networks - all the traffic from a given AS to some other AS uses a given LSP, and the LSP is routed in accordance with the contracts with those ISPs. > On 11/20/2013 3:04 PM, Ted Faber wrote: >> On 11/20/2013 14:50, Ivancic, William D. (GRC-RHN0) wrote: >>>> It's the same old thing. Pre-book your resources and underuse them or >>>> overbook and deal with contention. >>> The Airlines overbook all the time. Hopefully I am not the one dealing >>> with the contention. Usually someone else is willing to get paid off - >>> their time value is apparently less then mine. So here is an economics >>> example. >> Exactly so. It can be illuminating to apply networking solutions to >> those kinds of resources. If the airlines used leaky buckets to decide >> which flyers to bump, bursty flyers would be discriminated against. >> >> Thinking about airlines is nice in that it does give the flavor of some >> network congestion issues. For example, an airline might choose to >> offer people with more connections more money to drop out of the system >> early in the hopes of reducing overall contention. I'm sure you can >> think of more. >> >> The Internet is more interesting in that there are many more legs and >> passengers and much less information at a given airport about where the >> passengers are going. And that's just drop policy, which is a corner of >> the congestion control problem. >> > > -- > Richard Bennett > Visiting Fellow, American Enterprise Institute > Center for Internet, Communications, and Technology Policy > Editor, High Tech Forum > (408) 829-4944 (mobile) > (415) 967-2900 (office) > ? Make things as simple as possible, but not simpler. Albert Einstein From Jon.Crowcroft at cl.cam.ac.uk Wed Nov 20 23:20:46 2013 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Thu, 21 Nov 2013 07:20:46 +0000 Subject: [e2e] Question the other way round: In-Reply-To: <528D4C8F.7060703@bennett.com> References: <528D3FFE.7060106@isi.edu> <528D4C8F.7060703@bennett.com> Message-ID: you might want to look at DAR too which was deployed in a few networks http://www.statslab.cam.ac.uk/~frank/DAR/ In missive <528D4C8F.7060703 at bennett.com>, Richard Bennett typed: >>Before the IP hegemony took hold, corporate and other private networks >>used to deal with congested interior links by sending low priority >>traffic over less optimal paths that had some excess capacity rather >>than by dropping packets. We called this system "dynamic load >>balancing". I don't think it's ever been popular with the Internet crowd. >> >>On 11/20/2013 3:04 PM, Ted Faber wrote: >>> On 11/20/2013 14:50, Ivancic, William D. (GRC-RHN0) wrote: >>>>> It's the same old thing. Pre-book your resources and underuse them or >>>>> overbook and deal with contention. >>>> The Airlines overbook all the time. Hopefully I am not the one dealing >>>> with the contention. Usually someone else is willing to get paid off - >>>> their time value is apparently less then mine. So here is an economics >>>> example. >>> Exactly so. It can be illuminating to apply networking solutions to >>> those kinds of resources. If the airlines used leaky buckets to decide >>> which flyers to bump, bursty flyers would be discriminated against. >>> >>> Thinking about airlines is nice in that it does give the flavor of some >>> network congestion issues. For example, an airline might choose to >>> offer people with more connections more money to drop out of the system >>> early in the hopes of reducing overall contention. I'm sure you can >>> think of more. >>> >>> The Internet is more interesting in that there are many more legs and >>> passengers and much less information at a given airport about where the >>> passengers are going. And that's just drop policy, which is a corner of >>> the congestion control problem. >>> >> >>-- >>Richard Bennett >>Visiting Fellow, American Enterprise Institute >>Center for Internet, Communications, and Technology Policy >>Editor, High Tech Forum >>(408) 829-4944 (mobile) >>(415) 967-2900 (office) >> cheers jon From Jon.Crowcroft at cl.cam.ac.uk Wed Nov 20 23:29:41 2013 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Thu, 21 Nov 2013 07:29:41 +0000 Subject: [e2e] Question the other way round: In-Reply-To: <528CFE15.7070808@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> Message-ID: i think we're mixing up two discussions here 1. congestion was the original cause of the cwnd mech in tcp, BUT the rate adaption using feedback as a way to distributed resource allocation is the solution of the optimisation problem of net + user addressed by several researchers (kelly/voice et al, also folks at caltech) - these aren't the same thing - they got conflated in protocols in practice because we couldn't get ECN out there completely (yet) - ECN (when implemented with some decent queue (see 3 below) can be part of an efficient decentralised rate allocation congestion is bad - avoiding it is good distributed rate allocation for flows that have increasing utility for higher rate transfer is also good (actually its betterer:) 2. for flows that have an a priori known rate, distributed rate allocation is a daft idea, a priori - so some sort of admission control for the flow seems better (but you can do probe/measurement based admission control if you like, and are allergic to complex signaling protocols) 3. orthogonal to both 1&2 is policing and fairness - flow state means you can do somewhat better in fairness for 1 (e.g. do fair queus, a la keshav), and a lot better for policing for 2... but then we've been round the best effort, integrated service, differentated service, core stateless fair queue, probe based admission control, ecn, pcn loop about 6 times since this list existed:) yes, to detlef's original point, causing congestion (and buffer overrun) to find out the rate is a bit of a sad story... In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: >> >> >>On 11/19/2013 7:14 PM, Ted Faber wrote: >>> On 11/19/2013 10:15, Joe Touch wrote: >>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: >>>>> Given the complete generality of the question that was asked, is there >>>>> something fundamentally deficient in the answer in: >>>>> >>>>> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control >>>>> >>>>> ? >>>>> >>>>> In particular, I think it's opening sentence is quite reasonable. >>>> >>>> I agree, but it jumps in assuming packets. Given packets, it's easy to >>>> assume that oversubscription is the natural consequence of avoiding >>>> congestion. >>> >>> Unless someone's edited it, you should read the first sentence again. I >>> see: >>> >>>> Congestion control concerns controlling traffic entry into a >>>> telecommunications network, so as to avoid congestive collapse by >>>> attempting to avoid oversubscription of any of the processing or link >>>> capabilities of the intermediate nodes and networks and taking resource >>>> reducing steps, such as reducing the rate of sending packets. >>> >>> I read the reference to packets as an example. >> >>Me too. >> >>But circuits don't have a collapse or oversubscription. They simply >>reject calls that aren't compatible with available capacity. >> >>I'm not disagreeing with the definition; I'm disagreeing with the >>assumption that having a network implies congestion and thus the need >>for congestion control. >> >>There are a variety of mechanisms that avoid congestion, typically by >>a-priori reservation (circuits), or by limiting resource use implicitly >>(e.g., ischemic control). These are a kind of proactive control that >>avoid congestion in the first place. >> >>That's not to say whether these mechanisms are scalable or efficient >>compared to the resource sharing afforded by packet multiplexing. >> >>Joe cheers jon From touch at isi.edu Thu Nov 21 09:55:40 2013 From: touch at isi.edu (Joe Touch) Date: Thu, 21 Nov 2013 09:55:40 -0800 Subject: [e2e] Question the other way round: In-Reply-To: <528CFE15.7070808@isi.edu> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> Message-ID: <528E491C.3070803@isi.edu> Correction. It's not ischemic; it's isarithmic. I do tend to confuse the two words (the hazards of one too many biology classes). Thanks to Lars Wolf for recalling it correctly. It is applicable only in very specific topologies, but it's one of the 'odd' examples I'm fond of. Joe On 11/20/2013 10:23 AM, Joe Touch wrote: > > > On 11/19/2013 7:14 PM, Ted Faber wrote: >> On 11/19/2013 10:15, Joe Touch wrote: >>> On 11/19/2013 10:09 AM, Dave Crocker wrote: >>>> Given the complete generality of the question that was asked, is there >>>> something fundamentally deficient in the answer in: >>>> >>>> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control >>>> >>>> ? >>>> >>>> In particular, I think it's opening sentence is quite reasonable. >>> >>> I agree, but it jumps in assuming packets. Given packets, it's easy to >>> assume that oversubscription is the natural consequence of avoiding >>> congestion. >> >> Unless someone's edited it, you should read the first sentence again. I >> see: >> >>> Congestion control concerns controlling traffic entry into a >>> telecommunications network, so as to avoid congestive collapse by >>> attempting to avoid oversubscription of any of the processing or link >>> capabilities of the intermediate nodes and networks and taking resource >>> reducing steps, such as reducing the rate of sending packets. >> >> I read the reference to packets as an example. > > Me too. > > But circuits don't have a collapse or oversubscription. They simply > reject calls that aren't compatible with available capacity. > > I'm not disagreeing with the definition; I'm disagreeing with the > assumption that having a network implies congestion and thus the need > for congestion control. > > There are a variety of mechanisms that avoid congestion, typically by > a-priori reservation (circuits), or by limiting resource use implicitly > (e.g., ischemic control). These are a kind of proactive control that > avoid congestion in the first place. > > That's not to say whether these mechanisms are scalable or efficient > compared to the resource sharing afforded by packet multiplexing. > > Joe From touch at isi.edu Thu Nov 21 09:56:33 2013 From: touch at isi.edu (Joe Touch) Date: Thu, 21 Nov 2013 09:56:33 -0800 Subject: [e2e] Fwd: Re: Question the other way round: In-Reply-To: <1385050900.458129083@apps.rackspace.com> References: <1385050900.458129083@apps.rackspace.com> Message-ID: <528E4951.8070109@isi.edu> Forwarded for David Reed. Joe (list admin) -------- Original Message -------- Subject: Re: [e2e] Question the other way round: Date: Thu, 21 Nov 2013 11:21:40 -0500 (EST) From: dpreed at reed.com To: Jon Crowcroft CC: Joe Touch , jon.crowcroft at cl.cam.ac.uk, Ted Faber , end2end-interest at postel.org (please forward, Joe, if this is OK) We don't actually cause congestion to discover the rate, Jon. Typically, we try to build networks that have adequate capacity (factors of 10 or 100 are needed for things like the "Mother's Day" effect, or 9/11-scale community need to spread and filter news quickly.) We encounter congestion rarely - and we fix it by building in "factors of safety" in every portion of an underlying network. Only Ph.D. theses spend an enormous amount of effort on the totally congested "corner cases". It's like a little puzzle that is easy to state, easy to solve, and makes the solver work hard. It's kind of like a "rite of passage", so that is good, I guess. But if you are building a datacenter (AWS) or an access network or a transport network, you build for the worst case, and expect it to happen rarely. The systems that depend on the network to actually work for people's needs never want a congested network, and don't actually want the network to operate at its local minimum cost/bit/sec. They want the network to never be in the way, and the cost they really care about is the cost of getting congested for the wrong reasons. On Thursday, November 21, 2013 2:29am, "Jon Crowcroft" said: > i think we're mixing up two discussions here > > 1. congestion was the original cause of the cwnd mech in tcp, BUT the > rate adaption using feedback as a way to distributed resource > allocation is the solution of the optimisation problem of net + user > addressed by several researchers (kelly/voice et al, also folks at > caltech) - these aren't the same thing - they got conflated in > protocols in practice because we couldn't get ECN out there completely > (yet) - ECN (when implemented with some decent queue (see 3 below) can > be part of an efficient decentralised rate allocation > > congestion is bad - avoiding it is good > > distributed rate allocation for flows > that have increasing utility for higher rate transfer > is also good (actually its betterer:) > > 2. for flows that have an a priori known rate, distributed rate > allocation is a daft idea, a priori - so some sort of admission > control for the flow seems better (but you can do probe/measurement > based admission control if you like, and are allergic to complex > signaling protocols) > > 3. orthogonal to both 1&2 is policing and fairness - flow state means you > can do somewhat better in fairness for 1 (e.g. do fair queus, a la > keshav), and a lot better for policing for 2... > > but then we've been round the best effort, integrated service, > differentated service, core stateless fair queue, probe based > admission control, ecn, pcn loop about 6 times since this list > existed:) > > yes, to detlef's original point, causing congestion (and buffer > overrun) to find out the rate is a bit of a sad story... > > In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: > > >> > >> > >>On 11/19/2013 7:14 PM, Ted Faber wrote: > >>> On 11/19/2013 10:15, Joe Touch wrote: > >>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: > >>>>> Given the complete generality of the question that was > asked, is there > >>>>> something fundamentally deficient in the answer in: > >>>>> > >>>>> > http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > >>>>> > >>>>> ? > >>>>> > >>>>> In particular, I think it's opening sentence is quite > reasonable. > >>>> > >>>> I agree, but it jumps in assuming packets. Given packets, it's > easy to > >>>> assume that oversubscription is the natural consequence of > avoiding > >>>> congestion. > >>> > >>> Unless someone's edited it, you should read the first sentence > again. I > >>> see: > >>> > >>>> Congestion control concerns controlling traffic entry into a > >>>> telecommunications network, so as to avoid congestive collapse > by > >>>> attempting to avoid oversubscription of any of the processing or > link > >>>> capabilities of the intermediate nodes and networks and taking > resource > >>>> reducing steps, such as reducing the rate of sending packets. > >>> > >>> I read the reference to packets as an example. > >> > >>Me too. > >> > >>But circuits don't have a collapse or oversubscription. They simply > >>reject calls that aren't compatible with available capacity. > >> > >>I'm not disagreeing with the definition; I'm disagreeing with the > >>assumption that having a network implies congestion and thus the need > >>for congestion control. > >> > >>There are a variety of mechanisms that avoid congestion, typically by > >>a-priori reservation (circuits), or by limiting resource use implicitly > >>(e.g., ischemic control). These are a kind of proactive control that > >>avoid congestion in the first place. > >> > >>That's not to say whether these mechanisms are scalable or efficient > >>compared to the resource sharing afforded by packet multiplexing. > >> > >>Joe > > cheers > > jon > > From jon.crowcroft at cl.cam.ac.uk Fri Nov 22 01:06:24 2013 From: jon.crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Fri, 22 Nov 2013 09:06:24 +0000 Subject: [e2e] Question the other way round: In-Reply-To: <1385050900.458129083@apps.rackspace.com> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> <1385050900.458129083@apps.rackspace.com> Message-ID: actually, tcp does precisely that (in the absence of smart virtual queues + ecn) and the deployment of DCTCP in data centers (which puts ECN and virtual quees, and modifies the congestion window evolution of TCP) is precisely because incast (and other problems in big data center computations) are not rare events, but are caused by applications' traffic patterns almost pathologically... yes, we would like to design networks to make such things rare and a lot of the topology and capacity planning in networks tries to do this, but a) the evolution of applications is faster than even the best laid(*) update/upgrade plans of the best data center and intranet planners (and that was the POINT of making the internet an open platform for fast innovation) and b) the internet-at-large is not planned - its an evolved thing of a wibbly wobbly organic kind (cue 50th anniversary dr who music) having spent the last couple of years staring at a few real-world data center traffic traces, I wonder if they aren't also unplanned (slightly pregnant pause)....after all "azure" == blue sky right ? :-) cheers jon to paraphrase spike milligna, "a plan so cunningly laid that no matter where you stood, it got under your feet" On Thu, Nov 21, 2013 at 4:21 PM, wrote: > (please forward, Joe, if this is OK) > > > > We don't actually cause congestion to discover the rate, Jon. Typically, > we try to build networks that have adequate capacity (factors of 10 or 100 > are needed for things like the "Mother's Day" effect, or 9/11-scale > community need to spread and filter news quickly.) > > > > We encounter congestion rarely - and we fix it by building in "factors of > safety" in every portion of an underlying network. > > > > Only Ph.D. theses spend an enormous amount of effort on the totally > congested "corner cases". It's like a little puzzle that is easy to state, > easy to solve, and makes the solver work hard. It's kind of like a "rite > of passage", so that is good, I guess. > > > > But if you are building a datacenter (AWS) or an access network or a > transport network, you build for the worst case, and expect it to happen > rarely. The systems that depend on the network to actually work for > people's needs never want a congested network, and don't actually want the > network to operate at its local minimum cost/bit/sec. They want the > network to never be in the way, and the cost they really care about is the > cost of getting congested for the wrong reasons. > > > > > > > > > > On Thursday, November 21, 2013 2:29am, "Jon Crowcroft" < > Jon.Crowcroft at cl.cam.ac.uk> said: > > > i think we're mixing up two discussions here > > > > 1. congestion was the original cause of the cwnd mech in tcp, BUT the > > rate adaption using feedback as a way to distributed resource > > allocation is the solution of the optimisation problem of net + user > > addressed by several researchers (kelly/voice et al, also folks at > > caltech) - these aren't the same thing - they got conflated in > > protocols in practice because we couldn't get ECN out there completely > > (yet) - ECN (when implemented with some decent queue (see 3 below) can > > be part of an efficient decentralised rate allocation > > > > congestion is bad - avoiding it is good > > > > distributed rate allocation for flows > > that have increasing utility for higher rate transfer > > is also good (actually its betterer:) > > > > 2. for flows that have an a priori known rate, distributed rate > > allocation is a daft idea, a priori - so some sort of admission > > control for the flow seems better (but you can do probe/measurement > > based admission control if you like, and are allergic to complex > > signaling protocols) > > > > 3. orthogonal to both 1&2 is policing and fairness - flow state means you > > can do somewhat better in fairness for 1 (e.g. do fair queus, a la > > keshav), and a lot better for policing for 2... > > > > but then we've been round the best effort, integrated service, > > differentated service, core stateless fair queue, probe based > > admission control, ecn, pcn loop about 6 times since this list > > existed:) > > > > yes, to detlef's original point, causing congestion (and buffer > > overrun) to find out the rate is a bit of a sad story... > > > > In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: > > > > >> > > >> > > >>On 11/19/2013 7:14 PM, Ted Faber wrote: > > >>> On 11/19/2013 10:15, Joe Touch wrote: > > >>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: > > >>>>> Given the complete generality of the question that was > > asked, is there > > >>>>> something fundamentally deficient in the answer in: > > >>>>> > > >>>>> > > http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > > >>>>> > > >>>>> ? > > >>>>> > > >>>>> In particular, I think it's opening sentence is quite > > reasonable. > > >>>> > > >>>> I agree, but it jumps in assuming packets. Given packets, it's > > easy to > > >>>> assume that oversubscription is the natural consequence of > > avoiding > > >>>> congestion. > > >>> > > >>> Unless someone's edited it, you should read the first sentence > > again. I > > >>> see: > > >>> > > >>>> Congestion control concerns controlling traffic entry into a > > >>>> telecommunications network, so as to avoid congestive collapse > > by > > >>>> attempting to avoid oversubscription of any of the processing or > > link > > >>>> capabilities of the intermediate nodes and networks and taking > > resource > > >>>> reducing steps, such as reducing the rate of sending packets. > > >>> > > >>> I read the reference to packets as an example. > > >> > > >>Me too. > > >> > > >>But circuits don't have a collapse or oversubscription. They simply > > >>reject calls that aren't compatible with available capacity. > > >> > > >>I'm not disagreeing with the definition; I'm disagreeing with the > > >>assumption that having a network implies congestion and thus the need > > >>for congestion control. > > >> > > >>There are a variety of mechanisms that avoid congestion, typically by > > >>a-priori reservation (circuits), or by limiting resource use implicitly > > >>(e.g., ischemic control). These are a kind of proactive control that > > >>avoid congestion in the first place. > > >> > > >>That's not to say whether these mechanisms are scalable or efficient > > >>compared to the resource sharing afforded by packet multiplexing. > > >> > > >>Joe > > > > cheers > > > > jon > > > > > From l.wood at surrey.ac.uk Fri Nov 22 01:48:05 2013 From: l.wood at surrey.ac.uk (l.wood@surrey.ac.uk) Date: Fri, 22 Nov 2013 09:48:05 +0000 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> <1385050900.458129083@apps.rackspace.com>, Message-ID: <290E20B455C66743BE178C5C84F1240847E5103789@EXMB01CMS.surrey.ac.uk> "we encounter congestion rarely"? I encounter it daily. But then, that's in the real world. Lloyd Wood http://sat-net.com/L.Wood/ ________________________________________ From: end2end-interest-bounces at postel.org [end2end-interest-bounces at postel.org] On Behalf Of Jon Crowcroft [jon.crowcroft at cl.cam.ac.uk] Sent: 22 November 2013 09:06 To: David Reed Cc: Ted Faber; <, end2end-interest at postel.org>, ; Joe Touch Subject: Re: [e2e] Question the other way round: actually, tcp does precisely that (in the absence of smart virtual queues + ecn) and the deployment of DCTCP in data centers (which puts ECN and virtual quees, and modifies the congestion window evolution of TCP) is precisely because incast (and other problems in big data center computations) are not rare events, but are caused by applications' traffic patterns almost pathologically... yes, we would like to design networks to make such things rare and a lot of the topology and capacity planning in networks tries to do this, but a) the evolution of applications is faster than even the best laid(*) update/upgrade plans of the best data center and intranet planners (and that was the POINT of making the internet an open platform for fast innovation) and b) the internet-at-large is not planned - its an evolved thing of a wibbly wobbly organic kind (cue 50th anniversary dr who music) having spent the last couple of years staring at a few real-world data center traffic traces, I wonder if they aren't also unplanned (slightly pregnant pause)....after all "azure" == blue sky right ? :-) cheers jon to paraphrase spike milligna, "a plan so cunningly laid that no matter where you stood, it got under your feet" On Thu, Nov 21, 2013 at 4:21 PM, wrote: > (please forward, Joe, if this is OK) > > > > We don't actually cause congestion to discover the rate, Jon. Typically, > we try to build networks that have adequate capacity (factors of 10 or 100 > are needed for things like the "Mother's Day" effect, or 9/11-scale > community need to spread and filter news quickly.) > > > > We encounter congestion rarely - and we fix it by building in "factors of > safety" in every portion of an underlying network. > > > > Only Ph.D. theses spend an enormous amount of effort on the totally > congested "corner cases". It's like a little puzzle that is easy to state, > easy to solve, and makes the solver work hard. It's kind of like a "rite > of passage", so that is good, I guess. > > > > But if you are building a datacenter (AWS) or an access network or a > transport network, you build for the worst case, and expect it to happen > rarely. The systems that depend on the network to actually work for > people's needs never want a congested network, and don't actually want the > network to operate at its local minimum cost/bit/sec. They want the > network to never be in the way, and the cost they really care about is the > cost of getting congested for the wrong reasons. > > > > > > > > > > On Thursday, November 21, 2013 2:29am, "Jon Crowcroft" < > Jon.Crowcroft at cl.cam.ac.uk> said: > > > i think we're mixing up two discussions here > > > > 1. congestion was the original cause of the cwnd mech in tcp, BUT the > > rate adaption using feedback as a way to distributed resource > > allocation is the solution of the optimisation problem of net + user > > addressed by several researchers (kelly/voice et al, also folks at > > caltech) - these aren't the same thing - they got conflated in > > protocols in practice because we couldn't get ECN out there completely > > (yet) - ECN (when implemented with some decent queue (see 3 below) can > > be part of an efficient decentralised rate allocation > > > > congestion is bad - avoiding it is good > > > > distributed rate allocation for flows > > that have increasing utility for higher rate transfer > > is also good (actually its betterer:) > > > > 2. for flows that have an a priori known rate, distributed rate > > allocation is a daft idea, a priori - so some sort of admission > > control for the flow seems better (but you can do probe/measurement > > based admission control if you like, and are allergic to complex > > signaling protocols) > > > > 3. orthogonal to both 1&2 is policing and fairness - flow state means you > > can do somewhat better in fairness for 1 (e.g. do fair queus, a la > > keshav), and a lot better for policing for 2... > > > > but then we've been round the best effort, integrated service, > > differentated service, core stateless fair queue, probe based > > admission control, ecn, pcn loop about 6 times since this list > > existed:) > > > > yes, to detlef's original point, causing congestion (and buffer > > overrun) to find out the rate is a bit of a sad story... > > > > In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: > > > > >> > > >> > > >>On 11/19/2013 7:14 PM, Ted Faber wrote: > > >>> On 11/19/2013 10:15, Joe Touch wrote: > > >>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: > > >>>>> Given the complete generality of the question that was > > asked, is there > > >>>>> something fundamentally deficient in the answer in: > > >>>>> > > >>>>> > > http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > > >>>>> > > >>>>> ? > > >>>>> > > >>>>> In particular, I think it's opening sentence is quite > > reasonable. > > >>>> > > >>>> I agree, but it jumps in assuming packets. Given packets, it's > > easy to > > >>>> assume that oversubscription is the natural consequence of > > avoiding > > >>>> congestion. > > >>> > > >>> Unless someone's edited it, you should read the first sentence > > again. I > > >>> see: > > >>> > > >>>> Congestion control concerns controlling traffic entry into a > > >>>> telecommunications network, so as to avoid congestive collapse > > by > > >>>> attempting to avoid oversubscription of any of the processing or > > link > > >>>> capabilities of the intermediate nodes and networks and taking > > resource > > >>>> reducing steps, such as reducing the rate of sending packets. > > >>> > > >>> I read the reference to packets as an example. > > >> > > >>Me too. > > >> > > >>But circuits don't have a collapse or oversubscription. They simply > > >>reject calls that aren't compatible with available capacity. > > >> > > >>I'm not disagreeing with the definition; I'm disagreeing with the > > >>assumption that having a network implies congestion and thus the need > > >>for congestion control. > > >> > > >>There are a variety of mechanisms that avoid congestion, typically by > > >>a-priori reservation (circuits), or by limiting resource use implicitly > > >>(e.g., ischemic control). These are a kind of proactive control that > > >>avoid congestion in the first place. > > >> > > >>That's not to say whether these mechanisms are scalable or efficient > > >>compared to the resource sharing afforded by packet multiplexing. > > >> > > >>Joe > > > > cheers > > > > jon > > > > > From neil.davies at pnsol.com Fri Nov 22 06:48:38 2013 From: neil.davies at pnsol.com (Neil Davies) Date: Fri, 22 Nov 2013 21:48:38 +0700 Subject: [e2e] Question the other way round: In-Reply-To: <290E20B455C66743BE178C5C84F1240847E5103789@EXMB01CMS.surrey.ac.uk> References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> <1385050900.458129083@apps.rackspace.com>, <290E20B455C66743BE178C5C84F1240847E5103789@EXMB01CMS.surrey.ac.uk> Message-ID: <2695603D-7B62-4E64-9703-2EBBEB42C36F@pnsol.com> So maybe you need a mathematics of how to deal >100% offered load at a queue? Neil On 22 Nov 2013, at 16:48, L.Wood at surrey.ac.uk wrote: > "we encounter congestion rarely"? > > I encounter it daily. But then, that's in the real world. > > Lloyd Wood > http://sat-net.com/L.Wood/ > > > ________________________________________ > From: end2end-interest-bounces at postel.org [end2end-interest-bounces at postel.org] On Behalf Of Jon Crowcroft [jon.crowcroft at cl.cam.ac.uk] > Sent: 22 November 2013 09:06 > To: David Reed > Cc: Ted Faber; <, end2end-interest at postel.org>, ; Joe Touch > Subject: Re: [e2e] Question the other way round: > > actually, tcp does precisely that (in the absence of smart virtual queues + > ecn) and the deployment of DCTCP in data centers (which puts ECN and > virtual quees, and modifies the congestion window evolution of TCP) is > precisely because incast (and other problems in big data center > computations) are not rare events, but are caused by applications' traffic > patterns almost pathologically... > > yes, we would like to design networks to make such things rare and a lot of > the topology and capacity planning in networks tries to do this, but a) the > evolution of applications is faster than even the best laid(*) > update/upgrade plans of the best data center and intranet planners (and > that was the POINT of making the internet an open platform for fast > innovation) and b) the internet-at-large is not planned - its an evolved > thing of a wibbly wobbly organic kind (cue 50th anniversary dr who music) > > having spent the last couple of years staring at a few real-world data > center traffic traces, I wonder if they aren't also unplanned (slightly > pregnant pause)....after all "azure" == blue sky right ? :-) > > cheers > jon > to paraphrase spike milligna, "a plan so cunningly laid that no matter > where you stood, it got under your feet" > > > On Thu, Nov 21, 2013 at 4:21 PM, wrote: > >> (please forward, Joe, if this is OK) >> >> >> >> We don't actually cause congestion to discover the rate, Jon. Typically, >> we try to build networks that have adequate capacity (factors of 10 or 100 >> are needed for things like the "Mother's Day" effect, or 9/11-scale >> community need to spread and filter news quickly.) >> >> >> >> We encounter congestion rarely - and we fix it by building in "factors of >> safety" in every portion of an underlying network. >> >> >> >> Only Ph.D. theses spend an enormous amount of effort on the totally >> congested "corner cases". It's like a little puzzle that is easy to state, >> easy to solve, and makes the solver work hard. It's kind of like a "rite >> of passage", so that is good, I guess. >> >> >> >> But if you are building a datacenter (AWS) or an access network or a >> transport network, you build for the worst case, and expect it to happen >> rarely. The systems that depend on the network to actually work for >> people's needs never want a congested network, and don't actually want the >> network to operate at its local minimum cost/bit/sec. They want the >> network to never be in the way, and the cost they really care about is the >> cost of getting congested for the wrong reasons. >> >> >> >> >> >> >> >> >> >> On Thursday, November 21, 2013 2:29am, "Jon Crowcroft" < >> Jon.Crowcroft at cl.cam.ac.uk> said: >> >>> i think we're mixing up two discussions here >>> >>> 1. congestion was the original cause of the cwnd mech in tcp, BUT the >>> rate adaption using feedback as a way to distributed resource >>> allocation is the solution of the optimisation problem of net + user >>> addressed by several researchers (kelly/voice et al, also folks at >>> caltech) - these aren't the same thing - they got conflated in >>> protocols in practice because we couldn't get ECN out there completely >>> (yet) - ECN (when implemented with some decent queue (see 3 below) can >>> be part of an efficient decentralised rate allocation >>> >>> congestion is bad - avoiding it is good >>> >>> distributed rate allocation for flows >>> that have increasing utility for higher rate transfer >>> is also good (actually its betterer:) >>> >>> 2. for flows that have an a priori known rate, distributed rate >>> allocation is a daft idea, a priori - so some sort of admission >>> control for the flow seems better (but you can do probe/measurement >>> based admission control if you like, and are allergic to complex >>> signaling protocols) >>> >>> 3. orthogonal to both 1&2 is policing and fairness - flow state means you >>> can do somewhat better in fairness for 1 (e.g. do fair queus, a la >>> keshav), and a lot better for policing for 2... >>> >>> but then we've been round the best effort, integrated service, >>> differentated service, core stateless fair queue, probe based >>> admission control, ecn, pcn loop about 6 times since this list >>> existed:) >>> >>> yes, to detlef's original point, causing congestion (and buffer >>> overrun) to find out the rate is a bit of a sad story... >>> >>> In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: >>> >>>>> >>>>> >>>>> On 11/19/2013 7:14 PM, Ted Faber wrote: >>>>>> On 11/19/2013 10:15, Joe Touch wrote: >>>>>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: >>>>>>>> Given the complete generality of the question that was >>> asked, is there >>>>>>>> something fundamentally deficient in the answer in: >>>>>>>> >>>>>>>> >>> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control >>>>>>>> >>>>>>>> ? >>>>>>>> >>>>>>>> In particular, I think it's opening sentence is quite >>> reasonable. >>>>>>> >>>>>>> I agree, but it jumps in assuming packets. Given packets, it's >>> easy to >>>>>>> assume that oversubscription is the natural consequence of >>> avoiding >>>>>>> congestion. >>>>>> >>>>>> Unless someone's edited it, you should read the first sentence >>> again. I >>>>>> see: >>>>>> >>>>>>> Congestion control concerns controlling traffic entry into a >>>>>>> telecommunications network, so as to avoid congestive collapse >>> by >>>>>>> attempting to avoid oversubscription of any of the processing or >>> link >>>>>>> capabilities of the intermediate nodes and networks and taking >>> resource >>>>>>> reducing steps, such as reducing the rate of sending packets. >>>>>> >>>>>> I read the reference to packets as an example. >>>>> >>>>> Me too. >>>>> >>>>> But circuits don't have a collapse or oversubscription. They simply >>>>> reject calls that aren't compatible with available capacity. >>>>> >>>>> I'm not disagreeing with the definition; I'm disagreeing with the >>>>> assumption that having a network implies congestion and thus the need >>>>> for congestion control. >>>>> >>>>> There are a variety of mechanisms that avoid congestion, typically by >>>>> a-priori reservation (circuits), or by limiting resource use implicitly >>>>> (e.g., ischemic control). These are a kind of proactive control that >>>>> avoid congestion in the first place. >>>>> >>>>> That's not to say whether these mechanisms are scalable or efficient >>>>> compared to the resource sharing afforded by packet multiplexing. >>>>> >>>>> Joe >>> >>> cheers >>> >>> jon >>> >>> >> > From neil.davies at pnsol.com Fri Nov 22 06:50:01 2013 From: neil.davies at pnsol.com (Neil Davies) Date: Fri, 22 Nov 2013 21:50:01 +0700 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> <1385050900.458129083@apps.rackspace.com> Message-ID: <77298033-0736-4DF9-A0CC-97420C316CE4@pnsol.com> Jon My real world experience is that not only is it not planned, it isn't even costed Neil On 22 Nov 2013, at 16:06, Jon Crowcroft wrote: > actually, tcp does precisely that (in the absence of smart virtual queues + > ecn) and the deployment of DCTCP in data centers (which puts ECN and > virtual quees, and modifies the congestion window evolution of TCP) is > precisely because incast (and other problems in big data center > computations) are not rare events, but are caused by applications' traffic > patterns almost pathologically... > > yes, we would like to design networks to make such things rare and a lot of > the topology and capacity planning in networks tries to do this, but a) the > evolution of applications is faster than even the best laid(*) > update/upgrade plans of the best data center and intranet planners (and > that was the POINT of making the internet an open platform for fast > innovation) and b) the internet-at-large is not planned - its an evolved > thing of a wibbly wobbly organic kind (cue 50th anniversary dr who music) > > having spent the last couple of years staring at a few real-world data > center traffic traces, I wonder if they aren't also unplanned (slightly > pregnant pause)....after all "azure" == blue sky right ? :-) > > cheers > jon > to paraphrase spike milligna, "a plan so cunningly laid that no matter > where you stood, it got under your feet" > > > On Thu, Nov 21, 2013 at 4:21 PM, wrote: > >> (please forward, Joe, if this is OK) >> >> >> >> We don't actually cause congestion to discover the rate, Jon. Typically, >> we try to build networks that have adequate capacity (factors of 10 or 100 >> are needed for things like the "Mother's Day" effect, or 9/11-scale >> community need to spread and filter news quickly.) >> >> >> >> We encounter congestion rarely - and we fix it by building in "factors of >> safety" in every portion of an underlying network. >> >> >> >> Only Ph.D. theses spend an enormous amount of effort on the totally >> congested "corner cases". It's like a little puzzle that is easy to state, >> easy to solve, and makes the solver work hard. It's kind of like a "rite >> of passage", so that is good, I guess. >> >> >> >> But if you are building a datacenter (AWS) or an access network or a >> transport network, you build for the worst case, and expect it to happen >> rarely. The systems that depend on the network to actually work for >> people's needs never want a congested network, and don't actually want the >> network to operate at its local minimum cost/bit/sec. They want the >> network to never be in the way, and the cost they really care about is the >> cost of getting congested for the wrong reasons. >> >> >> >> >> >> >> >> >> >> On Thursday, November 21, 2013 2:29am, "Jon Crowcroft" < >> Jon.Crowcroft at cl.cam.ac.uk> said: >> >>> i think we're mixing up two discussions here >>> >>> 1. congestion was the original cause of the cwnd mech in tcp, BUT the >>> rate adaption using feedback as a way to distributed resource >>> allocation is the solution of the optimisation problem of net + user >>> addressed by several researchers (kelly/voice et al, also folks at >>> caltech) - these aren't the same thing - they got conflated in >>> protocols in practice because we couldn't get ECN out there completely >>> (yet) - ECN (when implemented with some decent queue (see 3 below) can >>> be part of an efficient decentralised rate allocation >>> >>> congestion is bad - avoiding it is good >>> >>> distributed rate allocation for flows >>> that have increasing utility for higher rate transfer >>> is also good (actually its betterer:) >>> >>> 2. for flows that have an a priori known rate, distributed rate >>> allocation is a daft idea, a priori - so some sort of admission >>> control for the flow seems better (but you can do probe/measurement >>> based admission control if you like, and are allergic to complex >>> signaling protocols) >>> >>> 3. orthogonal to both 1&2 is policing and fairness - flow state means you >>> can do somewhat better in fairness for 1 (e.g. do fair queus, a la >>> keshav), and a lot better for policing for 2... >>> >>> but then we've been round the best effort, integrated service, >>> differentated service, core stateless fair queue, probe based >>> admission control, ecn, pcn loop about 6 times since this list >>> existed:) >>> >>> yes, to detlef's original point, causing congestion (and buffer >>> overrun) to find out the rate is a bit of a sad story... >>> >>> In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: >>> >>>>> >>>>> >>>>> On 11/19/2013 7:14 PM, Ted Faber wrote: >>>>>> On 11/19/2013 10:15, Joe Touch wrote: >>>>>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: >>>>>>>> Given the complete generality of the question that was >>> asked, is there >>>>>>>> something fundamentally deficient in the answer in: >>>>>>>> >>>>>>>> >>> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control >>>>>>>> >>>>>>>> ? >>>>>>>> >>>>>>>> In particular, I think it's opening sentence is quite >>> reasonable. >>>>>>> >>>>>>> I agree, but it jumps in assuming packets. Given packets, it's >>> easy to >>>>>>> assume that oversubscription is the natural consequence of >>> avoiding >>>>>>> congestion. >>>>>> >>>>>> Unless someone's edited it, you should read the first sentence >>> again. I >>>>>> see: >>>>>> >>>>>>> Congestion control concerns controlling traffic entry into a >>>>>>> telecommunications network, so as to avoid congestive collapse >>> by >>>>>>> attempting to avoid oversubscription of any of the processing or >>> link >>>>>>> capabilities of the intermediate nodes and networks and taking >>> resource >>>>>>> reducing steps, such as reducing the rate of sending packets. >>>>>> >>>>>> I read the reference to packets as an example. >>>>> >>>>> Me too. >>>>> >>>>> But circuits don't have a collapse or oversubscription. They simply >>>>> reject calls that aren't compatible with available capacity. >>>>> >>>>> I'm not disagreeing with the definition; I'm disagreeing with the >>>>> assumption that having a network implies congestion and thus the need >>>>> for congestion control. >>>>> >>>>> There are a variety of mechanisms that avoid congestion, typically by >>>>> a-priori reservation (circuits), or by limiting resource use implicitly >>>>> (e.g., ischemic control). These are a kind of proactive control that >>>>> avoid congestion in the first place. >>>>> >>>>> That's not to say whether these mechanisms are scalable or efficient >>>>> compared to the resource sharing afforded by packet multiplexing. >>>>> >>>>> Joe >>> >>> cheers >>> >>> jon >>> >>> >> From neil.davies at pnsol.com Fri Nov 22 06:58:44 2013 From: neil.davies at pnsol.com (Neil Davies) Date: Fri, 22 Nov 2013 21:58:44 +0700 Subject: [e2e] Question the other way round: In-Reply-To: References: <5280CC5D.70401@web.de> <5282AF41.2030107@bennett.com> <5283601B.9070800@web.de> <5283776B.60209@isae.fr> <52839CDA.9010703@web.de> <5283F43C.9000008@bennett.com> <52869563.4010201@web.de> <528B9AC5.3010608@isi.edu> <528BA958.9050203@bbiw.net> <528BAABB.90004@isi.edu> <528C2921.6030002@isi.edu> <528CFE15.7070808@isi.edu> <1385050900.458129083@apps.rackspace.com> Message-ID: <156D1E7D-83B0-4556-BDED-A73EC717C0E7@pnsol.com> >> >> But if you are building a datacenter (AWS) or an access network or a >> transport network, you build for the worst case, and expect it to happen >> rarely. The systems that depend on the network to actually work for >> people's needs never want a congested network, and don't actually want the >> network to operate at its local minimum cost/bit/sec. They want the >> network to never be in the way, and the cost they really care about is the >> cost of getting congested for the wrong reasons. I've been a the ITU jamboree in Bangkok this week - there is another question, when your country is hit by an earthquake and an tsunami - offered loads for telecoms systems go to 50x to 60x capacity. (http://www.itu.int/tlc/WORLD2013/forum/entries/session.1299.pdf) Those PhD corner cases are sounding more interesting now... Neil From detlef.bosau at web.de Fri Nov 22 07:08:14 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 22 Nov 2013 16:08:14 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <528E4951.8070109@isi.edu> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> Message-ID: <528F735E.4010802@web.de> > Forwarded for David Reed. > > > > (please forward, Joe, if this is OK) > > We don't actually cause congestion to discover the rate, Jon. I'm sorry David - it is exactly what happens. (IIRC a bit "hidden" in BSD Unix because IIRC a TCP socket which attempts to enqueue a packet in case of seeing an occupied queue halves its window and postpones the packet, I don't remember the details, it is a bit more sophisticated than a silent discard, however the consequences are the same.) > Typically, we try to build networks that have adequate capacity > (factors of 10 or 100 are needed for things like the "Mother's Day" > effect, or 9/11-scale community need to spread and filter news quickly.) > > We encounter congestion rarely - and we fix it by building in "factors > of safety" in every portion of an underlying network. Hm. To my understanding, the main mechanism for finding the appropriate window size is inducing congestion. > > Only Ph.D. theses spend an enormous amount of effort on the totally > congested "corner cases". So, congestion is no real problem? > It's like a little puzzle that is easy to > state, easy to solve, and makes the solver work hard. It's kind of like > a "rite of passage", so that is good, I guess. > > But if you are building a datacenter (AWS) or an access network or a > transport network, you build for the worst case, and expect it to happen What is the "worst case"? What is .the largest number of parallel flows possible? Detlef > rarely. The systems that depend on the network to actually work for > people's needs never want a congested network, and don't actually want > the network to operate at its local minimum cost/bit/sec. They want the > network to never be in the way, and the cost they really care about is > the cost of getting congested for the wrong reasons. > > > > On Thursday, November 21, 2013 2:29am, "Jon Crowcroft" > said: > > > i think we're mixing up two discussions here > > > > 1. congestion was the original cause of the cwnd mech in tcp, BUT the > > rate adaption using feedback as a way to distributed resource > > allocation is the solution of the optimisation problem of net + user > > addressed by several researchers (kelly/voice et al, also folks at > > caltech) - these aren't the same thing - they got conflated in > > protocols in practice because we couldn't get ECN out there completely > > (yet) - ECN (when implemented with some decent queue (see 3 below) can > > be part of an efficient decentralised rate allocation > > > > congestion is bad - avoiding it is good > > > > distributed rate allocation for flows > > that have increasing utility for higher rate transfer > > is also good (actually its betterer:) > > > > 2. for flows that have an a priori known rate, distributed rate > > allocation is a daft idea, a priori - so some sort of admission > > control for the flow seems better (but you can do probe/measurement > > based admission control if you like, and are allergic to complex > > signaling protocols) > > > > 3. orthogonal to both 1&2 is policing and fairness - flow state > means you > > can do somewhat better in fairness for 1 (e.g. do fair queus, a la > > keshav), and a lot better for policing for 2... > > > > but then we've been round the best effort, integrated service, > > differentated service, core stateless fair queue, probe based > > admission control, ecn, pcn loop about 6 times since this list > > existed:) > > > > yes, to detlef's original point, causing congestion (and buffer > > overrun) to find out the rate is a bit of a sad story... > > > > In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: > > > > >> > > >> > > >>On 11/19/2013 7:14 PM, Ted Faber wrote: > > >>> On 11/19/2013 10:15, Joe Touch wrote: > > >>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: > > >>>>> Given the complete generality of the question that was > > asked, is there > > >>>>> something fundamentally deficient in the answer in: > > >>>>> > > >>>>> > > http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > > >>>>> > > >>>>> ? > > >>>>> > > >>>>> In particular, I think it's opening sentence is quite > > reasonable. > > >>>> > > >>>> I agree, but it jumps in assuming packets. Given packets, it's > > easy to > > >>>> assume that oversubscription is the natural consequence of > > avoiding > > >>>> congestion. > > >>> > > >>> Unless someone's edited it, you should read the first sentence > > again. I > > >>> see: > > >>> > > >>>> Congestion control concerns controlling traffic entry into a > > >>>> telecommunications network, so as to avoid congestive collapse > > by > > >>>> attempting to avoid oversubscription of any of the processing or > > link > > >>>> capabilities of the intermediate nodes and networks and taking > > resource > > >>>> reducing steps, such as reducing the rate of sending packets. > > >>> > > >>> I read the reference to packets as an example. > > >> > > >>Me too. > > >> > > >>But circuits don't have a collapse or oversubscription. They simply > > >>reject calls that aren't compatible with available capacity. > > >> > > >>I'm not disagreeing with the definition; I'm disagreeing with the > > >>assumption that having a network implies congestion and thus the > need > > >>for congestion control. > > >> > > >>There are a variety of mechanisms that avoid congestion, > typically by > > >>a-priori reservation (circuits), or by limiting resource use > implicitly > > >>(e.g., ischemic control). These are a kind of proactive control that > > >>avoid congestion in the first place. > > >> > > >>That's not to say whether these mechanisms are scalable or efficient > > >>compared to the resource sharing afforded by packet multiplexing. > > >> > > >>Joe > > > > cheers > > > > jon > > > > > > > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Fri Nov 22 13:03:08 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 22 Nov 2013 22:03:08 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <1385152345.342725449@apps.rackspace.com> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> Message-ID: <528FC68C.6090401@web.de> David, two messages ago you wrote: > > We don't actually cause congestion to discover the rate, Jon. > Typically, we try to build networks that have adequate capacity > (factors of 10 or 100 are needed for things like the "Mother's Day" > effect, or 9/11-scale community need to spread and filter news quickly.) And this is, excuse me, simply nonsense. Am 22.11.2013 21:32, schrieb dpreed at reed.com: > > Detlef - you missed my point entirely! By focusing on the network > only as the end rather than a means in a larger context you eliminated > the whole issue of congestion. Think end-to-end, which means you must > think about why information is being sent. > > > > You are interpreting the code in Linux/Unix/... as if it were driven > by an application whose only goal is to transmit data at the fastest > possible rate, and has no inherent limit, or even an external reason > to exist. > No. I made a remark on BSD to anticipate a comment like "but in BSD...." Actually, causing congestion is part of the VJCC congestion handling, and exactly this is the problem! > > > > E.g. cat /dev/zero | ; > Shows what? Yes: Flow control does work. Or what do I miss? > > > > Every application I am aware of (and I mean every application other > than benchmarks run for short periods) has limited needs, driven by > "pull" style stuff. Even a file transfer eventually stops. > Even the time eventually ends. And some people even believe, JC (the other one) will eventually return. > > A file transfer has no natural rate, but it has a satisfactory rate > in all cases. > Depends on which requirements you want to satisfy. > > Usually whatever the file size is divided by 100 msec (a human measure). > > > > Enough said? > Perhaps, I didn't follow, what you wanted to say... -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From faber at isi.edu Fri Nov 22 14:34:25 2013 From: faber at isi.edu (Ted Faber) Date: Fri, 22 Nov 2013 14:34:25 -0800 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <528FC68C.6090401@web.de> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> Message-ID: <528FDBF1.7090101@isi.edu> On 11/22/2013 13:03, Detlef Bosau wrote: > > Actually, causing congestion is part of the VJCC congestion handling, > and exactly this is the problem!DecBit, ECN, Vegas, FAST, XCP... -- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG From touch at isi.edu Fri Nov 22 14:37:47 2013 From: touch at isi.edu (Joe Touch) Date: Fri, 22 Nov 2013 14:37:47 -0800 Subject: [e2e] Fwd: RE: Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <1385152345.342725449@apps.rackspace.com> References: <1385152345.342725449@apps.rackspace.com> Message-ID: <528FDCBB.6080608@isi.edu> Forwarded for David Reed. Joe (list admin) -------- Original Message -------- Subject: RE: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: Date: Fri, 22 Nov 2013 15:32:25 -0500 (EST) From: dpreed at reed.com To: Detlef Bosau CC: end2end-interest at postel.org, "Joe Touch" (Joe - forward if you wish) Detlef - you missed my point entirely! By focusing on the network only as the end rather than a means in a larger context you eliminated the whole issue of congestion. Think end-to-end, which means you must think about why information is being sent. You are interpreting the code in Linux/Unix/... as if it were driven by an application whose only goal is to transmit data at the fastest possible rate, and has no inherent limit, or even an external reason to exist. E.g. cat /dev/zero | ; Every application I am aware of (and I mean every application other than benchmarks run for short periods) has limited needs, driven by "pull" style stuff. Even a file transfer eventually stops. A file transfer has no natural rate, but it has a satisfactory rate in all cases. Usually whatever the file size is divided by 100 msec (a human measure). Enough said? On Friday, November 22, 2013 10:08am, "Detlef Bosau" said: > > > Forwarded for David Reed. > > > > > > > > (please forward, Joe, if this is OK) > > > > We don't actually cause congestion to discover the rate, Jon. > > I'm sorry David - it is exactly what happens. > > (IIRC a bit "hidden" in BSD Unix because IIRC a TCP socket which > attempts to enqueue a packet in case of seeing an occupied queue halves > its window and postpones the packet, I don't remember the details, it is > a bit more sophisticated than a silent discard, however the consequences > are the same.) > > > Typically, we try to build networks that have adequate capacity > > (factors of 10 or 100 are needed for things like the "Mother's Day" > > effect, or 9/11-scale community need to spread and filter news quickly.) > > > > We encounter congestion rarely - and we fix it by building in "factors > > of safety" in every portion of an underlying network. > > Hm. To my understanding, the main mechanism for finding the appropriate > window size is inducing congestion. > > > > > Only Ph.D. theses spend an enormous amount of effort on the totally > > congested "corner cases". > > So, congestion is no real problem? > > It's like a little puzzle that is easy to > > state, easy to solve, and makes the solver work hard. It's kind of like > > a "rite of passage", so that is good, I guess. > > > > But if you are building a datacenter (AWS) or an access network or a > > transport network, you build for the worst case, and expect it to happen > > What is the "worst case"? What is .the largest number of parallel flows > possible? > > Detlef > > rarely. The systems that depend on the network to actually work for > > people's needs never want a congested network, and don't actually want > > the network to operate at its local minimum cost/bit/sec. They want the > > network to never be in the way, and the cost they really care about is > > the cost of getting congested for the wrong reasons. > > > > > > > > On Thursday, November 21, 2013 2:29am, "Jon Crowcroft" > > said: > > > > > i think we're mixing up two discussions here > > > > > > 1. congestion was the original cause of the cwnd mech in tcp, BUT the > > > rate adaption using feedback as a way to distributed resource > > > allocation is the solution of the optimisation problem of net + user > > > addressed by several researchers (kelly/voice et al, also folks at > > > caltech) - these aren't the same thing - they got conflated in > > > protocols in practice because we couldn't get ECN out there completely > > > (yet) - ECN (when implemented with some decent queue (see 3 below) can > > > be part of an efficient decentralised rate allocation > > > > > > congestion is bad - avoiding it is good > > > > > > distributed rate allocation for flows > > > that have increasing utility for higher rate transfer > > > is also good (actually its betterer:) > > > > > > 2. for flows that have an a priori known rate, distributed rate > > > allocation is a daft idea, a priori - so some sort of admission > > > control for the flow seems better (but you can do probe/measurement > > > based admission control if you like, and are allergic to complex > > > signaling protocols) > > > > > > 3. orthogonal to both 1&2 is policing and fairness - flow state > > means you > > > can do somewhat better in fairness for 1 (e.g. do fair queus, a la > > > keshav), and a lot better for policing for 2... > > > > > > but then we've been round the best effort, integrated service, > > > differentated service, core stateless fair queue, probe based > > > admission control, ecn, pcn loop about 6 times since this list > > > existed:) > > > > > > yes, to detlef's original point, causing congestion (and buffer > > > overrun) to find out the rate is a bit of a sad story... > > > > > > In missive <528CFE15.7070808 at isi.edu>, Joe Touch typed: > > > > > > >> > > > >> > > > >>On 11/19/2013 7:14 PM, Ted Faber wrote: > > > >>> On 11/19/2013 10:15, Joe Touch wrote: > > > >>>> On 11/19/2013 10:09 AM, Dave Crocker wrote: > > > >>>>> Given the complete generality of the question that > was > > > asked, is there > > > >>>>> something fundamentally deficient in the answer > in: > > > >>>>> > > > >>>>> > > > http://en.wikipedia.org/wiki/Congestion_control#Congestion_control > > > >>>>> > > > >>>>> ? > > > >>>>> > > > >>>>> In particular, I think it's opening sentence is > quite > > > reasonable. > > > >>>> > > > >>>> I agree, but it jumps in assuming packets. Given > packets, it's > > > easy to > > > >>>> assume that oversubscription is the natural > consequence of > > > avoiding > > > >>>> congestion. > > > >>> > > > >>> Unless someone's edited it, you should read the first > sentence > > > again. I > > > >>> see: > > > >>> > > > >>>> Congestion control concerns controlling traffic entry > into a > > > >>>> telecommunications network, so as to avoid congestive > collapse > > > by > > > >>>> attempting to avoid oversubscription of any of the > processing or > > > link > > > >>>> capabilities of the intermediate nodes and networks > and taking > > > resource > > > >>>> reducing steps, such as reducing the rate of sending > packets. > > > >>> > > > >>> I read the reference to packets as an example. > > > >> > > > >>Me too. > > > >> > > > >>But circuits don't have a collapse or oversubscription. They > simply > > > >>reject calls that aren't compatible with available capacity. > > > >> > > > >>I'm not disagreeing with the definition; I'm disagreeing with > the > > > >>assumption that having a network implies congestion and thus > the > > need > > > >>for congestion control. > > > >> > > > >>There are a variety of mechanisms that avoid congestion, > > typically by > > > >>a-priori reservation (circuits), or by limiting resource use > > implicitly > > > >>(e.g., ischemic control). These are a kind of proactive control > that > > > >>avoid congestion in the first place. > > > >> > > > >>That's not to say whether these mechanisms are scalable or > efficient > > > >>compared to the resource sharing afforded by packet > multiplexing. > > > >> > > > >>Joe > > > > > > cheers > > > > > > jon > > > > > > > > > > > > > > > -- > ------------------------------------------------------------------ > Detlef Bosau > Galileistra?e 30 > 70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 > detlef.bosau at web.de http://www.detlef-bosau.de > > From detlef.bosau at web.de Sat Nov 23 11:10:30 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 23 Nov 2013 20:10:30 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <528FDBF1.7090101@isi.edu> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> Message-ID: <5290FDA6.1000303@web.de> Am 22.11.2013 23:34, schrieb Ted Faber: > On 11/22/2013 13:03, Detlef Bosau wrote: >> >> Actually, causing congestion is part of the VJCC congestion handling, >> and exactly this is the problem!DecBit, > > ECN, Vegas, FAST, XCP... TCP with ECN does not cause congestion? Vegas does not cause congestion? Why does TCP nead an Explicit Congestion Notification, when there is no congestion? > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From faber at isi.edu Sat Nov 23 13:15:04 2013 From: faber at isi.edu (Ted Faber) Date: Sat, 23 Nov 2013 13:15:04 -0800 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <5290FDA6.1000303@web.de> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> Message-ID: <52911AD8.6010400@isi.edu> On 11/23/2013 11:10, Detlef Bosau wrote: > Am 22.11.2013 23:34, schrieb Ted Faber: >> On 11/22/2013 13:03, Detlef Bosau wrote: >>> >>> Actually, causing congestion is part of the VJCC congestion handling, >>> and exactly this is the problem!DecBit, >> >> ECN, Vegas, FAST, XCP... > > TCP with ECN does not cause congestion? Vegas does not cause congestion? Those systems significantly mitigate the effect of network probes on congestion, yes. I took your statement about "VJCC causing congestion" to mean you took issue with the endpoints slowly opening their windows to probe the network state and the effect of that probing on network queues and consequently on other flows. The work I mentioned is part of a large body of work focused on reducing the effect of such probes. Since I didn't mention anything that isn't more than a decade old, I didn't think citations were necessary. > > Why does TCP nead an Explicit Congestion Notification, when there is no > congestion? I'm surprised that you're asking these questions, if you're familiar with the work, but here's an explanation: Despite the name, ECN is not a "congestion" notification, but data about queue occupancy along the path a packet took. A router implementing ECN just marks a packet that would have been dropped under an early discard queueing discipline (e.g., RED). RED discards packets before congestion sets in (Random Early Detection) in order to slow down loss-reactive flows before the network experiences congestion collapse. ECN mitigates RED's effect on loss rate by making the drops virtual (packets are marked instead of dropped), informing sources to slow without making them retransmit the "lost" packet. Now, you may take the position that any queue occupancy is congestion, but if so, I think the significant benefits of store and forward routers argue persuasively against that position. -- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG From detlef.bosau at web.de Sat Nov 23 16:04:36 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 24 Nov 2013 01:04:36 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <52911AD8.6010400@isi.edu> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> Message-ID: <52914294.4070007@web.de> Am 23.11.2013 22:15, schrieb Ted Faber: > > Those systems significantly mitigate the effect of network probes on > congestion, yes. Certainly. Nevertheless, network congestion is reality and it is solicited. Look at the discussion with David Reed and Jon Crowcroft. Imagine a scenario: pc 1 ------------------------ TP line -------------------------- pc 2 and perhaps two TCP flows from pc1 to pc2. So, you have two flows sharing a line. And it would be useful to reasonably share the line. E.g. by a scheduler. What are we doing instead? We induce network congestion. Wouldn't it make sense to at least discuss alternatives? > > I took your statement about "VJCC causing congestion" to mean you took > issue with the endpoints slowly opening their windows to probe the > network state and the effect of that probing on network queues and > consequently on other flows. The work I mentioned is part of a large > body of work focused on reducing the effect of such probes. Is there a possibility to avoid those probes at all? > > > I'm surprised that you're asking these questions, if you're familiar > with the work, but here's an explanation: > > Despite the name, ECN is not a "congestion" notification, but data about > queue occupancy along the path a packet took. ECN is one of the acronyms with several meanings. One is "explicit congestion notification" in contrast to "implicit congestion notification" (packet loss), the other is "early congestion notification" and includes various approaches, some of which make sense, while others are more or less black magic. Years ago, some guys wanted to assess the load of a WiFi cell by observing transport delay variations. Of course, delay variations in WiFi may stem from many reasons, amongst them are noise, congestion, movement...... and afterwards, a wise algorithm has a sixth sense to properly detect THE very reason. Sometimes, it is a bit annoying to read approaches like this. > A router implementing ECN > just marks a packet that would have been dropped under an early discard > queueing discipline (e.g., RED). Yes. And particularly RED is one of the black magic approaches. Some months ago, I read some work about coddle, which is proposed as some kind of "self configuring RED". I don't know whether I still remember my objections from that time, however, I'm not fully convinced. O.k., IIRC coddle even discards packets under certain circumstances. The problem with discarded packets is: Each packet discard is a possible cause for a retransmission. Actually, a congestion collapse is in fact a retransmission collapse. Hence, induced retransmissions are quite similar to induced congestion. > RED discards packets before congestion > sets in (Random Early Detection) in order to slow down loss-reactive > flows before the network experiences congestion collapse. Exactly. So it "emulates" congestion to fend off congestion ;-) (That's another acronym with multiple interprations. Didn't RED stand for "random early discard" in some very early papers?) > ECN mitigates > RED's effect on loss rate by making the drops virtual (packets are > marked instead of dropped), informing sources to slow without making > them retransmit the "lost" packet. However, IIRC, in coddle, packets are dropped. > > > Now, you may take the position that any queue occupancy is congestion, > but if so, I think the significant benefits of store and forward routers > argue persuasively against that position. > I don't mind store and forward! I only ask some nasty questions about congestion ;-) > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From faber at isi.edu Sat Nov 23 16:57:00 2013 From: faber at isi.edu (Ted Faber) Date: Sat, 23 Nov 2013 16:57:00 -0800 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <52914294.4070007@web.de> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> Message-ID: <52914EDC.3010701@isi.edu> On 11/23/2013 04:04 PM, Detlef Bosau wrote: > Am 23.11.2013 22:15, schrieb Ted Faber: >> I took your statement about "VJCC causing congestion" to mean you took >> issue with the endpoints slowly opening their windows to probe the >> network state and the effect of that probing on network queues and >> consequently on other flows. The work I mentioned is part of a large >> body of work focused on reducing the effect of such probes. > > Is there a possibility to avoid those probes at all? In the Internet, no. Scale and heterogeneity preclude the kinds of constraints on the network elements that make non-reactive congestion control feasible. -- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG From richard at bennett.com Sat Nov 23 19:10:20 2013 From: richard at bennett.com (Richard Bennett) Date: Sat, 23 Nov 2013 19:10:20 -0800 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <52914EDC.3010701@isi.edu> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> <52914EDC.3010701@isi.edu> Message-ID: Network congestion isn?t, or doesn?t have to be, one enormous global problem that can only be solved one way. It can be viewed as myriad local problems that can be solved in ways that roll up into a coherent algorithm employed in many places at the same time. A big part of the solutions come from classifying flows according to application requirements and willingness to pay for grades of service. I?m comfortably predicting that the best solutions will come from the commercial sector rather than the academic/government research sector, just like they have for privacy and security. Government has too many conflicts of interest to be very helpful here. On Nov 23, 2013, at 4:57 PM, Ted Faber wrote: > On 11/23/2013 04:04 PM, Detlef Bosau wrote: >> Am 23.11.2013 22:15, schrieb Ted Faber: >>> I took your statement about "VJCC causing congestion" to mean you took >>> issue with the endpoints slowly opening their windows to probe the >>> network state and the effect of that probing on network queues and >>> consequently on other flows. The work I mentioned is part of a large >>> body of work focused on reducing the effect of such probes. >> >> Is there a possibility to avoid those probes at all? > > In the Internet, no. > > Scale and heterogeneity preclude the kinds of constraints on the network > elements that make non-reactive congestion control feasible. > > -- > Ted Faber > http://www.isi.edu/~faber PGP: > http://www.isi.edu/~faber/pubkeys.asc > Unexpected attachment on this mail? See > http://www.isi.edu/~faber/FAQ.html#SIG > Richard Bennett Visiting Fellow, American Enterprise Institute Center for Internet, Communications, and Technology Policy Editor, High Tech Forum Consultant From detlef.bosau at web.de Tue Nov 26 11:42:58 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 26 Nov 2013 20:42:58 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> <52914EDC.3010701@isi.edu> Message-ID: <5294F9C2.1010704@web.de> Am 24.11.2013 04:10, schrieb Richard Bennett: > Network congestion isn?t, or doesn?t have to be, one enormous global problem that can only be solved one way. It can be viewed as myriad local problems Period ;-) To my understanding, congestion is at first a queue overrun. > that can be solved in ways that roll up into a coherent algorithm employed in many places at the same time. A big part of the solutions come from classifying flows according to application requirements and willingness to pay for grades of service. As you know, I would like to talk about the problems why approaches like this are rarely discussed. > > I?m comfortably predicting that the best solutions will come from the commercial sector rather than the academic/government research sector, just like they have for privacy and security. Government has too many conflicts of interest to be very helpful here. Or too many people don't want to have approaches like that to be published. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From jtw at isi.edu Tue Nov 26 15:56:13 2013 From: jtw at isi.edu (John Wroclawski) Date: Tue, 26 Nov 2013 15:56:13 -0800 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <5294F9C2.1010704@web.de> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> <52914EDC.3010701@isi.edu> <5294F9C2.1010704@web.de> Message-ID: <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> On Nov 26, 2013, at 11:42 AM, Detlef Bosau wrote: >> that can be solved in ways that roll up into a coherent algorithm employed in many places at the same time. A big part of the solutions come from classifying flows according to application requirements and willingness to pay for grades of service. > > As you know, I would like to talk about the problems why approaches like > this are rarely discussed. This is quite a surreal comment. "classifying flows according to application requirements and willingness to pay for grades of service" [and then meeting those requirements and providing those grades] is pretty much a picture-perfect definition of Network Quality of Service, which I think many people would argue is one of the most-discussed issues of all time within the field. As a quick experiment, googling produces about 10,700,000 results ( produces 18,000,000), while produces 1,070,000. cheers, --john From richard at bennett.com Wed Nov 27 00:10:44 2013 From: richard at bennett.com (Richard Bennett) Date: Wed, 27 Nov 2013 00:10:44 -0800 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> <52914EDC.3010701@isi.edu> <5294F9C2.1010704@web.de> <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> Message-ID: <86850089-E301-4D4C-B964-38FB2E417C7D@bennett.com> Yes, there?s absolutely no relationship between quality of service and congestion. They are two different search terms that differ both in word count, number of letters, and letter frequency. Indeed. On Nov 26, 2013, at 3:56 PM, John Wroclawski wrote: > > On Nov 26, 2013, at 11:42 AM, Detlef Bosau wrote: > >>> that can be solved in ways that roll up into a coherent algorithm employed in many places at the same time. A big part of the solutions come from classifying flows according to application requirements and willingness to pay for grades of service. >> >> As you know, I would like to talk about the problems why approaches like >> this are rarely discussed. > > This is quite a surreal comment. > > "classifying flows according to application requirements and willingness to pay for grades of service" [and then meeting those requirements and providing those grades] is pretty much a picture-perfect definition of Network Quality of Service, which I think many people would argue is one of the most-discussed issues of all time within the field. > > As a quick experiment, googling produces about 10,700,000 results ( produces 18,000,000), while produces 1,070,000. > > cheers, --john > > > > Richard Bennett Visiting Fellow, American Enterprise Institute Center for Internet, Communications, and Technology Policy Editor, High Tech Forum Consultant From detlef.bosau at web.de Wed Nov 27 03:49:41 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 27 Nov 2013 12:49:41 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> <52914EDC.3010701@isi.edu> <5294F9C2.1010704@web.de> <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> Message-ID: <5295DC55.7080504@web.de> Am 27.11.2013 00:56, schrieb John Wroclawski: > On Nov 26, 2013, at 11:42 AM, Detlef Bosau wrote: > >>> that can be solved in ways that roll up into a coherent algorithm employed in many places at the same time. A big part of the solutions come from classifying flows according to application requirements and willingness to pay for grades of service. >> As you know, I would like to talk about the problems why approaches like >> this are rarely discussed. > This is quite a surreal comment. > > "classifying flows according to application requirements and willingness to pay for grades of service" [and then meeting those requirements and providing those grades] is pretty much a picture-perfect definition of Network Quality of Service, which I think many people would argue is one of the most-discussed issues of all time within the field. It depends on your point of view! When you take the view of the streaming guys and QoS guys, you are perfectly right! When you take the view of the transport guys, i.e. the tcp guys, I've never seen a discussion like that. The QoS guys often talk about reservation, the transport guys hardly ever talk about reservation. > As a quick experiment, googling produces about 10,700,000 results ( produces 18,000,000), while produces 1,070,000. Yes. QoS. Since when does TCP really deal with QoS? TCP uses best effort. And that's the reason for its success. I agree with you that there are lots of papers concerning QoS and that there is a huge interest in this topic in the academic world. Outside the academic world "QoS" is hardly used as a marketing argument any more. Perhaps, I should have a look, how often the term QoS was even used by publications from the "heise Verlag" here in Germany, I think it's the best known editor for non academic papers in the IT world here in Germany. What we're doing is QoS by underutilization. To my understanding, no other approach to QoS ever took flight in the customer's field. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Wed Nov 27 03:53:02 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 27 Nov 2013 12:53:02 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <86850089-E301-4D4C-B964-38FB2E417C7D@bennett.com> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> <52914EDC.3010701@isi.edu> <5294F9C2.1010704@web.de> <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> <86850089-E301-4D4C-B964-38FB2E417C7D@bennett.com> Message-ID: <5295DD1E.3040609@web.de> Am 27.11.2013 09:10, schrieb Richard Bennett: > Yes, there?s absolutely no relationship between quality of service and congestion. They are two different search terms that differ both in word count, number of letters, and letter frequency. > > Indeed. And again I'm ready to take flames.... Is that necessarily so? To my understanding, effective congestion management is actually effective resource management. Hence, perhaps I'm looking for a third way between "strict resource negotiation and reservation" and "no resource negotiation and reservation". -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From sthaug at nethelp.no Wed Nov 27 05:36:53 2013 From: sthaug at nethelp.no (sthaug@nethelp.no) Date: Wed, 27 Nov 2013 14:36:53 +0100 (CET) Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <5295DC55.7080504@web.de> References: <5294F9C2.1010704@web.de> <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> <5295DC55.7080504@web.de> Message-ID: <20131127.143653.41659998.sthaug@nethelp.no> > I agree with you that there are lots of papers concerning QoS and that > there is a huge interest in this topic in the academic world. > > Outside the academic world "QoS" is hardly used as a marketing argument > any more. Disagree. I work for a service provider that sells services with QoS to customers. Most often L3VPN type services, sometimes other types. > What we're doing is QoS by underutilization. To my understanding, no > other approach to QoS ever took flight in the customer's field. Again, disagree. For our customers, QoS is often used to provide the necessary priority for business critial applications (e.g. Citrix, VoIP) while other services (e.g. Internet) receive lower priority, all within a given total link capacity. Most of our competitors also offer QoS for L3VPN type services (and the customers are buying) - we're not alone. One could certainly argue that QoS is "oversold" and that many cases would be better solved by overprovisioning. But not always... Steinar Haug, Nethelp consulting, sthaug at nethelp.no From detlef.bosau at web.de Wed Nov 27 06:54:33 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 27 Nov 2013 15:54:33 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <5295DD1E.3040609@web.de> References: <1385050900.458129083@apps.rackspace.com> <528E4951.8070109@isi.edu> <528F735E.4010802@web.de> <1385152345.342725449@apps.rackspace.com> <528FC68C.6090401@web.de> <528FDBF1.7090101@isi.edu> <5290FDA6.1000303@web.de> <52911AD8.6010400@isi.edu> <52914294.4070007@web.de> <52914EDC.3010701@isi.edu> <5294F9C2.1010704@web.de> <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> <86850089-E301-4D4C-B964-38FB2E417C7D@bennett.com> <5295DD1E.3040609@web.de> Message-ID: <529607A9.9040003@web.de> Am 27.11.2013 12:53, schrieb Detlef Bosau: > Am 27.11.2013 09:10, schrieb Richard Bennett: >> Yes, there?s absolutely no relationship between quality of service and congestion. They are two different search terms that differ both in word count, number of letters, and letter frequency. >> >> Indeed. > And again I'm ready to take flames.... > > Is that necessarily so? > > To my understanding, effective congestion management is actually > effective resource management. Hence, perhaps I'm looking for a third > way between "strict resource negotiation and reservation" and "no > resource negotiation and reservation". And just because I had a very short discussion on wireless networks off list some minutes ago: At latest when wireless networks are involved, any kind of QoS discussion becomes more or less questionable. > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Wed Nov 27 10:50:28 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 27 Nov 2013 19:50:28 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <20131127.143653.41659998.sthaug@nethelp.no> References: <5294F9C2.1010704@web.de> <5055D645-CC0F-43B0-BF49-4A8264461521@isi.edu> <5295DC55.7080504@web.de> <20131127.143653.41659998.sthaug@nethelp.no> Message-ID: <52963EF4.7000908@web.de> Am 27.11.2013 14:36, schrieb sthaug at nethelp.no: >> I agree with you that there are lots of papers concerning QoS and that >> there is a huge interest in this topic in the academic world. >> >> Outside the academic world "QoS" is hardly used as a marketing argument >> any more. > Disagree. I work for a service provider that sells services with QoS > to customers. Most often L3VPN type services, sometimes other types. To commercial customers or to private customers? -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From sthaug at nethelp.no Wed Nov 27 11:28:01 2013 From: sthaug at nethelp.no (sthaug@nethelp.no) Date: Wed, 27 Nov 2013 20:28:01 +0100 (CET) Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <52963EF4.7000908@web.de> References: <5295DC55.7080504@web.de> <20131127.143653.41659998.sthaug@nethelp.no> <52963EF4.7000908@web.de> Message-ID: <20131127.202801.41673746.sthaug@nethelp.no> > >> I agree with you that there are lots of papers concerning QoS and that > >> there is a huge interest in this topic in the academic world. > >> > >> Outside the academic world "QoS" is hardly used as a marketing argument > >> any more. > > Disagree. I work for a service provider that sells services with QoS > > to customers. Most often L3VPN type services, sometimes other types. > > To commercial customers or to private customers? Commercial. Private/residential customers get the normal Best Effort Internet service. No QoS there. Steinar Haug, Nethelp consulting, sthaug at nethelp.no From detlef.bosau at web.de Wed Nov 27 14:06:59 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 27 Nov 2013 23:06:59 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <20131127.202801.41673746.sthaug@nethelp.no> References: <5295DC55.7080504@web.de> <20131127.143653.41659998.sthaug@nethelp.no> <52963EF4.7000908@web.de> <20131127.202801.41673746.sthaug@nethelp.no> Message-ID: <52966D03.7050909@web.de> Am 27.11.2013 20:28, schrieb sthaug at nethelp.no: > Commercial. Private/residential customers get the normal Best Effort > Internet service. No QoS there. That's what I expected :-) German Telecom has made a QoS attempt. (Which has given them the nickname "Throttlecom" :-)) IMHO the Telecom model (flatrate with a certain download limit per month, if the limit is exceeded the DSL speed is slowed down a bit) is perfectly reasonable. The end user protests brought the issue to the European Union in Brussels - so, the customers will stay best effort forever. (IIRC, you live in Norway and Norway is not in the EU? I think you should continue doing so, this will spare you lots of grief and grey hairs...) (My regards to JC :-) From your point of view, the Throttlecom problem was an european problem - not to say: a problem somewhere overseas ;-)) The hard problem is that private customers rarely have an understanding of business models. Another issue is that QoS is often achieved at layer 2, hence most of the work is done at the ATM, cell relay, frame relay, MLPS .... level, is there really that much work done at the IP layer or higher? Detlef -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From sthaug at nethelp.no Wed Nov 27 21:17:01 2013 From: sthaug at nethelp.no (sthaug@nethelp.no) Date: Thu, 28 Nov 2013 06:17:01 +0100 (CET) Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <52966D03.7050909@web.de> References: <52963EF4.7000908@web.de> <20131127.202801.41673746.sthaug@nethelp.no> <52966D03.7050909@web.de> Message-ID: <20131128.061701.74698037.sthaug@nethelp.no> > Another issue is that QoS is often achieved at layer 2, hence most of > the work is done at the ATM, cell relay, frame relay, MLPS .... level, > is there really that much work done at the IP layer or higher? In our case, QoS is normally configured at L3, at the network edge. *Queuing* often takes place at L2 (MPLS) in the core of the network, but there is of course also queuing at the edge. Steinar Haug, Nethelp consulting, sthaug at nethelp.no From detlef.bosau at web.de Thu Nov 28 09:05:06 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Thu, 28 Nov 2013 18:05:06 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <20131128.061701.74698037.sthaug@nethelp.no> References: <52963EF4.7000908@web.de> <20131127.202801.41673746.sthaug@nethelp.no> <52966D03.7050909@web.de> <20131128.061701.74698037.sthaug@nethelp.no> Message-ID: <529777C2.6030308@web.de> I think it was stated quite often already, but all these QoS concepts work pretty fine in wire line networks or in some well designed wireless point to point networks. As soon as any other wireless concepts come into play, you cannot even bridge the gap between net data rate and throughput Detlef Am 28.11.2013 06:17, schrieb sthaug at nethelp.no: >> Another issue is that QoS is often achieved at layer 2, hence most of >> the work is done at the ATM, cell relay, frame relay, MLPS .... level, >> is there really that much work done at the IP layer or higher? > In our case, QoS is normally configured at L3, at the network edge. > *Queuing* often takes place at L2 (MPLS) in the core of the network, > but there is of course also queuing at the edge. > > Steinar Haug, Nethelp consulting, sthaug at nethelp.no -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From andrewmcgr at google.com Thu Nov 28 15:24:13 2013 From: andrewmcgr at google.com (Andrew Mcgregor) Date: Fri, 29 Nov 2013 10:24:13 +1100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <529777C2.6030308@web.de> References: <52963EF4.7000908@web.de> <20131127.202801.41673746.sthaug@nethelp.no> <52966D03.7050909@web.de> <20131128.061701.74698037.sthaug@nethelp.no> <529777C2.6030308@web.de> Message-ID: In which case... measure, don't assume. Served us well for 802.11 modulation selection, I don't see why it shouldn't work for AQM. On 29 November 2013 04:05, Detlef Bosau wrote: > I think it was stated quite often already, but all these QoS concepts > work pretty fine in wire line networks or in some well designed wireless > point to point networks. > > As soon as any other wireless concepts come into play, you cannot even > bridge the gap between net data rate and throughput > > Detlef > > Am 28.11.2013 06:17, schrieb sthaug at nethelp.no: > >> Another issue is that QoS is often achieved at layer 2, hence most of > >> the work is done at the ATM, cell relay, frame relay, MLPS .... level, > >> is there really that much work done at the IP layer or higher? > > In our case, QoS is normally configured at L3, at the network edge. > > *Queuing* often takes place at L2 (MPLS) in the core of the network, > > but there is of course also queuing at the edge. > > > > Steinar Haug, Nethelp consulting, sthaug at nethelp.no > > > -- > ------------------------------------------------------------------ > Detlef Bosau > Galileistra?e 30 > 70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 > detlef.bosau at web.de http://www.detlef-bosau.de > > -- Andrew McGregor | SRE | andrewmcgr at google.com | +61 4 8143 7128 From detlef.bosau at web.de Fri Nov 29 05:13:49 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 29 Nov 2013 14:13:49 +0100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: References: <52963EF4.7000908@web.de> <20131127.202801.41673746.sthaug@nethelp.no> <52966D03.7050909@web.de> <20131128.061701.74698037.sthaug@nethelp.no> <529777C2.6030308@web.de> Message-ID: <5298930D.7000908@web.de> Am 29.11.2013 00:24, schrieb Andrew Mcgregor: > In which case... measure, don't assume. Served us well for 802.11 > modulation selection, I don't see why it shouldn't work for AQM. What do you want to measure? From andrewmcgr at google.com Sat Nov 30 21:05:12 2013 From: andrewmcgr at google.com (Andrew Mcgregor) Date: Sun, 1 Dec 2013 16:05:12 +1100 Subject: [e2e] Answer to Dave Reed Re: Fwd: Re: Question the other way round: In-Reply-To: <5298930D.7000908@web.de> References: <52963EF4.7000908@web.de> <20131127.202801.41673746.sthaug@nethelp.no> <52966D03.7050909@web.de> <20131128.061701.74698037.sthaug@nethelp.no> <529777C2.6030308@web.de> <5298930D.7000908@web.de> Message-ID: The actual clearance rate from the queue (or the sojourn time), if that matters for your AQM scheme. That way you are not assuming a known line rate. On 30 November 2013 00:13, Detlef Bosau wrote: > Am 29.11.2013 00:24, schrieb Andrew Mcgregor: > > In which case... measure, don't assume. Served us well for 802.11 > modulation selection, I don't see why it shouldn't work for AQM. > > > What do you want to measure? > -- Andrew McGregor | SRE | andrewmcgr at google.com | +61 4 8143 7128