From detlef.bosau at web.de  Tue Jun  3 05:43:51 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Tue, 03 Jun 2014 14:43:51 +0200
Subject: [e2e] A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP
In-Reply-To:
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de>
Message-ID: <538DC307.90101@web.de>

I presume that I'm allowed to forward some mail by DPR here to the list
(if not, DPR may kill me...); however, the original mail was sent to the
Internet History list and was therefore actually intended to reach the
public.

A quick summary at the beginning: Yes, TCP doesn't maintain a
retransmission queue with copies of the sent packets; it maintains a
queue of unacknowledged data and basically does GBN. This seems to be
in contrast to RFC 793, but that's life.

A much more important insight into the history of TCP is the "workload
discussion" as conducted by Raj Jain and Van Jacobson. Unfortunately,
both talk completely at cross purposes and have completely different
goals...

Having read the congavoid paper, I noticed that VJ refers to Jain's
CUTE algorithm in the context of how a flow shall reach equilibrium.
Unfortunately, this doesn't really make sense, because slow start and
CUTE pursue different goals.

- Van Jacobson asks how a flow should reach equilibrium,
- Raj Jain assumes a flow to be in equilibrium and asks which workload
  makes the flow work with optimum performance.

We often mix up "stationary" and "stable". To my understanding, for a
queueing system "being stable" means "being stationary", i.e. the
queueing system is positively recurrent; roughly, in plain words: no
queue length grows beyond all limits for all time, but at any time
there is a probability > 0 for a queue to return to a finite length.
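[Editor's note: the stability/stationarity distinction above can be illustrated with a toy simulation. This is a minimal sketch under assumed Bernoulli arrivals and services (not from the original mail): when the arrival rate stays below the service rate the queue keeps returning to a finite (here: empty) state, while a persistently higher arrival rate makes the queue length grow without bound.]

```python
import random

def simulate_queue(arrival_rate, service_rate, steps=50_000, seed=1):
    """Toy discrete-time queue: each step brings a Bernoulli arrival and a
    Bernoulli service opportunity. Returns the final queue length and how
    often the queue was empty (a crude positive-recurrence indicator)."""
    rng = random.Random(seed)
    q = 0
    empties = 0
    for _ in range(steps):
        if rng.random() < arrival_rate:
            q += 1
        if q > 0 and rng.random() < service_rate:
            q -= 1
        if q == 0:
            empties += 1
    return q, empties

# Stationary case: arrival rate below service rate -- queue stays finite.
q_stable, empties_stable = simulate_queue(0.3, 0.6)
# Non-stationary case: arrivals persistently exceed service -- queue diverges.
q_unstable, empties_unstable = simulate_queue(0.9, 0.4)
```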
A queueing system is stationary when its arrival rate doesn't
permanently exceed its service rate; this is actually nothing else than
the "self-clocking mechanism" and the equilibrium VJ is talking about.

From RJ's papers I see a focus on the workload and the performance of
queueing systems. A possible performance metric is the quotient
p = average throughput / average sojourn time.

If the workload is too small, operators will have idle times and the
system is not fully loaded (=> sojourn time acceptable, throughput too
small). If the workload is too large, too many jobs are not being
serviced but reside in queues (=> throughput fine, sojourn time too
large).

From Jain's work we conclude that a queueing system has an optimum
workload - which can be assessed by probing.
=> Set a workload, assess the system's performance, adjust the workload.

Van Jacobson will reach the equilibrium.
=> Set a workload; if we see drops, the workload is too large.

As a consequence, a system may stay perfectly in equilibrium state
while seeing buffer bloat in the sense of "a packet's queueing time is
more than half of the packet's sojourn time".

I don't know yet - perhaps someone can comment on this one - whether
buffer bloat will affect a system's performance. (My gut feeling is:
"Yes, it will", because the sojourn time grows inadequately large.)

The other, more important, consequence is that probing for
"drop-freeness" of a system does not necessarily mean the same as
"probing for optimum performance".

Detlef

Am 20.05.2014 16:49, schrieb David P. Reed:
> I really appreciate the work being done to reconstruct the diverse set
> of implementations of the end to end TCP flow, congestion, and
> measurement specs.
>
> This work might be a new approach to creating a history of the
> Internet... meaning a new way to do what history of technology does best.
>
> I'd argue that one could award a PhD for that contribution when it
> reaches a stage of completion such that others can use it to study the
> past. As a work of historical impact it needs citation and commentary.
> Worth thinking about how to add citation and commentary to a
> simulation - something like Knuth's literate programming but for
> protocol systems.
>
> Far better than a list of who did what when, or a set of battles. It's
> a contribution to the history of the ideas...
>
> On May 20, 2014, Detlef Bosau wrote:
>
> Am 19.05.2014 17:02, schrieb Craig Partridge:
>
> Hi Detlef: I don't keep the 4.3bsd code around anymore, but
> here's my recollection of what the code did. 4.3BSD had one
> round-trip timeout (RTO) counter per TCP connection.
>
> That's the way I find it in the NS2.
>
> On round-trip timeout, send 1 MSS of data starting at the
> lowest outstanding sequence number.
>
> Which is not yet GBN in its "pure" form - but actually it is, because
> CWND is increased with every new ack. And when you call "send_much"
> when a new ack arrives (I had a glance at the BSD code myself some
> years ago; the routines are named equally there, and as far as I've
> seen, the ns2 code and the BSD code are extremely similar), the
> behaviour resembles GBN very much.
>
> Set the RTO counter to the next increment. Once an ack is
> received, update the sequence numbers and begin slow start
> again. What I don't remember is whether 4.3bsd kept track of
> multiple outstanding losses and fixed all of them before slow
> start or not.
>
> OMG. ;-) Who else should remember this, if not Van himself or you?
>
> However, first of all I have to thank you for all the answers here.
>
> Detlef
>
> -- Sent from my Android device with *K-@ Mail*.
> Please excuse my brevity.
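[Editor's note: the retransmission behaviour discussed above - on timeout, resume sending from the lowest outstanding sequence number, with cumulative ACKs advancing the window - is essentially Go-Back-N. A minimal sketch with hypothetical names (this is an illustration, not the 4.3BSD or ns2 code):]

```python
class GoBackNSender:
    """Minimal Go-Back-N sender sketch: cumulative ACKs, and on timeout
    transmission resumes at the lowest unacknowledged sequence number."""

    def __init__(self, window):
        self.window = window
        self.base = 0        # lowest unacknowledged sequence number
        self.next_seq = 0    # next sequence number to send

    def sendable(self):
        """Sequence numbers the sender may transmit right now."""
        seqs = list(range(self.next_seq, self.base + self.window))
        self.next_seq = self.base + self.window
        return seqs

    def on_ack(self, ack):
        """Cumulative ACK: everything below `ack` is acknowledged."""
        if ack > self.base:
            self.base = ack

    def on_timeout(self):
        """Go back N: resume sending from the lowest outstanding seq."""
        self.next_seq = self.base

s = GoBackNSender(window=4)
first = s.sendable()    # initial burst: [0, 1, 2, 3]
s.on_ack(2)             # segments 0 and 1 acknowledged
s.on_timeout()          # segment 2 presumed lost
resent = s.sendable()   # retransmits from base: [2, 3, 4, 5]
```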
-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From andrewmcgr at google.com  Tue Jun  3 17:01:27 2014
From: andrewmcgr at google.com (Andrew Mcgregor)
Date: Wed, 4 Jun 2014 10:01:27 +1000
Subject: [e2e] A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP
In-Reply-To: <538DC307.90101@web.de>
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de>
Message-ID:

Bufferbloat definitely does impair performance. By slowing down
feedback, it increases the variance of the system workload, which
inevitably causes either packet drops (because a finite buffer limit is
within reach) or delays so large that retransmission timers fire for
packets that are still in flight. In either case, the system is doing
excess work.

On 3 June 2014 22:43, Detlef Bosau wrote:

>
> I presume that I'm allowed to forward some mail by DPR here to the list
> (if not, DPR may kill me...), however the original mail was sent to the
> Internet History list and therefore actually intended to reach the public.
>
> A quick summary at the beginning: Yes, TCP doesn't manage for sent
> packets a retransmission queue with copies of the sent packets but
> maintains an unacknowledged data queue and does GBN basically. This
> seems to be in contrast to RFC 793, but that's life.
>
> A much more important insight into the history of TCP is the "workload
> discussion" as conducted by Raj Jain and Van Jacobson.
> Unfortunately, both talk completely at cross purposes and have
> completely different goals......
>
> Having read the congavoid paper, I noticed that VJ refers to Jains CUTE
> algorithm in the context of how a flow shall reach equilibrium.
> > Unfortunately, this doesn't really make sense, because slow start and > CUTE pursue different goals. > > - Van Jacobson asks how a flow should reach equlibrium, > - Raj Jain assumes a flow to be in equilibrium and asks which workload > makes the flow work with an optimum performance. > > We often mix up "stationary" and "stable". To my understanding, for a > queueing system "being stable" means "being stationary", i.e. > the queueing system is positively recurrent, i.e., roughly, in human > speech: None of the queue lengths will stay beyond all limits for all > times but there is a probability > 0 for a queue to reach a finite > length at any time. > > A queueing system is stationary when its arrival rate doesn't > permanently exceed its service rate, this is actually nothing else than > the "self clocking mechanism" and the equilibrium VJ is talking about. > > >From RJ's papers I see a focus on the workload and the perfomance of > queueing systems. A possible performance metric is the quotient > p = average throughput / average sojourn time. > > If the workload is too little, operators will have idle times, the > system is not fully loaded. (=> sojourn time acceptable, throughput to > small.) > If the workload is too large, too much jobs are not being serviced but > reside in queues. (=> throughput fine, sojourn time too large.) > > >From Jain's work we conclude that a queueing system has an optimum > workload - which can be assessed by probing. > => Set a workload, assess the system's performance, adjust the workload. > > Van Jacobson will reach the equilibrium. > => Set a workload, if we see drops, the workload is too large. > > As a consequence, a system may stay perfectly in equilibrium state while > seeing buffer bloat in the sense of "a packet's queueing time is more > than a half of the packet's sojourne time. > > I don't know yet, perhaps someone can comment on this one, whether > buffer bloat will affect a system's performance. 
(My gut feeling is: > "Yes it will". Because the sojourn time grows inadequately large.) > > The other, more important, consequence is that probing for > "dropfreeness" of a system does not necessarily mean the same as > "probing for optimum performance". > > Detlef > > > > > > > > > > Am 20.05.2014 16:49, schrieb David P. Reed: > > I really appreciate the work being done to reconstruct the diverse set > > of implementations of the end to end TCP flow, congestion, and > > measurement specs. > > > > This work might be a new approach to creating a history of the > > Internet... meaning a new way to do what history of technology does best. > > > > I'd argue that one could award a PhD for that contribution when it > > reaches a stage of completion such that others can use it to study the > > past. As a work of historical impact it needs citation and commentary. > > Worth thinking about how to add citation and commentary to a > > simulation - something like knuth's literate programming but for > > protocol systems. > > > > Far better than a list of who did what when, or a set of battles. It's > > a contribution to history of the ideas... > > > > On May 20, 2014, Detlef Bosau wrote: > > > > Am 19.05.2014 17:02, schrieb Craig Partridge: > > > > Hi Detlef: I don't keep the 4.3bsd code around anymore, but > > here's my recollection of what the code did. 4.3BSD had one > > round-trip timeout (RTO) counter per TCP connection. > > > > > > That's the way I find it in the NS2. > > > > On round-trip timeout, send 1MSS of data starting at the > > lowest outstanding sequence number. > > > > > > Which is not yet GBN in its "pure" form, but actually it is, because > > CWND is increased with every new ack. 
And when you call "send_much" > when > > a new ack arrives (I had a glance at the BSD code myself some years > ago, > > the routines are named equally there, as far as I've seen, the ns2 > cod > > e > > and the BSD code are extremely similar) the behaviour resembles GBN > very > > much. > > > > Set the RTO counter to the next increment. Once an ack is > > received, update the sequence numbers and begin slow start > > again. What I don't remember is whether 4.3bsd kept track of > > multiple outstanding losses and fixed all of them before slow > > start or not. > > > > > > OMG. ;-) Who else should remember this, if not Van himself our you? > > > > However, first of all I have to thank for all the answers here. > > > > Detlef > > > > > > -- Sent from my Android device with *K-@ Mail > > >*. > > Please excuse my brevity. > > > -- > ------------------------------------------------------------------ > Detlef Bosau > Galileistra?e 30 > 70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 > detlef.bosau at web.de http://www.detlef-bosau.de > > -- Andrew McGregor | SRE | andrewmcgr at google.com | +61 4 1071 2221 From detlef.bosau at web.de Wed Jun 4 01:44:56 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 04 Jun 2014 10:44:56 +0200 Subject: [e2e] A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP In-Reply-To: References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de> Message-ID: <538EDC88.5010801@web.de> Am 04.06.2014 02:01, schrieb Andrew Mcgregor: > Bufferbloat definitely does impair performance, by slowing down > feedback it increases the variance of the system workload, which > inevitably causes either packet drops because there is a finite buffer > limit in reach, or by causing such large delays that retransmission > timers fire for packets that are still in flight. 
> In either case, the system is doing excess work.

I absolutely agree with that, and I did not say anything else.

It is, however, interesting that probing schemes such as VJCC simply
don't consider buffer bloat. On the contrary, they produce it, because
a path is "pumped up" with workload as long as no packets are
discarded. We try to alleviate the problem e.g. by ECN, where switches
indicate that their buffers grow extremely large, or by intentionally
discarding packets, e.g. CoDel, in order to have the senders slow down.
However, the basic algorithm in VJCC is chasing congestion - and leads
the flow into a nearly congested state again and again. Jain's
approaches, in contrast, attempt to achieve an optimum performance.

NB: I mentioned a performance metric throughput / sojourn time. AFAIK
this is neither mentioned in the Bible nor in the Quran or the Talmud,
and the Nobel Prize is still pending. I can well imagine that users
only assess one of these two parameters: e.g. in an FTP transfer, I'm
only interested in a high throughput; in an interactive ssh session,
I'm primarily interested in a small sojourn time.

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From detlef.bosau at web.de  Wed Jun  4 01:56:44 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 04 Jun 2014 10:56:44 +0200
Subject: [e2e] or in other words:
In-Reply-To: <538EDC88.5010801@web.de>
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de> <538EDC88.5010801@web.de>
Message-ID: <538EDF4C.7020100@web.de>

Reading the congavoid paper and the footnote regarding CUTE, one
_could_ think that VJCC and CUTE pursue the same purpose and that the
"equilibrium window" CWND and the optimum workload - "path capacity" or
"path space" in CUTE - are the same. No way.
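[Editor's note: the metric Detlef mentions, p = throughput / sojourn time, is essentially Kleinrock's "power". For an M/M/1 queue - an assumption made here purely for illustration - throughput is the arrival rate λ and mean sojourn time is 1/(μ − λ), so power is λ(μ − λ) and peaks at λ = μ/2, i.e. at 50% utilisation, well below the point where queues fill and drops begin. This is the gap between "probing for drop-freeness" and "probing for optimum performance":]

```python
def power(workload, service_rate):
    """Kleinrock 'power' for an M/M/1 queue: throughput / mean sojourn time.
    Throughput = workload (arrival rate lambda); mean sojourn time is
    1 / (mu - lambda), so power = lambda * (mu - lambda)."""
    if workload >= service_rate:
        return 0.0  # non-stationary: sojourn time unbounded
    sojourn = 1.0 / (service_rate - workload)
    return workload / sojourn

mu = 1.0
loads = [i / 100 for i in range(1, 100)]
# Probing the workload for maximum power, as in Jain's approach:
best = max(loads, key=lambda lam: power(lam, mu))
# The optimum sits near lambda = mu/2 -- far below the drop point lambda = mu.
```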
And NB: As queues (in queueing theory) are often considered unlimited,
a self-clocking TCP flow is necessarily in equilibrium state from its
very beginning. The challenge is to maximize its performance, hence to
find the right trade-off between idle times on the one hand and
abundant queue lengths on the other.

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From jon.crowcroft at cl.cam.ac.uk  Wed Jun  4 01:59:52 2014
From: jon.crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Wed, 4 Jun 2014 09:59:52 +0100
Subject: [e2e] A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP
In-Reply-To: <538DC307.90101@web.de>
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de>
Message-ID:

I don't think there's anything wrong here, but maybe a note on buffer
bloat is in order:-

alongside the feedback/AIMD and ack clocking mechanisms for tcp, there
was a long discussion on right-sizing buffers in the net - since AIMD
naively applied led to the sawtooth rate behaviour in TCP, a
back-of-envelope calculation led to the notion that the bottleneck had
to have a buffer to cope with the peak, which at worst case would be a
bandwidth*delay product worth of packets (basically 3/2 times the mean
rate), so that when 1 more packet was sent at that rate, one loss would
be incurred, triggering the MD part of AIMD once every ln(W) worth of
RTTs... [all this is academic in reality for lots of reasons, including
the various other triggers like dupacks and the fact that this case is
a corner one - since usually there are lots of flows multiplexed at the
bottleneck(s) and multiple bottlenecks, so the appropriate buffer size
could be way smaller - and of course, anyone running a virtual queue
and rate estimator (i.e.
AQM a la CoDel etc.) and especially doing ECN rather than loss-based
feedback can avoid all this ridiculous provisioning of packet memory
all over the net.

But alas, the rule of thumb for a corner case became dogma for a lot of
router vendors for way too long to get it disestablished....

And many of the bottlenecks today are near the edge, and more common
than not, probably in the interface between cellular data and backhaul,
where, as you say, the radio link may not exhibit any kind of
stationary capacity at all, etc. etc.

On Tue, Jun 3, 2014 at 1:43 PM, Detlef Bosau wrote:

>
> I presume that I'm allowed to forward some mail by DPR here to the list
> (if not, DPR may kill me...), however the original mail was sent to the
> Internet History list and therefore actually intended to reach the public.
>
> A quick summary at the beginning: Yes, TCP doesn't manage for sent packets
> a retransmission queue with copies of the sent packets but maintains an
> unacknowledged data queue and does GBN basically. This seems to be in
> contrast to RFC 793, but that's life.
>
> A much more important insight into the history of TCP is the "workload
> discussion" as conducted by Raj Jain and Van Jacobson.
> Unfortunately, both talk completely at cross purposes and have completely
> different goals......
>
> Having read the congavoid paper, I noticed that VJ refers to Jains CUTE
> algorithm in the context of how a flow shall reach equilibrium.
>
> Unfortunately, this doesn't really make sense, because slow start and CUTE
> pursue different goals.
>
> - Van Jacobson asks how a flow should reach equilibrium,
> - Raj Jain assumes a flow to be in equilibrium and asks which workload
> makes the flow work with an optimum performance.
>
> We often mix up "stationary" and "stable". To my understanding, for a
> queueing system "being stable" means "being stationary", i.e.
> the queueing system is positively recurrent, i.e., roughly, in human > speech: None of the queue lengths will stay beyond all limits for all times > but there is a probability > 0 for a queue to reach a finite length at any > time. > > A queueing system is stationary when its arrival rate doesn't permanently > exceed its service rate, this is actually nothing else than the "self > clocking mechanism" and the equilibrium VJ is talking about. > > From RJ's papers I see a focus on the workload and the perfomance of > queueing systems. A possible performance metric is the quotient > p = average throughput / average sojourn time. > > If the workload is too little, operators will have idle times, the system > is not fully loaded. (=> sojourn time acceptable, throughput to small.) > If the workload is too large, too much jobs are not being serviced but > reside in queues. (=> throughput fine, sojourn time too large.) > > From Jain's work we conclude that a queueing system has an optimum > workload - which can be assessed by probing. > => Set a workload, assess the system's performance, adjust the workload. > > Van Jacobson will reach the equilibrium. > => Set a workload, if we see drops, the workload is too large. > > As a consequence, a system may stay perfectly in equilibrium state while > seeing buffer bloat in the sense of "a packet's queueing time is more than > a half of the packet's sojourne time. > > I don't know yet, perhaps someone can comment on this one, whether buffer > bloat will affect a system's performance. (My gut feeling is: "Yes it > will". Because the sojourn time grows inadequately large.) > > The other, more important, consequence is that probing for "dropfreeness" > of a system does not necessarily mean the same as "probing for optimum > performance". > > Detlef > > > > > > > > > > Am 20.05.2014 16:49, schrieb David P. 
Reed: > > I really appreciate the work being done to reconstruct the diverse set of > implementations of the end to end TCP flow, congestion, and measurement > specs. > > This work might be a new approach to creating a history of the Internet... > meaning a new way to do what history of technology does best. > > I'd argue that one could award a PhD for that contribution when it reaches > a stage of completion such that others can use it to study the past. As a > work of historical impact it needs citation and commentary. Worth thinking > about how to add citation and commentary to a simulation - something like > knuth's literate programming but for protocol systems. > > Far better than a list of who did what when, or a set of battles. It's a > contribution to history of the ideas... > > On May 20, 2014, Detlef Bosau > wrote: >> >> Am 19.05.2014 17:02, schrieb Craig Partridge: >>> >>> Hi Detlef: >>> >>> I don't keep the 4.3bsd code around anymore, but here's my recollection >>> of what the code did. >>> >>> 4.3BSD had one round-trip timeout (RTO) counter per TCP connection. >> >> >> That's the way I find it in the NS2. >> >>> >>> On round-trip timeout, send 1MSS of data starting at the lowest outstanding >>> sequence number. >> >> >> Which is not yet GBN in its "pure" form, but actually it is, because >> CWND is increased with every new ack. And when you call "send_much" when >> a new ack arrives (I had a glance at the BSD code myself some years ago, >> the routines are named equally there, as far as I've seen, the ns2 cod >> e >> and the BSD code are extremely similar) the behaviour resembles GBN very >> much. >>> >>> Set the RTO counter to the next increment. >>> >>> Once an ack is received, update the sequence numbers and begin slow start >>> again. >>> >>> What I don't remember is whether 4.3bsd kept track of multiple outstanding >>> losses and fixed all of them before slow start or not. >> >> >> OMG. 
;-) Who else should remember this, if not Van himself or you?
>>
>> However, first of all I have to thank you for all the answers here.
>>
>> Detlef
>
> -- Sent from my Android device with *K-@ Mail*.
> Please excuse my brevity.
>
> --
> ------------------------------------------------------------------
> Detlef Bosau
> Galileistraße 30
> 70565 Stuttgart
> Tel.: +49 711 5208031
> mobile: +49 172 6819937
> skype: detlef.bosau
> ICQ: 566129673
> detlef.bosau at web.de
> http://www.detlef-bosau.de

From detlef.bosau at web.de  Wed Jun  4 04:58:03 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 04 Jun 2014 13:58:03 +0200
Subject: [e2e] A Cute Story. Or: How to talk completely at cross purposes. Re: [ih] When was Go Back N adopted by TCP
In-Reply-To:
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de>
Message-ID: <538F09CB.6020903@web.de>

Am 04.06.2014 10:59, schrieb Jon Crowcroft:
> I dont think there's anything wrong here,

"wrong" wouldn't be an adequate word. I think we have different goals
here.

> but maybe a note on buffer bloat is in order:-
>
> alongside the feedback/AIMD and ack clocking mechanisms for tcp, there
> was a long discussion on right sizing buffers in the net - since AIMD
> naively applied led to the sawtooth rate behaviour in TCP, a back of
> envelope calculation

^^^^^^^^^^^^^^^^^^^^ very appropriate for a physicist :-) Even Einstein
did so :-)

> led to the notion that the bottleneck had to have a buffer to cope
> with the peak, which at worst case would be bandwidth*delay product

And exactly this might be the problem: what is the "delay" then? The
more buffer space you introduce into the path, the greater the delay,
and hence the product, will be....
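[Editor's note: the circularity Detlef points at can be made concrete with a small computation under assumed example numbers (not from the thread): size the buffer at one bandwidth-delay product, and a full buffer itself adds that much delay again.]

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product in bytes for a given bottleneck rate and RTT."""
    return bandwidth_bps * rtt_s / 8

# Assumed example path: 100 Mbit/s bottleneck, 100 ms base RTT.
buf = bdp_bytes(100e6, 0.100)   # one BDP of buffer: 1.25 MB

# Detlef's objection: draining a full 1.25 MB buffer at 100 Mbit/s takes
# another 100 ms, so the "delay" in the sizing rule has just doubled.
added_delay = buf * 8 / 100e6   # seconds of extra queueing when full
```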
> worth of packets (basically 3/2 times the mean rate) so that when 1
> more packet was sent at that rate, one loss would be incurred
> triggering the MD part of AIMD once every ln(W) worth of RTTs... [all
> this is academic in reality for lots of reasons, including the various
> other triggers like dupacks and the fact that this case is a corner
> one - since usually there are lots of flows multiplexed at the
> bottleneck(s) and multiple bottlenecks, so the appropriate buffer size
> could be way smaller - and of course, anyone running a virtual queue
> and rate estimator (i.e. AQM a la CoDel etc.) and especially doing ECN
> rather than loss-based feedback can avoid all this ridiculous
> provisioning of packet memory all over the net

My concern is that I doubt that this calculation should be done for the
"whole path, end to end". And of course, you will perhaps provide
sufficient buffer that the links work at full load. Hence, the delay
will vary between propagation and serialization delays only (empty
queues) and the same plus queueing delays. Extremely rough.
what doesn't forbid to ask questions ;-) > > and many of the bottlenecks today are near the edge, and more common > than not, probably in the interface between cellular data and > backhaul, where, as you say, thee radio link may not exhibit any kind > of stationary capacity at all etc etc When I got the insight of non stationary radio links fourteen years ago, I certainly would have less grey hairs than today ;-) > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Wed Jun 4 07:11:50 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Wed, 04 Jun 2014 16:11:50 +0200 Subject: [e2e] Bufferbloat In-Reply-To: References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de> Message-ID: <538F2926.2030705@web.de> Jon, basically, the term Bufferbloat seems to me more like an agreement than a hard fact. In any queueing system we observe the sojourn time = service time + waiting time. And when I correctly remember DPR's clarification some months ago, the term Bufferbloat means nothing else than that a jobs queueing time exceeds the jobs service time. Unfortunately, from an end to end point perspective, neither of these can be assessed. We can assess the sojourn time, we can assess the throughput. We cannot (at least not in the general case) assess a jobs service time or a jobs waiting time. Hence, RJ's perfomance metric is a derived term which maps assessable metrics (sojourn time, throughput) to some number suitable as input for some control mechanism. (This is a generic problem in control theory, VJ refers to Luenberger in his congavoid paper, refer to the textbooks of Luenberger or Kalman here. Keywords: Luenberger Observer, Kalman Filter. 
The problem is how we assess variables which cannot be directly
observed.)

When we coin a term for the situation "waiting time exceeds service
time", this is an agreement - nothing like a "universal constant".
However, any approach to avoiding buffer bloat from an end-to-end
perspective is restricted to one control variable (the congestion
window) and to derived assessment variables (see above) as well. This
is a direct consequence of building a feedback control loop here.

Could it be worthwhile to discuss an open-loop, feed-forward approach
as an alternative?

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From johnh at isi.edu  Tue Jun 24 09:58:28 2014
From: johnh at isi.edu (John Heidemann)
Date: Tue, 24 Jun 2014 09:58:28 -0700
Subject: [e2e] CFP: HotNets 2014 (conference: Oct 27-28, 2014, abstracts: July 9, 2014)
Message-ID: <28269.1403629108@dash.isi.edu>

HotNets 2014: the Thirteenth ACM Workshop on Hot Topics in Networks
October 27-28, 2014 -- Los Angeles, California, USA
http://conferences.sigcomm.org/hotnets/2014/

Call for Papers

The 13th ACM Workshop on Hot Topics in Networks (HotNets 2014) will
bring together researchers in computer networks and systems to engage
in a lively debate on the theory and practice of networking. HotNets
provides a venue for debating future research agendas in networking and
for presenting innovative ideas that have the potential to
significantly influence the community.

We invite researchers and practitioners to submit short position
papers. In particular we are interested in papers that foster
discussions that can shape research agendas for the networking
community as a whole.
We also encourage submissions of early-stage work describing enticing
but unproven ideas. Submissions can, for example, advocate a new
approach, re-frame or debunk existing work, report unexpected early
results from a deployment, or propose new evaluation methodologies.
Novel ideas need not necessarily be supported by full evaluation;
well-reasoned arguments or preliminary evaluations can be used to
support their feasibility. Once fully developed and evaluated, we
expect the work to be published at conferences such as SIGCOMM, SOSP,
OSDI, SenSys, NSDI, MobiCom, MobiSys, PODC, CoNEXT, or INFOCOM. Short
papers on finished work will be a better fit with the short papers
track at CoNEXT.

HotNets takes a broad view of networking research. This includes new
ideas relating to (but not limited to) data center networks, home and
enterprise networks and wide area networks using a variety of link
media (wired, wireless, acoustic) as well as social networks and
network architecture. It encompasses all aspects of networks, including
(but not limited to) provisioning and resource management, economics
and evolution, robustness and security, topology, mobility,
interactions with applications, usability of underlying networking
technologies, energy, performance, measurement and diagnosis, and
hardware.

Position papers will be selected based on originality, likelihood of
spawning insightful discussion at the workshop, and technical merit.
Accepted papers will be posted online prior to the workshop and will be
published in the ACM Digital Library, thereby widely disseminating the
ideas discussed at the workshop.

Workshop Participation

HotNets attendance is limited to roughly 80 people to facilitate lively
discussion. Invitations will be allocated first to one author of each
paper, HotNets organizers and committee members, and conference
sponsors.
New this year, to promote a more inclusive workshop, HotNets will also
make a limited number of open registration slots available to the
community.

Submission Instructions

Submitted papers must be no longer than 6 pages (10 point font, 1 inch
margins) including all content except references. Authors can take up
to one extra page for references beyond the 6 pages. Please consider
using the sig-alternate-10pt.cls style file.

All submissions must be blind: submissions must not indicate the names
or affiliations of the authors in the paper. Only electronic
submissions in PDF will be accepted. Submissions must be written in
English, render without error using standard tools (e.g., Acrobat
Reader), and print on US-Letter sized paper. Papers must contain novel
ideas and must differ significantly in content from previously
published papers and papers under simultaneous submission.

Papers will be submitted at: http://crp.inet.tu-berlin.de/hotnets/.

Important Dates

Abstract registration: July 9, 2014 (11:59 PM GMT)
Paper submission: July 16, 2014 (11:59 PM GMT)
Notification of decision: September 12, 2014
Camera-ready submission: September 24, 2014
Workshop dates: October 27-28, 2014