[e2e] Why do we need congestion control?
Fred Baker (fred)
fred at cisco.com
Wed Apr 10 07:48:28 PDT 2013
I find this whole discussion a little amazing.
Let me step aside from math and esoteric arguments. Let me attach a simple test scenario, one that any of us could do at any time. I happen to be in a hotel; it's the Fairmont San Jose in San Jose California, but I don't know that the name of the hotel matters. I am located a few miles from Cisco's main campus, and am connected over a combination of the hotel wireless and one of the local internet providers to that campus using a VPN.
I ran a ping (on Linux, it's "ping -s") from my hotel room to a computer "at the plant" for 12 hours overnight. I ran that through a simple script that summarizes, per minute, the minimum, maximum, median, arithmetic mean, and standard deviation of the samples in the minute, as well as the loss rate. The ping stands in for any packet; if the ping would see a given RTT at a given time, any packet exchange would see that RTT AFAIK. The data is dumped into a CSV file, which I can pull into my favorite spreadsheet application.
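[Editorial note: a per-minute summarizer like the one described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not Fred's actual script; the regex assumes Linux-style reply lines such as "64 bytes from 10.0.0.1: icmp_seq=12 ttl=57 time=23.4 ms", and it assumes one probe per second.]

```python
import re
import statistics
from collections import defaultdict

# Matches the sequence number and RTT of a Linux ping reply line, e.g.
# "64 bytes from 10.0.0.1: icmp_seq=12 ttl=57 time=23.4 ms"
RTT_RE = re.compile(r"icmp_seq=(\d+).*time=([\d.]+) ms")

def summarize(lines, probes_per_minute=60):
    """Bin RTT samples into one-minute groups (by sequence number,
    assuming one probe per second) and summarize each bin as
    (minute, min, max, median, mean, stddev, loss_rate)."""
    bins = defaultdict(list)
    for line in lines:
        m = RTT_RE.search(line)
        if m:
            seq, rtt = int(m.group(1)), float(m.group(2))
            bins[seq // probes_per_minute].append(rtt)
    rows = []
    for minute in sorted(bins):
        s = bins[minute]
        loss = 1.0 - len(s) / probes_per_minute  # missing probes = losses
        rows.append((minute, min(s), max(s), statistics.median(s),
                     statistics.mean(s), statistics.pstdev(s), loss))
    return rows

# Toy usage with three synthetic reply lines (one minute, 57 lost probes):
sample = ["64 bytes from h: icmp_seq=%d ttl=57 time=%s ms" % (i, t)
          for i, t in [(0, "20.0"), (1, "30.0"), (2, "400.0")]]
for row in summarize(sample):
    print(row)
```

Each output row can then be written out as a CSV line for a spreadsheet, as described above.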
You can look at a pdf of the median and minimum values overnight at
The other files are in the same directory including the raw ping output.
One sample is at best anecdotal evidence, and I present it as nothing stronger. However, at the point where the median RTT in a given minute is on the order of seconds, and this is not an isolated event but happens from time to time throughout the test period, something's not right. Note that this is in a network *with* what we call congestion control and competing primarily with TCP sessions that use congestion control. Samples like this one, which are trivial to obtain, are the reason for congestion control in TCP and for AQM in the network.
What I think folks are generally looking for is good throughput (if I want to move something from here to there, move it as fast as the available connectivity will permit), reasonable competition (if one neighbor is playing with BitTorrent and another is watching a movie on his IPTV, I'd like to be able to do the same and have a satisfactory experience), and reasonable responsiveness (my median RTT should approximate my minimum RTT to a given site within some tolerance). BTW, documents such as RFC 6057 suggest that service providers have pretty similar goals - they want their help-line phones not to ring, they want their customers to recommend their services to other potential customers, and they will take whatever steps they deem necessary to make that true.
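[Editorial note: the "median tracks minimum" responsiveness criterion above is easy to check mechanically. A minimal sketch follows; the 25% tolerance is an illustrative assumption, not a number from the thread. The excess of median over minimum is an estimate of standing queueing delay.]

```python
import statistics

def responsive(rtts_ms, tolerance=0.25):
    """True if the median RTT is within `tolerance` (as a fraction of
    the minimum RTT) of the minimum; median - min approximates the
    standing queueing delay on the path."""
    lo = min(rtts_ms)
    med = statistics.median(rtts_ms)
    return (med - lo) <= tolerance * lo

print(responsive([20, 21, 22, 23, 24]))        # median close to minimum
print(responsive([20, 800, 1200, 1500, 2000])) # bufferbloat-like minute
```

Applied per minute to the overnight trace, this flags exactly the minutes where the median RTT is on the order of seconds while the minimum stays in the tens of milliseconds.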
That doesn't call for TDM purity, and it doesn't call for discussions of angels and heads of pins. It does call for reasonable behavior both by the ISP and by the end user equipment attached to it.
On Apr 10, 2013, at 6:14 AM, Detlef Bosau <detlef.bosau at web.de> wrote:
> Am 06.03.2013 19:19, schrieb Richard G. Clegg:
>> On 06/03/13 15:02, Jon Crowcroft wrote:
>>> ok - i see your point - this is true if your sources have a peak rate they can send at
>>> this could be the line rate of their uplink -
>>> that would be embarrassingly bad
>>> (see keshav's followup on escalating costs of coding)
>>> or the rate they can get data off disk (which could be as bad, but might be lower)
>>> or an application specific rate (e.g. streamed video) for which your suggestion is
>>> quite reasonable...
>> Apologies if this has been mentioned already -- the "blast it out at full whack and code against loss" strategy is explored in
>> "Is the 'Law of the Jungle' Sustainable for the Internet?" from INFOCOM 2009.
>> Nice maths in that paper actually -- I was lucky enough to see them present it. The conclusions are interesting.
> I'm generally wary of "nice maths" in this discussion - quite a lot of the "mathwork" is an impressive envelope with hardly a letter inside.
> I think we should reconsider our goals here, in order not to give another confirmation to the well-known quote: "Having lost sight of our goals, we redoubled our efforts."
> In a PM, Matt Mathis mentioned that we should be particularly careful when applying maths from statistics and stochastics where hardly any stochastic behaviour is actually present. Matt pointed out that in many TCP scenarios the behaviour of the net is mainly deterministic - and hence some of our statistical apparatus, e.g. Little's Law, simply does not apply. And I fear the same holds true for erasure codes and statistical rationales for "fair goodput reduction".
> Let me sketch a very simple scenario here.
> Think of four nodes, e.g. PCs, attached to a simple coax Ethernet.
> PC 1    PC 2    PC 3    PC 4
>   |       |       |       |
> --+-------+-------+-------+--
> Think of two bidirectional TCP flows here.
> One between PC 1 and PC 3, the other between PC 2 and PC 4.
> And now allow me to ask some questions.
> Q1: What do we want to achieve at all in this scenario? With particular respect to the categories
> - goodput
> - throughput
> - congestion avoidance
> - fairness?
> Q2: Do we need VJCC in this scenario?
> Q3: Which of the goals stated in response to Q1 are achieved?
> Q4: For the goals that are achieved (refer to Q3), what is the particular contribution of VJCC?
> This scenario is quite easy - and when I answered these questions for myself, it became clear that I simply asked the wrong question when I asked how many flows would fit into the Internet.
> To some degree, VJCC is a stroke of genius - while being a kludge at the same time.
> So, I would like to ask: What are our goals? What do we achieve? Did we achieve our goals? And is what we achieve identical to our intentions in the first place?
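[Editorial note: one way to make Q2-Q4 above concrete is the classic Chiu-Jain view of VJCC (Van Jacobson congestion control): two AIMD flows sharing a fixed-capacity link converge toward an equal split regardless of their starting rates, because additive increase preserves the difference between the rates while multiplicative decrease halves it. The toy model below is an editorial sketch; the capacity, initial rates, and gain constants are arbitrary assumptions.]

```python
def aimd(capacity=100.0, x1=90.0, x2=5.0, alpha=1.0, beta=0.5, steps=200):
    """Two AIMD flows sharing one link: additive increase by `alpha`
    per step while the link is uncongested, multiplicative decrease
    by `beta` whenever the combined rate exceeds capacity (loss)."""
    for _ in range(steps):
        if x1 + x2 > capacity:       # congestion signal: both back off
            x1, x2 = x1 * beta, x2 * beta
        else:                        # no congestion: both probe upward
            x1, x2 = x1 + alpha, x2 + alpha
    return x1, x2

x1, x2 = aimd()
print(round(x1, 1), round(x2, 1))    # rates end up nearly equal
```

Despite starting at 90 and 5, the two rates converge: each congestion event halves the gap between them, so the split approaches fairness while the offered load oscillates around capacity. Remove the multiplicative-decrease branch and the initial imbalance persists indefinitely, which is one answer to what VJCC contributes even in the simple four-PC scenario.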
> Detlef Bosau
> Galileistraße 30
> 70565 Stuttgart Tel.: +49 711 5208031
> mobile: +49 172 6819937
> skype: detlef.bosau
> ICQ: 566129673
> detlef.bosau at web.de http://www.detlef-bosau.de