[e2e] Question about propagation and queuing delays

vijay gill vgill at vijaygill.com
Mon Aug 22 19:45:14 PDT 2005

Fred Baker wrote:
> no, but there are different realities, and how one measures them is
> also relevant.
> In large fiber backbones, within the backbone we generally run 10:1
> overprovisioned or more. within those backbones, as you note, the
> discussion is moot. But not all traffic stays within the cores of large
> fiber backbones - much of it is originated and terminates in end
> systems located in homes and offices.

We don't run 10:1 or n:1 overprovisioning in the backbone, because we
simply do not know how. If I am provisioning a backbone interface, where
would I get the 10:1 figure from? I have worked at very large backbones
for most of my career, and in every case backbone bandwidth provisioning
was simply kicked off when certain paths reached a steady 50% or more
utilization. The saving factor is that large macroflows between places
are fairly tractable, so we can watch the link utilization and upgrade
as needed. (I speak of well-funded North American networks; if you're
running a country over a VSAT link and dialup modems, disregard this.)
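To make the trigger concrete, here is a minimal sketch of the rule
described above: a path is flagged for upgrade when its utilization sits
at or above 50% in most samples. The function name, the "most samples"
fraction, and the sample data are all invented for illustration, not
taken from any real network.

```python
# Hypothetical sketch of a steady-utilization upgrade trigger.
# Thresholds and traffic figures are illustrative assumptions.

def needs_upgrade(samples_bps, capacity_bps, threshold=0.50, min_busy_fraction=0.8):
    """Flag a link whose utilization is at/above threshold in most samples."""
    busy = sum(1 for s in samples_bps if s / capacity_bps >= threshold)
    return busy / len(samples_bps) >= min_busy_fraction

# A 10 Gb/s link sampled periodically (values in bits per second):
capacity = 10_000_000_000
steady_hot = [5.6e9, 5.4e9, 5.9e9, 5.2e9, 5.7e9]   # steadily above 50%
one_spike  = [1.1e9, 1.3e9, 6.0e9, 1.2e9, 1.0e9]   # one burst, otherwise quiet

print(needs_upgrade(steady_hot, capacity))  # True  -> kick off the upgrade
print(needs_upgrade(one_spike, capacity))   # False -> a spike is not "steady"
```

The point of the `min_busy_fraction` knob is exactly the distinction in
the text: a single sample over 50% is noise, a steady run of them is a
provisioning signal.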

> The networks that connect homes and offices to the backbones are often
> constrained differently. For example, my home (in an affluent community
> in California) is connected by Cable Modem, and the service that I buy
> (business service that in its AUP accepts a VPN, unlike the same
> company's residential service) guarantees a certain amount of
> bandwidth, and constrains me to that bandwidth - measured in KBPS.

Here is where overprovisioning is common. Most cable plants allocate 20
or 25 kbps per paying subscriber for capacity-planning purposes and
build the physical plant to support that.
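The arithmetic behind that rule is simple; a back-of-the-envelope
version, with a made-up subscriber count (the 25 kbps figure is from the
text, everything else is a hypothetical example):

```python
# Plant capacity from a per-subscriber allocation. The subscriber count
# is invented for illustration; 25 kbps is the per-sub figure quoted above.

KBPS_PER_SUB = 25          # the higher of the two allocations mentioned
subs = 2_000               # hypothetical paying subscribers on one segment

required_kbps = subs * KBPS_PER_SUB
print(f"{required_kbps / 1_000:.0f} Mbps of plant capacity")  # 50 Mbps
```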

> in MBPS. And, they tell me that the entire world is not connected by
> large fiber cores - as soon as you step out of the affluent
> industrialized countries, VSAT, 64 KBPS links, and even 9.6 access over
> GSM become the access paths available.

> As to measurement, note that we generally measure that overprovisioning
> by running MRTG and sampling throughput rates every 300 seconds. When
> you're discussing general service levels for an ISP, that is probably
> reasonable. When you're measuring time variations on the order of
> milliseconds, that's a little like running a bump counter cable across
> a busy intersection in your favorite downtown, reading the counter once
> a day, and drawing inferences about the behavior of traffic during
> light changes during rush hour...

Which is why I've been pushing my vendors to implement high-watermark
counters that record the maximum queue depth reached. The EWMA counters
used in most routers might as well be random numbers for detecting
microburst-caused congestion. They are, however, perfectly valid for
capacity planning of large city-pair flows.
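A toy illustration of why both a long-interval average and an EWMA'd
counter hide the microburst that a high-watermark counter catches. The
queue-depth trace, the one-second sampling, and the alpha value are all
invented for illustration:

```python
# Compare three views of the same queue over a 5-minute (300-sample)
# window: the high watermark, the window average, and a per-sample EWMA.
# Trace and smoothing constant are illustrative assumptions.

def ewma(depths, alpha=0.1):
    # Exponentially weighted moving average, updated once per sample.
    avg = 0.0
    for d in depths:
        avg = alpha * d + (1 - alpha) * avg
    return avg

# Queue idles at ~2 packets, with one two-second microburst mid-window:
trace = [2] * 300
trace[150], trace[151] = 500, 480

print(max(trace))                         # 500 -- watermark: burst visible
print(round(sum(trace) / len(trace), 1))  # 5.3 -- window average: burst gone
print(round(ewma(trace), 1))              # 2.0 -- EWMA at window end: burst gone
```

The average and the EWMA both look like a healthy, nearly idle queue,
while the watermark shows a buffer that nearly hit 500 packets - which
is the congestion event you actually wanted to know about.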

> http://www.ieee-infocom.org/2004/Papers/37_4.PDF has an interesting
> data point. They used a much better measurement methodology, and one of
> the large networks gave them some pretty cool access in order to make
> those tests. Basically, queuing delays within that particular
> very-well-engineered large fiber core were on the order of 1 ms or less
> during the study, with very high confidence. But the same data flows
> frequently jumped into the 10 ms range even within the 90% confidence
> interval, and a few times jumped to 100 ms or so. The jumps to high
> delays would most likely relate to correlated high volume data flows, I
> suspect, either due to route changes or simple high traffic volume.

That burstiness occurs more frequently if your customers are connected
at links of the same bandwidth as the core. Lots of DS3/T1/E3-type
customers are not going to cause significant microburst issues on a
10-gig backbone.
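The rough arithmetic behind that claim: even a full-rate burst from one
of these access circuits is a tiny fraction of a 10 Gb/s core link. The
link speeds below are the nominal rates for those circuit types; the
comparison itself is just a sketch of the reasoning above.

```python
# Fraction of a 10 Gb/s core link that a single full-rate access
# circuit can occupy. Nominal circuit rates in bits per second.

CORE_BPS = 10_000_000_000   # 10 Gb/s backbone link
access = {"T1": 1_544_000, "E3": 34_368_000, "DS3": 44_736_000}

for name, bps in access.items():
    print(f"{name}: {bps / CORE_BPS:.4%} of the core link")
```

Even a DS3 bursting at line rate adds well under half a percent of
instantaneous load to the core link, so it takes a large correlated
population of such customers to produce a visible microburst there.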

> The people on NANOG and the people in the NRENs live in a certain ivory
> tower, and have little patience with those who don't. They also measure
> the world in a certain way that is easy for them.

No comment.


More information about the end2end-interest mailing list