[e2e] Are we doing sliding window in the Internet?

Vadim Antonov avg at kotovnik.com
Mon Jan 8 21:44:27 PST 2007


On Mon, 8 Jan 2007, Dave Andersen wrote:

> http://www.amazon.com/Emily-Posts-Etiquette-16th-Peggy/dp/0062700782

Oh, I'm never the first to use ad hominem, but I also won't let anyone
try that on me without getting a taste of their own medicine.
 
> Though in the case of TCP, it takes a certain amount of effort to cheat.
>  Absent an easy to use mechanism in a popular OS, most people aren't
> going to do it.

Cheating TCP is very simple - it is sufficient to open several parallel
TCP sessions. All software written specifically to download large files
does exactly that.
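
For illustration, a minimal sketch of the trick (Python; the URL, file
size, and session count are made up, and it assumes the server honors
HTTP Range requests):

    # Each worker below is a separate TCP flow, so this one client gets
    # N fair shares of the bottleneck link instead of one.
    import concurrent.futures
    import urllib.request

    URL = "http://example.com/big.iso"    # hypothetical
    SIZE = 800_000_000                    # assumed file size, bytes
    N = 8                                 # parallel TCP sessions

    def fetch(i):
        start = i * (SIZE // N)
        end = SIZE - 1 if i == N - 1 else start + SIZE // N - 1
        req = urllib.request.Request(
            URL, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    with concurrent.futures.ThreadPoolExecutor(max_workers=N) as pool:
        data = b"".join(pool.map(fetch, range(N)))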

> But if you're making an economic argument, you have to consider all of
> the costs.  There is a cost to enforcement in the network, in hardware
> and complexity.  There is a cost to billing by usage, both in actual
> costs and in customer satisfaction.

Actually, I wasn't talking about usage-based billing. Customers tend to
dislike it (people like to have predictable expenses), and they switch to
flat-rate plans whenever they can afford them.

What is really needed is fairness enforcement, not usage accounting. In a
fair network you pay for the ability to have guaranteed use of some
fraction of network capacity, plus a proportional share of the unused
capacity.  Ideally, the fee should be proportional to the guaranteed
fraction. It does not have to be ideal, just reasonably effective.
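
To make this concrete, here is a minimal sketch of that allocation rule
(Python; the customers, weights, demands, and link capacity are all
made up):

    # Weighted max-min fairness (progressive filling): each customer i
    # pays for a guaranteed weight w_i; capacity left unused by
    # customers demanding less than their share is redistributed in
    # proportion to the remaining customers' weights.
    def allocate(capacity, weights, demands):
        alloc = {i: 0.0 for i in weights}
        active = {i for i in weights if demands[i] > 0}
        remaining = capacity
        while active and remaining > 1e-9:
            total_w = sum(weights[i] for i in active)
            # customers whose whole demand fits within their share
            done = {i for i in active
                    if demands[i] <= remaining * weights[i] / total_w}
            if not done:
                # nobody saturates: split the rest by weight and stop
                for i in active:
                    alloc[i] = remaining * weights[i] / total_w
                break
            for i in done:          # give them exactly their demand
                alloc[i] = demands[i]
                remaining -= demands[i]
            active -= done
        return alloc

    # 100 Mbps link; "c" paid for twice "b"'s guaranteed fraction
    print(allocate(100.0, {"a": 1, "b": 1, "c": 2},
                   {"a": 10.0, "b": 80.0, "c": 80.0}))
    # -> {'a': 10.0, 'b': 30.0, 'c': 60.0}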
 
> There most likely exists a point at which the costs of enforcement or
> the costs of accounting are lower than the costs imposed by cheating
> users.  But in an environment where capacity is still increasing
> exponentially and where clueful network operators and programmers are
> not getting any cheaper, it's not clear to me when we'll reach that
> point.  

Mmm... demand is expanding faster than capacity. Right now the choke
point is the distribution networks, but that is slowly (in the US) being
fixed. Currently DSL providers in the US run something like 1:30
oversubscription, and P2P has the capacity to soak up all of that. In the
past year DSL service in major population centers has gotten noticeably
slower during peak times, and customer dissatisfaction will eventually
force ISPs to decrease the oversubscription.
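
Back of the envelope (the 3 Mbps line rate below is assumed):

    # What 1:30 oversubscription leaves per line when P2P keeps every
    # line busy at peak.
    line_rate_mbps = 3.0          # assumed DSL downlink
    oversubscription = 30
    share_kbps = line_rate_mbps / oversubscription * 1000
    print(f"share at full contention: {share_kbps:.0f} kbps")  # 100 kbps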

The backbone capacity has hard physical limits - getting smaller
dispersion in the fiber or reducing the size of WDM frequency bands can
only go so far; the remaining option (just put in more fibers) is
generally limited by what's already in the ground - with no prospect of
another dot-com-style financial insanity on the horizon.

Besides, "lay more fiber" is not exponential, it's linear in bandwidth to 
cost ratio.

> (Particularly when most of the voip people can usually be satisfied by
> simply doing prioritization at the edge.)

Yep. That's because right now the backbones are faster than the edge -
given the present duty cycle.  The duty cycle is changing from 2-3% to
20-30% as video over the Internet becomes popular.  This will shift (or
is already shifting) the bottleneck back to the backbones - to the place
where it was 10-15 years ago.
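
In numbers (the subscriber count and line rate are illustrative):

    # Rough effect of the duty-cycle shift on backbone load.
    subs, rate_mbps = 1_000_000, 3.0
    for duty in (0.025, 0.25):            # ~2-3% vs ~20-30%
        gbps = subs * rate_mbps * duty / 1000
        print(f"duty {duty:.1%}: offered load ~{gbps:,.0f} Gbps")
    # 10x the duty cycle is 10x the backbone traffic, with no change in
    # subscriber count or access speed.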
 
> It's very hard to quantify the costs of things like "complexity", "more
> code", and "users prefer flat-rate billing", but they do exist.

The funny part is that most routers can do FQ out of the box. Just
enabling it would reduce the misbehaving stack/application problem to the
point of insignificance.
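
For reference, the out-of-box FQ is typically a deficit round robin
approximation; a minimal sketch (Python; the flow IDs and quantum are
assumptions):

    # Deficit round robin: each active flow gets `quantum` bytes of
    # credit per round, so long-term throughput is equal per flow
    # regardless of how aggressively any one flow fills its queue.
    from collections import deque

    class DRR:
        def __init__(self, quantum=1500):
            self.quantum = quantum
            self.queues = {}     # flow_id -> deque of packets (bytes)
            self.deficit = {}    # flow_id -> unused byte credit

        def enqueue(self, flow_id, pkt):
            self.queues.setdefault(flow_id, deque()).append(pkt)
            self.deficit.setdefault(flow_id, 0)

        def round(self):
            # One DRR round: yield packets, one quantum per flow.
            for fid, q in self.queues.items():
                if not q:
                    self.deficit[fid] = 0   # idle flows bank no credit
                    continue
                self.deficit[fid] += self.quantum
                while q and len(q[0]) <= self.deficit[fid]:
                    pkt = q.popleft()
                    self.deficit[fid] -= len(pkt)
                    yield pkt

    sched = DRR()
    sched.enqueue("hog", b"x" * 1500); sched.enqueue("hog", b"x" * 1500)
    sched.enqueue("mouse", b"y" * 1500)
    print([p[:1] for p in sched.round()])   # one each: [b'x', b'y']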

A better design would track FQ weights on a per-prefix basis (and sum
them when routes are aggregated) to improve fairness at larger scales.
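
A rough sketch of that per-prefix bookkeeping (Python stdlib ipaddress;
the prefixes and weights are made up):

    # When component routes are collapsed into one covering prefix, the
    # aggregate carries the sum of the components' weights, so a large
    # aggregate still gets a proportionally large FQ share.
    import ipaddress

    weights = {
        ipaddress.ip_network("192.0.2.0/25"): 2.0,
        ipaddress.ip_network("192.0.2.128/25"): 3.0,
    }

    def aggregate(weights, covering):
        covering = ipaddress.ip_network(covering)
        inside = [p for p in weights if p.subnet_of(covering)]
        merged = dict(weights)
        agg_weight = sum(merged.pop(p) for p in inside)
        merged[covering] = merged.get(covering, 0.0) + agg_weight
        return merged

    print(aggregate(weights, "192.0.2.0/24"))
    # -> {IPv4Network('192.0.2.0/24'): 5.0}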

> The nice thing about today's environment is that the fiber is already in
> the ground.  Adding more capacity is doable by "only" upgrading the
> transcievers, adding more wavelengths, upgrading to faster multimillion
> dollar routers, etc. :)

Unfortunately, it is not that simple. You cannot pack information denser
than the Shannon limit for a given level of noise, and you cannot
increase S/N by pumping more power into the fibers without causing
non-linearities and things like Raman scattering.  So the way to expand
is to put more equipment in parallel and to reduce leg distances. That
means expensive things like building more amplifier stations in the
middle of nowhere and beefing up CO space, power, and cooling.  The
high-speed stuff runs hot, and the power budget quickly gets into the
megawatt range.
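
The Shannon bound C = B*log2(1 + S/N) makes the problem concrete -
capacity grows only logarithmically in signal power (the channel width
and SNR values below are illustrative):

    # Doubling launch power (+3 dB) buys less than one extra bit/s/Hz,
    # and in fiber you hit nonlinearities before that anyway.
    from math import log2

    B = 50e9                       # Hz, a typical WDM channel slot
    for snr_db in (10, 13, 20, 23):
        snr = 10 ** (snr_db / 10)
        print(f"SNR {snr_db:>2} dB -> {B * log2(1 + snr) / 1e9:6.1f} Gbps")
    # 10 dB -> 13 dB (double the power) adds under 1 bit/s/Hz.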
 
All the while, prices on residential access are coming down to a few
dozen dollars per Mbps of downlink capacity.  The market is not growing
very fast in financial terms, so it is either cost-cutting or going out
of business.

There's a huge disparity between the capacity of PCs to source/sink
traffic (modern desktop CPUs can easily run 200-300 Mbps of TCP traffic
with a suitable NIC) and the capacity of the network.  This creates,
well, an interesting situation - the potential demand is huge.
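
In one line of arithmetic (all numbers illustrative):

    # Even a modest city of broadband hosts could source far more than
    # a backbone trunk carries.
    hosts, host_mbps = 100_000, 250    # assumed per-host TCP capacity
    backbone_gbps = 40                 # an OC-768 class trunk
    potential_gbps = hosts * host_mbps / 1000
    print(f"potential demand {potential_gbps:,.0f} Gbps vs a "
          f"{backbone_gbps} Gbps trunk "
          f"({potential_gbps / backbone_gbps:,.0f}:1)")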

> I suspect we're saying the same thing from different perspectives, but
> have possibly different opinions about where we are on the cost curve.

Yep. But at least it is helpful to think about the economics rather than
wish that the world were perfect and everybody did the Right Thing. :)

--vadim


