[e2e] Are we doing sliding window in the Internet?
dga+e2e at cs.cmu.edu
Mon Jan 8 21:35:11 PST 2007
Vadim Antonov wrote:
> On Mon, 8 Jan 2007, Detlef Bosau wrote:
>> First of all, you most probably want to care for a good text book on
>> networking because what you write on this topic simply makes my hair
>> stand on end.
And in return, might I kindly suggest:
> It is not neutral from the point of view of an economist - having shared
> resource with no admission control creates the tragedy of commons. Meaning
> that it creates incentives for people to cheat and overexploit the shared
> resource, until it becomes useless (this, incidentally, is the problem
> with socialism in general).
Though in the case of TCP, it takes a certain amount of effort to cheat.
Absent an easy-to-use mechanism in a popular OS, most people aren't
going to do it. If you will, there's a certain cost to cheating (be
that the cost of tweaking your stack, writing a new protocol, or
installing some "accelerator" program that does it for you).
> Therefore the appeal to developers to be conscientious in the way they
> design network stacks and applications is not going to work. On the other
> hand, long-haul ISPs have pretty good reason to protect the value of their
> resources - i.e. the networks. So far, they do not perceive
> overexploitation as a problem. That will change as end-users en masse
> start to exchange huge video files - and, consequently, are starting to
> use software which does cheat - it does make a lot of difference for them.
> Any P2P software which opens multiple TCP sessions for simultaneous
> download essentially overrides the rough fairness of the cooperative
> congestion control.
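The per-flow (rather than per-user) nature of that rough fairness is easy to
put numbers on. A toy sketch, assuming the bottleneck link is split equally
among all competing TCP flows (function and figures are illustrative, not from
any real measurement):

```python
# Toy model: TCP fairness is roughly per-flow, not per-user, so a user
# who opens k parallel connections gets ~k flow-shares of the bottleneck.

def user_share(bottleneck_mbps, my_flows, other_flows):
    """Bandwidth a user gets with `my_flows` parallel connections,
    competing against `other_flows` single-flow users, assuming the
    bottleneck divides capacity equally among flows."""
    return bottleneck_mbps * my_flows / (my_flows + other_flows)

# 100 Mbps bottleneck, 9 well-behaved single-flow users:
print(user_share(100, 1, 9))   # one flow: 10 Mbps (fair share)
print(user_share(100, 10, 9))  # ten flows: ~52.6 Mbps
```

One P2P user running ten connections goes from a tenth of the link to over
half of it, without violating TCP's congestion control on any single flow.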
> The end-point based congestion control and fairness enforcement, while
> quite widely deployed, were a bad architectural decision - economically.
> People who made that decision didn't pay much attention to the economics -
> they were doing research, not doing business. (To their credit back then
But if you're making an economic argument, you have to consider all of
the costs. There is a cost to enforcement in the network, in hardware
and complexity. There is a cost to billing by usage, both in actual
costs and in customer satisfaction.
There most likely exists a point at which the costs of enforcement or
the costs of accounting are lower than the costs imposed by cheating
users. But in an environment where capacity is still increasing
exponentially and where clueful network operators and programmers are
not getting any cheaper, it's not clear to me when we'll reach that
point. Some people may argue we already have; I don't think that we're
there _for the majority of uses_. It may well be that there are
applications that want to pay more for better service today (voip,
remote open heart surgery), but it's not clear yet that the economic
benefit to ISPs for satisfying that class of apps is worth the costs.
(Particularly when most VoIP users can be satisfied by simply doing
prioritization at the edge.)
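For what it's worth, that kind of edge prioritization is cheap to do today.
A sketch with Linux tc, assuming VoIP packets already carry the standard EF
DSCP marking (46, i.e. TOS byte 0xb8) and using "eth0" as a placeholder
uplink interface:

```shell
# Sketch only: three-band strict-priority scheduler on the edge uplink.
# "eth0" is a placeholder; substitute the actual interface.
tc qdisc add dev eth0 root handle 1: prio bands 3

# Steer packets whose TOS byte is 0xb8 (DSCP EF, typical for VoIP)
# into the highest-priority band 1:1; everything else uses the defaults.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip tos 0xb8 0xff flowid 1:1
```

No per-flow accounting, no billing changes; the voice packets just jump
the queue at the one place the access link is actually congested.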
It's very hard to quantify the costs of things like "complexity", "more
code", and "users prefer flat-rate billing", but they do exist.
> Well, the reality is starting to catch up - the name of the game in the
> ISP business is no longer "grab as much ground as you can and damn the
> cost" but, rather, "drive the costs down". The profit margins are getting
> slim, and the packet transport is no longer novelty, but simply another
> commodity. It is no longer feasible just to throw bandwidth at the
> problem; there's not going to be another mad rush to lay fiber anytime soon.
The nice thing about today's environment is that the fiber is already in
the ground. Adding more capacity is doable by "only" upgrading the
transceivers, adding more wavelengths, upgrading to faster multimillion
dollar routers, etc. :)
I suspect we're saying the same thing from different perspectives, but
have possibly different opinions about where we are on the cost curve.