[e2e] Are we doing sliding window in the Internet?
dga+ at cs.cmu.edu
Tue Jan 9 10:55:40 PST 2007
[Wow, sorry, this is long. And I think I'm on e2e with the wrong
address, so this might get held up for moderation.]
On Jan 9, 2007, at 12:44 AM, Vadim Antonov wrote:
> On Mon, 8 Jan 2007, Dave Andersen wrote:
>> Though in the case of TCP, it takes a certain amount of effort to
>> cheat. Absent an easy-to-use mechanism in a popular OS, most people
>> aren't going to do it.
> Cheating TCP is very simple - it is sufficient to open several TCP
> sessions. All software written specifically to download large files
> does exactly that.
- If everyone does it, is it cheating?
- If it's only a small constant factor, is it cheating? (It's
certainly still TCP-friendly, though TCP-fair is a more stringent
criterion.)
- If instead of running p2p software, I just download 10 programs in
parallel instead of 1 program over ten parallel streams, is it cheating?
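The "small constant factor" point can be put in numbers with an idealized model (an assumption, not a simulation: it supposes the bottleneck splits evenly per flow, which real TCP only approximates):

```python
# Idealized per-flow fair sharing: each flow gets an equal slice of the
# bottleneck, so a user who opens n parallel connections against m
# competing single flows captures n / (n + m) of the link.  The 100 Mbps
# link and 9 competitors are made-up illustrative numbers.

def share_mbps(my_flows, other_flows, link_mbps=100.0):
    """Bandwidth a user captures with `my_flows` parallel connections
    when `other_flows` competing flows share the same bottleneck."""
    return link_mbps * my_flows / (my_flows + other_flows)

for n in (1, 2, 10):
    print(f"{n:2d} flow(s) vs. 9 single-flow users: "
          f"{share_mbps(n, 9):5.1f} of 100 Mbps")
```

Ten flows capture about 53 Mbps instead of 10 Mbps - a constant factor, while each individual flow remains perfectly TCP-friendly.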
I ask not to poke at your argument, but more to expose a fuzziness in
the very definition of end-to-end fairness. The meaning of "flow A
and flow B interact fairly" is well-defined. The meaning of
"application A and application B interact fairly" is less clear. By
the time you get to the host level, it's out the window (what if the
host is a proxy server for 1000s of clients?).
Combine this with the difficulty of determining the direction of
value flow for Internet packets, and I think you've got an incredibly
difficult problem. It may well be that what we have today is the
best solution: An over-engineered core and endpoints that are
limited by the capacity of the access link they purchase and by the
limited demand that most people have.*
(* -- A recent thread on Nanog is interesting in this regard: The
Skype people are starting a real-time TV/video/whatever p2p streaming
service. If it becomes as popular as they hope / as Skype has, it's
quite possible that the demand will go through the roof. I don't
pretend to know if they're right, of course.)
>> But if you're making an economic argument, you have to consider
>> all of the costs. There is a cost to enforcement in the network, in
>> both resources and complexity. There is a cost to billing by usage,
>> both in actual costs and in customer satisfaction.
> Actually, I didn't talk of usage-based billing. Customers tend to
> dislike it (people like to have predictable expenses), and switch to
> flat-rate plans whenever they can afford them.
I know. I was giving that as an example of a cost of enforcement.
There's also a cost to doing fairness in the network.
>> There most likely exists a point at which the costs of enforcement or
>> the costs of accounting are lower than the costs imposed by cheating
>> users. But in an environment where capacity is still increasing
>> exponentially and where clueful network operators and programmers are
>> not getting any cheaper, it's not clear to me when we'll reach that
>> point.
> Mmm... demand is expanding faster than capacity. Right now the choke
> point is distribution networks, but that is slowly (in the US) being
> fixed. Currently DSL providers in the US have something like 1:30
> oversubscription, and P2P has the capacity to soak up all of that. In
> the past year the DSL service in major population centers got
> noticeably slower during peak times, and the customer dissatisfaction
> will eventually force ISPs to decrease the oversubscription.
Eh, there are still multiple factors involved. We're still in a
phase where DSL is gaining because it's replacing the still-large
dial-up base.
(The Pew Internet & American Life project claims that from March 2005
to March 2006, 75% of broadband growth came from "current users
switching from dial-up to broadband." The total # of homes with
broadband access grew by 40% during this period.)
The portion of the growth that is fueled by an increasing # of
customers or by customers moving to more expensive service pays for
itself. (Or better...)
That leaves the other portion - demand growth - that has to be
balanced with the growth in capacity per dollar. I suspect you're
right that capacity per dollar is growing more slowly than total
unfunded demand, but it's not quite as bad as it sounds.
>> It's very hard to quantify the costs of things like "complexity",
>> "code", and "users prefer flat-rate billing", but they do exist.
> The funny part is that most routers can do FQ out of the box. Just
> enabling that will reduce the misbehaving stack/application problem to
> the point of insignificance.
> A better design would track FQ weights on a per-prefix basis (and sum
> them when routes are aggregated) to improve fairness on larger scales.
A certain vendor's routers can do *everything* out of the box. But
they don't necessarily do everything well, or stably, or at full
line-speed, or in a way that a network operator is comfortable with
or can get to behave properly. Consider RED.
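For the curious, the per-flow FQ in question is commonly approximated with Deficit Round Robin; the class below is my own illustrative sketch (all names invented), not any vendor's implementation:

```python
from collections import deque

# Illustrative Deficit Round Robin sketch: each backlogged flow earns a
# fixed quantum of byte credit per scheduling round and may send packets
# as long as credit remains, so long-term throughput is split evenly
# among flows regardless of how aggressively each one transmits.

class DRRScheduler:
    def __init__(self, quantum=1500):
        self.quantum = quantum   # bytes of credit granted per round
        self.queues = {}         # flow id -> deque of packet sizes (bytes)
        self.deficit = {}        # flow id -> unspent byte credit

    def enqueue(self, flow, size):
        self.queues.setdefault(flow, deque()).append(size)
        self.deficit.setdefault(flow, 0)

    def dequeue_round(self):
        """One round: each backlogged flow may send up to quantum plus
        any leftover deficit, then the scheduler moves on."""
        sent = []
        for flow, q in list(self.queues.items()):
            if not q:
                continue
            self.deficit[flow] += self.quantum
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                sent.append((flow, size))
            if not q:
                self.deficit[flow] = 0   # idle flows don't bank credit
        return sent
```

Note that per-flow DRR still hands n quanta per round to a host that opens n flows - which is exactly why tracking weights per prefix, as suggested above, amounts to keying the queues on destination prefix rather than flow ID.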
>> The nice thing about today's environment is that the fiber is
>> already in the ground. Adding more capacity is doable by "only"
>> upgrading the transceivers, adding more wavelengths, upgrading to
>> faster-per-dollar routers, etc. :)
> Unfortunately, it is not that simple. You cannot pack information more
> densely than the Shannon limit for a given level of noise, and you
> cannot increase S/N by pumping more power into fibers without causing
> non-linear effects like Raman scattering. So the way to expand is to
> put more equipment in parallel and reduce leg distances. That means
> expensive things like building more amplifier stations in the middle
> of nowhere, and beefing up CO space, power, and cooling. The
> high-speed stuff is hot, and the power budget quickly gets into the
> megawatt range.
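The limit in question is Shannon's C = B * log2(1 + S/N). A toy calculation (the 10 THz of usable bandwidth and 20 dB SNR are round illustrative assumptions, not properties of any real fiber):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon limit on error-free channel rate: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative assumptions only: ~10 THz of usable amplifier bandwidth
# and a 20 dB signal-to-noise ratio.
bandwidth_hz = 10e12
snr_linear = 10 ** (20 / 10)   # 20 dB -> linear factor of 100

print(f"{shannon_capacity_bps(bandwidth_hz, snr_linear) / 1e12:.0f} Tbit/s")
```

That lands in the same tens-of-Tbit/sec ballpark as the estimates below, and it also shows why pumping power pays off so slowly: S/N sits inside the logarithm, while parallel bandwidth enters linearly.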
1) There's still remaining capacity in existing fibers:
I'm not an expert in this area, but getting somewhere around 150
Tbit/sec out of a fiber (aggregate, WDM) should be doable assuming the
technology keeps up. (Grossly simplified and underestimated from a
quick read of
http://www.nature.com/nature/journal/v411/n6841/full/4111027a0.html
and a few other papers. A very conservative read might put the lower
bound in the 45 Tbit/sec range.)
That's about an order of magnitude more than the best research
demonstration to date:
http://www.ntt.co.jp/news/news06e/0609/060929a.html (Sept 2006)
which is itself about 15x better than what's in use today.
2) A large chunk of the cost of laying fiber is the cost of
physically installing it. Hence, dark fiber. (Wikipedia claims
without attribution that the physical process "accounts for more than
60% of the cost of developing fiber networks." I don't know the
truth of this.) Yes, we've been increasing capacity by using up the
dark fiber that was put in during the dot-com craze, but there's
still some left.
> There's a huge disparity between the capacity of PCs to source/sink
> traffic (modern desktop CPUs can easily run 200-300 Mbps of TCP
> traffic with a suitable NIC) and the capacity of the network. This
> creates, well, an interesting situation - the demand is potentially
> huge.
Sure. But there's also a big difference between how much people
_can_ source/sink and how much they _want_ to.
I, like probably 98% of this list, have a *lot* of capacity at my
fingertips, and I (ab)use it to the fullest. Over the last day, when
I was using the network _a lot_ (streaming mp3s constantly and one
movie from my remote storage server), I used about 200 Kbit/sec on
average. That's a far cry from even my access link capacity, much
less my NIC's capacity.
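A back-of-the-envelope check on those numbers (the 200 Kbit/sec average is from the paragraph above; the 6 Mbit/s access link and 1 Gbit/s NIC are assumed round figures for comparison):

```python
# 200 Kbit/s sustained over a day, versus assumed access-link and NIC
# capacities.  Only the 200 Kbit/s figure comes from the text; the rest
# are illustrative round numbers.
avg_bps = 200e3
seconds_per_day = 24 * 3600

gb_per_day = avg_bps * seconds_per_day / 8 / 1e9
print(f"{gb_per_day:.2f} GB transferred per day")
print(f"{6e6 / avg_bps:.0f}x below a 6 Mbit/s access link")
print(f"{1e9 / avg_bps:.0f}x below a 1 Gbit/s NIC")
```

Even a heavy streaming day works out to roughly 2 GB, orders of magnitude under what the hardware could source or sink.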
>> I suspect we're saying the same thing from different perspectives, and
>> have possibly different opinions about where we are on the cost curve.
> Yep. But at least it is helpful to think about economics rather than
> go wishing that the world was perfect and everybody did the Right
> Thing :)
Of course. :) I just think that the economics are a bit more subtle
than just saying "people can cheat, so they will."