[e2e] very simple IP QoS for the bottleneck access link ? (was Skype and congestion collapse.)

Marc Herbert marc.herbert at free.fr
Tue Apr 5 02:41:59 PDT 2005


On Fri, 4 Mar 2005, RJ Atkinson wrote:

> On Mar 4, 2005, at 17:47, Clark Gaylord wrote:
> > This is why we really do need some notion of QoS other than The Fat
> > Pipe.  It doesn't have to be as elaborate as RSVP-disciplined CAC, but
> > you need to be able to prioritize traffic that matters and limit the
> > amount of traffic that gets prioritized.  It doesn't have to be more
> > complex than that, but it has to do at least that.  [Ergo ... left as
> > an exercise to the reader.]
>
> I don't know that the "network" needs to have a more sophisticated
> notion of QoS than best effort.  It can sometimes be useful for the
> network device connected directly to a congested link (e.g. access
> link between a site and its upstream provider) to have some
> internal-to-the-box QoS configuration.
>
> It is not uncommon these days for the access router at the customer
> premise to have some ACL ruleset that prefers some traffic over
> other traffic or rate-limits certain kinds of traffic -- and
> equivalent configuration of the aggregation router on the ISP side
> of the same link is also not uncommon these days.

OK, so why not generalize, extend, standardize, promote and sell this
technique?  To the point of creating an extremely simple QoS API
allowing latency-sensitive applications (assumed to be CBR, as most
are) to register their traffic with both ends of the access link. This
API would just reliably replace ugly hacks like guessing at
"well-known" UDP ports, or tedious manual configuration.
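To make the idea concrete, such an API could be little more than a
registration table kept by each half of the access link (the CPE modem
on one side, the DSLAM or aggregation router on the other). The sketch
below is purely hypothetical: the class names, the (protocol, port)
flow identifier, and the two class labels are my assumptions, not any
existing standard.

```python
class PriorityTable:
    """Hypothetical two-class registration table, as kept by either
    half of the access link (CPE modem or DSLAM). Nothing here is a
    real protocol; it only illustrates how small the API could be."""

    def __init__(self):
        # Registered flow identifiers, e.g. {("udp", 27015)}.
        self._registered = set()

    def register(self, proto, dst_port):
        """Called (periodically, as soft state) by a latency-sensitive
        application to claim the higher class for its flow."""
        self._registered.add((proto, dst_port))

    def unregister(self, proto, dst_port):
        """Drop a flow back to best effort."""
        self._registered.discard((proto, dst_port))

    def classify(self, proto, dst_port):
        """Only two classes exist: 'priority' for registered flows,
        'elastic' (best effort) for everything else."""
        if (proto, dst_port) in self._registered:
            return "priority"
        return "elastic"
```

A legacy application that never calls `register()` simply stays in the
elastic class, which is exactly the incremental-deployment property
argued for below.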

Let's assume a network overprovisioned at the core, where the
bottleneck is the access link for a significant number of nodes
(_significant_, not even "majority"). This looks a lot like the
current Internet to me, and judging by current technology trends it
looks likely to stay that way for a long time. OK, maybe some
revolution in transmission or economics we can't envision today
will make the assumptions above wrong in the end. But in the end, we
are all dead anyway.

For nodes whose access link is not the bottleneck, this does not
apply, and they have to solve the latency issue by some other
means, assuming they want to solve it. That's all. Simple.

The implementation looks simple. The latency-sensitive application
regularly sends to both halves of the access link (up- and downstream)
some way to identify its packets (for instance: dst UDP port 27015
belongs to the higher class). The access link implements strict
priority for those latency-sensitive packets; elastic traffic takes
the rest. With only two traffic classes, this can be implemented
cheaply by a DSLAM and by a consumer device. No complex configuration.
For those customers who only have a poor USB DSL modem, this could be
implemented in the PC itself.
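The scheduler itself is about as simple as queueing gets: strict
priority over exactly two queues. A toy sketch (a model of the
behavior, not real forwarding code):

```python
from collections import deque

class TwoClassScheduler:
    """Strict priority over two queues: the latency-sensitive queue
    is always drained first; elastic traffic only gets what is left
    of the link."""

    def __init__(self):
        self.priority = deque()  # registered (e.g. CBR voice/game) packets
        self.elastic = deque()   # everything else, e.g. bulk TCP

    def enqueue(self, packet, is_priority):
        """Place a packet in one of the two classes."""
        (self.priority if is_priority else self.elastic).append(packet)

    def dequeue(self):
        """Next packet to put on the wire, or None if both queues
        are empty."""
        if self.priority:
            return self.priority.popleft()
        if self.elastic:
            return self.elastic.popleft()
        return None
```

Note that strict priority with no policing is exactly what makes the
"user registers too much traffic" failure mode below possible; the
bet is that this failure is local, visible, and self-correcting.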

Since it's local, it's scalable. There is no need to perform QoS at
lightning speed; the load is spread across numerous network ends, etc.

Since it's local, it's incremental. It's incremental in the sense that
you can deploy it for one customer and not another without any issue.
It's incremental in the sense that an ISP can start offering it
without caring about the other ISPs. It's incremental in the sense
that you can deploy it for some applications and not others _on the
same access link_: legacy applications just get the lower class. It's
incremental in the sense that you can deploy it first for the upstream
access link (the biggest issue today, because of the "A" in ADSL)
before the downstream link.

It's also incremental in the sense that you can make it peacefully
coexist with a more primitive and less reliable "guess the well-known
UDP ports" approach. And it's incremental in the sense that, once
deployment has started, applications will have a strong incentive to
move to this API.

What about the user registering too much traffic in the upper priority
class?  Well, it fails. No worse than today. Most Internet users
already know how to solve this congestion issue (which is observed
immediately): they shut some applications down. No computations, just
the simple try-and-fix approach known today, only better. The only
added complexity is the two classes. Since most elastic applications
report their current throughput, users would not have a hard time
understanding that shutting down an application that is left with zero
kb/s will not solve their congestion issue in this case.

Since it's local, I hardly see any security issue. Well, you can
imagine some rogue application running in your home and stealing
bandwidth, but then I would say you have a much bigger problem anyway.

From the point of view of the end-to-end argument, you can think of it
as the definition of "the end" having been extended to include the
access link. Is this too heretical? IMHO there have been much worse
deviations from The Argument in network history (firewalls, anyone?).

Do you think it could have any economic viability? I think that if
just one ISP and one CBR killer app (Skype, a game, whatever) were to
start packaging it, then it would sell. "No more lag thanks to our
brand new low-ping advanced technology. Now you can download and play
at the same time." You could even give power users an advanced access
link controller allowing them to prioritize most legacy applications,
widening the market potential by attracting all the geeks.

Any issues I missed?  There must be some. This looks too good to
be true :-) Thanks a lot in advance for your comments.



-- 
As simple as possible, but not simpler -- Albert Einstein

