[e2e] Revisiting RON ("traffic engineering considered harmful")

David P. Reed dpreed at reed.com
Sat Oct 20 07:12:37 PDT 2001


IMHO, we wouldn't need "overlay networks" if we could get over the idea 
that IP by definition implies BGP or some other "universal solvent for 
routing".

It would be much more interesting to focus less on "global optimality" (of 
the sort that BGP strives for) and more on heterogeneous routing approaches 
co-existing.

Of course that would redefine the problem away from one of theoretical 
elegance and toward one of coping with the messiness of practical needs. :-)

A similar issue arises from the religion of TCP-compatibility, built around 
the idea that TCP is by definition the right way to organize all 
communications.

Overlay networks are a great way to deal with the political hegemony of 
BGP, to create room for experimentation.  But many of the artifacts of 
trying to live on top of a BGP platform are probably not worth thinking 
about or preserving.


At 02:19 PM 10/20/2001 +0100, Jon Crowcroft wrote:

>In message <200110191545.LAA26413 at nms.lcs.mit.edu>, "David G. Andersen" typed:
>
>  >>
>  >>  It's quite unclear from the IWAN paper what in SOAR is actually
>  >>implemented, and how it performs on real networks.  Do you have
>  >>any data about it?  I'd love to see how an app-level routing
>  >>approach performs on a different set of machines than those I
>  >>used for my tests.
>
>not yet - we had some service location stuff running a couple of years
>back from australia, uk and us...but as i said, we cannot figure out
>how to get the incentives right to get real deployment, and most
>experiments done on "friends'" machines have artificially good
>connectivity (e.g. people on internet2), so you don't really get useful
>results in terms of seeing real problems (esp. if, to get a valid
>comparison, you need to be comparing with sets of sites with
>multihomed/traffic-engineered or just badly
>configured BGP problems)....
>
>
>i suppose we should do the same as a lot of other overlay projects
>just to get on the map :-)
>  >>
>  >>> > as far as i know, the now oft-cited idea of using the Padhye or Floyd
>  >>> > (or other) TCP rate equation as a route choice metric originated in
>  >>> > fact with Curtis Villamizar in the really nice optimized multipath
>  >>> > routing draft for OSPF (prob. expired now - was
>  >>> > draft-ietf-ospf-omp-03.txt), which would obviate the need for SOAR or
>  >>> > RON if deployed...although an inter-domain IP-level solution of course
>  >>> > would still elude us and require something like this.
>  >>
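
For concreteness, a rough sketch of that idea -- not code from OMP or RON,
just an illustration with made-up names and numbers -- using the Padhye et
al. steady-state TCP throughput estimate to rank candidate paths from their
measured round-trip time and loss rate:

    # Sketch only: the Padhye/Firoiu/Towsley/Kurose steady-state TCP
    # throughput estimate used as a route-choice metric.  The rtt/loss
    # figures would come from whatever measurement the overlay or IGP
    # already does; nothing here is taken from an actual implementation.
    import math

    def tcp_throughput(mss_bytes, rtt_s, loss, rto_s=None):
        """Approximate steady-state TCP throughput in bytes/second."""
        if loss <= 0:
            return float("inf")          # no observed loss: effectively unconstrained
        if rto_s is None:
            rto_s = max(4 * rtt_s, 1.0)  # crude RTO guess when none is measured
        denom = (rtt_s * math.sqrt(2 * loss / 3) +
                 rto_s * min(1, 3 * math.sqrt(3 * loss / 8)) *
                 loss * (1 + 32 * loss ** 2))
        return mss_bytes / denom

    def best_path(paths, mss_bytes=1460):
        """paths maps a path id to (rtt_seconds, loss_rate); return the id
        with the highest predicted TCP throughput."""
        return max(paths, key=lambda p: tcp_throughput(mss_bytes, *paths[p]))

    # A lossy direct route vs. a longer but cleaner indirect one:
    candidates = {"direct": (0.030, 0.05), "via-peer": (0.080, 0.001)}
    print(best_path(candidates))         # -> "via-peer"

The point is just that a path with a longer RTT but much lower loss can come
out well ahead of a shorter, lossier one, which is exactly the trade-off a
rate-equation metric captures and a pure hop-count or delay metric misses.
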
>  >>  Ahh, thanks.  I agree with you;  in fact, one of the premises of RON
>  >>is that providers will generally be able to do a pretty tight job
>  >>of routing within their domain - OSPF, I _believe_, doesn't suffer from
>  >>some of the same problems as BGP in terms of convergence times.  RON was
>  >>designed more for inter-AS communication, where the scalability and
>  >>heterogeneity problems start to creep in.
>  >>
>  >>> > imho the actual problem is to create the right incentives for a 3rd
>  >>> > party to deploy open, programmable (or user-signalable) application-
>  >>> > level proxies....closed ones are already there in abundance - although
>  >>> > in a pure open market it ain't at all obvious why such application-
>  >>> > specific, non-ISP or content-server-provided solutions should ever
>  >>> > exist, but then who says the ISP market (or any other:-) is free
>  >>> >
>  >>> > RON I guess is nice and simple, which is good
>  >>
>  >>  And fairly easily deployed.  The incentives to 3rd parties could
>  >>take the form of pay-per-packet, which is at least easier than trying
>  >>to come up with a generic form of payment for the rent-a-server model
>  >>when people are running arbitrary code.
>  >>
>  >>> > of course, there are people working on the IP repair convergence
>  >>> > problem - it is simply not true to say that the policy constraint
>  >>> > requirements in an EGP such as BGP4, combined with scaling, are the
>  >>> > _cause_ of slow convergence after failure. the problem is the
>  >>> > baroqueness of routing _implementations_.
>  >>
>  >>  I agree completely.  I don't think the policy constraints are
>  >>the cause of slow convergence;  I think that policy constraints take
>  >>away some links you might want to use for failover.  Flipping that
>  >>on its head, I think that _inflexible_ policy constraints are another
>  >>cause -- you can't say "Let dave go through MIT to get to his
>  >>cable modem from anywhere," you have to direct that policy at entire
>  >>remote networks.  Small application-layer overlays seem a better place
>  >>to implement these fine-grained policies, because you don't have
>  >>the scaling problem.
>  >>
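
To make the granularity point concrete, here is a sketch -- purely
illustrative, not RON's actual policy mechanism; the user, prefix and node
names are invented -- of the kind of per-user, per-destination policy a small
overlay can consult when choosing an intermediate hop, and which BGP can only
express at the granularity of whole remote networks:

    # Illustration only: fine-grained overlay routing policy of the form
    # "let this user reach this destination prefix via these relays".
    from ipaddress import ip_address, ip_network

    # (user, destination prefix) -> overlay nodes allowed as the relay
    POLICY = {
        ("dave", ip_network("24.0.0.0/8")): {"ron-node.mit.edu"},  # hypothetical
    }

    def allowed_relays(user, dst_ip, all_relays):
        """Return the relays this user may route through to reach dst_ip."""
        dst = ip_address(dst_ip)
        for (who, prefix), relays in POLICY.items():
            if who == user and dst in prefix:
                return set(all_relays) & relays
        return set(all_relays)           # no specific policy: anything goes

    print(allowed_relays("dave", "24.61.5.9",
                         ["ron-node.mit.edu", "ron-node.cmu.edu"]))
    # -> {'ron-node.mit.edu'}

A table like this stays tiny because the overlay only has a handful of nodes
and users, which is why the scaling problem doesn't bite.
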
>  >>> > Van et al have some suggestions on faster convergence
>  >>> > http://www.packetdesign.com/publications.html
>  >>> >
>  >>> > while there are lots of deployment barriers to that, I don't personally
>  >>> > see why these are worse than server deployment barriers...
>  >>
>  >>  Packet Design's "Towards millisecond IGP convergence" paper addresses
>  >>things you can do within a link-state protocol to speed things up.
>  >>RON is all about taking link-state protocol performance to the wide area;
>  >>I have no illusions about this: a provider running OSPF can beat the pants
>  >>off of RON for failures _inside their network_.
>  >>
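
The flavor of RON's "link-state in the wide area" is roughly the following --
a heavily simplified sketch, not the actual implementation; the node names,
latency table and lowest-latency metric are just for illustration (real
metrics would fold in loss and throughput as well):

    # Sketch: each overlay node measures the virtual links to every other
    # node, the measurements are disseminated link-state fashion, and the
    # sender picks either the direct path or a single intermediate hop,
    # whichever currently looks best.
    INF = float("inf")

    def best_route(src, dst, nodes, latency):
        """latency[(a, b)] is the measured latency a->b, INF if the virtual
        link is currently down.  Returns (path, cost)."""
        best = ([src, dst], latency.get((src, dst), INF))
        for mid in nodes:
            if mid in (src, dst):
                continue
            cost = latency.get((src, mid), INF) + latency.get((mid, dst), INF)
            if cost < best[1]:
                best = ([src, mid, dst], cost)
        return best

    nodes = ["mit", "utah", "cmu"]
    latency = {("mit", "utah"): INF,     # the direct path has failed
               ("mit", "cmu"): 0.02, ("cmu", "utah"): 0.03}
    print(best_route("mit", "utah", nodes, latency))
    # -> (['mit', 'cmu', 'utah'], 0.05)

Since every node probes every other node, this only works for small overlays;
that is the flip side of getting failover decisions in seconds rather than the
minutes BGP convergence can take.
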
>  >>  The only suggestions I've seen about improving BGP convergence come
>  >>from Labovitz.  If there are others, I'd be interested in seeing them.
>  >>
>  >>  Thanks for the comments, and the pointer to the SOAR paper!
>  >>
>  >>  -Dave
>
>  cheers
>
>    jon



