[e2e] traffic engineering considered harmful
J.Crowcroft at cs.ucl.ac.uk
Wed Jun 13 00:06:21 PDT 2001
to get decent route choice, a user needs to specify at MOST 3 points:
a point in an ingress, a point in an egress and a point in a transit
given the choice of access link at client and server, this gives
potential for 6 alternate choices of articulation points on the path
given the power law scaling properties of the net, this is 6 degrees of
separation, which ought to be good for global choice
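as a minimal sketch of the combinatorics above (all names are hypothetical, just for illustration): the user picks one point in an ingress, one in a transit and one in an egress, and the candidate paths are simply the cross product of those choices:

```python
from itertools import product

# hypothetical articulation points the user could pin a path through;
# two candidate ingress points, three transit points, one egress point
ingress_points = ["ingress-a", "ingress-b"]
transit_points = ["transit-a", "transit-b", "transit-c"]
egress_points = ["egress-a"]

# each candidate path is one (ingress, transit, egress) triple
candidate_paths = list(product(ingress_points, transit_points, egress_points))
print(len(candidate_paths))  # 2 * 3 * 1 = 6 alternatives to rank
```

the point being that the set a user must rank stays tiny even though the underlying topology is huge.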
the path quality/weather monitoring services which the user needs to
consult to make an informed choice do not need the "whole routing
table" - it's clear that they need the inter-AS routing table (BGP
routing tables) but in fact even that can be compressed a LOT since a
lot of data there is really redundant in the sense of not being useful
for making choices -
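one way to see the redundancy (a sketch with made-up data, not any real BGP feed): an entry is only useful for *choice* if the prefix actually has more than one distinct exit; everything else can be collapsed away:

```python
# hypothetical routing data: prefix -> candidate exits.
# a path-quality service only cares about prefixes where a
# genuine alternative exists, so single-exit entries are dead weight.
routes = {
    "10.0.0.0/8": ["exit-1"],
    "10.1.0.0/16": ["exit-1"],
    "192.0.2.0/24": ["exit-1", "exit-2"],      # genuine alternatives
    "198.51.100.0/24": ["exit-2", "exit-3"],   # genuine alternatives
}

# keep only entries that offer a real choice of exit
choice_table = {p: exits for p, exits in routes.items()
                if len(set(exits)) > 1}
print(len(choice_table))  # 2 of the 4 entries survive compression
```

real BGP tables carry far more redundancy than this toy, which is the whole argument for the compression.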
where is this heading? approximately towards an evolutionary path
between BGP and NIMROD...
at the IP level
if we use (say) community attributes, we can code the choice points
if we use IPv6 (pah) and GSE, we can even explicitly put part of this
in our address translation tables and control it as a type of flow
state (soft, of course) although we could also do it with hop-by-hop
options... it could also be done with NATs
at the APP level, as you say, it's easy...
In message <200106122236.SAA07143 at nms.lcs.mit.edu>, "David G. Andersen" typed:
>>Bob Braden just mooed:
>>> *> Resilient Overlay Networks: http://nms.lcs.mit.edu/ron/
>>> *> Take a small collection of hosts around the 'Net. They
>>> *> can see different paths in and out of various ASs. Have them
>>> *> measure the paths between each other, and if they can establish
>>> *> a better route by sending their packets indirectly through another
>>> *> member of the overlay, do so.
>>> Why isn't this the Tragedy of the Commons waiting to happen?
>> It might be. There are a few saving graces, though:
>> - The mechanisms used in RON won't scale:
>> . The number of inter-node links grows as N^2; the probe
>> traffic would quickly drown most nodes.
>> . The routing traffic is reasonably expensive.
>>Oddly, I think this is a good thing. It means that you've got to
>>keep each group of reactive elements small. If you're careful to avoid
>>inadvertent synchronization (if you can?), then you may be able to
>>avoid oscillations from huge groups of reactive elements.
>> - These small groups will see different views of the network
>>I think this is probably the real saving grace. Since everyone will
>>have a different set of links available that they choose between,
>>they won't all be able to overwhelm the same set of "good" links with
>>traffic.
>>However, I don't have a clue if I'm correct about either of these
>>hypotheses. And, I think your question had a deeper component
>>to it ("what if all of the good links get used up") that I don't
>>completely address in the second point. If there are enough ants
>>scrambling along the trails, they _will_ get congested. The answer
>>to that boils down to a more fundamental question about the
>>available bandwidth in the Internet, and its locations relative
>>to the mandatory bottlenecks people must traverse in getting
>>from source to destination. Which is a handwavy way of saying
>>I don't know the answer.
>>work: dga at lcs.mit.edu me: dga at pobox.com
>> MIT Laboratory for Computer Science http://www.angio.net/
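the N^2 scaling point quoted above can be put in concrete numbers (a sketch of the scaling argument, not RON's actual probing code): with N overlay nodes probing all pairs, there are N*(N-1)/2 links to measure, so total probe traffic grows quadratically and per-node load grows linearly:

```python
# number of inter-node links in a full-mesh overlay of n nodes;
# each link must be probed, so probe cost scales with this count
def probe_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 50, 200):
    print(n, probe_links(n))  # 10 -> 45, 50 -> 1225, 200 -> 19900
```

which is exactly why each group of reactive elements has to stay small.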