[e2e] NDDI & OpenFlow

A.B. Jr. skandor at gmail.com
Fri Apr 29 11:55:13 PDT 2011


2011/4/28 <L.Wood at surrey.ac.uk>

> One way of looking at OpenFlow is that, as routers have developed, they
> have gone from being integrated, to having modular linecards plugged into a
> backplane/bus, to having linecards plugged into an internal 10.x network
> within the box, because Gigabit Ethernet can provide a nice fast,
> well-understood backplane without further custom engineering. (Cisco's
> Catalyst 6000 series is one example of this.) The linecards do forwarding at
> (hopefully) line speed, but receive their forwarding tables over the
> internal Ethernet from the central processor where routing and routing
> tables exist, and also receive their traffic over the internal Ethernet. All
> in internal VLANs where the forwarding table information and control data
> can be prioritised.
>
> OpenFlow takes that internal Ethernet connectivity within the router
> and stretches it so that the linecards are in different places around an
> office or university campus. So your network is no longer a bunch of
> smartish routers doing different and slightly repeated and redundant things
> in a hopefully coordinated fashion, but a bunch of somewhat dumber linecards
> being coordinated in synchrony from a central point. Your campus is now
> inside your router, and your campus-wide control plane just got way faster
> and more predictable.
>
> So far, so good; your traffic engineering and what-does-QoS-mean problems
> now exist within a single homogeneous router, rather than a bunch of
> uncoordinated routers that have to be configured in situ etc. So the routing
> protocol being run across campus is suddenly consistent, instead of e.g.
> migrating piecemeal to OSPF, having part of campus generate and rebroadcast
> RIP without telling you because they're not upgrading their old kit, etc.
>
> Where I have trouble with the OpenFlow story is where network researchers
> say 'okay, now we've built this, we'll also instrument it and use it to
> carry traffic for our research experiments, in separate traffic-engineered
> virtualized slices of the network'. It's a production networking environment
> enabling a business or university to function, where network researchers
> don't do support; so whose budget, exactly, will pay for this stuff and
> deal with downtime mitigation? Even handwaving the technology
> implications away, it's an accounting difficulty. I suppose researchers
> could found a startup that will charge the university to maintain its
> network, while at the same time also getting research funding to do new,
> interesting, and exciting things to their paying customer's network. It's a
> win-win (well, if we don't look too closely at funding models) right up
> until the first major outage.
>
> But with the recent formation of the Open Networking Foundation, we'll see
> multiple commercial suppliers providing ever-more-complex kit to support
> this site-as-router paradigm, with the usual subtle interoperability
> problems, a minimal common feature set and creative technical
> differentiation to market products and give unique selling points. Standards
> are good, but they can always be improved upon. At which point, we're back
> to a heterogeneous multi-supplier network - but one with a lot more subtle
> interdependencies, requiring more support when things go wrong, as they do.
> Routing problems can now become trickier forwarding-state and sync problems. But
> that's what support is for; you don't just install a network, you maintain
> one, and the support costs are a given.
>
> Meanwhile, the networking researchers remain locked out of the commercial
> kit for the sanity of the university administration, which is fine, because
> they've decided they can't do anything exciting with it anyway. Instead,
> they're off describing the problems with the status quo and winning new
> funding to look at distributed, fault-tolerant networking where there is no
> central control point. After all, you don't conquer complexity, you just
> shuffle it around.
>
> This should be interesting.
>
> L.
>
> The more things change, the more they stay the same.
>

 Ah, yes, without a doubt!

This is an interesting way of looking at OpenFlow, indeed.
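
Taken at face value, the "stretched backplane" picture is easy to
sketch: in OpenFlow 1.0 terms, handing a remote linecard its forwarding
table is just the controller writing a flow_mod message onto a TCP
socket. A rough, hypothetical sketch follows (the switch address, the
port numbers and the fully wildcarded match are made-up illustrations,
and the hello/features handshake is elided):

  # Sketch only: a controller pushes one forwarding entry
  # ("send everything out port 2") to a remote datapath,
  # using the OpenFlow 1.0 wire format.
  import socket
  import struct

  OFPT_FLOW_MOD = 14
  OFPFC_ADD = 0
  OFPFW_ALL = (1 << 22) - 1     # wildcard every match field
  OFPP_NONE = 0xFFFF
  OFPAT_OUTPUT = 0

  def flow_mod_all_to_port(xid=1, out_port=2):
      # ofp_match, fully wildcarded (40 bytes, mostly zeroes)
      match = struct.pack("!IH6s6sHBxHBB2xIIHH",
                          OFPFW_ALL, 0, b"\x00" * 6, b"\x00" * 6,
                          0, 0, 0, 0, 0, 0, 0, 0, 0)
      # cookie, command, idle/hard timeouts, priority,
      # buffer_id (none), out_port (only used by deletes), flags
      body = struct.pack("!QHHHHIHH", 0, OFPFC_ADD, 60, 0, 100,
                         0xFFFFFFFF, OFPP_NONE, 0)
      # one OFPAT_OUTPUT action: type, len, port, max_len
      action = struct.pack("!HHHH", OFPAT_OUTPUT, 8, out_port, 0)
      payload = match + body + action
      header = struct.pack("!BBHI", 0x01, OFPT_FLOW_MOD,
                           8 + len(payload), xid)
      return header + payload

  # Hypothetical usage; the switch address is an assumption:
  #   sock = socket.create_connection(("10.0.0.1", 6633))
  #   sock.sendall(flow_mod_all_to_port())

And that is exactly the point: the "table download" is now a unicast
TCP flow across the campus, not a transfer across a backplane.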

IMHO, a bunch of dumb datapath elements coordinated by an omniscient
central box can hardly be considered equivalent to a single switch or
router. Saying so means forgetting the well-known differences between a
single box built from a tightly coupled set of components and a
distributed system linked by unreliable circuits. Distributed systems
and network protocols have come a long way in dealing with exactly those
issues. That complexity cannot simply be brushed under the carpet by an
appealing new idea.
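
For instance: inside a chassis, the path to a linecard is simply
assumed to be up; across a campus, the controller has to keep testing
it. A minimal sketch of the machinery that forces on you (same OpenFlow
1.0 framing as above; the timeout is arbitrary, and a real controller
would demultiplex incoming message types rather than trust the next
eight bytes to be the reply):

  import socket
  import struct

  OFPT_ECHO_REQUEST = 2

  def probe_datapath(sock, xid, timeout=5.0):
      # An OpenFlow 1.0 echo request is just an 8-byte header.
      sock.sendall(struct.pack("!BBHI", 0x01, OFPT_ECHO_REQUEST, 8, xid))
      sock.settimeout(timeout)
      try:
          return len(sock.recv(8)) == 8   # expect an echo reply header
      except socket.timeout:
          # Inside one box this "cannot happen"; across a campus it
          # will. And now what? Does the datapath keep forwarding on
          # stale state, or stop dead until the controller returns?
          return False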

 Supposing it's really new.

 The arguments for OpenFlow remind me too much of the centrally
controlled packet-switching networks proposed back in the '70s and
'80s, like X.25. Remember those?

 The more things change… etc.

>
> Lloyd Wood
> http://sat-net.com/L.Wood/CCSR
>