From lars.eggert at gmail.com Thu Apr 21 02:25:56 2011 From: lars.eggert at gmail.com (Lars Eggert) Date: Thu, 21 Apr 2011 12:25:56 +0300 Subject: [e2e] Call for Nominations: Applied Networking Research Prize (ANRP) Message-ID: <62F13CAB-E954-45BC-B4C6-2C06466ABE33@gmail.com> CALL FOR NOMINATIONS: APPLIED NETWORKING RESEARCH PRIZE (ANRP) *** Apply until May 8, 2011 for the ANR Prize for IETF-81, *** July 24-29, 2011 in Quebec City, Canada! The Applied Networking Research Prize (ANRP) is awarded for recent results in applied networking research that are relevant for transitioning into shipping Internet products and related standardization efforts. Researchers with relevant, recently published results are encouraged to apply for this prize, which will offer them the opportunity to present and discuss their work with the engineers, network operators, policy makers and scientists that participate in the Internet Engineering Task Force (IETF) and its research arm, the Internet Research Task Force (IRTF). Third-party nominations for this prize are also encouraged. The goal of the Applied Networking Research Prize (ANRP) is to recognize the best new ideas in networking, and bring them to the IETF and IRTF especially in cases where they would not otherwise see much exposure or discussion. The Applied Networking Research Prize (ANRP) consists of: * cash prize of $500 (USD) * invited talk at the IRTF Open Meeting * travel grant to attend the week-long IETF meeting (airfare, hotel, registration, stipend) * recognition at the IETF plenary * invitation to related social activities * potential for additional travel grants to future IETF meetings, based on community feedback The Applied Networking Research Prize (ANRP) will be awarded three times per year, in conjunction with the three annual IETF meetings. HOW TO APPLY Applicants must nominate a peer-reviewed, recently-published, original journal, conference or workshop paper. 
Both self nominations (nominating one's own paper) and third-party nominations (nominating someone else's paper) are encouraged. The nominee must be one of the main authors of the nominated paper. The nominated paper should provide a scientific foundation for possible future IETF engineering work or IRTF experimentation, analyze the behavior of Internet protocols in operational deployments or realistic testbeds, make an important contribution to the understanding of Internet scalability, performance, reliability, security or capability, or otherwise be of relevance to ongoing or future IETF or IRTF activities. Applicants must briefly describe how the nominated paper relates to these goals, and are encouraged to describe how presentation of these research results will foster their transition into new IETF engineering or IRTF experimentation, or otherwise seed new activities that will have an impact on the real-world Internet. The goal of the Applied Networking Research Prize (ANRP) is to foster the transitioning of research results into real-world benefits for the Internet. Therefore, applicants must indicate that they (or the nominee, in case of third-party applications) are available to attend the respective IETF meeting in person and in its entirety. Applications must include: * the name and email address of the nominee * a reference to the published nominated paper * a PDF copy of the nominated paper * a statement that describes how the nominated paper fulfills the goals of the award * a statement that the nominee is available to attend the respective IETF meeting in person and in its entirety * a brief biography or CV of the nominee * optionally, any other supporting information (link to nominee's web site, etc.) *** Applications are submitted by email to anrp at isoc.org. 
*** SELECTION PROCESS A small selection committee composed of individuals knowledgeable about the IRTF, IETF and the broader networking research community will evaluate the submissions against the selection criteria. The goal is to select 1-2 submissions for the Applied Networking Research Prize (ANRP) during each application period. All applicants will be notified by email. IMPORTANT DATES Applications open: April 11, 2011 Applications close: May 8, 2011 Notifications: May 31, 2011 IETF-81 Meeting: July 24-29, 2011 SPONSORS The Applied Networking Research Prize (ANRP) is supported by the Internet Society (ISOC), as part of its Internet Research Award Programme, in coordination with the Internet Research Task Force (IRTF). From swc at iis.sinica.edu.tw Mon Apr 25 03:57:39 2011 From: swc at iis.sinica.edu.tw (Sheng-Wei (Kuan-Ta) Chen) Date: Mon, 25 Apr 2011 18:57:39 +0800 Subject: [e2e] NOSSDAV 2011 Call for Participation Message-ID: <000601cc0337$9847f900$c8d7eb00$@sinica.edu.tw> [Apologies if you receive this more than once] ++++++++++++++++++++ [ NOSSDAV 2011 Call for Participation ] ++++++++++++++++++++++ The 21st International Workshop on Network and Operating Systems Support for Digital Audio and Video June 2 -- 3, 2011 Vancouver, British Columbia, Canada http://nss.cs.ubc.ca/nossdav2011/ *Registration now open!* Early registration deadline: May 1, 2011 We invite you to attend NOSSDAV 2011, the 21st anniversary of SIGMM's leading workshop on network and operating systems support for digital audio and video. The workshop, hosted at the University of British Columbia (UBC), will continue to focus on emerging research topics, controversial ideas, and future research directions in the area of multimedia systems research. As in previous years, we will maintain the focused single-track format, a setting that stimulates lively discussions among the senior and junior participants.
** Technical Program: http://nss.cs.ubc.ca/nossdav2011/program.html ** Registration: http://nss.cs.ubc.ca/nossdav2011/registration.html If you have any questions, please get in touch with the co-chairs: Charles "Buck" Krasic (Google Inc., USA) Kang Li (University of Georgia, USA) (kangli at cs.uga.edu) NOSSDAV 2011 PROGRAM AT A GLANCE ================================ ** June 1, 2011 ** 1700 - 1900 Reception at Computer Science lounge ** June 2, 2011 ** Keynote Speech Jim Bankoski (Google Inc.) - Topic: WebM/VP8 Session 1: Streaming - LAN-Awareness: Improved P2P Live Streaming - In-Network Adaptation of H.264/SVC for HD Video Streaming over 802.11g Networks - Media-aware Networking for SVC-based P2P Streaming Session 2: Wireless & Mobile Media Delivery - Mobile Video Streaming Using Location-Based Network Prediction and Transparent Handover - The Impact of Inter-layer Network Coding on the Relative Performance of MRC/MDC WiFi Media Delivery - A Measurement Study of Resource Utilization in Internet Mobile Streaming Session 3: Social Media - Understanding Demand Volatility in Large VoD Systems - Sharing Social Content from Home: A Measurement-driven Feasibility Study - Load-Balanced Migration of Social Media to Content Clouds Banquet ** June 3, 2011 ** Session 4: Networking - Improving HTTP performance using "Stateless" TCP - A DTN Mode for Reliable Internet Telephony - Inferring the time-zones of Prefixes and Autonomous Systems by monitoring game server discovery traffic Session 5: Systems - GPU-based Fast Motion Estimation for On-the-Fly Encoding of Computer-Generated Video Streams - SAS Kernel: Streaming as a Service Kernel for Correlated Multi-Streaming - Managing Home and Network Storage of Television Recordings."I filled my DVR again! Now what?" 
Session 6: Media Adaptation - Moving Beyond the Framebuffer - Systems Support for Stereoscopic Video Compression - Accurate and Low-Delay Seeking Within and Across Mash-Ups of Highly-Compressed Videos Session 7: Foundation of Media Communication - Scalable Video Transmission: Packet Loss Induced Distortion Modeling and Estimation - Energy-efficient video streaming from high-speed trains - Celerity: Towards Low-Delay Multi-Party Conferencing Concluding Remarks From skandor at gmail.com Tue Apr 26 12:25:27 2011 From: skandor at gmail.com (A.B. Jr.) Date: Tue, 26 Apr 2011 16:25:27 -0300 Subject: [e2e] NDDI & OpenFlow Message-ID: Hi everyone, Do you believe that widespread adoption of architectures like the one described in http://www.internet2.edu/network/ose/ *Built using the first production deployment of OpenFlow technology, NDDI will deliver a "software-defined network" (SDN), a common infrastructure that can create multiple virtual networks, allowing researchers to experiment with new Internet protocols and architectures, and at the same time enabling domain scientists to accelerate their research with collaborators worldwide.* will significantly change the meaning of "end to end"? Regards, -abj -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20110426/1a2d8b2d/attachment.html From skandor at gmail.com Wed Apr 27 13:08:19 2011 From: skandor at gmail.com (A.B. Jr.) Date: Wed, 27 Apr 2011 17:08:19 -0300 Subject: [e2e] NDDI & OpenFlow In-Reply-To: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: Micah, Since e2e-interest is the name of this list, it can be assumed that everybody here has at least an intuitive understanding of what e2e issues are. Including myself. One could argue that OpenFlow is just too tied to the Network and under-network layers, and that it has nothing to do with e2e aspects.
Maybe. I think that if end systems become able to dynamically reconfigure the network to suit their needs, this can change many of the assumptions made by present-day e2e protocols, rendering some parts of them unnecessary, and other parts insufficient. -- abj 2011/4/26 Micah Beck > Hi A. B. Jr., > > Can you tell us what *you* think "end to end" means? That might help to > focus the responses. > > Regards, > /micah > > > On Apr 26, 2011, at 3:25 PM, A.B. Jr. wrote: > > Hi everyone, > > Do you believe that widespread adoption of architectures like the one > described in > > http://www.internet2.edu/network/ose/ > > *Built using the first production deployment of OpenFlow technology, NDDI will deliver a "software-defined network" (SDN), a common > infrastructure that can create multiple virtual networks, allowing > researchers to experiment with new Internet protocols and architectures, and > at the same time enabling domain scientists to accelerate their research > with collaborators worldwide.* > > will significantly change the meaning of "end to end"? > > Regards, > > -abj > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20110427/e88d3619/attachment.html From L.Wood at surrey.ac.uk Wed Apr 27 15:48:58 2011 From: L.Wood at surrey.ac.uk (L.Wood@surrey.ac.uk) Date: Wed, 27 Apr 2011 23:48:58 +0100 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu>, Message-ID: > Since e2e-interest is the name of this list, it can be assumed that everybody here has at least an intuitive understanding of what e2e issues are. Including myself. Unfortunately, that cannot be assumed. Do not presume that all the list membership appreciates all the ramifications of Saltzer et al.'s papers.
From braden at isi.edu Thu Apr 28 09:33:35 2011 From: braden at isi.edu (Bob Braden) Date: Thu, 28 Apr 2011 09:33:35 -0700 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: <4DB996DF.1010908@isi.edu> On 4/27/2011 1:08 PM, A.B. Jr. wrote: > > I think that if end systems become able to dynamically reconfigure the > network to suit their needs, this can change many of the assumptions > made by present-day e2e protocols, rendering some parts of them > unnecessary, and other parts insufficient. > > -- abj > > What a GREAT idea. I don't know why we never thought of it. As a starter, I would like my end system to reconfigure the network to give me all the available bandwidth and to drop none of my packets. So, that gets rid of the "fairness" assumption of present-day E2E protocols and avoids the messiness of statistical multiplexing. Bob Braden -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20110428/d3309eeb/attachment.html From j.vimal at gmail.com Thu Apr 28 10:29:58 2011 From: j.vimal at gmail.com (Vimal) Date: Thu, 28 Apr 2011 10:29:58 -0700 Subject: [e2e] NDDI & OpenFlow In-Reply-To: <4DB996DF.1010908@isi.edu> References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> <4DB996DF.1010908@isi.edu> Message-ID: On 28 April 2011 09:33, Bob Braden wrote: > On 4/27/2011 1:08 PM, A.B. Jr. wrote: > > I think that if end systems become able to dynamically reconfigure the > network to suit their needs, this can change many of the assumptions made by > present-day e2e protocols, rendering some parts of them unnecessary, and > other parts insufficient. > > -- abj > > What a GREAT idea. I don't know why we never thought of it. As a starter, I > would like my end system to reconfigure the network to give me all the > available bandwidth and to drop none of my packets.
So, that gets rid of > the "fairness" assumption of present-day E2E protocols and avoids the > messiness of statistical multiplexing. :-) I think there's a misconception here. OpenFlow exposes a vendor-agnostic API to control network elements by an end host. It doesn't mean that every end host gets to choose what it wants. Also, I am a bit curious about the wording in the OpenFlow paper, which starts as: "... a way for researchers to run experimental protocols in the networks they use everyday..." It seems like OpenFlow would allow testing experimental *routing* protocols, or anything that depends only on the control path. For any new protocol that requires datapath support, OpenFlow (at the moment) cannot support it. Examples of such protocols: XCP, RCP, etc. -- Vimal From scott.brim at gmail.com Thu Apr 28 12:04:23 2011 From: scott.brim at gmail.com (Scott Brim) Date: Thu, 28 Apr 2011 15:04:23 -0400 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> <4DB996DF.1010908@isi.edu> Message-ID: On Thu, Apr 28, 2011 at 13:29, Vimal wrote: > :-) I think there's a misconception here. OpenFlow exposes a vendor- > agnostic API to control network elements by an end host. ... or by some agent, anyway, even if it's not an endpoint in a data flow. swb -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20110428/d5a1b07f/attachment.html From bmanning at vacation.karoshi.com Thu Apr 28 12:08:11 2011 From: bmanning at vacation.karoshi.com (bmanning@vacation.karoshi.com) Date: Thu, 28 Apr 2011 19:08:11 +0000 Subject: [e2e] NDDI & OpenFlow In-Reply-To: <4DB996DF.1010908@isi.edu> References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> <4DB996DF.1010908@isi.edu> Message-ID: <20110428190811.GC26219@vacation.karoshi.com.> On Thu, Apr 28, 2011 at 09:33:35AM -0700, Bob Braden wrote: > On 4/27/2011 1:08 PM, A.B. Jr.
wrote: > > > >I think that if end systems become able to dynamically reconfigure the > >network to suit their needs, this can change many of the assumptions > >made by present-day e2e protocols, rendering some parts of them > >unnecessary, and other parts insufficient. > > > > -- abj > > > > > What a GREAT idea. I don't know why we never thought of it. As a > starter, I would like my end system to reconfigure the network to give > me all the available bandwidth and to drop none of my packets. So, that > gets rid of the "fairness" assumption of present-day E2E protocols and > avoids the messiness of statistical multiplexing. > > Bob Braden > thought it had been done already... ftp://ftp.isi.edu/rsvp/active_signaling/ANworkshop9706.slides2x2.ps or more generally... http://nms.lcs.mit.edu/darpa-activenet/ but I've been wrong before. /bill From zartash at lums.edu.pk Thu Apr 28 12:27:48 2011 From: zartash at lums.edu.pk (Zartash Afzal Uzmi) Date: Fri, 29 Apr 2011 00:27:48 +0500 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> <4DB996DF.1010908@isi.edu> Message-ID: <00d501cc05da$5b797e40$126c7ac0$@edu.pk> > > :-) I think there's a misconception here. OpenFlow exposes a vendor- > agnostic API to control network elements by an end host. It doesn't > mean that every end host gets to choose what it wants. > > Also, I am a bit curious about the wording in the OpenFlow paper, > which starts as: "... a way for researchers to run experimental > protocols in the networks they use everyday..." I believe this wording is more applicable to enterprise networks. As I understand, OpenFlow suggests using a flow table -- a table in which forwarding decisions are keyed on "flows" -- instead of the FIB. The flow table will include (i) entries for the "production traffic" (in the same way as a FIB would) or (ii) entries for the "research traffic".
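The flow-table-versus-FIB distinction above can be made concrete with a minimal sketch. This is purely illustrative: the table contents, the choice of match fields, and the function names are invented here, not taken from the OpenFlow specification or any real implementation. A FIB does a longest-prefix match on the destination address only, while an OpenFlow-style flow table does an exact match on a tuple of header fields.

```python
import ipaddress

# Illustrative only: a FIB keyed by destination prefix (longest-prefix
# match) versus a flow table keyed by a tuple of header fields (exact
# match). All entries below are invented for the example.

FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "port1",
    ipaddress.ip_network("10.1.0.0/16"): "port2",
}

def fib_lookup(dst: str) -> str:
    """Conventional FIB: pick the most specific prefix covering dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    if not matches:
        return "drop"
    return FIB[max(matches, key=lambda net: net.prefixlen)]

# (src, dst, ip_proto, dst_port) -> action; in an OpenFlow switch a
# miss is punted to the controller rather than silently dropped.
FLOW_TABLE = {
    ("10.1.2.3", "10.9.9.9", 6, 80): "port3",    # "production" web flow
    ("10.1.2.3", "10.9.9.9", 6, 9999): "port4",  # "research" traffic
}

def flow_lookup(src: str, dst: str, proto: int, dport: int) -> str:
    return FLOW_TABLE.get((src, dst, proto, dport), "send-to-controller")
```

The exact-match lookup is cheap per packet but holds per-flow state, which is precisely the scaling concern for provider cores where a FIB aggregates many flows under one prefix.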
While replacing the FIB with a more complex flow table appears okay in enterprise networks, I still have to convince myself that it will be a feasible idea to replace the FIB with a more complex flow table in the service provider core, where FIB performance requirements are more stringent. Can anyone more familiar with OpenFlow throw some light on this? Zartash > It seems like OpenFlow would allow testing experimental *routing* > protocols, or anything that depends only on the control path. For any new > protocol that requires datapath support, OpenFlow (at the moment) > cannot support it. Examples of such protocols: XCP, RCP, etc. > > -- > Vimal From jnc at mercury.lcs.mit.edu Thu Apr 28 14:08:45 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 28 Apr 2011 17:08:45 -0400 (EDT) Subject: [e2e] NDDI & OpenFlow Message-ID: <20110428210845.309F318C10A@mercury.lcs.mit.edu> > From: Zartash Afzal Uzmi > I still have to convince myself that it will be a feasible idea to > replace the FIB with a more complex flow table in the service provider core Please see the "Flow identification" subsection of Section 2.2 "Packet Format Fields" of RFC-1753, in particular the parts about aggregated flows; and then look at the MPLS header specification. While not designed specifically to meet the RFC-1753 requirements, MPLS looks the way it does in part because the architectural issues that come up when you get into flow-based packet handling are pretty generic. (MPLS was designed to be a switching substrate that could support a number of different 'service models' at the internetworking layer [no, I do not mean IPvN by that term - I mean it more generically, as in 'the layer that gets bits from A to B, where A and B are anywhere], both pure datagram and also flow-based. I have no idea if OpenFlow uses MPLS hardware to actually move packets around - haven't read the docs yet - but there is no reason not to.)
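The aggregated-flow point above reduces, per hop, to something like the following toy sketch. The labels, interfaces, and table contents are invented for illustration, and label push/pop, TTL handling, and QoS treatment are all omitted: once many flows are aggregated under one label, the per-hop work is a single exact-match lookup and swap, whatever service model rides on top.

```python
# Toy MPLS-style label-switching step: in_label -> (out_label, out_if).
# Entries are invented; a real LSR also handles label push/pop, TTL
# decrement, and per-label queueing, none of which is modeled here.

LFIB = {
    100: (200, "if1"),   # one label may carry many aggregated flows
    101: (201, "if2"),
}

def label_switch(packet):
    """Swap the top label and pick the outgoing interface."""
    entry = LFIB.get(packet["label"])
    if entry is None:
        return None  # no state installed for this label: discard
    out_label, out_if = entry
    return {**packet, "label": out_label, "out_if": out_if}
```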
Noel From L.Wood at surrey.ac.uk Thu Apr 28 17:07:35 2011 From: L.Wood at surrey.ac.uk (L.Wood@surrey.ac.uk) Date: Fri, 29 Apr 2011 01:07:35 +0100 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu>, Message-ID: One way of looking at OpenFlow is that, as routers have developed, they have gone from being integrated, to having modular linecards plugged into a backplane/bus, to having linecards plugged into an internal 10.x network within the box, because Gigabit Ethernet can provide a nice fast, well-understood backplane without further custom engineering. (Cisco's Catalyst 6000 series is one example of this.) The linecards do forwarding at (hopefully) line speed, but receive their forwarding tables over the internal Ethernet from the central processor where routing and routing tables exist, and also receive their traffic over the internal Ethernet - all in internal VLANs where the forwarding table information and control data can be prioritised. OpenFlow takes that internal Ethernet connectivity within the router and stretches it so that the linecards are in different places around an office or university campus. So your network is no longer a bunch of smartish routers doing different and slightly repeated and redundant things in a hopefully coordinated fashion, but a bunch of somewhat dumber linecards being coordinated in synchrony from a central point. Your campus is now inside your router, and your campus-wide control plane just got way faster and more predictable. So far, so good; your traffic engineering and what-does-QoS-mean problems now exist within a single homogeneous router, rather than a bunch of uncoordinated routers that have to be configured in situ etc. So the routing protocol being run across campus is suddenly consistent, instead of e.g. migrating piecemeal to OSPF, having part of campus generate and rebroadcast RIP without telling you because they're not upgrading their old kit, etc.
Where I have trouble with the OpenFlow story is where network researchers say 'okay, now we've built this, we'll also instrument it and use it to carry traffic for our research experiments, in separate traffic-engineered virtualized slices of the network'. It's a production networking environment enabling a business or university to function, where network researchers don't do support - so whose budget will pay for this stuff and deal with downtime mitigation, exactly? Even handwaving the technology implications away, it's an accounting difficulty. I suppose researchers could found a startup that will charge the university to maintain its network, while at the same time also getting research funding to do new, interesting, and exciting things to their paying customer's network. It's a win-win (well, if we don't look too closely at funding models) right up until the first major outage. But with the recent formation of the Open Networking Foundation, we'll see multiple commercial suppliers providing ever-more-complex kit to support this site-as-router paradigm, with the usual subtle interoperability problems, a minimal common feature set and creative technical differentiation to market products and give unique selling points. Standards are good, but they can always be improved upon. At which point, we're back to a heterogeneous multi-supplier network - but one with a lot more subtle interdependencies, requiring more support when things go wrong, as they do.
Instead, they're off describing the problems with the status quo and winning new funding to look at distributed, fault-tolerant networking where there is no central control point. After all, you don't conquer complexity, you just shuffle it around. This should be interesting. L. Plus ça change, plus c'est la même chose. Lloyd Wood http://sat-net.com/L.Wood/CCSR From skandor at gmail.com Fri Apr 29 11:55:13 2011 From: skandor at gmail.com (A.B. Jr.) Date: Fri, 29 Apr 2011 15:55:13 -0300 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: 2011/4/28 > One way of looking at OpenFlow is that, as routers have developed, they > have gone from being integrated, to having modular linecards plugged into a > backplane/bus, to having linecards plugged into an internal 10.x network > within the box, because Gigabit Ethernet can provide a nice fast, > well-understood backplane without further custom engineering. (Cisco's > Catalyst 6000 series is one example of this.) The linecards do forwarding at > (hopefully) line speed, but receive their forwarding tables over the > internal Ethernet from the central processor where routing and routing > tables exist, and also receive their traffic over the internal Ethernet. All > in internal VLANs where the forwarding table information and control data > can be prioritised. > > OpenFlow takes that internal Ethernet connectivity within the router > and stretches it so that the linecards are in different places around an > office or university campus. So your network is no longer a bunch of > smartish routers doing different and slightly repeated and redundant things > in a hopefully coordinated fashion, but a bunch of somewhat dumber linecards > being coordinated in synchrony from a central point. Your campus is now > inside your router, and your campus-wide control plane just got way faster > and more predictable.
> > So far, so good; your traffic engineering and what-does-QoS-mean problems > now exist within a single homogenous router, rather than a bunch of > uncoordinated routers that have to be configured in situ etc. So the routing > protocol being run across campus is suddenly consistent, instead of e.g. > migrating piecemeal to OSPF, having part of campus generate and rebroadcast > RIP without telling you because they're not upgrading their old kit, etc. > > Where I have trouble with the OpenFlow story is where network researchers > say 'okay, now we've built this, we'll also instrument it and use it for > traffic for our research experiments, in separate traffic-engineered > virtualized slices of the network'. It's a production networking environment > enabling a business or university to function, where network researchers > don't do support, and which and whose budget will pay for this stuff and > deal with downtime mitigation, exactly? Even handwaving the technology > implications away, it's an accounting difficulty. I suppose researchers > could found a startup that will charge the university to maintain its > network, while at the same time also getting research funding to do new, > interesting, and exciting things to their paying customer's network. It's a > win-win (well, if we don't look too closely at funding models) right up > until the first major outage. > > But with the recent formation of the Open Networking Foundation, we'll see > multiple commercial suppliers providing ever-more-complex kit to support > this site-as-router paradigm, with the usual subtle interoperability > problems, a minimal common feature set and creative technical > differentiation to market products and give unique selling points. Standards > are good, but they can always be improved upon. At which point, we're back > to a heterogenous multi-supplier network - but one with a lot more subtle > interdependencies, requiring more support when things go wrong, as they do. 
> Routing problems can now be trickier forwarding state and sync problems. But > that's what support is for; you don't just install a network, you maintain > one, and the support costs are a given. > > Meanwhile, the networking researchers remain locked out of the commercial > kit for the sanity of the university administration, which is fine, because > they've decided they can't do anything exciting with it anyway. Instead, > they're off describing the problems with the status quo and winning new > funding to look at distributed, fault-tolerant networking where there is no > central control point. After all, you don't conquer complexity, you just > shuffle it around. > > This should be interesting. > > L. > > Plus ça change, plus c'est la même chose. > Ah, oui, sans doute! This is an interesting way of looking at OpenFlow, indeed. IMHO, a bunch of dumb datapath elements coordinated by an omniscient central box can hardly be thought of as equivalent to a single switch/router. Saying so means forgetting the well-known differences between a single box made of a tightly coupled set of components and a distributed system linked by unreliable circuits. Distributed systems and network protocols have come a long way in dealing with these issues. The complexity cannot simply be brushed under the carpet by an appealing new idea - supposing it's really new. The arguments for OpenFlow remind me too much of the centrally controlled packet-switching networks proposed in the '80s, like X.25 - remember that? Plus ça... etc. > > Lloyd Wood > http://sat-net.com/L.Wood/CCSR > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20110429/1ad2ad5f/attachment-0001.html From randy at psg.com Fri Apr 29 15:00:36 2011 From: randy at psg.com (Randy Bush) Date: Sat, 30 Apr 2011 07:00:36 +0900 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: the internet is a reliable network made from unreliable components. centralization of control is an enemy of this. randy From calvert at netlab.uky.edu Fri Apr 29 16:15:09 2011 From: calvert at netlab.uky.edu (Ken Calvert) Date: Fri, 29 Apr 2011 19:15:09 -0400 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: <898D0321-D686-4814-8134-60F0458E193C@netlab.uky.edu> Hi Randy - > the internet is a reliable network made from unreliable components. > centralization of control is an enemy of this. I'm not sure whether I agree with these assertions (especially the latter). I don't think they are self-evident. If my web search fails, is the problem more likely to be at Google, or somewhere in the network? (Perhaps someone has hard data...?) Which has more centralized control? Was the North American telephone system more reliable in 1982, or today? Ken (glad to see discussion on e2e again) From ping at pingpan.org Fri Apr 29 16:18:26 2011 From: ping at pingpan.org (Ping Pan) Date: Fri, 29 Apr 2011 16:18:26 -0700 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: On Fri, Apr 29, 2011 at 3:00 PM, Randy Bush wrote: > the internet is a reliable network made from unreliable components. > centralization of control is an enemy of this. > > randy > At the same time, DNS and much of mobile operation are based on centralized or proxy servers. ;-) IMHO, it is still too early for OpenFlow to have proven itself for wide use; however, it seems useful in some newer applications, such as data centers.
Ping -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20110429/dd5a8af9/attachment.html From randy at psg.com Fri Apr 29 19:46:16 2011 From: randy at psg.com (Randy Bush) Date: Sat, 30 Apr 2011 11:46:16 +0900 Subject: [e2e] NDDI & OpenFlow In-Reply-To: <898D0321-D686-4814-8134-60F0458E193C@netlab.uky.edu> References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> <898D0321-D686-4814-8134-60F0458E193C@netlab.uky.edu> Message-ID: > If my web search fails, is the problem more likely to be at Google, or > somewhere in the network? in both cases you have choices. > Was the North American telephone system more reliable in 1982, or > today? dunno. but phones did not work here after the quake and internet did. randy From randy at psg.com Fri Apr 29 19:53:02 2011 From: randy at psg.com (Randy Bush) Date: Sat, 30 Apr 2011 11:53:02 +0900 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: > At the same time, DNS and much of mobile operation are based on > centralized or proxy servers. ;-) mobile ops are bellheads. of course they are centralized and fragile. you may want to look up how the dns protocols work and how they are actually deployed. and do not confuse centralization at layer nine with centralization at seven and below. while the former is a problem, it's not an engineering problem. randy From casado at cs.stanford.edu Sat Apr 30 08:00:01 2011 From: casado at cs.stanford.edu (Martin Casado) Date: Sat, 30 Apr 2011 08:00:01 -0700 Subject: [e2e] NDDI & OpenFlow In-Reply-To: References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> Message-ID: <4DBC23F1.1010002@cs.stanford.edu> On 4/29/11 3:00 PM, Randy Bush wrote: > the internet is a reliable network made from unreliable components. > centralization of control is an enemy of this.
>
> randy

It's probably worth pointing out that any OpenFlow controller worth its
salt is fully distributed. The primary difference between that and the
traditional architecture is that the distribution model is not fixed to
the physical topology. One could argue that, given the innate
difficulty of building distributed systems, this makes development and
deployment of new stuff easier.

.m

From dedutta at cisco.com  Sat Apr 30 09:29:18 2011
From: dedutta at cisco.com (Debo Dutta (dedutta))
Date: Sat, 30 Apr 2011 09:29:18 -0700
Subject: [e2e] NDDI & OpenFlow
In-Reply-To: <4DBC23F1.1010002@cs.stanford.edu>
References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> <4DBC23F1.1010002@cs.stanford.edu>
Message-ID: <0079EA505DB7BF48BE3D565D16ED55F80ECA1AC8@xmb-sjc-225.amer.cisco.com>

> the physical topology. Which one could argue, given the innate
> difficulty in building distributed systems, makes development and
> deployment of new stuff easier.

I guess one could argue the other way too :)

debo

-----Original Message-----
From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of Martin Casado
Sent: Saturday, April 30, 2011 8:00 AM
To: Randy Bush
Cc: end2end-interest at postel.org; L.Wood at surrey.ac.uk
Subject: Re: [e2e] NDDI & OpenFlow
From casado at cs.stanford.edu  Sat Apr 30 10:13:50 2011
From: casado at cs.stanford.edu (Martin Casado)
Date: Sat, 30 Apr 2011 10:13:50 -0700
Subject: [e2e] NDDI & OpenFlow
In-Reply-To: 
References: <507D6DCB-5000-4E9D-BF52-2B7C9AD4D890@eecs.utk.edu> <4DBC23F1.1010002@cs.stanford.edu>
Message-ID: <4DBC434E.8030306@cs.stanford.edu>

On 4/30/11 9:26 AM, Stiliadis, Dimitrios (Dimitri) wrote:
>> On 4/29/11 3:00 PM, Randy Bush wrote:
>>> the internet is a reliable network made from unreliable components.
>>> centralization of control is an enemy of this.
>>>
>>> randy
>>
>> It's probably worth pointing out that any OpenFlow controller worth
>> its salt is fully distributed. The primary difference between that
>> and the traditional architecture is that the distribution model is
>> not fixed to the physical topology. One could argue that, given the
>> innate difficulty of building distributed systems, this makes
>> development and deployment of new stuff easier.
>
> Well, if it is distributed, in one instantiation it can be running
> in every node. Right? Then we are back at square one.

Exactly right (to the first part). But I don't quite agree with the
second statement. My point is that the choice of distribution model
lies with the system developer, and isn't fixed in the architecture.
You may choose to have a control element with each forwarding element.
I may choose to have a relatively small number of control nodes manage
a single-function network (say, for example, a data center). And Ken
may choose to use a single control node to manage a home network.

Why does this matter? Ken can trade off scalability and resilience for
a simplified programming model. I can employ modern distributed-systems
tools that are in wide use today (e.g. Cassandra, ZooKeeper, whatever),
which are unlikely to operate well if an instance is required to run on
every node (yes, that is an understatement). And you're welcome to do
whatever you please.
Controller platforms have been developed that support multiple
distribution models. They take care of a bunch of the low-level details
of managing switch state, provide a common toolkit of distribution
primitives, and allow the system designer to choose between, say, scale
and a stronger consistency model.

Some very cool production systems have been built on OpenFlow using
tightly coupled clusters of compute nodes and modern scale-out systems
practices. Unfortunately, those who build them tend not to like to talk
about them. I hope that changes soon.

.m
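[Editor's note: Casado's point above -- that the distribution model is a
design choice, not an architectural constant -- can be sketched in a toy
example. This is not a real OpenFlow controller API; the class names and
the handle_packet_in signature below are invented for illustration. The
control logic (a MAC-learning switch) is written once against a small
key-value store interface, and the same code can run against a
single-node store (Ken's home-network case) or a stand-in for a
replicated store (the data-center case):]

```python
# Toy sketch: learning-switch control logic decoupled from the
# distribution model of its state store.

class LocalStore:
    """Single-node state: one control node for a small network."""
    def __init__(self):
        self._table = {}

    def put(self, key, value):
        self._table[key] = value

    def get(self, key):
        return self._table.get(key)


class ReplicatedStore(LocalStore):
    """Stand-in for a clustered store (in a real system, put/get would
    go through a coordination service such as ZooKeeper); same
    interface, different scale/consistency trade-off."""
    pass


def handle_packet_in(store, switch_id, in_port, src_mac, dst_mac):
    """Learn the source's port, then forward: out a known port if the
    destination has been seen, otherwise flood."""
    store.put((switch_id, src_mac), in_port)
    out_port = store.get((switch_id, dst_mac))
    return out_port if out_port is not None else "FLOOD"
```

The point of the sketch is that handle_packet_in never knows which
distribution model backs its state; swapping LocalStore for
ReplicatedStore changes the deployment, not the control logic.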