From craig at aland.bbn.com Thu May 1 06:56:26 2008 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 01 May 2008 09:56:26 -0400 Subject: [e2e] end of interest In-Reply-To: Your message of "Wed, 30 Apr 2008 09:40:40 EDT." Message-ID: <20080501135626.5436B28E157@aland.bbn.com> In message , John Day writes: >Craig, > >Have a question. All of this innovative new research you are seeing. >What sort? > > ... > >Can you prove me wrong? I hope! ;-) Hi John: I'd describe the research as being along trajectories -- that is, clumps of potential that might gel into something really nifty: * Programmable physics -- how your network device (radio, electrical or optical) behaves on its medium is entirely a result of software -- you can change behavior [signal power, modulation, coding, MAC layer] in an instant. The work of all those IEEE 802.* committees becomes a matter of loading a bit of software. * Knowing more while measuring less -- we're making tremendous progress on this front (trajectory sampling, principal component analysis on sparse traces, etc). * Re-examining the middle of the network -- the best example here is what if the router has a 100 GB hard drive in it -- and we view the contents of the hard drive as entirely "soft" (can be lost in an instant). Can we do nifty things? [cf. DTN (which views the drive as reliable, but similar vein), Van's talk @ Google, etc.] * Energy efficiency -- in this case I worked on an energy efficient radio project and discovered there's very little literature on saving energy in networks. (What are the design principles for an energy efficient transport protocol? Turns out that is a non-trivial and often counter-intuitive problem that has you looking at old ARQ work...) Thanks!
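[Editorial aside: Craig's "knowing more while measuring less" bullet can be illustrated with a toy sketch. Everything below is synthetic and invented for illustration -- no real trace data -- but it shows why PCA can summarize many link-load time series with a couple of components when the traffic is driven by a few shared latent signals.]

```python
import numpy as np

# Toy illustration: link loads on many links are often driven by a handful
# of underlying patterns (a shared diurnal cycle, a shared anomaly), so the
# link-by-time matrix is approximately low-rank and a few principal
# components capture most of its variance. All numbers are synthetic.

rng = np.random.default_rng(0)
t = np.arange(288)                              # one day of 5-minute bins
diurnal = 1.0 + np.sin(2 * np.pi * t / 288)     # shared daily pattern
spike = ((t > 200) & (t < 210)).astype(float)   # shared brief anomaly

# 30 links, each a different mixture of the two latent signals plus noise
mix = rng.uniform(0.5, 2.0, size=(30, 2))
latent = np.vstack([diurnal, spike])
links = mix @ latent + 0.05 * rng.standard_normal((30, t.size))

# PCA via SVD of the mean-centered matrix
centered = links - links.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)
energy = (s ** 2) / (s ** 2).sum()

# Fraction of total variance captured by the top two components
print(round(energy[:2].sum(), 3))
```

With 30 links but only two latent drivers, nearly all the variance lands in the first two components -- the sense in which sparse measurement plus PCA lets you "know more while measuring less."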
Craig From mbeck at cs.utk.edu Thu May 1 12:16:42 2008 From: mbeck at cs.utk.edu (Micah Beck) Date: Thu, 1 May 2008 15:16:42 -0400 Subject: [e2e] end of interest References: <20080501135626.5436B28E157@aland.bbn.com> Message-ID: <005f01c8abbf$e6585130$6b00a8c0@universi37f55d> > * Re-examining the middle of the network -- the best example here is what > if the router has a 100 GB hard drive in it -- and we view the contents > of the hard drive as entirely "soft" (can be lost in an instant). > Can we do nifty things? [cf. DTN (which views the drive as reliable, > but similar vein), Van's talk @ Google, etc.] A technology which is exactly as described is Logistical Networking, which augments the network with intermediate nodes that are heavy with disk resources (we call them "depots") and then makes them available on a "best effort" basis using the Internet Backplane Protocol (IBP). There is a deployment project funded by NSF, based at the Vanderbilt Advanced Computing Center for Research & Education and led by Paul Sheldon, called the Research and Education Data Depot Network (REDDnet, pronounced "ready net"). REDDnet has deployed 160TB so far using IBP, and is available as a platform for experimentation (see http://www.reddnet.org). Interested parties could contact me. Micah Beck Associate Professor, EECS University of Tennessee, Knoxville From day at std.com Thu May 1 14:08:51 2008 From: day at std.com (John Day) Date: Thu, 1 May 2008 17:08:51 -0400 Subject: [e2e] end of interest In-Reply-To: <20080501135626.5436B28E157@aland.bbn.com> References: <20080501135626.5436B28E157@aland.bbn.com> Message-ID: At 9:56 -0400 2008/05/01, Craig Partridge wrote: >In message , John Day writes: > >>Craig, >> >>Have a question. All of this innovative new research you are seeing. >>What sort? >> >> ... >> >>Can you prove me wrong? I hope!
;-) > >Hi John: > >I'd describe the research as being along trajectories -- that is, clumps >of potential that might gel into something really nifty: > >* Programmable physics -- how your network device (radio, electrical or > optical) behaves on its medium is entirely a result of software -- you > can change behavior [signal power, modulation, coding, MAC layer] in > an instant. The work of all those IEEE 802.* committees becomes > a matter of loading a bit of software. > >* Knowing more while measuring less -- we're making tremendous progress > on this front (trajectory sampling, principal component analysis on > sparse traces, etc). > >* Re-examining the middle of the network -- the best example here is what > if the router has a 100 GB hard drive in it -- and we view the contents > of the hard drive as entirely "soft" (can be lost in an instant). > Can we do nifty things? [cf. DTN (which views the drive as reliable, > but similar vein), Van's talk @ Google, etc.] I think I saw that. Was that the one where at the beginning he called for a Copernican revolution in networking and then at the end he said don't bother touching TCP and below? > >* Energy efficiency -- in this case I worked on an energy efficient radio > project and discovered there's very little literature on saving > energy in networks. (What are the design principles for an energy > efficient transport protocol? Turns out that is a non-trivial and > often counter-intuitive problem that has you looking at old ARQ work...) So I take it from this list you don't see much in the way of new fundamental results coming out of FIND or any of this "new architecture" stuff?
Take care, John From pmendes at inescporto.pt Fri May 2 06:32:29 2008 From: pmendes at inescporto.pt (Paulo Mendes) Date: Fri, 02 May 2008 14:32:29 +0100 Subject: [e2e] experimenting on customers In-Reply-To: References: <48089E95.1090103@reed.com> <480908CC.5050400@reed.com> <480C9C37.40509@reed.com> <480EB31C.7060403@dcrocker.net> <480F296F.7020807@reed.com> <48109844.2040209@reed.com> Message-ID: <481B17ED.7020907@inescporto.pt> I have to say that I tend to agree with Jon. Let me just add: if a few lines of changes in a p2p program can have a significant impact on the traffic pattern, imagine joining to this phenomenon the capability of having a significant impact on the degree of Internet connectivity just by changing a few lines in some open source systems willing to relay Internet access (e.g. FON). Paulo Jon Crowcroft wrote: > the point is that very very small people can do very very big experiments - there > was some controversy about this, for example, in NSDI last year when the > BitTyrant people revealed that they had released their variant of the torrent > tool with a modified incentive algorithm to see what would happen with a lot of > users - as with all good psycho-science (and some anthropology:), the users > can't know you are doing the experiment, coz that might interfere with the > validity (sounds like Asimov's psychohistory;)..... > but of course that has interesting ethical impact... > > but that's not my main point, which is: > > something as small as a few lines change in a p2p program > which is then run by 1000s or millions of users, > has a MASSIVE potential (and actual) impact on the traffic pattern, > which has a massive impact on the ISPs (infrastructure) > which has a massive impact on the other users.
> > so just because you cannot alter an IP header or a TCP option saying > a) the middleboxes get in the way, > and > b) the vendors won't put it in the OS stack for you anyhow, > does NOT mean you cannot do BIG Network Science > one bit. not at all. > > oh no > > > In missive <48109844.2040209 at reed.com>, "David P. Reed" typed: > > >>So what's the point you're making Jon? Users' experiments impact other > >>users? What does that have to do with vendors experimenting on their > >>users? The cases are different qualitatively and in kind, and the risks > >>and liabilities are different, as in any multiparty system of > >>constraints and desires. > >> > >>Unless of course, you think that there is no difference at all between > >>the King and his subjects, the President and commander-in-chief and a > >>homeless person. > >> > >>Experiments by the powerful upon the weak/dependent are quite different > >>from experiments with limited impact and scale taking place in a vast > >>ocean of relatively unaffected people. > >> > >>There is no binary "good vs. evil" logic here. There is no black and > >>white. But the difference is plain unless you abstract away every > >>aspect of reality other than the term "experiment". > >> > >>Jon Crowcroft wrote: > >>> it is crucial to realize that the regulation of experiments of any kind is a > >>> post hoc rationalisation rather than any kind of actual model of what > >>> ought to be the outcome > >>> > >>> almost all new successfully deployed > >>> protocols in the last 15-20 years have been ahead of any curve > >>> to do with IETF processes, ISPs planning, provisioning, > >>> legal system comprehension...
> >>> > >>> end users download and run stuff > >>> (even if they don't compromise their OS, they > >>> in the millions download facebook apps daily that compromise their privacy and > >>> potentially break laws in some parts of the world > >>> > >>> they upgrade or don't upgrade their end systems and their home xDSL/wifi routers' > >>> firmware > >>> > >>> every one of these may be a controlled experiment when conducted in isolation, > >>> and with full support of the software developer, but in combination > >>> they are clear > >>> blue sky... > >>> > >>> we don't know the emergent properties of these things until people notice them > >>> (in nanog, in court, in government, or, occasionally, by doing measurement > >>> experiments... > >>> > >>> frankly, even within a single node, i remember roger needham explaining over 10 > >>> years ago that it had become impossible for microsoft to run regression testing > >>> across all combinations of devices and drivers and OS versions because the > >>> numbers had just Got Too Big already (2-3 new devices per day etc etc) > >>> so now do that with networked interactions... > >>> > >>> ergo: > >>> all experiments by net customers are experiments on net customers... > >>> > >>> of course, the one thing we can't do with the one true internet (since it is now > >>> holy critical infrastructure) is proper destructive testing > >>> (we can't even figure out LD50:) > >>> > >>> In missive <480F296F.7020807 at reed.com>, "David P. Reed" typed: > >>> > >>> >>Dave - as I mentioned earlier, there is a huge difference between > >>> >>experimenting on customers and letting customers experiment. > >>> >> > >>> >>Your post equates the two. I suggest that the distinction is crucial.
> >>> >>And that is the point of the end-to-end argument, at least 80% of it, > >>> >>when applied to modularity between carriers and users, or the modularity > >>> >>between systems vendors and users, or the modularity between companies > >>> >>that would hope to support innovative research and researchers. > >>> >> > >>> >>Dave Crocker wrote: > >>> >>> > >>> >>> Jon Crowcroft wrote: > >>> >>>> I don't understand all this - most software in the last 30 > >>> >>>> years is an experiment on customers - the internet as a whole > >>> >>>> is an experiment > >>> >>> ... > >>> >>>> so if we are honest, we'd admit this and say > >>> >>>> what we need is a pharma model of informed consent > >>> >>>> yeah, even discounts > >>> >>> > >>> >>> In looking over the many postings in this thread, the word > >>> >>> "experiment" provides the most leverage both for insight and for > >>> >>> confusion. > >>> >>> > >>> >>> Experiments come in very different flavors, notably with very > >>> >>> different risk. > >>> >>> When talking about something on the scale of the current, public > >>> >>> Internet, > >>> >>> or American democracy or global jet travel, the term "experiment" > >>> >>> reminds us > >>> >>> that we do not fully understand impact. But the term also denotes a > >>> >>> risk of > >>> >>> failure which cannot reasonably apply for these grander uses. (After > >>> >>> a few > >>> >>> hundred years, if a civilization dies off, is it a "failure", even > >>> >>> though we > >>> >>> label it an experiment?) In other words, we use the word "experiment" > >>> >>> here in > >>> >>> a non-technical way, connoting the unknown, rather than denoting > >>> >>> controlled > >>> >>> manipulation, diligent study and incremental refinement.
> >>> >>> > >>> >>> So, some of the complaints about being unable to experiment on the open > >>> >>> Internet simply do not make sense, any more than "testing" a radically > >>> >>> new concrete -- with no use experience -- on a freeway bridge would > >>> >>> make sense. Risk is obviously too high; in fact, failure early in the > >>> >>> lifecycle of a new > >>> >>> technology is typically guaranteed. Would you drive over that sucker? > >>> >>> Or under it? > >>> >>> > >>> >>> So if someone is going to express concerns about barriers to adoption, > >>> >>> such as > >>> >>> a lack of flexibility by providers or product companies, they need to > >>> >>> accompany it with a compelling adoption case that shows sufficiently > >>> >>> low risk > >>> >>> and sufficiently high benefit. Typically, that needs to come from real > >>> >>> experimentation, meaning early-stage development, real testing, and pilot > >>> >>> deployment. (Quite nicely this has the not-so-minor side benefit of > >>> >>> grooming an increasingly significant constituency that wants the > >>> >>> technology adopted.) > >>> >>> > >>> >>> Businesses do not deploy real experiments in their products and services. > >>> >>> Costs and risks are both far too high. What they deploy are features > >>> >>> that provide relatively assured benefits. > >>> >>> > >>> >>> As for "blocking" experiments by others, think again of the bridge. > >>> >>> Collateral damage requires that public infrastructure services be > >>> >>> particularly conservative in permitting change. > >>> >>> > >>> >>> In the early 90s, when new routing protocols were being defined and > >>> >>> debated, it was also noted that there was no 'laboratory' large enough > >>> >>> to test the protocols to scale, prior to deployment in the open > >>> >>> Internet. One thought was to couple smaller test networks, via > >>> >>> tunnels across the Internet. I suppose Internet II counts as a modern > >>> >>> alternative.
In other words, real experiments need real laboratories. > >>> >>> > >>> >>> The other challenge for this particular thread is that the term > >>> >>> end-to-end is > >>> >>> treated as a rigid absolute, but never has actually been that. It is > >>> >>> a term > >>> >>> of relativity defined by two boundary points. The modern Internet > >>> >>> has added more complexity between the points, as others have noted. > >>> >>> Rather than a simplistic host-net dichotomy we have layers of > >>> >>> intermediary nets, and often layers of application hosts. (Thank you, > >>> >>> akamai...) We also have layers outside of what we typically call > >>> >>> end-points, such as services that treat email as underlying > >>> >>> infrastructure, rather than "the" application. > >>> >>> > >>> >>> And we have layers of trust (tussles). > >>> >>> > >>> >>> And, and, and... > >>> >>> > >>> >>> So when claiming to need end-to-end, the question is which of many > >>> >>> possible ends? > >>> >>> > >>> >>> And, for what purpose, or one might say... to what end? > >>> >>> > >>> >>> d/ > >>> > >>> cheers > >>> > >>> jon > > cheers > > jon > -- ----------------------------------------------------- Paulo Mendes, Ph.D Area Leader, Internet Architectures and Networking Telecommunication and Multimedia Unit INESC Porto Tel. +351 22 209 4264 Fax. +351 22 209 4050 http://telecom.inescporto.pt/~ian From pmendes at inescporto.pt Fri May 2 06:32:34 2008 From: pmendes at inescporto.pt (Paulo Mendes) Date: Fri, 02 May 2008 14:32:34 +0100 Subject: [e2e] dynamic ISP selection In-Reply-To: References: Message-ID: <481B17F2.10508@inescporto.pt> I could not agree more. Let me add a few more lines of thought on this to go a little beyond the dynamic ISP selection allowed by the unbundled model.
In the past few years, end-users have gained extra power with systems that allow them to play the role of connectivity providers (they were already a kind of content provider with p2p). If we join to this the increasing availability of open source tools and open systems, maybe we can picture a brave new world. Technically, this is not so difficult to do, and the proof is the existence of companies like FON and Wisher, already mentioned before in this mailing list. However, how BT, Virgin, T-Mobile, ..., would see this is a totally different story. Paulo Jon Crowcroft wrote: > so in the rest of the world where we have competition in the last mile > (i.e. not the US:), it is surprisingly tricky to change ISP - > even though some massive fraction of xDSL is unbundled - > > [and the regulators looked recently and discovered that the widespread > competition between cable modem and (surprisingly) broadband wireless access > (e.g. umts or high speed packet, or even wimax a bit) > means that you get techno-diversity as well as economic diversity > > in the UK for example, there's something like 35M > phone lines that bt owns copper up to the exchange building, > but then they get switched onto a humungous ATM net ('colossus'), > and can pop out at layer two below IP > in any virtual broadband provider's POP > > (aside: this business is almost like the way that > some cell phone service providers work who don't actually own spectrum, > but make a business out of leasing spectrum off of others - > i think Virgin does this with (maybe) t-mobile's)... their businesses are > basically being creative with contract models... i.e. combining other services > and content...
> > so anyhow, in the uk (sorry to be so blighty-centric) > you can change your utilities pretty much once a month > (I just changed gas & electricity twice in the last two months to > get better deals) as the same virtualisation (unbundling) > of the last mile is done, and the core provider > (actually 2 layers - grid and generation) > are separated cleanly... > > so given i can go on the web and get my gas, electricity, water, sewage changed, > why is it not possible to get a > SPOT price > for broadband internet? > > i cannot see any technical barrier to this whatsoever:) > > j. -- ----------------------------------------------------- Paulo Mendes, Ph.D Area Leader, Internet Architectures and Networking Telecommunication and Multimedia Unit INESC Porto Tel. +351 22 209 4264 Fax. +351 22 209 4050 http://telecom.inescporto.pt/~ian From dga at cs.cmu.edu Thu May 1 12:19:00 2008 From: dga at cs.cmu.edu (David Andersen) Date: Thu, 1 May 2008 15:19:00 -0400 Subject: [e2e] HotNets-VII Call for Papers Message-ID: Speaking of Craig's recent list of hot things in networking, Steve Gribble and I would like to cordially invite you all to submit even more hot ideas for research to... ACM HotNets-VII October 6-7, 2008 Call for Papers The Seventh Workshop on Hot Topics in Networks (HotNets-VII), to be held in Calgary, Alberta, Canada, will bring together researchers in the networking systems community to engage in lively discussion of future trends in networking research and technology. The workshop, which is sponsored by ACM SIGCOMM, provides a venue for researchers to present and discuss ideas that have the potential to significantly influence the community in the long term; the goal is to promote community-wide discussions of those ideas. Each potential participant should submit a short position paper describing such an idea. The paper could, for example, expose a new problem, advocate a new solution, or re-frame or debunk existing work.
We encourage submissions of early work, with novel and interesting ideas, across the broad range of networking systems research. We expect that work introduced at HotNets-VII, once fully thought through, completed, and described in a finished form, may be relevant to conferences such as SIGCOMM, NSDI, SOSP, OSDI, SenSys, or MobiCom. Topics of interest include, but are by no means limited to:

o Architectural support for security or availability
o Computing in the cloud: what role exists for networking research?
o Ensuring the correctness of distributed protocols
o Evolution of storage area networks
o Lessons drawn from failed research, and controversial or disruptive topics
o Measurement and management of metro-area WiFi networks
o Networking within the modern data center
o Power as a first-class design property; "green" protocols/implementations
o Protocol design for optical switching
o Support for non-IP internetworking protocols
o Technical aspects of and solutions to network neutrality
o Third-world networking challenges
o Unique challenges of massive multi-player game systems
o Validation of measurement-based research: what are our standards?

Position papers will be selected based on originality, likelihood of spawning insightful discussion, and technical merit. Online copies of accepted position papers will be made publicly available via the Web prior to the workshop, and printed proceedings will be published. Additionally, a workshop summary will be published in ACM SIGCOMM's Computer Communication Review (CCR), widely disseminating the ideas discussed at the workshop. Attendance will be limited to around 60 people in order to ensure an interactive workshop atmosphere.
Invitations to attend the workshop will be extended according to the following priorities: o the Program and Steering Committees, one author per paper, and any speakers invited by the Program Committee o co-authors of accepted and submitted papers, preferring students as available scholarships allow o event-sponsor representatives and additional authors of submitted papers at the discretion of the Program Committee The workshop will be held at the Rozsa Centre, University of Calgary. HotNets-VII is sponsored by ACM SIGCOMM. Submission Instructions Submitted papers must be no longer than 6 pages (10 point font, 1 inch margins). All submissions must be blind. Submissions must not indicate the names or affiliations of the authors in the paper. Only electronic submissions in PostScript or PDF will be accepted. Submissions must be written in English, render without error using standard tools (Ghostview or Acrobat Reader), and print on US-Letter sized paper. Please number your pages. HotNets-VII reviews will follow standard academic practice, although some rejected papers may not receive full-length reviews. Submission information will be posted by early June at http://www.acm.org/sigcomm/HotNets-VII Important Dates Submissions due: July 12, 2008 (11:59pm Pacific Daylight Time) No extensions will be granted. Notification of acceptance: August 18, 2008 Camera-ready copy due: September 15, 2008 (11:59pm PDT) Workshop (Calgary, AB, Canada): October 6 and 7, 2008 Organizers: General Chair Carey Williamson, University of Calgary Program Co-Chairs David Andersen, CMU Steve Gribble, UW From craig at aland.bbn.com Thu May 8 13:23:34 2008 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 08 May 2008 16:23:34 -0400 Subject: [e2e] end of interest In-Reply-To: Your message of "Thu, 01 May 2008 17:08:51 EDT."
Message-ID: <20080508202334.444B228E155@aland.bbn.com> In message , John Day writes: >At 9:56 -0400 2008/05/01, Craig Partridge wrote: >>* Re-examining the middle of the network -- the best example here is what >> if the router has a 100 GB hard drive in it -- and we view the contents >> of the hard drive as entirely "soft" (can be lost in an instant). >> Can we do nifty things? [cf. DTN (which views the drive as reliable, >> but similar vein), Van's talk @ Google, etc.] > >I think I saw that. Was that the one where at the beginning he >called for a Copernican revolution in networking and then at end he >said don't bother touching TCP and below? I don't think it said don't bother touching TCP and below so much as said they don't matter. That's certainly what Van said in a more recent talk. And I think it is right -- if you think you have a game changing paradigm that can work over existing stuff but might work better over new stuff, focus on your core idea -- if it works, the rest of the network will morph to support it. > >> >>* Energy efficiency -- in this case I worked on an energy efficient radio >> project and discovered there's very little literature on saving >> energy in networks. (What are the design principles for an energy >> efficient transport protocol? Turns out that is a non-trivial and >> often counter-intuitive problem that has you looking at old ARQ work...) > >So I take it from this list you don't see much in the way of new >fundamental results coming out of FIND or any of this "new >architecture" stuff? As I understand it, FIND is pushing a different axis -- it is looking at architectural issues raised by the innovative research. 
Craig From ian.mcdonald at jandi.co.nz Thu May 8 16:53:08 2008 From: ian.mcdonald at jandi.co.nz (Ian McDonald) Date: Fri, 9 May 2008 11:53:08 +1200 Subject: [e2e] end of interest In-Reply-To: <20080508202334.444B228E155@aland.bbn.com> References: <20080508202334.444B228E155@aland.bbn.com> Message-ID: <5640c7e00805081653o21317982h24616d35df266bb0@mail.gmail.com> On Fri, May 9, 2008 at 8:23 AM, Craig Partridge wrote: > >>>* Re-examining the middle of the network -- the best example here is what >>> if the router has a 100 GB hard drive in it -- and we view the contents >>> of the hard drive as entirely "soft" (can be lost in an instant). >>> Can we do nifty things? [cf. DTN (which views the drive as reliable, >>> but similar vein), Van's talk @ Google, etc.] >> >>I think I saw that. Was that the one where at the beginning he >>called for a Copernican revolution in networking and then at end he >>said don't bother touching TCP and below? > > I don't think it said don't bother touching TCP and below so much as said > they don't matter. That's certainly what Van said in a more recent talk. > And I think it is right -- if you think you have a game changing paradigm > that can work over existing stuff but might work better over new stuff, > focus on your core idea -- if it works, the rest of the network will > morph to support it. 
> This is the video being referred to: http://www.youtube.com/watch?v=gqGEMQveoqg -- Web: http://wand.net.nz/~iam4/ Blog: http://iansblog.jandi.co.nz From day at std.com Fri May 9 06:45:13 2008 From: day at std.com (John Day) Date: Fri, 9 May 2008 09:45:13 -0400 Subject: [e2e] end of interest In-Reply-To: <20080508202334.444B228E155@aland.bbn.com> References: <20080508202334.444B228E155@aland.bbn.com> Message-ID: At 16:23 -0400 2008/05/08, Craig Partridge wrote: >In message , John Day writes: > >>At 9:56 -0400 2008/05/01, Craig Partridge wrote: > >>>* Re-examining the middle of the network -- the best example here is what >>> if the router has a 100 GB hard drive in it -- and we view the contents >>> of the hard drive as entirely "soft" (can be lost in an instant). >>> Can we do nifty things? [cf. DTN (which views the drive as reliable, >>> but similar vein), Van's talk @ Google, etc.] >> >>I think I saw that. Was that the one where at the beginning he >>called for a Copernican revolution in networking and then at end he >>said don't bother touching TCP and below? > >I don't think it said don't bother touching TCP and below so much as said >they don't matter. That's certainly what Van said in a more recent talk. >And I think it is right -- if you think you have a game changing paradigm >that can work over existing stuff but might work better over new stuff, >focus on your core idea -- if it works, the rest of the network will >morph to support it. Some time ago, Microsoft had the same idea about dealing with having half an operating system. Didn't work for them, not going to work here. Overlays are building on sand, or trying to sweep the mess under the layer. They can't fix what is fundamentally an incomplete architecture. > > >>> >>>* Energy efficiency -- in this case I worked on an energy efficient radio >>> project and discovered there's very little literature on saving >>> energy in networks. 
(What are the design principles for an energy >>> efficient transport protocol? Turns out that is a non-trivial and >>> often counter-intuitive problem that has you looking at old ARQ work...) >> >>So I take it from this list you don't see much in the way of new >>fundamental results coming out of FIND or any of this "new >>architecture" stuff? > >As I understand it, FIND is pushing a different axis -- it is looking >at architectural issues raised by the innovative research. God forbid they should tackle the fundamental issues that have been around for a long time. Better to declare victory and move on, I guess. Problem is, if they couldn't solve the old ones, what confidence should we have that they can solve the new ones? Especially when it is highly likely the new ones would not be much of an issue if they had solved the old ones. Stagnation is so dull. Take care, John From Jon.Crowcroft at cl.cam.ac.uk Fri May 9 07:56:50 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Fri, 09 May 2008 15:56:50 +0100 Subject: [e2e] end of interest In-Reply-To: References: <20080508202334.444B228E155@aland.bbn.com> Message-ID: one of the more novel ideas being re-propagated in the mad rush to claim some "new" ground in the geni-flecting of the internet is data driven networking a paper that landed on my desk from the sigcomm 2009 rejection heap entitled "Endless Arguments in Systems Design" has some nice protocol fragments based around a new taxonomy of nodes in the graph rather than host + router + middle box, (or end- and intermediate- system as ISO used to term them) we only have 1 type of node (let's call it a synch) instead of 1-1, 1-n, and n-1 communication, we have 0-n and n-0 communication the zero indicates uncertainty about the eventual recipient (or the antediluvian originator) of course, a sequence of 0-n, n-0 patterns can be constructed indicating cascades or swarms and possible aggregation and disaggregation of content.
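[Editorial aside: the 0-n / n-0 pattern Jon describes can be caricatured in a few lines of Python. Everything here -- the class, the flooding, the names -- is invented for illustration; nothing is taken from the rejected paper. A publisher does 0-n communication (it names data without knowing who will receive it) and a consumer does n-0 communication (it asks for a name without knowing who originated it).]

```python
# A toy "synch": every node is the same kind of object, holding named
# data and gossiping with peers. publish() is 0-n (recipients unknown);
# fetch() is n-0 (originator unknown). Dissemination is naive one-hop
# flooding, just enough to show the pattern.

class Synch:
    def __init__(self):
        self.store = {}   # name -> data held at this node
        self.peers = []   # other synchs we exchange data with

    def publish(self, name, data):
        """0-n: emit named data; the eventual recipients are unknown."""
        self.store[name] = data
        for peer in self.peers:
            peer.store.setdefault(name, data)  # naive one-hop flood

    def fetch(self, name):
        """n-0: resolve a name; the originator is unknown to us."""
        if name in self.store:
            return self.store[name]
        for peer in self.peers:
            if name in peer.store:
                self.store[name] = peer.store[name]  # cache on the path
                return self.store[name]
        return None

a, b, c = Synch(), Synch(), Synch()
a.peers, b.peers, c.peers = [b], [a, c], [b]
a.publish("video/1", "bits")   # a has no idea c exists
print(c.fetch("video/1"))      # c has no idea a originated the data
```

The caching on fetch is where the interesting questions about reliability, flow, and congestion control start: the "ends" of the famous arguments are no longer fixed hosts but whichever synchs happen to hold the name.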
interesting questions arise about reliability, flow, and congestion control in such a system - using the famous arguments in their truest sense from that old paper from which this august list may take its marching orders is quite tricky... the rest of the paper is left as an exercise for the reader In missive , John Day typed: >>At 16:23 -0400 2008/05/08, Craig Partridge wrote: >>>In message , John Day writes: >>> >>>>At 9:56 -0400 2008/05/01, Craig Partridge wrote: >>> >>>>>* Re-examining the middle of the network -- the best example here is what >>>>> if the router has a 100 GB hard drive in it -- and we view the contents >>>>> of the hard drive as entirely "soft" (can be lost in an instant). >>>>> Can we do nifty things? [cf. DTN (which views the drive as reliable, >>>>> but similar vein), Van's talk @ Google, etc.] >>>> >>>>I think I saw that. Was that the one where at the beginning he >>>>called for a Copernican revolution in networking and then at the end he >>>>said don't bother touching TCP and below? >>> >>>I don't think it said don't bother touching TCP and below so much as said >>>they don't matter. That's certainly what Van said in a more recent talk. >>>And I think it is right -- if you think you have a game changing paradigm >>>that can work over existing stuff but might work better over new stuff, >>>focus on your core idea -- if it works, the rest of the network will >>>morph to support it. >> >>Some time ago, Microsoft had the same idea about dealing with having >>half an operating system. Didn't work for them, not going to work >>here. Overlays are building on sand, or trying to sweep the mess >>under the layer. They can't fix what is fundamentally an incomplete >>architecture. >> >>> > >>>>> >>>>>* Energy efficiency -- in this case I worked on an energy efficient radio >>>>> project and discovered there's very little literature on saving >>>>> energy in networks. (What are the design principles for an energy >>>>> efficient transport protocol?
Turns out that is a non-trivial and >>>>> often counter-intuitive problem that has you looking at old ARQ work...) >>>> >>>>So I take it from this list you don't see much in the way of new >>>>fundamental results coming out of FIND or any of this "new >>>>architecture" stuff? >>> >>>As I understand it, FIND is pushing a different axis -- it is looking >>>at architectural issues raised by the innovative research. >> >>God forbid, they should tackle the fundamental issues that have been >>around for a long time. Better to declare victory and move on, I >>guess. Problem is if they couldn't solve the old ones what >>confidence should we have they can solve the new ones. Especially >>when it is highly likely the new ones would not be much of a an issue >>if they had solved the old ones. >> >>Stagnation is so dull. >> >>Take care, >>John cheers jon From touch at ISI.EDU Fri May 9 09:31:53 2008 From: touch at ISI.EDU (Joe Touch) Date: Fri, 09 May 2008 09:31:53 -0700 Subject: [e2e] end of interest In-Reply-To: References: <20080508202334.444B228E155@aland.bbn.com> Message-ID: <48247C79.70103@isi.edu> John Day wrote: > At 16:23 -0400 2008/05/08, Craig Partridge wrote: ... >> I don't think it said don't bother touching TCP and below so much as said >> they don't matter. That's certainly what Van said in a more recent talk. >> And I think it is right -- if you think you have a game changing paradigm >> that can work over existing stuff but might work better over new stuff, >> focus on your core idea -- if it works, the rest of the network will >> morph to support it. > > Some time ago, Microsoft had the same idea about dealing with having > half an operating system. Didn't work for them, not going to work > here. Overlays are building on sand, or trying to sweep the mess under > the layer. They can't fix what is fundamentally an incomplete > architecture. Does that go for virtual memory too? 
IMO, overlays are as integral to networking as VM is to memory - something we didn't put into the original architecture, but isn't a stop-gap either. VM, e.g., was originally to handle memory capacity limits, but has other benefits that persist even when RAM is plentiful: - providing a linear, contiguous memory view to processes - sandboxing processes from each other Overlays have very similar benefits to networking. No, they don't fix everything, and some of what they've been used to 'fix' really needs addressing in the underlying network. But that doesn't mean they're not a key part of the solution either. Joe From day at std.com Fri May 9 10:43:56 2008 From: day at std.com (John Day) Date: Fri, 9 May 2008 13:43:56 -0400 Subject: [e2e] end of interest In-Reply-To: <48247C79.70103@isi.edu> References: <20080508202334.444B228E155@aland.bbn.com> <48247C79.70103@isi.edu> Message-ID: Joe, There is nothing wrong with overlays. I said nothing bad about overlays. But like many things overlays can't turn a sow's ear into a silk purse. What I did imply was it would be a mistake to think that overlays could fix fundamental underlying problems. Microsoft believed (or at least seemed to) for some number of years that they could simply overlay Windows on DOS. They finally realized that that was not going to work. You can overlay all you want as long as you are building on a solid base. In fact, I would even argue that thinking of overlays as merely an *addition* to the architecture is merely continuing down a blind alley. Take care, John At 9:31 -0700 2008/05/09, Joe Touch wrote: >John Day wrote: >>At 16:23 -0400 2008/05/08, Craig Partridge wrote: >...
>>>I don't think it said don't bother touching TCP and below so much as said >>>they don't matter. That's certainly what Van said in a more recent talk. >>>And I think it is right -- if you think you have a game changing paradigm >>>that can work over existing stuff but might work better over new stuff, >>>focus on your core idea -- if it works, the rest of the network will >>>morph to support it. >> >>Some time ago, Microsoft had the same idea about dealing with >>having half an operating system. Didn't work for them, not going >>to work here. Overlays are building on sand, or trying to sweep >>the mess under the layer. They can't fix what is fundamentally an >>incomplete architecture. > >Does that go for virtual memory too? > >IMO, overlays are as integral to networking as VM is to memory - >something we didn't put into the original architecture, but isn't a >stop-gap either. > >VM, e.g., was originally to handle memory capacity limits, but has >other benefits that persist even when RAM is plentiful: > - providing a linear, contiguous memory view to processes > - sandboxing processes from each other > >Overlays have very similar benefits to networking. No, they don't >fix everything, and some of what they've been used to 'fix' really >needs addressing in the underlying network. But that doesn't mean >they're not a key part of the solution either. 
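[Editorial aside: Joe's VM analogy above can be made concrete. The sketch below is mine, not from the thread; it uses a POSIX fork() to demonstrate the sandboxing benefit he lists -- parent and child refer to a variable by the same name and virtual address, yet the VM layer keeps their writes on separate physical pages.]

```python
# Sandboxing via virtual memory (illustrative sketch, not from the thread).
# After os.fork(), parent and child address "the same" variable, but
# copy-on-write paging makes the child's write invisible to the parent.
import os

counter = 1

pid = os.fork()          # POSIX-only
if pid == 0:
    counter = 99         # child scribbles on the shared-looking variable
    os._exit(0)

os.waitpid(pid, 0)
# The parent's page was never touched by the child's write:
assert counter == 1
print("parent still sees counter =", counter)
```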
> >Joe From day at std.com Fri May 9 12:11:06 2008 From: day at std.com (John Day) Date: Fri, 9 May 2008 15:11:06 -0400 Subject: [e2e] end of interest In-Reply-To: References: <20080508202334.444B228E155@aland.bbn.com> Message-ID: > > >a paper that landed on my desk from the sigcomm 2009 rejection heap >entitled "Endless Arguments in Systems Design" has some nice protocol >fragments based around a new taxonomy of nodes in the graph > >rather than host + router + middle box, >(or end- and intermediate- system as ISO used to term them) >we only have 1 type of node (let's call it a synch) > >instead of 1-1, 1-n, and n-1 communication, >we have 0-n and n-0 communication > >the zero indicates uncertainty about the eventual recipient >(or the antediluvian originator) > >of course, a sequence of 0-n, n-0 >patterns can be constructed indicating cascades or swarms >and possible aggregation and disaggregation of content. Isn't this just a datagram or maybe a multicast datagram? > >interesting questions arise about >reliability, flow, and congestion control >in such a system - using the famous arguments >in their truest sense from that old paper >from which this august list may take its marching orders, >is quite tricky... This begins to sound like purposely constructing a convoluted situation so there is something to solve rather than simply adopting the straightforward solution. Have we really come to this? Perhaps Stoppard's Guildenstern (as opposed to Shakespeare's) would have an answer to that sort of question: how to do feedback with an uncertain source. Let's see, "I am getting these feedback control messages but I am not sure where they are coming from . . ."
;-) But then that has to be more fun than solving the long-standing problems that might require some hard thinking. What was it that Ford Prefect said? Take care, John From touch at ISI.EDU Fri May 9 14:17:47 2008 From: touch at ISI.EDU (Joe Touch) Date: Fri, 09 May 2008 14:17:47 -0700 Subject: [e2e] end of interest In-Reply-To: References: <20080508202334.444B228E155@aland.bbn.com> <48247C79.70103@isi.edu> Message-ID: <4824BF7B.2000401@isi.edu> John, John Day wrote: > Joe, > > There is nothing wrong with overlays. I said nothing bad about > overlays. But like many things overlays can't turn a sow's ear into a > silk purse. > > What I did imply was it would be a mistake to think that overlays could > fix fundamental underlying problems. Microsoft believed (or at least > seemed to) for some number of years that they could simply overlay > Windows on DOS. They finally realized that that was not going to work. > You can overlay all you want as long as you are building on a solid base. > > In fact, I would even argue that thinking of overlays as merely an > *addition* to the architecture is merely continuing down a blind alley. Agreed. I view the current Internet as a 1-dimensional view of overlays, where overlays add dimensions; those added dimensions expose some 'defaults' in the current architecture that need to be parameterized (e.g., grouping of interfaces, both in the host and router, into strong and weak host model groups). There are some underlying problems that overlays do address, but not all, as you note. Joe
From rkrishnan at comcast.net Fri May 9 17:02:46 2008 From: rkrishnan at comcast.net (Rajesh Krishnan) Date: Fri, 09 May 2008 20:02:46 -0400 Subject: [e2e] end of interest In-Reply-To: References: <20080508202334.444B228E155@aland.bbn.com> Message-ID: <1210377766.7973.100.camel@localhost.localdomain> John, and others, Hello, from a long-time lurker. > >I don't think it said don't bother touching TCP and below so much as said > >they don't matter. That's certainly what Van said in a more recent talk. > >And I think it is right -- if you think you have a game changing paradigm > >that can work over existing stuff but might work better over new stuff, > >focus on your core idea -- if it works, the rest of the network will > >morph to support it. > > Some time ago, Microsoft had the same idea about dealing with having > half an operating system. Didn't work for them, not going to work > here. Overlays are building on sand, or trying to sweep the mess > under the layer. They can't fix what is fundamentally an incomplete > architecture. Would it be blasphemous to suggest here that the TCP/IP Internet pretty much grew as an overlay on a network designed to do voice? When we take an IP-centric thin-waist view, we can usually ignore the fact that indeed there may be another Layer 3, (or in fact an entire protocol stack that we will reduce to a Layer 2 interface) below. The power of abstraction works great until the underlying problem changes radically. Several decades ago, did packet switching fix an incomplete architecture, or did it just change the problem? What if we could repeat that, especially if as Van suggests the key problem to solve has indeed changed?
What if we only got another incomplete architecture that offered a different value proposition from IP, just as IP offered a different one from the phone network, but still arguably incomplete? As a practical matter, suppose we wanted to experiment with different abstractions that addressed storage concerns at the network layer and not insist that it is an application layer concern. Suppose we wanted to move from content-agnostic, topology-obsessed ;) abstractions to ones that focused on reachability to descriptively named/tagged content. Our choices of deployment strategy include: (i) build everything from scratch, and reinvent a lot (ii) focus on the new abstractions, but roll it out as an overlay over the Internet, and if successful, who knows the core infrastructure may morph to match as Craig suggests I believe the latter strategy is more likely to succeed (unless a from-scratch solution that goes after a need that is under-served by the Internet succeeds and eventually moves up-market -- borrowing ideas from Christensen's book "The Innovator's Dilemma"). In any case, overlays are a cheaper way to weed out the evolutionary dead-ends. Networks seem to need a critical mass of adoption, and usually an evolutionarily smart vector to succeed. The DTN Bundle Protocol (or its successor, with metadata extension block semantics that will hopefully remain flexible and user-definable) might just offer an opportunity for acquiring a new critical mass, but only if it finds the smart vector (that does what Mosaic did for HTTP/HTML, and perhaps BSD for TCP/IP). I think overlays or middleware by some name or the other are inevitable (and perhaps even symptomatic of the network having failed us in some way :). Do we want a million fragmented stove-piped overlays that can never talk to each other, or should we attempt to change the (level of) abstraction for the services provided by the network?
Best Regards, Rajesh From day at std.com Fri May 9 18:18:20 2008 From: day at std.com (John Day) Date: Fri, 9 May 2008 21:18:20 -0400 Subject: [e2e] end of interest In-Reply-To: <1210377766.7973.100.camel@localhost.localdomain> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> Message-ID: Rajesh, >John, and others, > >Hello, from a long-time lurker. > >> >I don't think it said don't bother touching TCP and below so much as said >> >they don't matter. That's certainly what Van said in a more recent talk. >> >And I think it is right -- if you think you have a game changing paradigm >> >that can work over existing stuff but might work better over new stuff, >> >focus on your core idea -- if it works, the rest of the network will >> >morph to support it. >> >> Some time ago, Microsoft had the same idea about dealing with having >> half an operating system. Didn't work for them, not going to work >> here. Overlays are building on sand, or trying to sweep the mess >> under the layer. They can't fix what is fundamentally an incomplete >> architecture. > >Would it be blasphemous to suggest here that the TCP/IP Internet pretty >much grew as an overlay on a network designed to do voice? No, not blasphemous. Just stretching an analogy beyond the breaking point. > >When we take an IP-centric thin-waist view, we can usually ignore the >fact that indeed there may be another Layer 3, (or in fact an entire >protocol stack that we will reduce to a Layer 2 interface) below. The >power of abstraction works great until the underlying problem changes >radically. > >Several decades ago, did packet switching fix an incomplete >architecture, or did it just change the problem? What if we could >repeat that, especially if as Van suggests the key problem to solve has >indeed changed?
What if we only got another incomplete architecture >that offered a different value proposition from IP, just as IP offered a >different one from the phone network, but still arguably incomplete? I think you are so far inside the box, you can't see the top. If we continue to use the methods of the last 30 years, it will definitely turn out incomplete. We have known what the incompletenesses were for 30 years. We simply put off solving them long enough that the newbies don't know they are out there. And having been raised with no one mentioning them, they think all is sweetness and light. Did you ever come across someone who had only ever seen DOS and never UNIX or a Mac? They thought DOS was wonderful and nothing could be better. It is a similar effect. > >As a practical matter, suppose we wanted to experiment with different >abstractions that addressed storage concerns at the network layer and >not insist that is an application layer concern. Suppose we wanted to >move from content-agnostic, topology-obsessed ;) abstractions to ones >that focused on reachability to descriptively named/tagged content. One more indication of not having solved age-old problems leads you to even more complex ways of not solving them. >Our choices of deployment strategy include: > > (i) build everything from scratch, and reinvent a lot > > (ii) focus on the new abstractions, but roll it out as an overlay > over the Internet, and if successful, who knows the core > infrastructure may morph to match as Craig suggests > >I believe the latter strategy is more likely to succeed (unless a >from-scratch solution that goes after a need that is under-served by the >Internet succeeds and eventually moves up-market -- borrowing ideas from >Christensen's book "The Innovator's Dilemma"). In any case, overlays >are a cheaper way to weed out the evolutionary dead-ends.
Peculiar that the only two approaches seem to concentrate on building things, and not on a careful scientific investigation of the underlying structure to find out what is really going on. > >Networks seem to need a critical mass of adoption, and usually an >evolutionarily smart vector to succeed. The DTN Bundle Protocol (or its >successor, with metadata extension block semantics that will hopefully >remain flexible and user-definable) might just offer an opportunity for >acquiring a new critical mass, but only if it finds the smart vector >(that does what Mosaic did for HTTP/HTML, and perhaps BSD for TCP/IP). Not at all. There is nowhere in science that the investigation of scientific phenomena requires a critical mass of adoption. I think you are making the same confusion we saw on this list a few weeks back that confused understanding phenomena with commercial manufacture. > >I think overlays or middleware by some name or the other are inevitable Middleware is and always has been snake oil. >(and perhaps even symptomatic of the network having failed us in some >way :). Do we want a million fragmented stove-piped overlays that can >never talk to each other, or should we attempt to change the (level of) >abstraction for the services provided by the network? The network didn't fail us. We failed the network. We didn't do the serious hard work that needed to be done. We let Moore provide us silicon instead of using our brains. We educated a generation of engineers as technicians and called them PhDs. Now we seem to be stuck with less than half an architecture and an entire field who thinks it is wonderful and has forgotten how to do fundamental work. And pats themselves on the back for doing such a great job, while everything around them moves toward the kindling temperature. I suggest you keep the rose colored glasses on. The sun is going to get bright.
Take care, John From rkrishnan at comcast.net Fri May 9 18:58:45 2008 From: rkrishnan at comcast.net (Rajesh Krishnan) Date: Fri, 09 May 2008 21:58:45 -0400 Subject: [e2e] end of interest In-Reply-To: References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> Message-ID: <1210384725.7973.130.camel@localhost.localdomain> John, > We have know what the incompleteness > were for 30 years. We simply put off solving them long enough that > the newbies don't know they are out there. I think it will be good for us newbies to hear these fundamental problems recounted. Knowing the problems, we could ask ourselves as Hamming urged: "What are the most important problems in our field, and why are we not working on them?" (On the other hand, if as you suggest below, our education turns out to be inadequate to solve those problems, we may still be able to pass it along to the next generation in an oral tradition. :) > We educated a generation of engineers as technicians and called them PhDs. It is interesting (or should that be depressing?) to note that Van makes a similar point about wasted graduate student time in his talk. Best Regards, Rajesh From jonathan at dsg.stanford.edu Fri May 9 19:14:59 2008 From: jonathan at dsg.stanford.edu (jonathan@dsg.stanford.edu) Date: Fri, 09 May 2008 19:14:59 -0700 Subject: [e2e] end of interest In-Reply-To: Your message of "Fri, 09 May 2008 09:31:53 PDT." <48247C79.70103@isi.edu> Message-ID: In message <48247C79.70103 at isi.edu>Joe Touch writes >John Day wrote: >> At 16:23 -0400 2008/05/08, Craig Partridge wrote: >>> I don't think it said don't bother touching TCP and below so much as said >>> they don't matter. That's certainly what Van said in a more recent talk. 
>>> And I think it is right -- if you think you have a game changing paradigm >>> that can work over existing stuff but might work better over new stuff, >>> focus on your core idea -- if it works, the rest of the network will >>> morph to support it. >> >> Some time ago, Microsoft had the same idea about dealing with having >> half an operating system. Didn't work for them, not going to work >> here. Overlays are building on sand, or trying to sweep the mess under >> the layer. They can't fix what is fundamentally an incomplete >> architecture. > >Does that go for virtual memory too? > >IMO, overlays are as integral to networking as VM is to memory - >something we didn't put into the original architecture, but isn't a >stop-gap either. > >VM, e.g., was originally to handle memory capacity limits, but has other >benefits that persist even when RAM is plentiful: > - providing a linear, contiguous memory view to processes > - sandboxing processes from each other > >Overlays have very similar benefits to networking. [...] [[Caveat: on historical matters, I defer, before the fact, to those who were there before me, nevermind those there at the time! ]] Hi Joe, I don't think that's a good analogy. The contemporary literature -- as collected in Structured Computer Organization: Bell & Newell; or Bell, Newell and Siewiorek -- shows that sandboxing of process memory and "linearization" of process memory predates the Atlas and its "one-level store". Base/bounds registers go back much earlier, and provide both sandboxing and relocation. Hm. Didn't sharing of address spaces, via segments, also predate demand-paged virtual memory? Last, as Seymour Cray put it, "Virtual memory is memory you don't really have". Does this VM ==> Overlay networks analogy stretch as far as: "Overlay networks you don't really have"? Joe, I'm guessing you'd disagree. Personally I don't have a stake either way. But I'm curious: if the question is posed that strongly, does anyone care to defend it?
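[Editorial aside: the base/bounds mechanism Jonathan invokes is simple enough to sketch. This is my illustration, not from the message: a base register relocates every process-relative address, and a bounds register sandboxes it -- which is how pre-paging machines already provided two of the benefits Joe attributes to VM.]

```python
# Base/bounds address translation (illustrative sketch, not from the thread).
def translate(vaddr, base, bounds):
    """Relocate a process-relative address; trap on a bounds violation."""
    if not 0 <= vaddr < bounds:
        raise MemoryError("bounds violation: %#x" % vaddr)
    return base + vaddr

# Two "processes" use the identical virtual address 0x10 but land in
# disjoint physical regions -- relocation plus sandboxing in one check:
a = translate(0x10, base=0x4000, bounds=0x1000)
b = translate(0x10, base=0x9000, bounds=0x0800)
```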
From touch at ISI.EDU Fri May 9 21:06:33 2008 From: touch at ISI.EDU (Joe Touch) Date: Fri, 09 May 2008 21:06:33 -0700 Subject: [e2e] end of interest In-Reply-To: References: Message-ID: <48251F49.4050205@isi.edu> jonathan at dsg.stanford.edu wrote: > In message <48247C79.70103 at isi.edu>Joe Touch writes ... >> VM, e.g., was originally to handle memory capacity limits, but has other >> benefits that persist even when RAM is plentiful: >> - providing a linear, contiguous memory view to processes >> - sandboxing processes from each other >> >> Overlays have very similar benefits to networking. [...] > > [[Caveat: on historical matters, I defer, before the fact, to those > who were there before me, nevermind those there at the time! ]] > > Hi Joe, > > I don't think that's a good analogy. The contemporary literature -- as > collected in Structured Computer Organization: Bell & Newell; or Bell, > Newell and Siewiorek -- shows that sandboxing of process memory and > "linearization" of process memory predates the Atlas and its > "one-level store". Base/bounds registers go back much earlier, and > provide both sandboxing and relocation. Hm. Didn't sharing of address > spaces, via segments, also predate demand-paged virtual memory? Demand-paged VM is one version; some of the other mechanisms weren't called VM at the time, but might be in retrospect. Current virtualization, as you note, is cleaner than previous attempts, e.g., by not only linearizing and sandboxing, but by preventing checkerboarding (vs. base/bounds registers and segments). > Last, as Seymour Cray put it, "Virtual memory is memory you don't really have". > Does this VM ==> Overlay networks analogy stretch as far as: > > "Overlay networks you don't really have"? > > Joe, I'm guessing you'd disagree. Correct. IMO, Cray is focusing on the paging part, not the virtualization part. Joe
From Jon.Crowcroft at cl.cam.ac.uk Sat May 10 01:52:58 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sat, 10 May 2008 09:52:58 +0100 Subject: [e2e] end of interest In-Reply-To: References: <20080508202334.444B228E155@aland.bbn.com> Message-ID: IEN 1 has some discussion of isarithmic flow control - this applies quite well to aggregated traffic without source attribution In missive , John Day typed: >>> >>> >>>a paper that landed on my desk from the sigcomm 2009 rejection heap >>>entitled "Endless Arguments in Systems Design" has some nice protocol >>>fragments based around a new taxonomy of nodes in the graph >>> >>>rather than host + router + middle box, >>>(or end- and intermediate- system as ISO used to term them) >>>we only have 1 type of node (let's call it a synch) >>> >>>instead of 1-1, 1-n, and n-1 communication, >>>we have 0-n and n-0 communication >>> >>>the zero indicates uncertainty about the eventual recipient >>>(or the antediluvian originator) >>> >>>of course, a sequence of 0-n, n-0 >>>patterns can be constructed indicating cascades or swarms >>>and possible aggregation and disaggregation of content. >> >>Isn't this just a datagram or maybe a multicast datagram? >> >>> >>>interesting questions arise about >>>reliability, flow, and congestion control >>>in such a system - using the famous arguments >>>in their truest sense from that old paper >>>from which this august list may take its marching orders, >>>is quite tricky... >> >>This begins to sound like purposely constructing a convoluted >>situation so there is something to solve rather than simply adopting >>the straightforward solution. Have we really come to this?
>> >>Perhaps Stoppard's Guildenstern (as opposed to Shakespeare's) would >>have an answer to that sort of question: how to do feedback with an >>uncertain source. Lets see, "I am getting these feedback control >>messages but I am not sure where they are coming from . . ." ;-) >> >>But then that has to be more fun than solving the long standing >>problems that might require some hard thinking. What was it that >>Ford Prefect said? >> >>Take care, >>John cheers jon From L.Wood at surrey.ac.uk Sat May 10 02:33:57 2008 From: L.Wood at surrey.ac.uk (L.Wood@surrey.ac.uk) Date: Sat, 10 May 2008 10:33:57 +0100 Subject: [e2e] end of interest References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> Message-ID: <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> Rajesh, The DTN Bundle Protocol uses an obscure binary format for its metadata blocks. If you want flexible user-definable metadata, it really has to be text. For this reason, we proposed using HTTP as a transport-layer-independent session layer for DTN networks: http://tools.ietf.org/html/draft-wood-dtnrg-http-dtn-delivery-01 http://www.ietf.org/proceedings/08mar/slides/DTNRG-6.pdf Getting content identification via MIME (which the Bundle Protocol doesn't do) is a bonus. Gopher is an example of a binary format that didn't succeed against HTTP. L. just another of those PhD-toting technicians. DTN work: http://www.ee.surrey.ac.uk/Personal/L.Wood/dtn/ Rajesh Krishnan wrote on Sat 2008-05-10 1:02: > Networks seem to need a critical mass of adoption, and usually an > evolutionarily smart vector to succeed. The DTN Bundle Protocol (or its > successor, with metadata extension block semantics that will hopefully > remain flexible and user-definable) might just offer an opportunity for > acquiring a new critical mass, but only if it finds the smart vector > (that does what Mosaic did for HTTP/HTML, and perhaps BSD for TCP/IP). 
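[Editorial aside: to make Lloyd's proposal concrete, here is a rough sketch -- mine, not taken from the draft; the depot host, path, and payload are invented for illustration -- of a one-way HTTP PUT carrying a bundle payload, with a MIME Content-Type supplying the content identification he notes the Bundle Protocol lacks.]

```python
# One-way, self-describing HTTP/1.1 PUT (illustrative sketch; the host,
# path, and payload below are invented, not from the draft).
def build_put(path, body, content_type):
    """Compose a raw HTTP/1.1 PUT request as bytes."""
    headers = (
        f"PUT {path} HTTP/1.1\r\n"
        f"Host: depot.example.org\r\n"           # hypothetical DTN depot
        f"Content-Type: {content_type}\r\n"      # MIME content identification
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body

# A small text payload; a binary payload would be framed identically.
req = build_put("/bundles/42", b'{"note": "hello"}', "application/json")
```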
From rkrishnan at comcast.net Sat May 10 08:15:15 2008 From: rkrishnan at comcast.net (Rajesh Krishnan) Date: Sat, 10 May 2008 11:15:15 -0400 Subject: [e2e] end of interest -- BP metadata / binary vs text In-Reply-To: <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> Message-ID: <1210432515.7973.171.camel@localhost.localdomain> Lloyd, Yes, we discussed this on dtn-interest. What you say regarding text being successful has been true of many application protocols, while network/lower layer protocols favor a binary form. I do not have a preference for either -- text is convenient until there is sizeable metadata that is intrinsically binary (index shot thumbnail of a movie scene). I view BP or its successor as having the potential for being a storage-aware network layer protocol rather than remain at the session/application layer of a TCP/IP network. Rose-colored glasses? :) Best Regards, Rajesh > Rajesh, > > The DTN Bundle Protocol uses an obscure binary format for its > metadata blocks. If you want flexible user-definable metadata, > it really has to be text. For this reason, we proposed using HTTP > as a transport-layer-independent session layer for DTN networks: > > http://tools.ietf.org/html/draft-wood-dtnrg-http-dtn-delivery-01 > http://www.ietf.org/proceedings/08mar/slides/DTNRG-6.pdf > > Getting content identification via MIME (which the Bundle Protocol > doesn't do) is a bonus. Gopher is an example of a binary format > that didn't succeed against HTTP. > > L. > > just another of those PhD-toting technicians.
> > DTN work: http://www.ee.surrey.ac.uk/Personal/L.Wood/dtn/ > > > > Rajesh Krishnan wrote on Sat 2008-05-10 1:02: > > > Networks seem to need a critical mass of adoption, and usually an > > evolutionarily smart vector to succeed. The DTN Bundle Protocol (or > its > > successor, with metadata extension block semantics that will > hopefully > > remain flexible and user-definable) might just offer an opportunity > for > > acquiring a new critical mass, but only if it finds the smart vector > > (that does what Mosaic did for HTTP/HTML, and perhaps BSD for > TCP/IP). > > From Jon.Crowcroft at cl.cam.ac.uk Sat May 10 09:19:04 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sat, 10 May 2008 17:19:04 +0100 Subject: [e2e] end of interest -- BP metadata / binary vs text In-Reply-To: <1210432515.7973.171.camel@localhost.localdomain> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> <1210432515.7973.171.camel@localhost.localdomain> Message-ID: yep two alternates to the BP stuff suggest themselves... 1. Blocks/BEEP: http://www.ietf.org/rfc/rfc3080.txt if you are http mindfuk and 2. some of the data naming/framing ideas in Reliable Multicast http://portal.acm.org/citation.cfm?id=290808 if you are more rad In missive <1210432515.7973.171.camel at localhost.localdomain>, Rajesh Krishnan typed: >>Lloyd, >> >>Yes, we discussed this on dtn-interest. What you say regarding text >>being successful has been true of many application protocols, while >>network/lower layer protocols favor a binary form. I do not have a >>preference for either -- text is convenient until there is sizeable >>metadata that is intrinsically binary (index shot thumbnail of a movie >>scene). >> >>I view BP or its successor as having the potential for being a >>storage-aware network layer protocol rather than remain at the >>session/application layer of a TCP/IP network. 
Rose-colored >>glasses? :) >> >>Best Regards, >>Rajesh >> >> >> >>> Rajesh, >>> >>> The DTN Bundle Protocol uses an obscure binary format for its >>> metadata blocks. If you want flexible user-definable metadata, >>> it really has to be text. For this reason, we proposed using HTTP >>> as a transport-layer-independent session layer for DTN networks: >>> >>> http://tools.ietf.org/html/draft-wood-dtnrg-http-dtn-delivery-01 >>> http://www.ietf.org/proceedings/08mar/slides/DTNRG-6.pdf >>> >>> Getting content identification via MIME (which the Bundle Protocol >>> doesn't do) is a bonus. Gopher is an example of a binary format >>> that didn't succeed against HTTP. >>> >>> L. >>> >>> just another of those PhD-toting technicians. >>> >>> DTN work: http://www.ee.surrey.ac.uk/Personal/L.Wood/dtn/ >>> >>> >>> >>> Rajesh Krishnan wrote on Sat 2008-05-10 1:02: >>> >>> > Networks seem to need a critical mass of adoption, and usually an >>> > evolutionarily smart vector to succeed. The DTN Bundle Protocol (or >>> its >>> > successor, with metadata extension block semantics that will >>> hopefully >>> > remain flexible and user-definable) might just offer an opportunity >>> for >>> > acquiring a new critical mass, but only if it finds the smart vector >>> > (that does what Mosaic did for HTTP/HTML, and perhaps BSD for >>> TCP/IP). >>> >>> >> cheers jon From dpreed at reed.com Sat May 10 09:18:32 2008 From: dpreed at reed.com (David P. Reed) Date: Sat, 10 May 2008 12:18:32 -0400 Subject: [e2e] end of interest In-Reply-To: References: Message-ID: <4825CAD8.8050801@reed.com> Perhaps my main mission today can shed some light on the question of "overlay vs. rethink". My main mission is about interoperation among multi-radio communications systems. 
About 60 years ago, the core separation between the physics of (take your pick) long-wavelength photonics/electromagnetic wave dynamics and radio systems was formalized and hardened into engineering practice - creating the concept of "link" and "channel". These were characterized probabilistically using information theory, eliminating any notions of "shared medium" from systems design above the antenna layer. Now the "hot" areas of radio networking are focused where the drunk looks - under the lamppost of WiFi chipsets that are easily accessed (at least in Linux), and in trying to map the problems of networking into creating stable long-term relationships that look like IP Autonomous Systems and a teeny bit of mobility modeled on cellular roaming. It's too hard (for a researcher who wants a quick hit to keep the funding spigot turned on) to look at the other attributes of electromagnetic systems - the lack of source and channel coding separation, the physics of 4 dimensional propagation spaces, the constraints of antennae, and the complexities of synchronizing clocks and other means of overcoming the inherent costs of constantly sensing the electromagnetic coupling between distinct systems. So "overlays" on an existing, but very corroded, creaky, and bad underlying system called "radio engineering" is all that information theorists can manage, all that network theorists can manage, etc. This would not be bad if radio networking were a small, unimportant aspect of communications today and tomorrow. But that assumption is fundamentally wrong. Radio is *the* greenfield of communications opportunity, and most humans expect it to be where a large part of the future lies (if we include fiber and radio - all photons) we can cover the entire future. There are huge aspects of that future that depend on getting the low-level abstractions right (in the sense that they match real physical reality). 
And at the same time, constructing a stack of abstractions that work to maximize the utility of radio. Looking backwards at the abstractions we invented in 1977 or so for the Internet is useful, because one can easily see that we did not consider an all-radio, all-dynamic, adaptive model of communications. That said, we still need to avoid what we avoided in 1977 - thinking that "optimization" and elimination of layers would be a good thing because squeezing every bit of performance out of an arbitrarily chosen benchmark was going to help us in the future. It wasn't that we made the Internet inefficient that was the problem. It was that NEW problems like today's intense interest in radio and our ability to process it cheaply (SDR, DSP, pervasive radios, reconfigurable antennas) were not anticipated. It's foolish to focus on the light under the lamppost (as most of the field does today, trying to "optimize" the networks we already operate pretty damn well). Instead, look out into the darkness a bit more, and build tools to navigate, rough and ready, but learning all the time. From L.Wood at surrey.ac.uk Sat May 10 11:38:37 2008 From: L.Wood at surrey.ac.uk (L.Wood@surrey.ac.uk) Date: Sat, 10 May 2008 19:38:37 +0100 Subject: [e2e] end of interest -- BP metadata / binary vs text References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> <1210432515.7973.171.camel@localhost.localdomain> Message-ID: <603BF90EB2E7EB46BF8C226539DFC20701316B22@EVS-EC1-NODE1.surrey.ac.uk> BEEP gives us MIME, but is inherently rigidly transactional (RFC3080 section 2.1.1), which puts it at a disadvantage for long-delay links. HTTP PUTs can be entirely one-way... never mind BEEP's profile URIs. (I do find the HTTP specs far easier to read and follow.) HTTP, BEEP and the Bundle Protocol aren't network protocols...
really session layers (well, the BP's always compared to email, and an email message can be thought of as a session?). Doing sessions as text helps for debugging and modifications. L. -----Original Message----- From: Jon Crowcroft [mailto:Jon.Crowcroft at cl.cam.ac.uk] Sent: Sat 2008-05-10 17:19 To: Rajesh Krishnan Cc: Wood L Dr (Electronic Eng); craig at aland.bbn.com; day at std.com; end2end-interest at postel.org; Jon.Crowcroft at cl.cam.ac.uk Subject: Re: [e2e] end of interest -- BP metadata / binary vs text yep two alternates to the BP stuff suggest themselves... 1. Blocks/BEEP: http://www.ietf.org/rfc/rfc3080.txt if you are http mindfuk and 2. some of the data naming/framing ideas in Reliable Multicast http://portal.acm.org/citation.cfm?id=290808 if you are more rad In missive <1210432515.7973.171.camel at localhost.localdomain>, Rajesh Krishnan typed: >>Lloyd, >> >>Yes, we discussed this on dtn-interest. What you say regarding text >>being successful has been true of many application protocols, while >>network/lower layer protocols favor a binary form. I do not have a >>preference for either -- text is convenient until there is sizeable >>metadata that is intrinsically binary (index shot thumbnail of a movie >>scene). >> >>I view BP or its successor as having the potential for being a >>storage-aware network layer protocol rather than remain at the >>session/application layer of a TCP/IP network. Rose-colored >>glasses? :) >> >>Best Regards, >>Rajesh >> >> >> >>> Rajesh, >>> >>> The DTN Bundle Protocol uses an obscure binary format for its >>> metadata blocks. If you want flexible user-definable metadata, >>> it really has to be text. 
For this reason, we proposed using HTTP >>> as a transport-layer-independent session layer for DTN networks: >>> >>> http://tools.ietf.org/html/draft-wood-dtnrg-http-dtn-delivery-01 >>> http://www.ietf.org/proceedings/08mar/slides/DTNRG-6.pdf >>> >>> Getting content identification via MIME (which the Bundle Protocol >>> doesn't do) is a bonus. Gopher is an example of a binary format >>> that didn't succeed against HTTP. >>> >>> L. >>> >>> just another of those PhD-toting technicians. >>> >>> DTN work: http://www.ee.surrey.ac.uk/Personal/L.Wood/dtn/ >>> >>> >>> >>> Rajesh Krishnan wrote on Sat 2008-05-10 1:02: >>> >>> > Networks seem to need a critical mass of adoption, and usually an >>> > evolutionarily smart vector to succeed. The DTN Bundle Protocol (or >>> its >>> > successor, with metadata extension block semantics that will >>> hopefully >>> > remain flexible and user-definable) might just offer an opportunity >>> for >>> > acquiring a new critical mass, but only if it finds the smart vector >>> > (that does what Mosaic did for HTTP/HTML, and perhaps BSD for >>> TCP/IP). >>> >>> >> cheers jon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20080510/548c6600/attachment-0001.html From jg at laptop.org Sat May 10 11:53:45 2008 From: jg at laptop.org (Jim Gettys) Date: Sat, 10 May 2008 14:53:45 -0400 Subject: [e2e] end of interest In-Reply-To: <4825CAD8.8050801@reed.com> References: <4825CAD8.8050801@reed.com> Message-ID: <1210445625.6167.138.camel@jg-laptop> On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: > There are huge aspects of that future that depend on getting the > low-level abstractions right (in the sense that they match real physical > reality). And at the same time, constructing a stack of abstractions > that work to maximize the utility of radio. 
> First hand reality in the OLPC project: use of multicast/broadcast based protocols, when crossed with nascent wireless protocols (802.11s), can cause spectacularly "interesting" (as in Chinese curse) interactions. First hand experience is showing that one had better understand what happens at the lowest wireless layers while building application middleware protocols and applications.... Some existing protocols that have worked well on wired networks, and sort of worked OK on 802.11abc networks, just don't work well (or scale well) on a mesh designed to try to hide what's going on under the covers. While overlays are going to play an important role in getting us out of the current morass (without transition strategies, we're toast; that was what got the Internet out of telecom circuit switching as the only mechanism), I have to emphatically agree with Dave that we'd better get moving on more fundamental redesign and rethinking of networking.... - Jim -- Jim Gettys One Laptop Per Child From rkrishnan at comcast.net Sat May 10 12:22:21 2008 From: rkrishnan at comcast.net (Rajesh Krishnan) Date: Sat, 10 May 2008 15:22:21 -0400 Subject: [e2e] end of interest -- BP metadata / binary vs text In-Reply-To: <603BF90EB2E7EB46BF8C226539DFC20701316B22@EVS-EC1-NODE1.surrey.ac.uk> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> <1210432515.7973.171.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B22@EVS-EC1-NODE1.surrey.ac.uk> Message-ID: <1210447341.7973.286.camel@localhost.localdomain> Lloyd, > ... the Bundle Protocol aren't network protocols... > really session layers (well, the BP's always compared to email, > and an email message can be thought of as a session?). Granted this matches the viewpoint presented in RFC 5050 of BP's (non-threatening ;) relationship to TCP/IP.
By including forwarding and dynamic routing (L3?), retransmissions (L4? and L2?), and persistent storage and application metadata tagging (L7?) concerns within the same protocol, BP does not fit harmoniously at L5 of the TCP/IP Internet, IMHO. This challenge to traditional layering is precisely what I find most fascinating about DTN. With the CLA/BP split, there is still layering in DTN; just that the layering is not congruent to conventional TCP/IP layering. Effectively, DTN/BP seems to relate to TCP/IP more or less the same way IP looks at other network technologies. At least that is my interpretation of DTN/BP as an overlay abstraction (TCP/IP is relevant only as expedient means for early deployment. ;) I am speaking only for myself here (not past or present employers or funding agencies or IRTF WGs), and this thread ought to migrate to dtn-interest perhaps. Best Regards, Rajesh From Jon.Crowcroft at cl.cam.ac.uk Sun May 11 01:40:58 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sun, 11 May 2008 09:40:58 +0100 Subject: [e2e] end of interest Message-ID: nice example of 0nership by warring protocol layer factions mesh wifi people need to learn to do layer 3 snooping same way telecom people did... there's a great e2e topic - we have sort of gotten out of the denial phase on middle boxes and we're probably ok with multicast's niches now ... but should we raise the art of _snooping_ to being a first class component of any decent postmodern internet architecture? knowing multicast group members locations from lookin at IGMP traffic from "below" is one example (think dslams too) but another would be P2P aware Traffic Engineering, for example "layer violations" as taught in protocols #101 has traditionally been restricted to upper layer tweaking layer-2 operating parameters (think Application/TCP causing Dial up), rather than vice versa - but the other way round stretches programming API paradigms more athletically so may be conducive to progress...
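[Editor's note: Jon's first example - learning multicast group membership by watching IGMP traffic from "below" - is concrete enough to sketch. The following is a minimal illustration, not any real DSLAM's or switch's API: it parses fixed-size IGMPv1/v2 messages (RFC 2236 framing: type, max-response time, checksum, group address) and maintains a group-to-members table. The names `snoop_igmp` and the `{group: set_of_member_ips}` layout are invented for this sketch.]

```python
import struct

# IGMP message types (RFC 1112 / RFC 2236)
IGMP_V1_REPORT = 0x12
IGMP_V2_REPORT = 0x16
IGMP_V2_LEAVE = 0x17

def _checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum over 16-bit words; 0 for a valid packet."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def snoop_igmp(payload: bytes, src_ip: str, members: dict) -> None:
    """Passively update a {group: set(member_ips)} table from one IGMP message."""
    if len(payload) < 8:
        return
    msg_type, _max_resp, _cksum = struct.unpack("!BBH", payload[:4])
    if _checksum(payload[:8]) != 0:          # corrupted packet: ignore
        return
    group = ".".join(str(b) for b in payload[4:8])
    if msg_type in (IGMP_V1_REPORT, IGMP_V2_REPORT):
        members.setdefault(group, set()).add(src_ip)
    elif msg_type == IGMP_V2_LEAVE:
        members.get(group, set()).discard(src_ip)
```

Feeding it a v2 Membership Report for 239.1.2.3 from 10.0.0.5 records that host as a member; a later Leave removes it. A real snooping device would also age entries against periodic Queries, but the point here is just that the lower layer learns group locations without any explicit signalling to it.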
In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: >> >>> There are huge aspects of that future that depend on getting the >>> low-level abstractions right (in the sense that they match real physical >>> reality). And at the same time, constructing a stack of abstractions >>> that work to maximize the utility of radio. >>> >> >>First hand reality in the OLPC project: use of multicast/broadcast based >>protocols when crossed with nascent wireless protocols (802.11s), can >>cause spectacularly "interesting" (as in Chinese curse) interactions. >> >>First hand experience is showing that one had better understand what >>happens at the lowest wireless layers while building application >>middleware protocols and applications.... Some existing protocols that >>have worked well on wired networks, and sort of worked OK on 802.11abc >>networks, just doesn't work well (or scale well) on a mesh designed to >>try to hide what's going on under the covers. >> >>While overlays are going to play an important role in getting us out of >>the current morass (without transition strategies, we're toast; that was >>what got the Internet out of telecom circuit switching as the only >>mechanism), I have to emphatically agree with Dave that we'd better get >>moving on more fundamental redesign and rethinking of networking.... >> - Jim >> >>-- >>Jim Gettys >>One Laptop Per Child >> cheers jon From dpreed at reed.com Sun May 11 07:36:34 2008 From: dpreed at reed.com (David P. 
Reed) Date: Sun, 11 May 2008 10:36:34 -0400 Subject: [e2e] end of interest -- BP metadata / binary vs text In-Reply-To: <1210447341.7973.286.camel@localhost.localdomain> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> <1210432515.7973.171.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B22@EVS-EC1-NODE1.surrey.ac.uk> <1210447341.7973.286.camel@localhost.localdomain> Message-ID: <48270472.6070700@reed.com> "Traditional layering" my a**. When the present Internet architecture was constructed in the 1970's there was no OSI model at all. It would be revisionist (alas too common) to imagine that the *goal* of communications architecture is to fit into a frame (OSI) that was conceived ex post as merely an explanatory tool for the decisions about modularity that were made on far more serious grounds than a mere picture of a stack. The "end to end argument" was a pattern of argumentation used to organize architectural thinking. I am afraid that those who treat the RFCs as scripture from high priests mistake dogma for thoughtfulness. RFCs started out as a frame for discussion. For making convincing arguments. The act of granting an RFC a number does not (any more than acceptance at a peer-reviewed journal does) create a "fact" or a "truth". And now many on this list (which used to be above all that) behave like Talmudic scholars or law professors - somehow thinking that by studying merely the grammar and symbols we can ascertain what is right, what is good, or what is fit to purpose. The essential valid measure of DTN ideas is that they work, and will continue to work well, *to organize the solution* to an interesting class of real-world problems. It is irrelevant whether they provide the basis for destroying some "traditional paradigm" and creating a new religion.
What made the Internet architecture useful was its attention to "interoperation" and to facilitating support of "unanticipated" applications and implementation technologies. It framed those things well, making progress possible. DTN ideas frame a new set of issues well - communications that occur between entities that occupy discontiguous regions of space-time influence. Such communications have always existed (books communicate across time in personal and public libraries, postal letters transcend spatial barriers in self-contained form) - DTNs merely ratify their importance by focusing framing on those issues. Rajesh Krishnan wrote: > > Granted this matches the viewpoint presented in RFC 5050 of BP's > (non-threatening ;) relationship to TCP/IP. > > By including forwarding and dynamic routing (L3?), retransmissions (L4? > and L2?), and persistent storage and application metadata tagging (L7?) > concerns within the same protocol, BP does not fit harmoniously at L5 of > the TCP/IP Internet, IMHO. This challenge to traditional layering is > precisely what I find most fascinating about DTN. > > With the CLA/BP split, there is still layering in DTN; just that the > layering is not congruent to conventional TCP/IP layering. Effectively, > DTN/BP seems to relate to TCP/IP more or less the same way IP looks at > other network technologies. At least that is my interpretation of > DTN/BP as an overlay abstraction (TCP/IP is relevant only as expedient > means for early deployment. ;) > > I am speaking only for myself here (not past or present employers or > funding agencies or IRTF WGs), and this thread ought to migrate to > dtn-interest perhaps.
> > Best Regards, > Rajesh > > > > > From rkrishnan at comcast.net Sun May 11 08:30:07 2008 From: rkrishnan at comcast.net (Rajesh Krishnan) Date: Sun, 11 May 2008 11:30:07 -0400 Subject: [e2e] end of interest -- BP metadata / binary vs text In-Reply-To: <48270472.6070700@reed.com> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> <1210432515.7973.171.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B22@EVS-EC1-NODE1.surrey.ac.uk> <1210447341.7973.286.camel@localhost.localdomain> <48270472.6070700@reed.com> Message-ID: <1210519807.7973.346.camel@localhost.localdomain> David, My objective was to provoke discussion on the topic of whether it makes sense to think of DTN as a session layer, as an overlay above TCP, etc. from the e2e perspective. If you feel this topic is unsuitable for discussion here, I will gladly stop posting on this topic here of course. > I am afraid that those who treat the RFCs as scripture from high priests > mistake dogma for thoughtfulness. I am definitely not treating RFCs as scripture. I am not satisfied with the explanation that DTN/BP belongs to the "session layer" as the best or only model. I find this limiting, since DTN issues span all the way from applications to the physical "layer". > The essential valid measure of DTN ideas is that they work, and will > continue to work well, *to organize the solution* to an interesting > class of real-world problems. It is irrelevant whether they provide > the basis for destroying some "traditional paradigm" and creating a new > religion. Agreed. If anyone reads my post as being about destroying old religion and creating a new one, that is missing the point. It is about getting out of old shackles though; why keep doing the same thing (like the way we seem to try to fit DTN into the old mold) and expect a different result?
If we say DTN is in the session layer, we will engineer a useful DTN session layer alright, but will a DTN session layer solve any fundamentally new problems? Does that limit new architectures that could be explored? > What made the Internet architecture useful was its attention to > "interoperation" and to facilitating support of "unanticipated" > applications and implementation technologies. It framed those things > well, making progress possible. DTN ideas frame a new set of issues > well - communcations that occur between entities that occupy > discontiguous regions of space-time influence. Such communications > have always existed (books communicate across time in personal and > public libraries, postal letters transcend spatial barriers in > self-contained form) - DTN's merely ratify their importance by focusing > framing on those issues. Agreed. In a DTN, among other things, we want to maximize the value of information exchanged within a transitory encounter, and this can not be framed entirely as a session-layer issue. Relegating DTN to the session layer will completely isolate it from a radio-aware CLA (or whatever name we want to call it) among other things, which relates to a point you made earlier. Hope that clarifies my original post somewhat. Best Regards, Rajesh > Rajesh Krishnan wrote: > > > > Granted this matches the viewpoint presented in RFC 5050 of BP's > > (non-threatening ;) relationship to TCP/IP. > > > > By including forwarding and dynamic routing (L3?), retransmissions > (L4? > > and L2?), and persistent storage and application metadata tagging > (L7?) > > concerns within the same protocol, BP does not fit harmoniously at > L5 of > > the TCP/IP Internet, IMHO. This challenge to traditional layering > is > > precisely what I find most fascinating about DTN. > > > > With the CLA/BP split, there is still layering in DTN; just that the > > layering is not congruent to conventional TCP/IP layering. 
> Effectively, > > DTN/BP seems to relate to TCP/IP more or less the same way IP looks > at > > other network technologies. At least that is my interpretation of > > DTN/BP as an overlay abstraction (TCP/IP is relevant only as > expedient > > means for early deployment. ;) > > > > I am speaking only for myself here (not past or present employers or > > funding agencies or IRTF WGs), and this thread ought to migrate to > > dtn-interest perhaps. > > > > Best Regards, > > Rajesh From Jon.Crowcroft at cl.cam.ac.uk Sun May 11 10:08:35 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sun, 11 May 2008 18:08:35 +0100 Subject: [e2e] end of interest In-Reply-To: <4827081A.1090305@reed.com> References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> Message-ID: In missive <4827081A.1090305 at reed.com>, "David P. Reed" typed: >>I'd suggest that first-principles thinking is harder than Jon thinks. gosh - i thought I just wrote something - i didnt realize andy lipmann's medialab goal of telepathy had already been reached and you could figure out how hard I thought first principles thinking is....thats neat - if you can teach the rest of us that trick, we could save a lot of bandwidth and carpal pain one of the first principles of architectural thinking is that architectures are not created fully formed by some smart person (no matter who gets the credit) but are in fact emergent from a bunch of work that is teasing out problems and partial solutions around the cracks of an earlier system what i was suggesting is that snooping is just such a piece of work. as are other layer "violations" to claim that the internet was not layered is pretty imaginative - the ncp->tcp/ip split and the bezerkeley socket code (and early System V and MIT C code) stacks were just that: stacks. 
OSs were layered at the same time (as were most systems) to pretend that the consequences of different ways of component thinking were intellectually available to people before the 1980s would be fairly far fetched.... indeed the work by tennenhouse and clark way way after the layering religion on application layer framing, and integrated layer processing betray a DEEP embedded layerist mode of thinking - why not? it served incredibly well, in terms of separating concerns and defining tussle spaces and software/hardware (pace, hennessy/patterson) division of labour >>It's not just a matter of choosing sides in a war, or acting as an arms >>merchant to both sides. It's about thinking more squarely about the >>real underlying issues that comprise communications. In fact, as some >>of us have suggested, perhaps the idea that communications can be >>considered as a "pure" architectural/linguistic frame independent of >>storage and computation and sensing is the real issue we ought to be >>addressing today, with pervasive comms/storage/computational elements >>capable of all three. sure - we have a number of NEW ideas on the table now that were not present the last time around - lets work with them, shall we? >>John Day wrote: >>> At 9:08 +0100 2008/05/11, Jon Crowcroft wrote: >>>> nice example of 0nership by warring protocol layer factions >>>> >>>> mesh wifi people need to learn to do layer 3 snooping >>>> same way telecom people did... >>> >>> The need to do snooping is an indication of the current model's >>> inability or refusal to innovate. A failure to dig more deeply into >>> the model. (Or a fear to challenge their religion.) >>> >>> (It turns out once there is a complete architecture, not one of these >>> DOS look-a-likes, snooping isn't necessary.) >>> >>>> >>>> there's a great e2e topic - >>>> we have sort of gotten out of the >>>> denial phase on middle boxes and >>>> we're probably ok with multicast's niches now ...
>>> >>> Middleboxes are alos an artifact of an incomplete architecture. In a >>> full (shall we say, a wff) architecture, they aren't necessary. >>> >>>> >>>> but should we raise the >>>> art of _snooping_ to being a >>>> first class component of any decent >>>> postmodern internet architecture? >>> >>> No, snooping is an admission of failure. Calling it a component of a >>> "decent" internet architecture is merely making excuses for our failures. >>> >>>> >>>> knowing >>>> multicast group members locations from lookin at IGMP traffic from >>>> "below" >>>> is one exxaple (think dslams too) but another would be >>>> P2P aware Traffic Engineering, for example >>> >>> Recognizing that a multicast address is the name of a set deals with >>> most of this. (Of course, this means that strictly speaking multicast >>> addresses aren't really addresses but names.) A multicast or anycast >>> address must always resolve at some point to a normal address. The >>> idea of a multicast address as an ambiguous address is fundamentally >>> broken. >>> >>>> >>>> "layer violations" as taught in protocls #101 has traditionally >>>> been restricted to upper layer tweaking layer-2 operating parameters >>>> (think Application/TCP causing Dial up), rather than >>>> vice versa - but the other way round stretches >>>> programming API paradigms more athletically >>>> so may be condusive to progress... >>> >>> If I understand what you are alluding to, this is addressed by not >>> ignoring the existence of the enrollment phase in communication. >>> >>> What I have found is that in a wff architecture there are no need for >>> layer violations. In other words, if you have layer violations, you >>> are doing something wrong some place. Either in how you are trying to >>> do what you want to do, or in what you think a layer is. In this case >>> it seems to be a bit of both. 
>>> >>> Take care, >>> John >>> >>>> >>>> In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: >>>> >>>> >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: >>>> >> >>>> >>> There are huge aspects of that future that depend on getting the >>>> >>> low-level abstractions right (in the sense that they match real >>>> physical >>>> >>> reality). And at the same time, constructing a stack of >>>> abstractions >>>> >>> that work to maximize the utility of radio. >>>> >>> >>>> >> >>>> >>First hand reality in the OLPC project: use of multicast/broadcast >>>> based >>>> >>protocols when crossed with nascent wireless protocols (802.11s), can >>>> >>cause spectacularly "interesting" (as in Chinese curse) interactions. >>>> >> >>>> >>First hand experience is showing that one had better understand what >>>> >>happens at the lowest wireless layers while building application >>>> >>middleware protocols and applications.... Some existing protocols >>>> that >>>> >>have worked well on wired networks, and sort of worked OK on >>>> 802.11abc >>>> >>networks, just doesn't work well (or scale well) on a mesh >>>> designed to >>>> >>try to hide what's going on under the covers. >>>> >> >>>> >>While overlays are going to play an important role in getting us >>>> out of >>>> >>the current morass (without transition strategies, we're toast; >>>> that was >>>> >>what got the Internet out of telecom circuit switching as the only >>>> >>mechanism), I have to emphatically agree with Dave that we'd >>>> better get >>>> >>moving on more fundamental redesign and rethinking of networking.... 
>>>> >> - Jim >>>> >> >>>> >>-- >>>> >>Jim Gettys >>>> >>One Laptop Per Child >>>> >> >>>> >>>> cheers >>>> >>>> jon >>> >>> cheers jon From Jon.Crowcroft at cl.cam.ac.uk Sun May 11 01:08:04 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sun, 11 May 2008 09:08:04 +0100 Subject: [e2e] end of interest In-Reply-To: <1210445625.6167.138.camel@jg-laptop> References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> Message-ID: nice example of 0nership by warring protocol layer factions mesh wifi people need to learn to do layer 3 snooping same way telecom people did... there's a great e2e topic - we have sort of gotten out of the denial phase on middle boxes and we're probably ok with multicast's niches now ... but should we raise the art of _snooping_ to being a first class component of any decent postmodern internet architecture? knowing multicast group members locations from lookin at IGMP traffic from "below" is one exxaple (think dslams too) but another would be P2P aware Traffic Engineering, for example "layer violations" as taught in protocls #101 has traditionally been restricted to upper layer tweaking layer-2 operating parameters (think Application/TCP causing Dial up), rather than vice versa - but the other way round stretches programming API paradigms more athletically so may be condusive to progress... In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: >> >>> There are huge aspects of that future that depend on getting the >>> low-level abstractions right (in the sense that they match real physical >>> reality). And at the same time, constructing a stack of abstractions >>> that work to maximize the utility of radio. >>> >> >>First hand reality in the OLPC project: use of multicast/broadcast based >>protocols when crossed with nascent wireless protocols (802.11s), can >>cause spectacularly "interesting" (as in Chinese curse) interactions. 
>> >>First hand experience is showing that one had better understand what >>happens at the lowest wireless layers while building application >>middleware protocols and applications.... Some existing protocols that >>have worked well on wired networks, and sort of worked OK on 802.11abc >>networks, just doesn't work well (or scale well) on a mesh designed to >>try to hide what's going on under the covers. >> >>While overlays are going to play an important role in getting us out of >>the current morass (without transition strategies, we're toast; that was >>what got the Internet out of telecom circuit switching as the only >>mechanism), I have to emphatically agree with Dave that we'd better get >>moving on more fundamental redesign and rethinking of networking.... >> - Jim >> >>-- >>Jim Gettys >>One Laptop Per Child >> cheers jon From day at std.com Sun May 11 04:24:28 2008 From: day at std.com (John Day) Date: Sun, 11 May 2008 07:24:28 -0400 Subject: [e2e] end of interest In-Reply-To: References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> Message-ID: At 9:08 +0100 2008/05/11, Jon Crowcroft wrote: >nice example of 0nership by warring protocol layer factions > >mesh wifi people need to learn to do layer 3 snooping >same way telecom people did... The need to do snooping is an indication of the current model's inability or refusal to innovate. A failure to dig more deeply into the model. (Or a fear to challenge their religion.) (It turns out once there is a complete architecture, not one of these DOS look-a-likes, snooping isn't necessary.) > >there's a great e2e topic - >we have sort of gotten out of the >denial phase on middle boxes and >we're probably ok with multicast's niches now ... Middleboxes are also an artifact of an incomplete architecture. In a full (shall we say, a wff) architecture, they aren't necessary. > >but should we raise the >art of _snooping_ to being a >first class component of any decent >postmodern internet architecture?
No, snooping is an admission of failure. Calling it a component of a "decent" internet architecture is merely making excuses for our failures. > >knowing >multicast group members locations from lookin at IGMP traffic from "below" >is one exxaple (think dslams too) but another would be >P2P aware Traffic Engineering, for example Recognizing that a multicast address is the name of a set deals with most of this. (Of course, this means that strictly speaking multicast addresses aren't really addresses but names.) A multicast or anycast address must always resolve at some point to a normal address. The idea of a multicast address as an ambiguous address is fundamentally broken. > >"layer violations" as taught in protocls #101 has traditionally >been restricted to upper layer tweaking layer-2 operating parameters >(think Application/TCP causing Dial up), rather than >vice versa - but the other way round stretches >programming API paradigms more athletically >so may be condusive to progress... If I understand what you are alluding to, this is addressed by not ignoring the existence of the enrollment phase in communication. What I have found is that in a wff architecture there is no need for layer violations. In other words, if you have layer violations, you are doing something wrong some place. Either in how you are trying to do what you want to do, or in what you think a layer is. In this case it seems to be a bit of both. Take care, John > >In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: > > >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: > >> > >>> There are huge aspects of that future that depend on getting the > >>> low-level abstractions right (in the sense that they match real physical > >>> reality). And at the same time, constructing a stack of abstractions > >>> that work to maximize the utility of radio.
> >>> > >> > >>First hand reality in the OLPC project: use of multicast/broadcast based > >>protocols when crossed with nascent wireless protocols (802.11s), can > >>cause spectacularly "interesting" (as in Chinese curse) interactions. > >> > >>First hand experience is showing that one had better understand what > >>happens at the lowest wireless layers while building application > >>middleware protocols and applications.... Some existing protocols that > >>have worked well on wired networks, and sort of worked OK on 802.11abc > >>networks, just don't work well (or scale well) on a mesh designed to > >>try to hide what's going on under the covers. > >> > >>While overlays are going to play an important role in getting us out of > >>the current morass (without transition strategies, we're toast; that was > >>what got the Internet out of telecom circuit switching as the only > >>mechanism), I have to emphatically agree with Dave that we'd better get > >>moving on more fundamental redesign and rethinking of networking.... > >> - Jim > >> > >>-- > >>Jim Gettys > >>One Laptop Per Child > >> > > cheers > > jon From dpreed at reed.com Sun May 11 07:52:10 2008 From: dpreed at reed.com (David P. Reed) Date: Sun, 11 May 2008 10:52:10 -0400 Subject: [e2e] end of interest In-Reply-To: References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> Message-ID: <4827081A.1090305@reed.com> Snooping honors the Layeristi - granting them rhetorical power they never deserved. It sounds like "cheating" or "illegal" operation. The Internet was born without layers - it used architectural framing differently (e.g., one illustrative architectural principle: encapsulation is not layering, and even survives as IP gets encapsulated in TCP port 8 VPNs, much to the chagrin of the Layeristi purists who explain it as a "bug", rather than looking at its roots in passing IP datagrams over SNA and NCP virtual circuits). I'd suggest that first-principles thinking is harder than Jon thinks. 
It's not just a matter of choosing sides in a war, or acting as an arms merchant to both sides. It's about thinking more squarely about the real underlying issues that comprise communications. In fact, as some of us have suggested, perhaps the idea that communications can be considered as a "pure" architectural/linguistic frame independent of storage and computation and sensing is the real issue we ought to be addressing today, with pervasive comms/storage/computational elements capable of all three. John Day wrote: > At 9:08 +0100 2008/05/11, Jon Crowcroft wrote: >> nice example of ownership by warring protocol layer factions >> >> mesh wifi people need to learn to do layer 3 snooping >> same way telecom people did... > > The need to do snooping is an indication of the current model's > inability or refusal to innovate. A failure to dig more deeply into > the model. (Or a fear to challenge their religion.) > > (It turns out once there is a complete architecture, not one of these > DOS look-a-likes, snooping isn't necessary.) > >> >> there's a great e2e topic - >> we have sort of gotten out of the >> denial phase on middle boxes and >> we're probably ok with multicast's niches now ... > > Middleboxes are also an artifact of an incomplete architecture. In a > full (shall we say, a wff) architecture, they aren't necessary. > >> >> but should we raise the >> art of _snooping_ to being a >> first class component of any decent >> postmodern internet architecture? > > No, snooping is an admission of failure. Calling it a component of a > "decent" internet architecture is merely making excuses for our failures. > >> >> knowing >> multicast group members' locations from looking at IGMP traffic from >> "below" >> is one example (think dslams too) but another would be >> P2P aware Traffic Engineering, for example > > Recognizing that a multicast address is the name of a set deals with > most of this. 
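[Editorial aside: Day's point above — that a multicast "address" is really the name of a set, which must at some point resolve to ordinary unicast addresses before anything is delivered — can be sketched in a few lines. The group name and member addresses below are illustrative assumptions, not anything from a real protocol.]

```python
# A multicast "address" modeled as the *name of a set* of ordinary
# (unicast) addresses, per Day's argument. Delivery never happens "to"
# the set-name itself; it happens to the resolved members.

class GroupName:
    """A multicast 'address' treated as a named set of real addresses."""

    def __init__(self, name: str):
        self.name = name
        self._members = set()

    def join(self, addr: str) -> None:
        self._members.add(addr)

    def leave(self, addr: str) -> None:
        self._members.discard(addr)

    def resolve(self):
        # The resolution step: the set-name yields concrete unicast
        # addresses, and only these are ever used for actual delivery.
        return sorted(self._members)

video = GroupName("all-video-receivers")   # hypothetical group
video.join("10.0.0.7")
video.join("10.0.0.9")
print(video.resolve())  # → ['10.0.0.7', '10.0.0.9']
```

On this reading a multicast address is unambiguous after resolution, which is why Day calls the "ambiguous address" interpretation broken.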
(Of course, this means that strictly speaking multicast > addresses aren't really addresses but names.) A multicast or anycast > address must always resolve at some point to a normal address. The > idea of a multicast address as an ambiguous address is fundamentally > broken. > >> >> "layer violations" as taught in protocols 101 has traditionally >> been restricted to upper layer tweaking layer-2 operating parameters >> (think Application/TCP causing Dial up), rather than >> vice versa - but the other way round stretches >> programming API paradigms more athletically >> so may be conducive to progress... > > If I understand what you are alluding to, this is addressed by not > ignoring the existence of the enrollment phase in communication. > > What I have found is that in a wff architecture there is no need for > layer violations. In other words, if you have layer violations, you > are doing something wrong some place. Either in how you are trying to > do what you want to do, or in what you think a layer is. In this case > it seems to be a bit of both. > > Take care, > John > >> >> In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: >> >> >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: >> >> >> >>> There are huge aspects of that future that depend on getting the >> >>> low-level abstractions right (in the sense that they match real >> physical >> >>> reality). And at the same time, constructing a stack of >> abstractions >> >>> that work to maximize the utility of radio. >> >>> >> >> >> >>First hand reality in the OLPC project: use of multicast/broadcast >> based >> >>protocols when crossed with nascent wireless protocols (802.11s), can >> >>cause spectacularly "interesting" (as in Chinese curse) interactions. >> >> >> >>First hand experience is showing that one had better understand what >> >>happens at the lowest wireless layers while building application >> >>middleware protocols and applications.... 
Some existing protocols >> that >> >>have worked well on wired networks, and sort of worked OK on >> 802.11abc >> >>networks, just don't work well (or scale well) on a mesh >> designed to >> >>try to hide what's going on under the covers. >> >> >> >>While overlays are going to play an important role in getting us >> out of >> >>the current morass (without transition strategies, we're toast; >> that was >> >>what got the Internet out of telecom circuit switching as the only >> >>mechanism), I have to emphatically agree with Dave that we'd >> better get >> >>moving on more fundamental redesign and rethinking of networking.... >> >> - Jim >> >> >> >>-- >> >>Jim Gettys >> >>One Laptop Per Child >> >> >> >> cheers >> >> jon > > From touch at ISI.EDU Sun May 11 10:35:54 2008 From: touch at ISI.EDU (Joe Touch) Date: Sun, 11 May 2008 10:35:54 -0700 Subject: [e2e] end of interest In-Reply-To: References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> Message-ID: <48272E7A.1090402@isi.edu> Jon Crowcroft wrote: > nice example of ownership by warring protocol layer factions > > mesh wifi people need to learn to do layer 3 snooping > same way telecom people did... ... Link layers can't be designed in the absence of considerations of the Internet without adverse impact either. Having the mesh wifi people check RFC3819 would be a start. Joe -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20080511/8295bb7d/signature.bin From day at std.com Sun May 11 11:59:19 2008 From: day at std.com (John Day) Date: Sun, 11 May 2008 14:59:19 -0400 Subject: [e2e] end of interest -- BP metadata / binary vs text In-Reply-To: <48270472.6070700@reed.com> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> <1210432515.7973.171.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B22@EVS-EC1-NODE1.surrey.ac.uk> <1210447341.7973.286.camel@localhost.localdomain> <48270472.6070700@reed.com> Message-ID: At 10:36 -0400 2008/05/11, David P. Reed wrote: >"Traditional layering" my a**. When the present Internet >architecture was constructed in the 1970's there was no OSI model at >all. It would be revisionist (alas too common) to imagine that the >*goal* of communications architecture is to fit into a frame (OSI) >that was conceived ex post as merely an explanatory tool for the >decisions about modularity that were made on far more serious >grounds than a mere picture of a stack. ;-) lol. Ophelia methinks thou doth protest too much! I don't remember anyone mentioning OSI. If there is any revisionist history, Dave it is yours. I don't remember who said it, but when I refer "traditional layering" I am referring to the ideas we had of it prior to OSI. "Traditional" layering did predate the OSI model by probably 5 years or more. Bachman's 7 layers (OSI) were proposed in 78. I remember that by at least 72 - 73, it was common to talk about Physical, Data Link, Network and Transport Layers. In the ARPANet, we weren't sure what was above that. But ARPA shut down our attempt to find out in 1974. 
The Internet architecture was basically modelled on the CYCLADES network, which used the same separation of a connectionless network layer called CIGALE (corresponding to IP) and an end-to-end transport layer called TS (TCP). CYCLADES was up and running by 74. Everyone pretty much agreed on the lower 4. It turns out we were close but wrong. (This was in that idyllic period when all of the non-PTT research networks were collaborating and just as the world of politics foisted itself on us. In around 75, we found out about X.25 and were horrified.) Charlie (Bachman) working alone used the lower four everyone was talking about and took a stab at what he thought was above transport (the upper 3) and got them wrong as well. By 83, OSI had figured out there was only one layer on top of Transport, but politics prevented a revision, so the protocols were written so the clueful could build them as a single state machine. However, it turns out that they were wrong for other reasons as well. >The "end to end argument" was a pattern of argumentation used to >organize architectural thinking. > >I am afraid that those who treat the RFCs as scripture from high >priests mistake dogma for thoughtfulness. Indeed. I have always been amazed at how people treat these things as gospel or worse: design specs! ;-) They are distinctly thoughtful, especially the earlier ones before politics became so strong. Even in ITU, OSI and IEEE specs the same can be said. Although separating the politics is more difficult there. And in all cases, they represent our best understanding at the time. And if we are really as good as we say we are, that too will change. > >RFCs started out as a frame for discussion. For making convincing >arguments. The act of granting an RFC a number does not (anymore >than acceptance at a peer reviewed journal does not) create a "fact" >or a "truth". RFC started out being RFCs! ;-) They really were Requests for Comments. 
I still find it disturbing/ironic that an *RFC* is in effect the "standard" and an Internet-draft which sounds like it should be a draft "standard" is an RFC. Boy, we don't have anything on Alice or the ITU! > >And now many on this list (which used to be above all that) behave >like Talmudic scholars or law professors - somehow thinking that by >studying merely the grammar and symbols we can ascertain what is >right, what is good, or what is fit to purpose. Not me, bro! I knew all those early guys and hell, we were just trying to stay ahead of the game! The one I like though is hearing people explain why things I know were kludges are insightful pieces of design. Hate to think what they would think if they ever saw what we had in mind but didn't get to do. Best they don't find out. All that bowing would be embarrassing! ;-) > >The essential valid measure of DTN ideas is that they work, and will >continue to work well, *to organize the solution* to an interesting >class of real-world problems. It is irrelevant whether they >provide the basis for destroying some "traditional paradigm" and >creating a new religion. The trouble is we do have a religion and one that is poorly understood and full of misconceptions and myths. And we don't do a good job of debunking it to keep the inquiry fresh. Much of this came about from the bunker mentality we were conditioned into during the fights with the PTTs. Some of it is the natural behavior of new converts. I believe the real failure has been academics and textbook authors who should have been distilling general principles and pointing the way rather than merely reporting what was out there. To some degree this brings us back to the discussion of a week or so ago, about research vs engineering the Internet. As researchers, we were wont to let go of our creation. And the war with the PTTs helped by forcing us to protect it. O, well, too much rambling. 
Take care, John > >What made the Internet architecture useful was its attention to >"interoperation" and to facilitating support of "unanticipated" >applications and implementation technologies. It framed those >things well, making progress possible. DTN ideas frame a new set of >issues well - communications that occur between entities that occupy >discontiguous regions of space-time influence. Such communications >have always existed (books communicate across time in personal and >public libraries, postal letters transcend spatial barriers in >self-contained form) - DTNs merely ratify their importance by >focusing framing on those issues. > > > >Rajesh Krishnan wrote: >> >>Granted this matches the viewpoint presented in RFC 5050 of BP's >>(non-threatening ;) relationship to TCP/IP. >> >>By including forwarding and dynamic routing (L3?), retransmissions (L4? >>and L2?), and persistent storage and application metadata tagging (L7?) >>concerns within the same protocol, BP does not fit harmoniously at L5 of >>the TCP/IP Internet, IMHO. This challenge to traditional layering is >>precisely what I find most fascinating about DTN. >> >>With the CLA/BP split, there is still layering in DTN; just that the >>layering is not congruent to conventional TCP/IP layering. Effectively, >>DTN/BP seems to relate to TCP/IP more or less the same way IP looks at >>other network technologies. At least that is my interpretation of >>DTN/BP as an overlay abstraction (TCP/IP is relevant only as expedient >>means for early deployment. ;) >> >>I am speaking only for myself here (not past or present employers or >>funding agencies or IRTF WGs), and this thread ought to migrate to >>dtn-interest perhaps. >> >>Best Regards, >>Rajesh >> >> >> >> From jg at laptop.org Sun May 11 17:02:39 2008 From: jg at laptop.org (Jim Gettys) Date: Sun, 11 May 2008 20:02:39 -0400 Subject: [e2e] end of interest In-Reply-To: References: Message-ID: <1210550559.8318.41.camel@jg-laptop> Snooping is insufficient. 
It appears you need to have additional knowledge at higher layers (that the wireless layers do not have), and design significantly different protocols for reasonable scaling. The consequences of trying to simulate a wired ethernet, first through 802.11abg and then through the mesh's attempt to take this the next step while preserving "standard" multicast/broadcast semantics, are "interesting" interactions. In our case, the goal is to provide a presence protocol that is the basis of our collaboration framework. Polychronis Ypodimatopoulos is, however, making good progress with a protocol that is designed much more from first principles, and getting very much better scaling properties. If you are interested, you can look up the work-in-progress. Wireless != wired networking. Everyone has to get over it... Rather than moaning about the end of end-to-end interest, how about solving some real problems? There are lots of interesting networking issues to be revisited and revised in fundamental ways. We *really* don't have the right abstractions (e.g. presence services). If all we have going forward is what was done for wired ethernet and the 1990-2000 Internet, we'll be missing the real boat. - Jim On Sun, 2008-05-11 at 09:40 +0100, Jon Crowcroft wrote: > nice example of ownership by warring protocol layer factions > > mesh wifi people need to learn to do layer 3 snooping > same way telecom people did... > > there's a great e2e topic - > we have sort of gotten out of the > denial phase on middle boxes and > we're probably ok with multicast's niches now ... > > but should we raise the > art of _snooping_ to being a > first class component of any decent > postmodern internet architecture? 
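[Editorial aside: the layer-3 snooping Jon refers to — a switch-like element learning multicast group membership by watching IGMP reports pass through it — can be sketched as follows. The IGMPv2 message layout (type byte, group address in bytes 4-7) follows RFC 2236; the port numbering and forwarding plumbing are illustrative assumptions.]

```python
# Minimal sketch of layer-3 IGMP "snooping": watch IGMPv2 membership
# reports (RFC 2236) and learn, per multicast group, which ports have
# interested listeners, so multicast traffic is only forwarded there.

IGMP_V2_REPORT = 0x16  # Type field of an IGMPv2 Membership Report
IGMP_LEAVE = 0x17      # Type field of a Leave Group message

def parse_igmp(payload: bytes):
    """Return (message type, dotted group address) from a raw IGMP payload."""
    if len(payload) < 8:
        raise ValueError("truncated IGMP message")
    msg_type = payload[0]
    group = ".".join(str(b) for b in payload[4:8])
    return msg_type, group

class SnoopingTable:
    """Maps group -> set of ports with at least one interested member."""

    def __init__(self):
        self.members = {}

    def observe(self, port: int, payload: bytes) -> None:
        msg_type, group = parse_igmp(payload)
        if msg_type == IGMP_V2_REPORT:
            self.members.setdefault(group, set()).add(port)
        elif msg_type == IGMP_LEAVE:
            self.members.get(group, set()).discard(port)

    def forward_ports(self, group: str):
        # Only ports from which a report was snooped receive the traffic.
        return sorted(self.members.get(group, set()))

table = SnoopingTable()
# Membership reports for 239.1.2.3 seen on ports 2 and 5 (checksum faked):
table.observe(2, bytes([0x16, 0x00, 0x00, 0x00, 239, 1, 2, 3]))
table.observe(5, bytes([0x16, 0x00, 0x00, 0x00, 239, 1, 2, 3]))
table.observe(5, bytes([0x17, 0x00, 0x00, 0x00, 239, 1, 2, 3]))  # port 5 leaves
print(table.forward_ports("239.1.2.3"))  # → [2]
```

This is exactly the kind of cross-layer peeking being debated: the element doing it is nominally below IP, yet it only works by interpreting a layer-3 protocol it does not "own".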
> > knowing > multicast group members' locations from looking at IGMP traffic from "below" > is one example (think dslams too) but another would be > P2P aware Traffic Engineering, for example > > "layer violations" as taught in protocols 101 has traditionally > been restricted to upper layer tweaking layer-2 operating parameters > (think Application/TCP causing Dial up), rather than > vice versa - but the other way round stretches > programming API paradigms more athletically > so may be conducive to progress... > > In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: > > >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: > >> > >>> There are huge aspects of that future that depend on getting the > >>> low-level abstractions right (in the sense that they match real physical > >>> reality). And at the same time, constructing a stack of abstractions > >>> that work to maximize the utility of radio. > >>> > >> > >>First hand reality in the OLPC project: use of multicast/broadcast based > >>protocols when crossed with nascent wireless protocols (802.11s), can > >>cause spectacularly "interesting" (as in Chinese curse) interactions. > >> > >>First hand experience is showing that one had better understand what > >>happens at the lowest wireless layers while building application > >>middleware protocols and applications.... Some existing protocols that > >>have worked well on wired networks, and sort of worked OK on 802.11abc > >>networks, just don't work well (or scale well) on a mesh designed to > >>try to hide what's going on under the covers. > >> > >>While overlays are going to play an important role in getting us out of > >>the current morass (without transition strategies, we're toast; that was > >>what got the Internet out of telecom circuit switching as the only > >>mechanism), I have to emphatically agree with Dave that we'd better get > >>moving on more fundamental redesign and rethinking of networking.... 
> >> - Jim > >> > >>-- > >>Jim Gettys > >>One Laptop Per Child > >> > > cheers > > jon > > -- Jim Gettys One Laptop Per Child From dpreed at reed.com Mon May 12 07:36:03 2008 From: dpreed at reed.com (David P. Reed) Date: Mon, 12 May 2008 10:36:03 -0400 Subject: [e2e] end of interest -- BP metadata / binary vs text In-Reply-To: <1210519807.7973.346.camel@localhost.localdomain> References: <20080508202334.444B228E155@aland.bbn.com> <1210377766.7973.100.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B20@EVS-EC1-NODE1.surrey.ac.uk> <1210432515.7973.171.camel@localhost.localdomain> <603BF90EB2E7EB46BF8C226539DFC20701316B22@EVS-EC1-NODE1.surrey.ac.uk> <1210447341.7973.286.camel@localhost.localdomain> <48270472.6070700@reed.com> <1210519807.7973.346.camel@localhost.localdomain> Message-ID: <482855D3.7030900@reed.com> Rajesh - I am really sorry if what I wrote implied any sense that you should stop posting! That was not my intention at all. I was using your post to comment on the rhetoric of Internet architecture that has grown up in the community, and in particular the assumption that formal layering of the ISO sort (strict and unique function placement into a totally ordered set of layers) is or was a crucial and important part of the Internet's design. No disparagement of you or the rest of your posting and its arguments was intended. Keep posting. Arguments are all we have going for us to organize a complex and difficult world. - David Rajesh Krishnan wrote: > David, > > My objective was to provoke discussion on the topic of whether it makes > sense to think of DTN as a session layer, as an overlay above TCP, etc. > from the e2e perspective. If you feel this topic is unsuitable for > discussion here, I will gladly stop posting on this topic here of > course. > > >> I am afraid that those who treat the RFCs as scripture from high priests >> mistake dogma for thoughtfulness. >> > > I am definitely not treating RFCs as scripture. 
I am not satisfied with > the explanation that DTN/BP belongs to the "session layer" as the best > or only model. I find this limiting, since DTN issues span all the way > from applications to the physical "layer". > > >> The essential valid measure of DTN ideas is that they work, and will >> continue to work well, *to organize the solution* to an interesting >> class of real-world problems. It is irrelevant whether they provide >> the basis for destroying some "traditional paradigm" and creating a new >> religion. >> > > Agreed. If anyone reads my post as being about destroying old religion > and creating a new one, that is missing the point. It is about getting > out of old shackles though; why keep doing the same thing (like the way > we seem to try to fit DTN into the old mold) and expect a different > result? If we say DTN is in the session layer, we will engineer a > useful DTN session layer alright, but will a DTN session layer solve any > fundamentally new problems? Does that limit new architectures that > could be explored? > > >> What made the Internet architecture useful was its attention to >> "interoperation" and to facilitating support of "unanticipated" >> applications and implementation technologies. It framed those things >> well, making progress possible. DTN ideas frame a new set of issues >> well - communications that occur between entities that occupy >> discontiguous regions of space-time influence. Such communications >> have always existed (books communicate across time in personal and >> public libraries, postal letters transcend spatial barriers in >> self-contained form) - DTNs merely ratify their importance by focusing >> framing on those issues. >> > > Agreed. In a DTN, among other things, we want to maximize the value of > information exchanged within a transitory encounter, and this cannot be > framed entirely as a session-layer issue. 
Relegating DTN to the session > layer will completely isolate it from a radio-aware CLA (or whatever > name we want to call it) among other things, which relates to a point > you made earlier. > > Hope that clarifies my original post somewhat. > > Best Regards, > Rajesh > > >> Rajesh Krishnan wrote: >> >>> Granted this matches the viewpoint presented in RFC 5050 of BP's >>> (non-threatening ;) relationship to TCP/IP. >>> >>> By including forwarding and dynamic routing (L3?), retransmissions >>> >> (L4? >> >>> and L2?), and persistent storage and application metadata tagging >>> >> (L7?) >> >>> concerns within the same protocol, BP does not fit harmoniously at >>> >> L5 of >> >>> the TCP/IP Internet, IMHO. This challenge to traditional layering >>> >> is >> >>> precisely what I find most fascinating about DTN. >>> >>> With the CLA/BP split, there is still layering in DTN; just that the >>> layering is not congruent to conventional TCP/IP layering. >>> >> Effectively, >> >>> DTN/BP seems to relate to TCP/IP more or less the same way IP looks >>> >> at >> >>> other network technologies. At least that is my interpretation of >>> DTN/BP as an overlay abstraction (TCP/IP is relevant only as >>> >> expedient >> >>> means for early deployment. ;) >>> >>> I am speaking only for myself here (not past or present employers or >>> funding agencies or IRTF WGs), and this thread ought to migrate to >>> dtn-interest perhaps. >>> >>> Best Regards, >>> Rajesh >>> > > > > From dpreed at reed.com Mon May 12 07:54:31 2008 From: dpreed at reed.com (David P. Reed) Date: Mon, 12 May 2008 10:54:31 -0400 Subject: [e2e] end of interest In-Reply-To: References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> Message-ID: <48285A27.9020205@reed.com> Jon Crowcroft wrote: > In missive <4827081A.1090305 at reed.com>, "David P. Reed" typed: > > >>I'd suggest that first-principles thinking is harder than Jon thinks. 
> > gosh - i thought I just wrote something - i didnt realize andy lippman's > medialab goal of telepathy had already been reached and you could figure out > how hard I thought first principles thinking is....thats neat - > if you can teach the rest of us that trick, we > could save a lot of bandwidth and carpal pain > Of course, you are right. I achieved no telepathic insight into your self-perception of difficulty. Another inapt metaphor from an inept writer committed, admitted, convicted, and sentenced to embarrassment. If I can save anyone carpal pain I'll license the invention for free. Nonetheless, I agree with John Day who pointed out much better than I that perhaps the problem is not to legitimize layer "violations" as good things, but to recognize that maybe the idea of ordered layering was injected way beyond its usefulness into the rhetorical framing of network protocol design, and now threatens clear thinking about a whole range of important issues. The end-to-end argument paper attempted to discuss a rationale for placement of functions in a modular architecture. That serves (imho) as a far better example of discussion of architectural principles than the ISO OSI attempt at a canon/scripture. The ISO OSI model was introduced without a discussion of tradeoffs (very much the content of the end-to-end argument paper), as a fait accompli. Rather than talk about "breaking with a religion" - perhaps it might be better to talk about the elements of thoughtful design principles again. The idea of a "connection" or "session" is very useful in some contexts, but highly distorts the conversation on DTNs which leads to odd circumlocutions like "connection *less*" networking. Well, if we hadn't canonized them, we wouldn't have them. Books are a form of communications, as are TV broadcasts. Neither form of communications ever called for connections as a concept. So to call them connectionless implies that they are somehow "exceptional" or "troublesome". 
But they are really only troublesome to people so steeped in the canon that they assume "connections" are first-order axioms of any theory of communications. Algebra exists without unique multiplicative inverses. Geometry exists without the parallel postulate. Communications exists without layers - especially without "session layers" that presuppose an artificial invention. From day at std.com Mon May 12 09:37:32 2008 From: day at std.com (John Day) Date: Mon, 12 May 2008 12:37:32 -0400 Subject: [e2e] end of interest In-Reply-To: <48285A27.9020205@reed.com> References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> <48285A27.9020205@reed.com> Message-ID: > > >Nonetheless, I agree with John Day who pointed out much better than >I that perhaps the problem is not to legitimize layer "violations" >as good things, but to recognize that maybe the idea of ordered >layering was injected way beyond its usefulness into the rhetorical >framing of network protocol design, and now threatens clear thinking >about a whole range of important issues. I tend to think of this in terms of "loci" of shared state and their scopes. (I use locus to indicate that the amount of shared state could be small or large.) The problem with our early thinking on layering, whether OSI or not, was confusing the layer as a model of distributed components with the implementation of such a thing. (I tend to be a proponent of letting the problem tell me what is going on rather than me imposing what I think is going on. ;-) It is less embarrassing.) Also, it seems that if networking is composed of "loci of shared state with different scopes that we want to treat as black boxes" then there seems to be something vaguely like a layer going on from one perspective. Given the problems we have had with layers this would seem to imply that there is something about them that we aren't getting right. Something we aren't seeing. 
It is peculiar that our response has been to do anything else just because there seems to be something wrong with our early idea of layer. (As I said yesterday, the OSI concept of layer for the lower 4 started out as a reflection of what people thought they were at the time. The detailed definitions associated with it (which I doubt any of you are familiar with ;-) and I wish I weren't) are where it really goes wrong. But that is pretty irrelevant to this discussion. I would have thought some smart guy would have said, 'let me show you dummies how to get it right!' ;-) > >The end-to-end argument paper attempted to discuss a rationale for >placement of functions in a modular architecture. >That serves (imho) as a far better example of discussion of >architectural principles than the ISO OSI attempt at a >canon/scripture. The ISO OSI model was introduced without a >discussion of tradeoffs (very much the content of the end-to-end >argument paper), as a fait accompli. The problem with the OSI model was 1) very few have ever read it and 2) the PTTs ensured that what was written was not really what the problem was saying but more the economics they desired. Constructing such models to develop a better understanding of what is going on is generally a good idea. Letting them be affected by politics and business models is generally a bad idea. The trouble was that there was one group who were sure that a different technical solution was required for what they saw as technical reasons. IMO, they were wrong. OSI's major mistake was including the phone companies, but given the environment at the time it may have been unavoidable. > >Rather than talk about "breaking with a religion" - perhaps it might >be better to talk about the elements of thoughtful design principles >again. > >The idea of a "connection" or "session" is very useful in some >contexts, but highly distorts the conversation on DTNs which leads >to odd circumlocutions like "connection *less*" networking. 
Well, >if we hadn't canonized them, we wouldn't have them. Books are a >form of communications, as are TV broadcasts. Neither form of >communications ever called for connections as a concept. So to >call them connectionless implies that they are somehow "exceptional" >or "troublesome". But they are really only troublesome to people >so steeped in the canon that they assume "connections" are >first-order axioms of any theory of communications. This is why I talk about locus of shared state: distinct entities who "understand one another." Regardless of how much, or on what time frame. I don't see a problem with DTNs in this view. It sounds to me that some people have a narrow view of "connection" just like some people in OSI did. (I once greatly irritated a connection advocate when I pointed out that a circuit was just a very long packet.) >Algebra exists without unique multiplicative inverses. Geometry >exists without the parallel postulate. Communications exists >without layers - especially without "session layers" that presuppose >an artificial invention. Indeed. I often point out that good design is doing the algebra before the arithmetic. And we seem to get stuck in the arithmetic a lot. And we have known for more than a quarter century that there is no session layer. Why are people still bringing it up? 
;-) Take care, John From Jon.Crowcroft at cl.cam.ac.uk Tue May 13 00:31:29 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Tue, 13 May 2008 08:31:29 +0100 Subject: [e2e] end of interest In-Reply-To: <48285A27.9020205@reed.com> References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> <48285A27.9020205@reed.com> Message-ID: w.r.t non-layered architecture, I'd commend a large body of work (my list) starting with the x-kernel (protocol graph) work at Arizona (larry peterson, o'malley et al) http://www.cs.arizona.edu/projects/xkernel/docs.html then work by Diot, Torsten Braun and others at INRIA 10-15 years ago on protocol compilers and composition (published in sigcomm 95), (incorporating many of tennenhouse/clark ideas on ILP and ALF) through the work on protocol heaps by Braden/Handley et al, (ccr jan 2003) and a nice paper on the Haggle architecture by James Scott (ex intel, now microsoft research) at Ubicomp 07 Seamless Networking for Mobile Applications all of these (and much other work) do not dispense with modularisation, but diverge from the classical layering - often by making use of programming language, compiler, type system flexibility, and hardware advances (e.g. 
support for smarter buffer ownership - also useful for virtualisation) not available to earlier communications (or OS) researchers (at least not in any level of efficiency one would consider usable) - more dynamic composition of protocol systems does not imply throwing all discipline out the window but does lead to more subtle discussion of the modes of interaction between components when one restricts the components to being in an ordered stack and the API to being a customer/service (a.k.a request/indication response/confirmation) through a Service Access Point, one is inevitably restricted to thinking about Procedure Call as an implementation (despite the fact that most early stacks and most stuff below transport has always been implemented with an integrated sequence of encapsulate and downcall for output, and nested processing within interrupt/upcall for input (to get somewhere close to zero copy in each direction) plus some sort of timer and exception scheme for asynch events that aren't in the data plane...) it is the non-dataplane things that are where all the grot pops out and is generally swept under some sort of pretend procedural wrapper in many stacks (especially higher level) On the usual basis that programmers who inhabit the higher levels can't cope with events and threads or other abstractions of course, since web 1.9, most application programmers laugh at this patronising attitude from sys$hackers...oh well In missive <48285A27.9020205 at reed.com>, "David P. Reed" typed: >>Jon Crowcroft wrote: >>> In missive <4827081A.1090305 at reed.com>, "David P. Reed" typed: >>> >>> >>I'd suggest that first-principles thinking is harder than Jon thinks.
>>> >>> gosh - i thought I just wrote something - i didnt realize andy lipmann's >>> medialab goal of telepathy had already been reached and you could figure out >>> how hard I thought first principles thinking is....thats neat - >>> if you can teach the rest of us that trick, we >>> could save a lot of bandwidth and carpal pain >>> >>Of course, you are right. I achieved no telepathic insight into your >>self-perception of difficulty. Another inapt metaphor from inept writer >>committed, admitted, convicted, and sentenced to embarrassment. If I >>can save anyone carpal pain I'll license the invention for free. >> >>Nonetheless, I agree with John Day who pointed out much better than I >>that perhaps the problem is not to legitimize layer "violations" as good >>things, but to recognize that maybe the idea of ordered layering was >>injected way beyond its usefulness into the rhetorical framing of >>network protocol design, and now threatens clear thinking about a whole >>range of important issues. >> >>The end-to-end argument paper attempted to discuss a rationale for >>placement of functions in a modular architecture. >>That serves (imho) as a far better example of discussion of >>architectural principles than the ISO OSI attempt at a >>canon/scripture. The ISO OSI model was introduced without a discussion >>of tradeoffs (very much the content of the end-to-end argument paper), >>as a fait accompli. >> >>Rather than talk about "breaking with a religion" - perhaps it might be >>better to talk about the elements of thoughtful design principles again. >> >>The idea of a "connection" or "session" is very useful in some contexts, >>but highly distorts the conversation on DTNs which leads to odd >>circumlocutions like "connection *less*" networking. Well, if we >>hadn't canonized them, we wouldn't have them. Books are a form of >>communications, as are TV broadcasts. Neither form of communications >>ever called for connections as a concept. 
So to call them >>connectionless implies that they are somehow "exceptional" or >>"troublesome". But they are really only troublesome to people so >>steeped in the canon that they assume "connections" are first-order >>axioms of any theory of communications. >> >>Algebra exists without unique multiplicative inverses. Geometry exists >>without the parallel postulate. Communications exists without layers - >>especially without "session layers" that presuppose an artificial invention. >> >> cheers jon From day at std.com Sun May 11 12:06:21 2008 From: day at std.com (John Day) Date: Sun, 11 May 2008 15:06:21 -0400 Subject: [e2e] end of interest In-Reply-To: <4827081A.1090305@reed.com> References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> Message-ID: At 10:52 -0400 2008/05/11, David P. Reed wrote: >Snooping honors the Layeristi - granting them rhetorical power they >never deserved. It sounds like "cheating" or "illegal" operation. >The Internet was born without layers - it used architectural See previous note. No matter how you cut it. Whether you call them layers or framing. If you have to look at stuff that doesn't belong to you, you haven't done something right or there is something you don't understand. >framing differently (e.g., one arch principle illustrative: >encapsulation is not layering, and even survives as IP gets >encapsulated in TCP port 8 VPNs, much to the chagrin of the >Layeristi purists who explain it as a "bug", rather than looking at >its roots in passing IP datagrams over SNA and NCP virtual circuits). Gosh. How is this a bug? Sounds right to me! >I'd suggest that first-principles thinking is harder than Jon thinks. Well, it is hard. I can testify to that! Not sure it is harder than Jon thinks. Jon seems to think pretty hard much of the time. Although he doesn't want it to show. ;-) >It's not just a matter of choosing sides in a war, or acting as an >arms merchant to both sides. 
It's about thinking more squarely >about the real underlying issues that comprise communications. In >fact, as some of us have suggested, perhaps the idea that >communications can be considered as a "pure" >architectural/linguistic frame independent of storage and >computation and sensing is the real issue we ought to be addressing >today, with pervasive comms/storage/computational elements capable >of all three. No, it is a question of learning to listen carefully to what the problem is telling you and not imposing your own ideas on it. (I will admit that I have found that when we do the latter it is often wrong. Embarrassing when it happens. But if you are careful when you write it up no one notices!) ;-) > > >John Day wrote: >>At 9:08 +0100 2008/05/11, Jon Crowcroft wrote: >>>nice example of ownership by warring protocol layer factions >>> >>>mesh wifi people need to learn to do layer 3 snooping >>>same way telecom people did... >> >>The need to do snooping is an indication of the current model's >>inability or refusal to innovate. A failure to dig more deeply >>into the model. (Or a fear to challenge their religion.) >> >>(It turns out once there is a complete architecture, not one of >>these DOS look-a-likes, snooping isn't necessary.) >> >>> >>>there's a great e2e topic - >>>we have sort of gotten out of the >>>denial phase on middle boxes and >>>we're probably ok with multicast's niches now ... >> >>Middleboxes are also an artifact of an incomplete architecture. In >>a full (shall we say, a wff) architecture, they aren't necessary. >> >>> >>>but should we raise the >>>art of _snooping_ to being a >>>first class component of any decent >>>postmodern internet architecture? >> >>No, snooping is an admission of failure. Calling it a component of >>a "decent" internet architecture is merely making excuses for our >>failures.
>> >>> >>>knowing >>>multicast group members locations from looking at IGMP traffic from "below" >>>is one example (think dslams too) but another would be >>>P2P aware Traffic Engineering, for example >> >>Recognizing that a multicast address is the name of a set deals >>with most of this. (Of course, this means that strictly speaking >>multicast addresses aren't really addresses but names.) A multicast >>or anycast address must always resolve at some point to a normal >>address. The idea of a multicast address as an ambiguous address >>is fundamentally broken. >> >>> >>>"layer violations" as taught in protocols #101 has traditionally >>>been restricted to upper layer tweaking layer-2 operating parameters >>>(think Application/TCP causing Dial up), rather than >>>vice versa - but the other way round stretches >>>programming API paradigms more athletically >>>so may be conducive to progress... >> >>If I understand what you are alluding to, this is addressed by not >>ignoring the existence of the enrollment phase in communication. >> >>What I have found is that in a wff architecture there is no need >>for layer violations. In other words, if you have layer >>violations, you are doing something wrong some place. Either in >>how you are trying to do what you want to do, or in what you think >>a layer is. In this case it seems to be a bit of both. >> >>Take care, >>John >> >>> >>>In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: >>> >>> >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: >>> >> >>> >>> There are huge aspects of that future that depend on getting the >>> >>> low-level abstractions right (in the sense that they match >>>real physical >>> >>> reality). And at the same time, constructing a stack of abstractions >>> >>> that work to maximize the utility of radio.
>>> >>> >>> >> >>> >>First hand reality in the OLPC project: use of multicast/broadcast based >>> >>protocols when crossed with nascent wireless protocols (802.11s), can >>> >>cause spectacularly "interesting" (as in Chinese curse) interactions. >>> >> >>> >>First hand experience is showing that one had better understand what >>> >>happens at the lowest wireless layers while building application >>> >>middleware protocols and applications.... Some existing protocols that >>> >>have worked well on wired networks, and sort of worked OK on 802.11abc >>> >>networks, just doesn't work well (or scale well) on a mesh designed to >>> >>try to hide what's going on under the covers. >>> >> >>> >>While overlays are going to play an important role in getting us out of >>> >>the current morass (without transition strategies, we're toast; that was >>> >>what got the Internet out of telecom circuit switching as the only >>> >>mechanism), I have to emphatically agree with Dave that we'd better get >>> >>moving on more fundamental redesign and rethinking of networking.... >>> >> - Jim >>> >> >>> >>-- >>> >>Jim Gettys >>> >>One Laptop Per Child >>> >> >>> >>> cheers >>> >>> jon From keshav at uwaterloo.ca Wed May 14 17:55:34 2008 From: keshav at uwaterloo.ca (S. Keshav) Date: Wed, 14 May 2008 20:55:34 -0400 Subject: [e2e] Layering vs. modularization Message-ID: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> This note addresses the recent discussion on layering as a form of modularization. Layering is one particular (but not very good) form of modularization. Modularization, as in programming, allows separation of concerns and clean interface design. Layering goes well beyond, insisting on (a) progressively higher levels of abstraction, i.e. an enforced conceptual hierarchy, (b) a progressively larger topological scope along this hierarchy, and (c) a single path through the set of modules. 
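[To make the three constraints concrete, here is a minimal sketch in Python. The protocol names are illustrative only and come from no real stack; it contrasts constraint (c), the single path of strict layering, with an x-kernel-style protocol graph in which the same modules admit more than one composition.]

```python
# Sketch: strict layering forces one linear path through the modules,
# while a protocol graph (x-kernel style) lets a message take different
# routes through the same set of protocol objects.

class Protocol:
    def __init__(self, name):
        self.name = name
        self.below = []          # protocols this one may hand messages down to

    def connect(self, lower):
        self.below.append(lower)
        return lower

def paths(proto, path=()):
    """Enumerate every downward path a message could take from `proto`."""
    path = path + (proto.name,)
    if not proto.below:
        yield path
    for lower in proto.below:
        yield from paths(lower, path)

# Strict layering: one module per level, exactly one path (constraint (c)).
app, tcp, ip, eth = Protocol("app"), Protocol("tcp"), Protocol("ip"), Protocol("eth")
app.connect(tcp); tcp.connect(ip); ip.connect(eth)
assert list(paths(app)) == [("app", "tcp", "ip", "eth")]

# Protocol graph: the same modules composed as a DAG; a message may go via
# tcp/ip or via a hypothetical direct reliable-link protocol.
app2, rel = Protocol("app"), Protocol("rel_link")
app2.connect(tcp); app2.connect(rel); rel.connect(eth)
print(sorted(paths(app2)))   # two distinct paths through the graph
```

[Nothing in the graph version enforces an abstraction hierarchy or a widening scope either; those would be extra invariants imposed on top of the bare composition structure.]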
None of the three is strictly necessary, and each, in the case of wireless networks for example, breaks down. Jon's message pointed to several previous designs, notably the x-kernel, that took a different cut. In recent work (blatant self-promotion alert), we tried to formalize these approaches in our Sigcomm 2007 paper called "An Axiomatic Basis for Communication." Interestingly, our approach only addressed the data plane. When we move to the control plane, as Jon hinted, things get very hard very fast. Essentially, the problem is that of race conditions: the same state variable can be touched by different entities in different ways (think of routing updates), and so it becomes hard to tell what the data plane is going to actually do. In fact, given a sufficiently large network, some chunk of the network is always going to be in an inconsistent state. So, even eventual-always-convergence becomes hard to achieve or prove. Nevertheless, this line of attack does give some insights into alternatives to layer-ism. regards S. Keshav and Martin Karsten Jon Crowcroft said: > all of these (and much other work) > do not dispense with modularisation, > but diverge from the classical layering - often by making use of > programming language, compiler, type system and flexibility, > and hardware advances (e.g. support for smarter buffer ownership - > also useful for virtualisation) > not available to earlier communications (or OS) researchers > (at least not in any level of efficiency one would consider usable) - > > more dynamic composition of protocol systems does not imply > throwing all discipline out the window but does lead to > more subtle discussion of the modes of interaction between > components > > ...
> > it is the non-dataplane things that are where all the grot pops out > and is generally swept under some sort of pretend procedural wrapper > in many stacks (especially higher level) From ggm at apnic.net Wed May 14 18:47:14 2008 From: ggm at apnic.net (George Michaelson) Date: Thu, 15 May 2008 11:47:14 +1000 Subject: [e2e] end of interest In-Reply-To: <48285A27.9020205@reed.com> References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> <48285A27.9020205@reed.com> Message-ID: <02054F9C-C995-43E1-90F2-B99070E0C898@apnic.net> you would perhaps draw the line at the link layer, right? I mean, if I am on a local PPP over serial, and you are on optical FTTH, there is an element of difference over what we do to establish a local binding, and the choice of encoding that relates to that binding, and things which relate directly to meaningful events against each other... -I could even argue that if I do credentials presentation during a G3 binding on my phone and that uses IP in some weird way, or god forbid MPLS, that it's still notionally within link views, and not e2e meaningful. so in all this conversatzione about layers vs modules and models vs architectural reality, there is a slightly more 'concrete' aspect to the fuzz for these two.. (currently using some variant of manchester encoding on copper, although I'm blowed if I know which one it is...) -george From day at std.com Wed May 14 20:12:57 2008 From: day at std.com (John Day) Date: Wed, 14 May 2008 23:12:57 -0400 Subject: [e2e] Layering vs. modularization In-Reply-To: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> Message-ID: At 20:55 -0400 2008/05/14, S. Keshav wrote: >This note addresses the recent discussion on layering as a form of >modularization. > >Layering is one particular (but not very good) form of >modularization.
Modularization, as in programming, allows separation >of concerns and clean interface design. Layering goes well beyond, >insisting on (a) >progressively higher levels of abstraction, i.e. an enforced >conceptual hierarchy, (b) a progressively larger topological scope >along this hierarchy, and (c) a single path through the set of >modules. None of the three is strictly necessary, and, for example >in the case of wireless networks, is broken. Gee, the only layering I have ever seen that had problems were the ones done badly, such as with the Internet and OSI. In my experience, if you do it right, it actually helps rather than gets in the way. Although, I know the first two conditions hold for properly layered systems and I am not sure I understand the third. Hmmm, guess I am missing something. Although, I have to admit I never quite understood how wireless caused problems. >Jon's message pointed to several previous designs, notably x-kernel, >that took a different cut. In recent work, (blatant >self-promotion alert) we tried to formalize these approaches in our >Sigcomm 2007 paper called "An Axiomatic Basis for Communication." > >Interestingly, our approach only addressed the data plane. When we >move to the control plane, as Jon hinted, things get very hard very >fast. Essentially, the problem is that of race conditions: the same >state variable can be touched by different entities in different >ways (think of routing updates), and so it becomes hard to tell what >the data plane is going to actually do. In fact, given a >sufficiently large network, some chunk of the network is always >going to be in an inconsistent state. So, even >eventual-always-convergence becomes hard to achieve or prove. >Nevertheless, this line of attack does give some insights into >alternatives to layer-ism. The solution to that then is to not let the network get too large! Simple. 
;-) Looking at your paper it seems to tend toward the beads-on-a-string model that the phone companies have always favored. Those never had very good scaling properties. It is not clear how differences of scope are accommodated. But then differences of scope sort of requires some sort of layer, doesn't it? So how does this architecture facilitate scaling? Still confused, ;-) John Day From touch at ISI.EDU Wed May 14 22:06:56 2008 From: touch at ISI.EDU (Joe Touch) Date: Wed, 14 May 2008 22:06:56 -0700 Subject: [e2e] Layering vs. modularization In-Reply-To: References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> Message-ID: <482BC4F0.7020409@isi.edu> John Day wrote: ... > Gee, the only layering I have ever seen that had problems were the ones > done badly, such as with the Internet and OSI. In my experience, if you > do it right, it actually helps rather than gets in the way. That's been my perspective as well; most of the issues I've seen with layering problems - including "layer violation" really turned out to be gaps in the layering structure that were patched over with violations rather than fixed correctly. We've seen a few key issues with layering (insert self promotion warning here), and our observations suggest that it may be a fundamental construct, rather than an artifact to be "erased" in a clean-slate approach: 1. layering often focuses on the layers and ignores the inter-layer glue; Yu-Shun Wang's thesis (a recent PhD student of mine) focused on this issue, and found numerous ways in which the interlayer glue was similar at nearly every layer boundary, and could be addressed by an additional, meaningful generic mechanism 2. 
layering disagreements sometimes revolve around what each layer means, as unique from other layers; our NSF FIND "RNA" project (to appear at ICCCN, Future Internet track) is exploring the ways in which layers are internally similar, but their behavior is governed by their relative position (what's above and below) and their scope (distance/time extent). This turns the "one layer to bind them all" (ala XTP) into a single 'stem cell' layer that acts like different, well-known OSI layers when stacked. The idea is that the environment of a layer defines the layer, and that layering is semantically useful in composing services on each other. Joe From Jon.Crowcroft at cl.cam.ac.uk Thu May 15 05:40:20 2008 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Thu, 15 May 2008 13:40:20 +0100 Subject: [e2e] end of interest In-Reply-To: References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> Message-ID: for me, modularisation in layers is part of hierarchical categorisation, which is fine for librarians - categories (as in women, fire and dangerous things) led to a more complex way of modularisation which ended up being the OO paradigm with multiple inheritance - as an implementation paradigm, this turned out to be too hard for the hard of thinking, so most OO languages restricted inheritance and refinement to hierarchy again - but in most programmes, one is concerned with a much lower dimensionality problem space than the multi-stakeholder system that is a network architecture where a more subtle and messy abstraction may be fine - for example one might just have constraints on resources and different constraints on identifiers, and different
constraints on reliability and the way solutions are componentized across a set of nodes (hosts, routers, if you must) an architecture that was purely constraint based (i.e. just said what you DONT do) would be very interesting:) In missive , John Day typed: >>At 10:52 -0400 2008/05/11, David P. Reed wrote: >>>Snooping honors the Layeristi - granting them rhetorical power they >>>never deserved. It sounds like "cheating" or "illegal" operation. >>>The Internet was born without layers - it used architectural >> >>See previous note. No matter how you cut it. Whether you call them >>layers or framing. If you have to look at stuff that doesn't belong >>to you, you haven't done something right or there is something you >>don't understand. >> >>>framing differently (e.g., one arch principle illustrative: >>>encapsulation is not layering, and even survives as IP gets >>>encapsulated in TCP port 8 VPNs, much to the chagrin of the >>>Layeristi purists who explain it as a "bug", rather than looking at >>>its roots in passing IP datagrams over SNA and NCP virtual circuits). >> >>Gosh. How is this a bug? Sounds right to me! >> >>>I'd suggest that first-principles thinking is harder than Jon thinks. >> >>Well, it is hard. I can testify to that! Not sure it is harder than >>Jon thinks. Jon seems to think pretty hard much of the time. >>Although he doesn't want it to show. ;-) >> >>>It's not just a matter of choosing sides in a war, or acting as an >>>arms merchant to both sides. It's about thinking more squarely >>>about the real underlying issues that comprise communications. In >>>fact, as some of us have suggested, perhaps the idea that >>>communications can be considered as a "pure" >>>architectural/linguistic frame independent of storage and >>>computation and sensing is the real issue we ought to be addressing >>>today, with pervasive comms/storage/computational elements capable >>>of all three.
>> >>No, it is a question of learning to listen carefully to what the >>problem is telling you and not imposing your own ideas on it. (I >>will admit that I have found that we do the later it is often wrong. >>Embarrassing when it happens. But if you are careful when you write >>it up no one notices!) ;-) >> >>> >>> >>>John Day wrote: >>>>At 9:08 +0100 2008/05/11, Jon Crowcroft wrote: >>>>>nice example of 0nership by warring protocol layer factions >>>>> >>>>>mesh wifi people need to learn to do layer 3 snooping >>>>>same way telecom people did... >>>> >>>>The need to do snooping is an indication of the current model's >>>>inability or refusal to innovate. A failure to dig more deeply >>>>into the model. (Or a fear to challenge their religion.) >>>> >>>>(It turns out once there is a complete architecture, not one of >>>>these DOS look-a-likes, snooping isn't necessary.) >>>> >>>>> >>>>>there's a great e2e topic - >>>>>we have sort of gotten out of the >>>>>denial phase on middle boxes and >>>>>we're probably ok with multicast's niches now ... >>>> >>>>Middleboxes are alos an artifact of an incomplete architecture. In >>>>a full (shall we say, a wff) architecture, they aren't necessary. >>>> >>>>> >>>>>but should we raise the >>>>>art of _snooping_ to being a >>>>>first class component of any decent >>>>>postmodern internet architecture? >>>> >>>>No, snooping is an admission of failure. Calling it a component of >>>>a "decent" internet architecture is merely making excuses for our >>>>failures. >>>> >>>>> >>>>>knowing >>>>>multicast group members locations from lookin at IGMP traffic from "below" >>>>>is one exxaple (think dslams too) but another would be >>>>>P2P aware Traffic Engineering, for example >>>> >>>>Recognizing that a multicast address is the name of a set deals >>>>with most of this. (Of course, this means that strictly speaking >>>>multicast addresses aren't really addresses but names.) 
A multicast >>>>or anycast address must always resolve at some point to a normal >>>>address. The idea of a multicast address as an ambiguous address >>>>is fundamentally broken. >>>> >>>>> >>>>>"layer violations" as taught in protocls #101 has traditionally >>>>>been restricted to upper layer tweaking layer-2 operating parameters >>>>>(think Application/TCP causing Dial up), rather than >>>>>vice versa - but the other way round stretches >>>>>programming API paradigms more athletically >>>>>so may be condusive to progress... >>>> >>>>If I understand what you are alluding to, this is addressed by not >>>>ignoring the existence of the enrollment phase in communication. >>>> >>>>What I have found is that in a wff architecture there are no need >>>>for layer violations. In other words, if you have layer >>>>violations, you are doing something wrong some place. Either in >>>>how you are trying to do what you want to do, or in what you think >>>>a layer is. In this case it seems to be a bit of both. >>>> >>>>Take care, >>>>John >>>> >>>>> >>>>>In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: >>>>> >>>>> >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: >>>>> >> >>>>> >>> There are huge aspects of that future that depend on getting the >>>>> >>> low-level abstractions right (in the sense that they match >>>>>real physical >>>>> >>> reality). And at the same time, constructing a stack of abstractions >>>>> >>> that work to maximize the utility of radio. >>>>> >>> >>>>> >> >>>>> >>First hand reality in the OLPC project: use of multicast/broadcast based >>>>> >>protocols when crossed with nascent wireless protocols (802.11s), can >>>>> >>cause spectacularly "interesting" (as in Chinese curse) interactions. >>>>> >> >>>>> >>First hand experience is showing that one had better understand what >>>>> >>happens at the lowest wireless layers while building application >>>>> >>middleware protocols and applications.... 
Some existing protocols that >>>>> >>have worked well on wired networks, and sort of worked OK on 802.11abc >>>>> >>networks, just doesn't work well (or scale well) on a mesh designed to >>>>> >>try to hide what's going on under the covers. >>>>> >> >>>>> >>While overlays are going to play an important role in getting us out of >>>>> >>the current morass (without transition strategies, we're toast; that was >>>>> >>what got the Internet out of telecom circuit switching as the only >>>>> >>mechanism), I have to emphatically agree with Dave that we'd better get >>>>> >>moving on more fundamental redesign and rethinking of networking.... >>>>> >> - Jim >>>>> >> >>>>> >>-- >>>>> >>Jim Gettys >>>>> >>One Laptop Per Child >>>>> >> >>>>> >>>>> cheers >>>>> >>>>> jon >> cheers jon From bran at metacomm.com Thu May 15 10:30:18 2008 From: bran at metacomm.com (bran@metacomm.com) Date: Thu, 15 May 2008 10:30:18 -0700 (PDT) Subject: [e2e] end of interest In-Reply-To: References: <4825CAD8.8050801@reed.com> <1210445625.6167.138.camel@jg-laptop> <4827081A.1090305@reed.com> Message-ID: <25712.205.172.229.252.1210872618.squirrel@webmail.metacomm.com> > an architecture that was purely > constraint based (i.e. just said what > you DONT do) would be very > interesting:) I did that in the eighties/nineties. See "Integration Through Meta-Communication" from INFOCOM '90 and "Archetype: A Unified Method for the Design and Implementation of Protocol Architectures", IEEE Trans. Software Eng. 14(6): 822-837 (1988). Branislav > > > In missive , > John Day typed: > > >>At 10:52 -0400 2008/05/11, David P. Reed wrote: > >>>Snooping honors the Layeristi - granting them rhetorical power they > >>>never deserved. It sounds like "cheating" or "illegal" operation. > >>>The Internet was born without layers - it used architectural > >> > >>See previous note. No matter how you cut it. Whether you call them > >>layers or framing.
If you have to look at stuff that doesn't belong > >>to you, you haven't done something right or there is something you > >>don't understand. > >> > >>>framing differently (e.g., one arch principle illustrative: > >>>encapsulation is not layering, and even survives as IP gets > >>>encapsulated in TCP port 8 VPNs, much to the chagrin of the > >>>Layeristi purists who explain it as a "bug", rather than looking at > >>>its roots in passing IP datagrams over SNA and NCP virtual circuits). > >> > >>Gosh. How is this a bug? Sounds right to me! > >> > >>>I'd suggest that first-principles thinking is harder than Jon thinks. > >> > >>Well, it is hard. I can testify to that! Not sure it is harder than > >>Jon thinks. Jon seems to think pretty hard much of the time. > >>Although he doesn't want it to show. ;-) > >> > >>>It's not just a matter of choosing sides in a war, or acting as an > >>>arms merchant to both sides. It's about thinking more squarely > >>>about the real underlying issues that comprise communications . In > >>>fact, as some of us have suggested, perhaps the idea that > >>>communications can be considered as a "pure" > >>>architectural/linguistic frame independent of storage and > >>>computation and sensing is the real issue we ought to be addressing > >>>today, with pervasive comms/storage/computational elements capable > >>>of all three. > >> > >>No, it is a question of learning to listen carefully to what the > >>problem is telling you and not imposing your own ideas on it. (I > >>will admit that I have found that we do the later it is often wrong. > >>Embarrassing when it happens. But if you are careful when you write > >>it up no one notices!) ;-) > >> > >>> > >>> > >>>John Day wrote: > >>>>At 9:08 +0100 2008/05/11, Jon Crowcroft wrote: > >>>>>nice example of 0nership by warring protocol layer factions > >>>>> > >>>>>mesh wifi people need to learn to do layer 3 snooping > >>>>>same way telecom people did... 
> >>>> > >>>>The need to do snooping is an indication of the current model's > >>>>inability or refusal to innovate. A failure to dig more deeply > >>>>into the model. (Or a fear to challenge their religion.) > >>>> > >>>>(It turns out once there is a complete architecture, not one of > >>>>these DOS look-a-likes, snooping isn't necessary.) > >>>> > >>>>> > >>>>>there's a great e2e topic - > >>>>>we have sort of gotten out of the > >>>>>denial phase on middle boxes and > >>>>>we're probably ok with multicast's niches now ... > >>>> > >>>>Middleboxes are alos an artifact of an incomplete architecture. In > >>>>a full (shall we say, a wff) architecture, they aren't necessary. > >>>> > >>>>> > >>>>>but should we raise the > >>>>>art of _snooping_ to being a > >>>>>first class component of any decent > >>>>>postmodern internet architecture? > >>>> > >>>>No, snooping is an admission of failure. Calling it a component of > >>>>a "decent" internet architecture is merely making excuses for our > >>>>failures. > >>>> > >>>>> > >>>>>knowing > >>>>>multicast group members locations from lookin at IGMP traffic from > "below" > >>>>>is one exxaple (think dslams too) but another would be > >>>>>P2P aware Traffic Engineering, for example > >>>> > >>>>Recognizing that a multicast address is the name of a set deals > >>>>with most of this. (Of course, this means that strictly speaking > >>>>multicast addresses aren't really addresses but names.) A multicast > >>>>or anycast address must always resolve at some point to a normal > >>>>address. The idea of a multicast address as an ambiguous address > >>>>is fundamentally broken. 
> >>>> > >>>>> > >>>>>"layer violations" as taught in protocls #101 has traditionally > >>>>>been restricted to upper layer tweaking layer-2 operating parameters > >>>>>(think Application/TCP causing Dial up), rather than > >>>>>vice versa - but the other way round stretches > >>>>>programming API paradigms more athletically > >>>>>so may be condusive to progress... > >>>> > >>>>If I understand what you are alluding to, this is addressed by not > >>>>ignoring the existence of the enrollment phase in communication. > >>>> > >>>>What I have found is that in a wff architecture there are no need > >>>>for layer violations. In other words, if you have layer > >>>>violations, you are doing something wrong some place. Either in > >>>>how you are trying to do what you want to do, or in what you think > >>>>a layer is. In this case it seems to be a bit of both. > >>>> > >>>>Take care, > >>>>John > >>>> > >>>>> > >>>>>In missive <1210445625.6167.138.camel at jg-laptop>, Jim Gettys typed: > >>>>> > >>>>> >>On Sat, 2008-05-10 at 12:18 -0400, David P. Reed wrote: > >>>>> >> > >>>>> >>> There are huge aspects of that future that depend on getting > the > >>>>> >>> low-level abstractions right (in the sense that they match > >>>>>real physical > >>>>> >>> reality). And at the same time, constructing a stack of > abstractions > >>>>> >>> that work to maximize the utility of radio. > >>>>> >>> > >>>>> >> > >>>>> >>First hand reality in the OLPC project: use of > multicast/broadcast based > >>>>> >>protocols when crossed with nascent wireless protocols > (802.11s), can > >>>>> >>cause spectacularly "interesting" (as in Chinese curse) > interactions. > >>>>> >> > >>>>> >>First hand experience is showing that one had better understand > what > >>>>> >>happens at the lowest wireless layers while building application > >>>>> >>middleware protocols and applications.... 
Some existing > protocols that > >>>>> >>have worked well on wired networks, and sort of worked OK on > 802.11abc > >>>>> >>networks, just don't work well (or scale well) on a mesh > designed to > >>>>> >>try to hide what's going on under the covers. > >>>>> >> > >>>>> >>While overlays are going to play an important role in getting us > out of > >>>>> >>the current morass (without transition strategies, we're toast; > that was > >>>>> >>what got the Internet out of telecom circuit switching as the > only > >>>>> >>mechanism), I have to emphatically agree with Dave that we'd > better get > >>>>> >>moving on more fundamental redesign and rethinking of > networking.... > >>>>> >> - Jim > >>>>> >> > >>>>> >>-- > >>>>> >>Jim Gettys > >>>>> >>One Laptop Per Child > >>>>> >> > >>>>> > >>>>> cheers > >>>>> > >>>>> jon > >> > > cheers > > jon > > From kempf at docomolabs-usa.com Thu May 15 11:14:27 2008 From: kempf at docomolabs-usa.com (James Kempf) Date: Thu, 15 May 2008 11:14:27 -0700 Subject: [e2e] Layering vs. modularization References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> Message-ID: <04dc01c8b6b7$83221b20$1a6115ac@dcml.docomolabsusa.com> Layering has problems in wireless, ad hoc networks. There is a whole collection of cross-layer design papers that show substantial improvements when layers are broken down. For example: - Chen & Hsia, 2004: 1000% gain in e2e SNR using joint channel coding (PHY) and compression (APP) for video over wireless - Chiang, 2005: 82% improvement in throughput per watt by joint TCP/PHY optimization. - Pursley, 2002: 50%-400% improvement in various delay, throughput and efficiency metrics relative to min-hop routing for cross-layer MAC/NET - Bougard et al., 2004: 50-90% reduction in energy while meeting user requirements depending on channel conditions through joint APP/MAC/PHY design ...etc. The list is courtesy of Chris Ramming, former program director for the DARPA MARCONI project.
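The flavor of these cross-layer gains can be illustrated with a toy joint PHY/APP optimization: a strictly layered design picks each layer's parameters in isolation, while the cross-layer design searches the joint space. All the numbers below are invented for illustration, not taken from the cited papers:

```python
# Toy cross-layer optimization: choose modulation (PHY) and compression
# (APP) jointly vs. independently. All parameter values are made up.
import itertools

# PHY choices: (bits/symbol, required SNR, transmit power in watts)
phy = {"BPSK": (1, 4.0, 0.5), "QPSK": (2, 7.0, 1.0), "16QAM": (4, 12.0, 2.0)}
# APP choices: (compression ratio, quality penalty)
app = {"raw": (1.0, 0.0), "light": (0.6, 0.1), "heavy": (0.3, 0.4)}

channel_snr = 8.0  # current channel conditions

def utility(bits_per_sym, power, ratio, penalty):
    goodput = bits_per_sym / ratio       # effective app-level throughput
    return goodput / power - penalty     # throughput-per-watt minus quality loss

# Layered design: PHY greedily picks the fastest feasible modulation on
# its own; APP independently sends uncompressed.
phy_alone = max((m for m in phy if phy[m][1] <= channel_snr),
                key=lambda m: phy[m][0])
layered = utility(phy[phy_alone][0], phy[phy_alone][2], *app["raw"])

# Cross-layer design: search the joint space under the same SNR constraint.
joint = max(utility(b, p, r, q)
            for (b, snr, p), (r, q) in itertools.product(phy.values(),
                                                         app.values())
            if snr <= channel_snr)
assert joint > layered   # joint choice beats per-layer greedy choice
```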
The layers in the traditional IP stack often don't match well with concerns relevant to ad hoc wireless networks. Energy use, for example, is a big concern in ad hoc (and, for that matter, any) wireless, but optimization of energy use wasn't a big issue when the IP stack was finalized in the 80's. So it is not reasonable to expect that the IP stack would do a good job at addressing it. The DARPA MARCONI project looked at using network utility maximization (NUM) to rearrange the layers (review paper on NUM by Chiang, Low, Calderbank, and Doyle, 2007) with promising results. NUM basically formulates a network architecture as the maximization of a utility function over a constraint set. The architecture, including protocol layers and intermediate network entities (middleboxes if you will), falls out as a solution to the optimization problem. My take on the NUM work, which originated through analyzing TCP, is that, although promising, it is still unclear a) how to take an unstructured problem and formulate the NUM problem, and b) once the NUM solution is available, how to map that into a network architecture and protocol design. These processes seem to me to need more work, especially in areas outside of traditional transport concerns such as security, which Chiang et al. call "externalities". Until these are solved, I think it will be difficult to use NUM for general architectural synthesis (i.e. you need to be an optimal control theorist to do a good job using it). Solving them is going to take some collaborative work between control theorists such as Calderbank et al. and the networking types which hang out on this list. jak ----- Original Message ----- From: "John Day" To: "S. Keshav" ; Sent: Wednesday, May 14, 2008 8:12 PM Subject: Re: [e2e] Layering vs. modularization At 20:55 -0400 2008/05/14, S. Keshav wrote: >This note addresses the recent discussion on layering as a form of >modularization.
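The NUM formulation Kempf describes — maximize a utility function over a capacity constraint set, with the protocol falling out of the decomposition — can be sketched with the classic rate-control instance (maximize sum of log-utilities subject to link capacities, solved by gradient updates on link "prices"). The two-link topology and step size below are invented for illustration:

```python
# Tiny NUM instance solved by dual decomposition: two flows, two links.
# Flow 0 crosses links 0 and 1; flow 1 crosses only link 0.
# Utilities are U_i(x) = log(x_i), so the optimum is proportionally fair.
links = [1.0, 0.8]            # link capacities
routes = [[0, 1], [0]]        # which links each flow uses
prices = [0.0, 0.0]           # dual variables ("congestion prices")
step = 0.01

for _ in range(20000):
    # each source maximizes log(x) - x * path_price  =>  x = 1 / path_price
    rates = [1.0 / max(sum(prices[l] for l in r), 1e-9) for r in routes]
    rates = [min(x, 10.0) for x in rates]        # cap rates while prices warm up
    # each link nudges its price so that load matches capacity
    for l, c in enumerate(links):
        load = sum(x for x, r in zip(rates, routes) if l in r)
        prices[l] = max(0.0, prices[l] + step * (load - c))

# only link 0 is a bottleneck here, so both flows converge near 0.5
print(rates)
```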
> >Layering is one particular (but not very good) form of >modularization. Modularization, as in programming, allows separation >of concerns and clean interface design. Layering goes well beyond, >insisting on (a) >progressively higher levels of abstraction, i.e. an enforced >conceptual hierarchy, (b) a progressively larger topological scope >along this hierarchy, and (c) a single path through the set of >modules. None of the three is strictly necessary, and, for example >in the case of wireless networks, each is broken. Gee, the only layerings I have ever seen that had problems were the ones done badly, such as with the Internet and OSI. In my experience, if you do it right, it actually helps rather than gets in the way. Although, I know the first two conditions hold for properly layered systems and I am not sure I understand the third. Hmmm, guess I am missing something. Although, I have to admit I never quite understood how wireless caused problems. >Jon's message pointed to several previous designs, notably x-kernel, >that took a different cut. In recent work, (blatant >self-promotion alert) we tried to formalize these approaches in our >Sigcomm 2007 paper called "An Axiomatic Basis for Communication." > >Interestingly, our approach only addressed the data plane. When we >move to the control plane, as Jon hinted, things get very hard very >fast. Essentially, the problem is that of race conditions: the same >state variable can be touched by different entities in different >ways (think of routing updates), and so it becomes hard to tell what >the data plane is going to actually do. In fact, given a >sufficiently large network, some chunk of the network is always >going to be in an inconsistent state. So, even >eventual-always-convergence becomes hard to achieve or prove. >Nevertheless, this line of attack does give some insights into >alternatives to layer-ism. The solution to that then is to not let the network get too large! Simple.
;-) Looking at your paper it seems to tend toward the beads-on-a-string model that the phone companies have always favored. Those never had very good scaling properties. It is not clear how differences of scope are accommodated. But then differences of scope sort of requires some sort of layer, doesn't it? So how does this architecture facilitate scaling? Still confused, ;-) John Day From rkrishnan at comcast.net Thu May 15 14:10:03 2008 From: rkrishnan at comcast.net (Rajesh Krishnan) Date: Thu, 15 May 2008 17:10:03 -0400 Subject: [e2e] end of interest Message-ID: <200805152109.m4FL9WqE014661@boreas.isi.edu> Similar arguments (any modularization can ultimately see cross-cutting concerns if the requirements placed on the design change radically) are made for aspect-oriented programming; AOP has been most successful in environments that support reflection. Other than some research systems (such as Singularity from Microsoft Research), systems programming environments currently are not friendly to reflection (and the cost of indirection). Best Regards, Rajesh 617 512 1550 -- Sent from my Treo. Sorry about typos. 
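Rajesh's point — aspect-oriented programming handles cross-cutting concerns best in environments that support reflection — can be sketched in a reflective language, where an aspect is woven onto existing modules without editing them. The class and the call-counting aspect below are hypothetical illustration:

```python
# A cross-cutting concern (call counting) woven onto an existing class
# via reflection, AOP-style, without touching the class's own source.
import functools

class Stack:
    def __init__(self): self.items = []
    def push(self, x): self.items.append(x)
    def pop(self): return self.items.pop()

calls = {}

def weave_counting(cls):
    # reflection: enumerate the class's methods and wrap each one
    for name in list(vars(cls)):
        fn = getattr(cls, name)
        if callable(fn) and not name.startswith("__"):
            @functools.wraps(fn)
            def wrapped(self, *a, _fn=fn, _name=name, **kw):
                calls[_name] = calls.get(_name, 0) + 1   # the aspect
                return _fn(self, *a, **kw)               # the original method
            setattr(cls, name, wrapped)
    return cls

weave_counting(Stack)
s = Stack(); s.push(1); s.push(2); s.pop()
print(calls)  # {'push': 2, 'pop': 1}
```

The indirection cost Rajesh mentions is visible here: every call now goes through an extra wrapper frame, which is exactly what systems programming environments are reluctant to pay.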
-- -----Original Message----- From: "Jon Crowcroft" To: "John Day" Cc: end2end-interest at postel.org Sent: 5/15/08 8:40 AM Subject: Re: [e2e] end of interest for me, modularisation in layers is part of hierarchical categorisation, which is fine for librarians - categories (as in women, fire and dangerous things) led to a more complex way of modularisation which ended up being the OO paradigm with multiple inheritance - as an implementation paradigm, this turned out to be too hard for the hard of thinking, so most OO languages restricted inheritance and refinement to hierarchy again - but in most programmes, one is concerned with a much lower dimensionality problem space than the multi-stakeholder system that is a network architecture where a more subtle and messy abstraction may be fine - for example one might just have constraints on resources and different constraints on identifiers, and different constraints on reliability and the way solutions are componentized across a set of nodes (hosts, routers, if you must) an architecture that was purely constraint based (i.e. just said what you DONT do) would be very interesting:) In missive , John Day typed: >>At 10:52 -0400 2008/05/11, David P. Reed wrote: >>>Snooping honors the Layeristi - granting them rhetorical power they >>>never deserved. It sounds like "cheating" or "illegal" operation. >>>The Internet was born without layers - it used architectural >> From dpreed at reed.com Thu May 15 15:20:48 2008 From: dpreed at reed.com (David P. Reed) Date: Thu, 15 May 2008 18:20:48 -0400 Subject: [e2e] Layering vs. modularization In-Reply-To: <482BC4F0.7020409@isi.edu> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> Message-ID: <482CB740.9080106@reed.com> Two challenges to proponents of the thesis that "layering can be done right". a) What specific problem in network systems design does a particular layering solve?
Can you quantify the adequacy of solution (without resorting to use of the concept of layering in a circular manner). And is there an alternative solution to that problem? I'd suggest that if you can't state the problem unambiguously and in a way that admits of alternative approaches, then layering always "works". It's like saying that evolution creates the optimum result. It's either vacuous to say that, or worse, it's eugenicism. As a matter of rhetoric, saying that "layering" works when "done right" is unfalsifiable. It's a tautology because it's essentially just uninterpreted symbols on the page. (I would be presumptive to state the problem, since I would claim that it's not very useful. But I'd be convinced, for example, if someone picked a problem like: "allowing the Internet to evolve with minimum disruption due to new applications demanded by users". An alternative approach would be to state that all implementations of network protocols should be written in a common language with an open source license agreement and a community process. One could then test the hypothesis that layering helps by various evaluative metrics). b) Explain protocol encapsulation (sending IPv6 datagrams within UDP VPN packets over a TCP based overlay network implemented in userspace stacks on machines that offload part of the VPN implementation to a peer within a bluetooth subnet) as a form of layering? It seems to me that encapsulation is akin to allowing recursion in one's language. Languages that allow recursion are unlike FORTRAN 77, which is "layered". c) where does "security" go in a functional layering? I would argue - everywhere, and nowhere. Information leaks at all levels of a layered system. Denial of service is possible at all layers. It seems to me that "layering" is a collective hallucination. Joe Touch wrote: > > > John Day wrote: > ... >> Gee, the only layering I have ever seen that had problems were the >> ones done badly, such as with the Internet and OSI. 
In my >> experience, if you do it right, it actually helps rather than gets in >> the way. > > That's been my perspective as well; most of the issues I've seen with > layering problems - including "layer violation" really turned out to > be gaps in the layering structure that were patched over with > violations rather than fixed correctly. > > We've seen a few key issues with layering (insert self promotion > warning here), and our observations suggest that it may be a > fundamental construct, rather than an artifact to be "erased" in a > clean-slate approach: > > 1. layering often focuses on the layers and ignores the inter-layer > glue; Yu-Shun Wang's thesis (a recent PhD student of mine) focused on > this issue, and found numerous ways in which the interlayer glue was > similar at nearly every layer boundary, and could be addressed by an > additional, meaningful generic mechanism > > 2. layering disagreements sometimes revolve around what each layer > means, as unique from other layers; our NSF FIND "RNA" project (to > appear at ICCCN, Future Internet track) is exploring the ways in which > layers are internally similar, but their behavior is governed by their > relative position (what's above and below) and their scope > (distance/time extent). This turns the "one layer to bind them all" > (ala XTP) into a single 'stem cell' layer that acts like different, > well-known OSI layers when stacked. The idea is that the environment > of a layer defines the layer, and that layering is semantically useful > in composing services on each other. > > Joe > From touch at ISI.EDU Thu May 15 15:48:19 2008 From: touch at ISI.EDU (Joe Touch) Date: Thu, 15 May 2008 15:48:19 -0700 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CB740.9080106@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> Message-ID: <482CBDB3.2040904@isi.edu> David P. 
Reed wrote: > Two challenges to proponents of the thesis that "layering can be done > right". > > a) What specific problem in network systems design does a particular > layering solve? There are a few: - spatial localization of scope - supporting functional composition (which supports things like function reuse, which makes the result more accommodating to redesign and or dynamic use) > Can you quantify the adequacy of solution (without > resorting to use of the concept of layering in a circular manner). Part of it is supporting the modularity of protocol functions and reuse of capabilities. Another part is the simplicity of the result. It's like the difference between structured programming on a stack machine and machine language programming with only global variables. > And > is there an alternative solution to that problem? Of course there is; the question is whether other solutions map well to the problem, or are unnecessarily inefficient or awkward. > I'd suggest that if > you can't state the problem unambiguously and in a way that admits of > alternative approaches, then layering always "works". "works" means that layering maps better to how we develop and use protocols than non-layered solutions. It's relative; you can always shoe-horn a solution. > It's like saying > that evolution creates the optimum result. It's either vacuous to say > that, or worse, it's eugenicism. As a matter of rhetoric, saying that > "layering" works when "done right" is unfalsifiable. It's a tautology > because it's essentially just uninterpreted symbols on the page. I'll agree that "layering works when it's done right" is a useless assertion. I'll contend instead that "layering is easier to get right than non-layering". ... 
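Joe's answer to (a) — layering as functional composition that supports reuse — can be sketched as composable encode/decode pairs, where a "stack" is just an ordered list of such pairs and any layer can be reused at another position. This is a toy sketch, not any real stack:

```python
# Protocols as composable encode/decode pairs; a stack is an ordered
# list of pairs, so layers are reusable, swappable modules.
def framing(payload: bytes) -> bytes:
    return len(payload).to_bytes(2, "big") + payload

def unframe(data: bytes) -> bytes:
    n = int.from_bytes(data[:2], "big")
    return data[2:2 + n]

def checksum(payload: bytes) -> bytes:
    return bytes([sum(payload) % 256]) + payload

def uncheck(data: bytes) -> bytes:
    assert data[0] == sum(data[1:]) % 256, "corrupt"
    return data[1:]

stack = [(framing, unframe), (checksum, uncheck)]  # top-to-bottom order

def send(msg: bytes) -> bytes:
    for enc, _ in reversed(stack):   # outermost (lowest) layer applied last
        msg = enc(msg)
    return msg

def recv(wire: bytes) -> bytes:
    for _, dec in stack:             # peel headers from the outside in
        wire = dec(wire)
    return wire

assert recv(send(b"hello")) == b"hello"
```

Reordering or duplicating entries in `stack` changes the protocol without touching any layer's code, which is the reuse argument in miniature.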
> b) Explain protocol encapsulation (sending IPv6 datagrams within UDP > VPN packets over a TCP based overlay network implemented in userspace > stacks on machines that offload part of the VPN implementation to a peer > within a bluetooth subnet) as a form of layering? See our RNA paper. A reason we developed RNA was to explain the contradiction that: A- there are "7 layers", each defining a unique set of services B- services once believed to be unique to a single layer are getting replicated at various layers C- overlays should be consistent with layering (B) suggests there might be a common template that describes what a protocol is, which is instantiated in different ways (a 'stem' protocol) (C) suggests that the stem protocol would be partly defined by the layers above and layers below, i.e., its behavior is relative to the services it expects (below) and those it provides (above) > It seems to me that > encapsulation is akin to allowing recursion in one's language. > Languages that allow recursion are unlike FORTRAN 77, which is "layered". Languages that allow recursion can be layered too, ala Pascal. It just means that the layers are defined at runtime. > c) where does "security" go in a functional layering? Layering isn't purely functional. That is what, IMO, is wrong with (A) in the list above. Functions aren't constrained to a single layer. > I would argue - > everywhere, and nowhere. Information leaks at all levels of a layered > system. Denial of service is possible at all layers. DOS at layer K obviously affects layers (>=K), but not layers below (<K); DOS has a "scope layer", i.e., the lowest layer it affects. > It seems to me that "layering" is a collective hallucination. The same was said about structured programming, and unfortunately we've somehow ended up in a pseudo-Flatland in which there are only 2-3 levels of namespaces and yet we have over-structured our datatypes (C++).
That move was motivated, AFAICT, by low-level programmers who didn't appreciate the need for structure until it was too late, and ended up adding it in the wrong place (IMO). I hope we don't make the same mistake with protocol architectures. Joe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20080515/e448def4/signature.bin From ggm at apnic.net Thu May 15 15:56:03 2008 From: ggm at apnic.net (George Michaelson) Date: Fri, 16 May 2008 08:56:03 +1000 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CB740.9080106@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> Message-ID: <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> On 16/05/2008, at 8:20 AM, David P. Reed wrote: > > b) Explain protocol encapsulation (sending IPv6 datagrams within > UDP VPN packets over a TCP based overlay network implemented in > userspace stacks on machines that offload part of the VPN > implementation to a peer within a bluetooth subnet) as a form of > layering? It seems to me that encapsulation is akin to allowing > recursion in one's language. Languages that allow recursion are > unlike FORTRAN 77, which is "layered". recursion requires that first-class data constructs in the language be respected, so stack frame boundaries, globals etc are meaningful. encapsulation doesn't require this. the encapsulated protocol has its own e2e significance and its own routing. for the purposes of encapsulation, its just data. therefore the comparison (as in most analogies?) for me, is not a good fit. actually, I think most things described as recursive usually aren't. 
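George's "it's just data" point — each encapsulating layer prepends its own header and never interprets what is inside — can be sketched directly; the depth of nesting is unbounded even though no layer knows anything about the others. The one-byte "protocol" tags below are invented for illustration:

```python
# Encapsulation as plain nesting: each wrap prepends a header and treats
# everything inside as opaque bytes. Tags are hypothetical, not real formats.
def encap(tag: bytes, payload: bytes) -> bytes:
    return tag + payload            # inner bytes are never interpreted

def decap(packet: bytes) -> tuple:
    return packet[:1], packet[1:]   # a layer strips only its own header

pkt = b"data"
for tag in [b"6", b"U", b"T", b"4"]:   # say, v6-in-UDP-in-TCP-in-v4
    pkt = encap(tag, pkt)              # depth is unbounded -- no fixed stack

# unwinding recovers the tags in reverse, without any layer peeking deeper
tags = []
while len(pkt) > len(b"data"):
    t, pkt = decap(pkt)
    tags.append(t)
assert tags == [b"4", b"T", b"U", b"6"] and pkt == b"data"
```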
stateful packet inspectors *might* need a re-write, but that aside, I don't see how anything other than a bug would make the outer V6 active units need to read the inner V4 payload, or vice versa -G From touch at ISI.EDU Thu May 15 17:55:03 2008 From: touch at ISI.EDU (Joe Touch) Date: Thu, 15 May 2008 17:55:03 -0700 Subject: [e2e] Layering vs. modularization In-Reply-To: <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> Message-ID: <482CDB67.1030306@isi.edu> Hi, George (et al.), George Michaelson wrote: > > On 16/05/2008, at 8:20 AM, David P. Reed wrote: >> >> b) Explain protocol encapsulation (sending IPv6 datagrams within UDP >> VPN packets over a TCP based overlay network implemented in userspace >> stacks on machines that offload part of the VPN implementation to a >> peer within a bluetooth subnet) as a form of layering? It seems to >> me that encapsulation is akin to allowing recursion in one's >> language. Languages that allow recursion are unlike FORTRAN 77, >> which is "layered". > > recursion requires that first-class data constructs in the language be > respected, so stack frame boundaries, globals etc are meaningful. > > encapsulation doesn't require this Strictly, recursion and encapsulation assume that the inner/lower layer respects the boundary. It doesn't have to unless there are enforcement mechanisms - that's the difference between strict and non-strict languages. Encapsulation strictness is enforceable - just encrypt the payload. 
> stateful packet inspectors *might* need a re-write, but that aside, I > don't see how anything other than a bug would make the outer V6 active > units need to read the inner V4 payload, or vice versa Outer V6 would read inner V4 to support path 'fate sharing', i.e., when doing multipath routing it's useful to ensure that 'flows' traverse similar paths, and in this case the V4 address could be the best cue to a flow. Joe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20080515/86b8094c/signature.bin From ggm at apnic.net Thu May 15 18:15:21 2008 From: ggm at apnic.net (George Michaelson) Date: Fri, 16 May 2008 11:15:21 +1000 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CDB67.1030306@isi.edu> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> <482CDB67.1030306@isi.edu> Message-ID: <0CD10179-A30E-4744-9C5D-59128489B203@apnic.net> On 16/05/2008, at 10:55 AM, Joe Touch wrote: > Hi, George (et al.), > > George Michaelson wrote: >> On 16/05/2008, at 8:20 AM, David P. Reed wrote: >>> >>> b) Explain protocol encapsulation (sending IPv6 datagrams within >>> UDP VPN packets over a TCP based overlay network implemented in >>> userspace stacks on machines that offload part of the VPN >>> implementation to a peer within a bluetooth subnet) as a form of >>> layering? It seems to me that encapsulation is akin to allowing >>> recursion in one's language. Languages that allow recursion are >>> unlike FORTRAN 77, which is "layered". >> recursion requires that first-class data constructs in the language >> be respected, so stack frame boundaries, globals etc are meaningful. 
>> encapsulation doesn't require this > > Strictly, recursion and encapsulation assume that the inner/lower > layer respects the boundary. It doesn't have to unless there are > enforcement mechanisms - that's the difference between strict and > non-strict languages. > > Encapsulation strictness is enforceable - just encrypt the payload. I think there is a conversation about 'is-ness' here. A router (for instance) can quite happily pass encapsulated traffic without looking at it. The "me" of the router doesn't have to look at payload. Might be slower, but it will work. A compiler on the other hand, has to know how to manage true in- language recursion because the "me" is meaningful for all recursively called instances. They might indeed map into discrete cores of CPU run, but something has to be able to unwind the stack of nested recursive calls. there is a "me" which has to know recursion is being used. I would suggest even in the more lax languages this is true. > > >> stateful packet inspectors *might* need a re-write, but that aside, >> I don't see how anything other than a bug would make the outer V6 >> active units need to read the inner V4 payload, or vice versa > > Outer V6 would read inner V4 to support path 'fate sharing', i.e., > when doing multipath routing it's useful to ensure that 'flows' > traverse similar paths, and in this case the V4 address could be the > best cue to a flow. to abuse an analogy, dealing with 'fate sharing' is inventing optimisations. For the purposes of this discussion and its comparisons, if you compile gcc -O then you get what you paid for. If you want to complain about an ICE, you can expect to be asked to use lower levels of optimization. Likewise with any router which wants to be smart about its encapsulated payload. I'm told amusing stories about the MPLS routers which mistook the conceptual layerings (for want of a better word) in their internal route map, and can cause havoc. 
Likewise the external debug paths which sometimes use the MPLS, and sometimes use independent routing: one kind of view says "it's not broken" and the other kind doesn't, and it depends which you are on, to see the problem.. I also think layering is an over-stated concept. Having worked on systems which mapped the logical layers strictly into procedural boundaries, each independently implemented, I am very aware of the runtime cost of that strict separation. Equally, debugging systems which dive down or use language tricks like macros to implement functional elements more effectively is hard. Explaining to people that your hardware is calculating the TCP checksum for you can be quite interesting. I think humans by default obey mental models of layers quite a lot in their internal conceptualizations, and layer violations are moments for the thinker. OTOH explanations that simplify things are what Terry Pratchett calls 'lies to children' in education, and arguments persist about the necessary lies, and when you reveal the truth.. -George > > > Joe > From dpreed at reed.com Thu May 15 18:24:15 2008 From: dpreed at reed.com (David P. Reed) Date: Thu, 15 May 2008 21:24:15 -0400 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CBDB3.2040904@isi.edu> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <482CBDB3.2040904@isi.edu> Message-ID: <482CE23F.2060205@reed.com> Perhaps when you write "layering" you include ANY form of modularization, rather than a *totally ordered* stack of protocol implementations, where each element in the stack merely takes the interface provided by the layer below and implements the new layer in terms of the primitives exported by the layer below. The layering concept is a particularly narrow form of structuring. It implies that there is a single correct linear sequence for building up a complete functional system.
While it admits a nice sequential proof structure - each layer can be proved based on the proof of the layer below - the idea that modifiability is simplified is hardly true. A counterexample: if you change/extend the behavior of a layer, you must reimplement and re-prove all layers above that layer. Thus, the deeper the stack, the less benefit of modularity. And "splitting a function across multiple layers" runs a huge risk. A function is typically specified by a small number of specification clauses. One must partition the function specification into multiple subproblems (presumably not present in the original spec, because they are partial) in order to prove one layer at a time. That means that the proof strategy depends on "inventing invariants" within a function's specification creatively. This means that when functions are added or modified, the invented invariants that are needed to split the function across layers have to be "re-invented". So the idea that modularity gains by totally ordered layers is not at all obvious, and the RNA idea sounds like it only makes the matter worse! Joe Touch wrote: > > > David P. Reed wrote: >> Two challenges to proponents of the thesis that "layering can be done >> right". >> >> a) What specific problem in network systems design does a particular >> layering solve? > > There are a few: > - spatial localization of scope > - supporting functional composition (which supports > things like function reuse, which makes the result > more accommodating to redesign and or dynamic use) > >> Can you quantify the adequacy of solution (without resorting to use >> of the concept of layering in a circular manner). > > Part of it is supporting the modularity of protocol functions and > reuse of capabilities. Another part is the simplicity of the result. > It's like the difference between structured programming on a stack > machine and machine language programming with only global variables. 
> >> And is there an alternative solution to that problem? > > Of course there is; the question is whether other solutions map well > to the problem, or are unnecessarily inefficient or awkward. > >> I'd suggest that if you can't state the problem unambiguously and in >> a way that admits of alternative approaches, then layering always >> "works". > > "works" means that layering maps better to how we develop and use > protocols than non-layered solutions. It's relative; you can always > shoe-horn a solution. > >> It's like saying that evolution creates the optimum result. It's >> either vacuous to say that, or worse, it's eugenicism. As a matter >> of rhetoric, saying that "layering" works when "done right" is >> unfalsifiable. It's a tautology because it's essentially just >> uninterpreted symbols on the page. > > I'll agree that "layering works when it's done right" is a useless > assertion. I'll contend instead that "layering is easier to get right > than non-layering". > > ... >> b) Explain protocol encapsulation (sending IPv6 datagrams within UDP >> VPN packets over a TCP based overlay network implemented in userspace >> stacks on machines that offload part of the VPN implementation to a >> peer within a bluetooth subnet) as a form of layering? > > See our RNA paper. 
A reason we developed RNA was to explain the > contradiction that: > > A- there are "7 layers", each defining a unique set of services > B- services once believed to be unique to a single layer are > getting replicated at various layers > C- overlays should be consistent with layering > > (B) suggests there might be a common template that describes what a > protocol is, which is instantiated in different ways (a 'stem' protocol) > > (C) suggests that the stem protocol would be partly defined by the > layers above and layers below, i.e., its behavior is relative to the > services it expects (below) and those it provides (above) > > > It seems to me that >> encapsulation is akin to allowing recursion in one's language. >> Languages that allow recursion are unlike FORTRAN 77, which is >> "layered". > > Languages that allow recursion can be layered too, ala Pascal. It just > means that the layers are defined at runtime. > >> c) where does "security" go in a functional layering? > > Layering isn't purely functional. That is what, IMO, is wrong with (A) > in the list above. Functions aren't constrained to a single layer. > >> I would argue - everywhere, and nowhere. Information leaks at all >> levels of a layered system. Denial of service is possible at all >> layers. > > DOS at layer K obviously affects layers (>=K), but not layers below > (<K); DOS has a "scope layer", i.e., the lowest layer it affects. > >> It seems to me that "layering" is a collective hallucination. > > The same was said about structured programming, and unfortunately > we've somehow ended up in a pseudo-Flatland in which there are only > 2-3 levels of namespaces and yet we have over-structured our datatypes > (C++). > > That move was motivated, AFAICT, by low-level programmers who didn't > appreciate the need for structure until it was too late, and ended up > adding it in the wrong place (IMO). I hope we don't make the same > mistake with protocol architectures.
> > Joe > From dpreed at reed.com Thu May 15 18:27:28 2008 From: dpreed at reed.com (David P. Reed) Date: Thu, 15 May 2008 21:27:28 -0400 Subject: [e2e] Layering vs. modularization In-Reply-To: <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> Message-ID: <482CE300.9070902@reed.com> Encapsulation means that the "totally ordered stack of layers" suddenly can grow to infinite depth, as the sequence of layers is laid on top of a subsequence selected from the same layers. Layering has nothing to do with "looking inside the packets". That concept is not layering, it is something quite different - having to do with lack of "interpretation" of bit strings. George Michaelson wrote: > > On 16/05/2008, at 8:20 AM, David P. Reed wrote: >> >> b) Explain protocol encapsulation (sending IPv6 datagrams within UDP >> VPN packets over a TCP based overlay network implemented in userspace >> stacks on machines that offload part of the VPN implementation to a >> peer within a bluetooth subnet) as a form of layering? It seems to >> me that encapsulation is akin to allowing recursion in one's >> language. Languages that allow recursion are unlike FORTRAN 77, >> which is "layered". > > recursion requires that first-class data constructs in the language be > respected, so stack frame boundaries, globals etc are meaningful. > > encapsulation doesn't require this. the encapsulated protocol has its > own e2e significance and its own routing. for the purposes of > encapsulation, its just data. > > therefore the comparison (as in most analogies?) for me, is not a good > fit. actually, I think most things described as recursive usually > aren't.
> > stateful packet inspectors *might* need a re-write, but that aside, I > don't see how anything other than a bug would make the outer V6 > active units need to read the inner V4 payload, or vice versa > > -G > From dpreed at reed.com Thu May 15 18:30:01 2008 From: dpreed at reed.com (David P. Reed) Date: Thu, 15 May 2008 21:30:01 -0400 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CDB67.1030306@isi.edu> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> <482CDB67.1030306@isi.edu> Message-ID: <482CE399.1040507@reed.com> The following argument that you make by dragging in "fate sharing" suggests that your mental model is not about layering at all. You are discussing dynamic behaviors, and layering has NOTHING to do with dynamics of packet transport, any more than modularity in a programming language has anything to do with the speed of a CPU's various ALU and memory operations. Joe Touch wrote: > Hi, George (et al.), > > George Michaelson wrote: >> >> On 16/05/2008, at 8:20 AM, David P. Reed wrote: >>> >>> b) Explain protocol encapsulation (sending IPv6 datagrams within >>> UDP VPN packets over a TCP based overlay network implemented in >>> userspace stacks on machines that offload part of the VPN >>> implementation to a peer within a bluetooth subnet) as a form of >>> layering? It seems to me that encapsulation is akin to allowing >>> recursion in one's language. Languages that allow recursion are >>> unlike FORTRAN 77, which is "layered". >> >> recursion requires that first-class data constructs in the language >> be respected, so stack frame boundaries, globals etc are meaningful. >> >> encapsulation doesn't require this > > Strictly, recursion and encapsulation assume that the inner/lower > layer respects the boundary. 
It doesn't have to unless there are > enforcement mechanisms - that's the difference between strict and > non-strict languages. > > Encapsulation strictness is enforceable - just encrypt the payload. > >> stateful packet inspectors *might* need a re-write, but that aside, I >> don't see how anything other than a bug would make the outer V6 >> active units need to read the inner V4 payload, or vice versa > > Outer V6 would read inner V4 to support path 'fate sharing', i.e., > when doing multipath routing it's useful to ensure that 'flows' > traverse similar paths, and in this case the V4 address could be the > best cue to a flow. > > Joe > From ggm at apnic.net Thu May 15 19:02:56 2008 From: ggm at apnic.net (George Michaelson) Date: Fri, 16 May 2008 12:02:56 +1000 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CE300.9070902@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> <482CE300.9070902@reed.com> Message-ID: <51D5A086-EBB2-485D-B0BB-EC10CB8AFA51@apnic.net> On 16/05/2008, at 11:27 AM, David P. Reed wrote: > Encapsulation means that the "totally ordered stack of layers" > suddenly can grow to infinite depth, as the sequence of layers is > laid on top of a subsequence selected from the same layers. but only if you violate some pretend privacy rule and look at what in context is actually just data. What element of a given node's functional role, in a layer context, demanded it have consciousness of the payload to perform its role? > > > Layering has nothing to do with "looking inside the packets". That > concept is not layering, it is something quite different - having to > do with lack of "interpretation" of bit strings. 
I can't help feeling that there is a Chinese-room or black-box view on this that says if you don't NEED to look at what's in a bit string to perform your functional role, then if you do, you are possibly exhibiting side effects.. -George From ggm at apnic.net Thu May 15 19:10:25 2008 From: ggm at apnic.net (George Michaelson) Date: Fri, 16 May 2008 12:10:25 +1000 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CE300.9070902@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> <482CE300.9070902@reed.com> Message-ID: On 16/05/2008, at 11:27 AM, David P. Reed wrote: > Encapsulation means that the "totally ordered stack of layers" > suddenly can grow to infinite depth, as the sequence of layers is > laid on top of a subsequence selected from the same layers. to riff on the encapsulation word, there are some real-world views on it emerging, for instance the V6 routing community in Europe very strongly deprecate and police world-wide emerging V6 tunnels. They don't like 'em, because of the end to end RTT and weird routing consequences for the 'system as a whole' -OTOH many people who want to 'promote' V6 love them, because they magically increase the footprint. Likewise considering MPLS (which is almost always an IP sublayer providing magic flow routing to an IP payload encapsulation) you get some very odd real-world outcomes when you eg forget the MPLS cloud is worldwide, link in Asia, and see a low-AS path preference emerge in France for a device in .. Tokyo. We (the wider we) all expect our VPN to work. Demand it even. I've even been shown that in some circumstances the net benefit to me of using far-off-faroffia's routing view when I am on VPN is magically 'better' even allowing for the double-packet transits. Which is odd, but then, so is the likely double or triple compress/encode consequences of SSH over VPN over ... 
-George From touch at ISI.EDU Thu May 15 19:33:14 2008 From: touch at ISI.EDU (Joe Touch) Date: Thu, 15 May 2008 19:33:14 -0700 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CE23F.2060205@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <482CBDB3.2040904@isi.edu> <482CE23F.2060205@reed.com> Message-ID: <482CF26A.1040407@isi.edu> David P. Reed wrote: > Perhaps when you write "layering" you include ANY form of > modularization, rather than > > a *totally ordered* stack of protocol implementations, where each > element in the stack merely takes the interface provided by the > layer below and implements the new layer in terms of the primitives > exported by the layer below. > > The layering concept is a particularly narrow form of structuring. It > implies that there is a single correct linear sequence for building up a > complete functional system. It presumes only that there is a total ordering. Such ordering can include branches and variations - only that any particular path has a unique order. > While it admits a nice sequential proof structure - each layer can be > proved based on the proof of the layer below - the idea that > modifiability is simplified is hardly true. A counterexample: > > if you change/extend the behavior of a layer, you must reimplement > and re-prove all layers above that layer. Thus, the deeper the stack, > the less benefit of modularity. That's no more true for protocols than for structured programming. Changes within a layer are hidden from others; changes that traverse layers need to be accommodated only by at least the next layer. The extent of that impact depends on the nature of the change. > And "splitting a function across multiple layers" runs a huge risk. It depends on how functions are split. 
Retransmission, e.g., can happen within fragments at a link layer (ARQ), and within segments at a transport layer, and both can - and arguably should - co-exist. > A > function is typically specified by a small number of specification > clauses. One must partition the function specification into multiple > subproblems (presumably not present in the original spec, because they > are partial) in order to prove one layer at a time. > > That means that the proof strategy depends on "inventing invariants" > within a function's specification creatively. This means that when > functions are added or modified, the invented invariants that are needed > to split the function across layers have to be "re-invented". It seems like you're convincing yourself that it's hard to redefine a stack to retain a service; that may or may not be true. It may be easier, though, to redefine a stack by reassembling components to provide different services. Joe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20080515/7747b4a1/signature.bin From touch at ISI.EDU Thu May 15 19:35:44 2008 From: touch at ISI.EDU (Joe Touch) Date: Thu, 15 May 2008 19:35:44 -0700 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CE399.1040507@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> <482CDB67.1030306@isi.edu> <482CE399.1040507@reed.com> Message-ID: <482CF300.3070205@isi.edu> David, David P. Reed wrote: > The following argument that you make by dragging in "fate sharing" > suggests that your mental model is not about layering at all. 
You are > discussing dynamic behaviors, and layering has NOTHING to do with > dynamics of packet transport, any more than modularity in a programming > language has anything to do with the speed of a CPU's various ALU and > memory operations. I gave a specific example below of cases where it's useful for one layer to expose information (in this case, groups of V4 addresses within a single V6 address) to another. That's precisely about layering, and something that elsewhere has been called layer violation. Joe ... >>> stateful packet inspectors *might* need a re-write, but that aside, I >>> don't see how anything other than a bug would make the outer V6 >>> active units need to read the inner V4 payload, or vice versa >> >> Outer V6 would read inner V4 to support path 'fate sharing', i.e., >> when doing multipath routing it's useful to ensure that 'flows' >> traverse similar paths, and in this case the V4 address could be the >> best cue to a flow. >> >> Joe >> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20080515/d99f10bf/signature.bin From touch at ISI.EDU Thu May 15 19:39:51 2008 From: touch at ISI.EDU (Joe Touch) Date: Thu, 15 May 2008 19:39:51 -0700 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CE300.9070902@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <45353451-432E-43FD-BDA7-DCFBAC755C82@apnic.net> <482CE300.9070902@reed.com> Message-ID: <482CF3F7.8080001@isi.edu> David P. Reed wrote: > Encapsulation means that the "totally ordered stack of layers" suddenly > can grow to infinite depth, as the sequence of layers is laid on top of > a subsequence selected from the same layers. Packets have finite length. 
Infinite layering is the layering equivalent of nonterminating recursion, which isn't useful either. > Layering has nothing to do with "looking inside the packets". That > concept is not layering, it is something quite different - having to do > with lack of "interpretation" of bit strings. It's typically called "layer violation", and often means that the information needed at a particular layer has not been properly passed down by the (typically higher) layer. What those bits mean is "interpretation", but we're talking about peeking across the boundary, not how to map what you see to its meaning. Joe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20080515/30466100/signature.bin From L.Wood at surrey.ac.uk Fri May 16 01:13:21 2008 From: L.Wood at surrey.ac.uk (L.Wood@surrey.ac.uk) Date: Fri, 16 May 2008 09:13:21 +0100 Subject: [e2e] Layering vs. modularization References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <482CBDB3.2040904@isi.edu> <482CE23F.2060205@reed.com> Message-ID: <603BF90EB2E7EB46BF8C226539DFC20701316B28@EVS-EC1-NODE1.surrey.ac.uk> David P. Reed wrote on Fri 2008-05-16 2:24: > if you change/extend the behavior of a layer, you must reimplement > and re-prove all layers above that layer. Thus, the deeper the stack, > the less benefit of modularity. No, just the layer above it - develop a new link layer, describe how to run IP over it, you're done as far as IP-based apps are concerned. This is an example of the classic computer science 'you can do anything with a single level of indirection'. L. if layering is a hallucination, so is "computer science". -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20080516/ee11d041/attachment-0001.html From dpreed at reed.com Fri May 16 16:55:11 2008 From: dpreed at reed.com (David P. Reed) Date: Fri, 16 May 2008 19:55:11 -0400 Subject: [e2e] Layering vs. modularization In-Reply-To: <482CF26A.1040407@isi.edu> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <482CBDB3.2040904@isi.edu> <482CE23F.2060205@reed.com> <482CF26A.1040407@isi.edu> Message-ID: <482E1EDF.2020608@reed.com> Joe Touch wrote: > >> The layering concept is a particularly narrow form of structuring. >> It implies that there is a single correct linear sequence for >> building up a complete functional system. > > It presumes only that there is a total ordering. Such ordering can > include branches and variations - only that any particular path has a > unique order. > Huh? I think you are describing a lattice or poset, which is called a Partial ordering, not Total. Interesting, but not a "layering". You're describing a modular structure, not a layered architecture. From touch at ISI.EDU Fri May 16 17:15:20 2008 From: touch at ISI.EDU (Joe Touch) Date: Fri, 16 May 2008 17:15:20 -0700 Subject: [e2e] Layering vs. modularization In-Reply-To: <482E1EDF.2020608@reed.com> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482BC4F0.7020409@isi.edu> <482CB740.9080106@reed.com> <482CBDB3.2040904@isi.edu> <482CE23F.2060205@reed.com> <482CF26A.1040407@isi.edu> <482E1EDF.2020608@reed.com> Message-ID: <482E2398.2040307@isi.edu> Hi, David, David P. Reed wrote: > > > Joe Touch wrote: >> >>> The layering concept is a particularly narrow form of structuring. >>> It implies that there is a single correct linear sequence for >>> building up a complete functional system. >> >> It presumes only that there is a total ordering. Such ordering can >> include branches and variations - only that any particular path has a >> unique order. 
>> > Huh? I think you are describing a lattice or poset, which is called a > Partial ordering, not Total. Interesting, but not a "layering". Lattice is probably closest. Partial order implies a few things I'm not sure I want. > Youre describing a modular structure, not a layered architecture, The order defines the layers; the fact that there is an order isn't sufficient to define either layering or modularity. Joe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20080516/2bbd3a48/signature.bin From day at std.com Fri May 16 18:24:12 2008 From: day at std.com (John Day) Date: Fri, 16 May 2008 21:24:12 -0400 Subject: [e2e] Layering vs. modularization In-Reply-To: <482E23BF.8070109@cs.uwaterloo.ca> References: <661C7B2F-04D6-41C7-A108-01A046E35D17@uwaterloo.ca> <482E23BF.8070109@cs.uwaterloo.ca> Message-ID: At 20:15 -0400 2008/05/16, Martin Karsten wrote: >John Day wrote: >>At 20:55 -0400 2008/05/14, S. Keshav wrote: >>>This note addresses the recent discussion on layering as a form of >>>modularization. >>> >>>Layering is one particular (but not very good) form of >>>modularization. Modularization, as in programming, allows >>>separation of concerns and clean interface design. Layering goes >>>well beyond, insisting on (a) progressively higher levels of >>>abstraction, i.e. an enforced conceptual hierarchy, (b) a >>>progressively larger topological scope along this hierarchy, and >>>(c) a single path through the set of modules. None of the three is >>>strictly necessary, and, for example in the case of wireless >>>networks, is broken. >> >>Gee, the only layering I have ever seen that had problems were the >>ones done badly, such as with the Internet and OSI. In my >>experience, if you do it right, it actually helps rather than gets >>in the way. 
Although, I know the first two conditions hold for >>properly layered systems and I am not sure I understand the third. >>Hmmm, guess I am missing something. Although, I have to admit I >>never quite understood how wireless caused problems. > >I guess the problem lies with the exact understanding of "layering", >as has been pointed out by several others in this thread. If >layering refers to a protocol instance completely hiding all >services that are used to provide this instance's service, that >seems to be a problem. Layering per se is a different story. For >example, naming/addressing across non-cooperative networks seems to >always result in a stack of names with different scopes and of >course: stack of names == layers. This is even less understandable. Why would a layer hide all services? A layer should make services visible but hide the functions. A service is by definition what is visible across the layer boundary. > >However, tightly integrating a certain functionality package with a >specific addressing scheme (such as in contemporary "layers") seems >to inhibit flexibility unnecessarily. This sounds to me to have a number of unwarranted assumptions in the axioms. >>>Jon's message pointed to several previous designs, notably >>>x-kernel, that took a different cut. In recent work, (blatant >>>self-promotion alert) we tried to formalize these approaches in >>>our Sigcomm 2007 paper called "An Axiomatic Basis for >>>Communication." >>> >>>Interestingly, our approach only addressed the data plane. When we >>>move to the control plane, as Jon hinted, things get very hard >>>very fast. Essentially, the problem is that of race conditions: >>>the same state variable can be touched by different entities in >>>different ways (think of routing updates), and so it becomes hard >>>to tell what the data plane is going to actually do. In fact, >>>given a sufficiently large network, some chunk of the network is >>>always going to be in an inconsistent state. 
So, even >>>eventual-always-convergence becomes hard to achieve or prove. >>>Nevertheless, this line of attack does give some insights into >>>alternatives to layer-ism. >> >>The solution to that then is to not let the network get too large! >>Simple. ;-) >> >>Looking at your paper it seems to tend toward the beads-on-a-string >>model that the phone companies have always favored. Those never had >>very good scaling properties. It is not clear how differences of >>scope are accommodated. But then differences of scope sort of >>requires some sort of layer, doesn't it? So how does this >>architecture facilitate scaling? > >It's not an architecture (as in 'implementation blueprint'), but a >model with the goal of helping to reason about network >architectures. In its current form, it really only covers >naming/addressing. The strings that you mention are just >representative of the existing stacks of protocol headers, so >there's nothing special here. I agree, multiple layers of naming >might be the only option to facilitate scaling, but then again, why >does a naming layer always have to be bundled with a specific >package of other functionality? That would seem to depend on for what resources scaling was required. 
Take care, John From liqian at microsoft.com Thu May 29 11:30:23 2008 From: liqian at microsoft.com (Liqian Luo) Date: Thu, 29 May 2008 11:30:23 -0700 Subject: [e2e] ACM SenSys 2008: Call for Posters and Demos Message-ID: <715287EC0AFFA842B30C3431B73D1AE14F8D66D705@NA-EXMSG-C105.redmond.corp.microsoft.com> *** We apologize in advance if you receive multiple copies of this CfP *** ============================================================================ ACM SenSys 2008: Call for Posters and Demos The 6th ACM Conference on Embedded Networked Sensor Systems November 5-7, 2008 Raleigh, NC, USA http://sensys.acm.org/2008/ ============================================================================ Overview Demonstrations Sensys 2008 solicits demonstrations showing innovative research and applications. Sensys is interested in demonstrations of technology, platforms, algorithms and applications of wireless sensor networks. The demo session will provide researchers the opportunity to demonstrate their work and obtain feedback from interested conference attendees. Submissions from both industry and universities are encouraged. Demos will be evaluated based on technical merit and innovation as well as their potential to stimulate interesting discussions and exchange of ideas. Accepted demos will appear in Sensys proceedings. Overview Posters Sensys 2008 solicits posters showing exciting early work on sensor systems. While the poster need not describe completed work, it should report on research for which at least preliminary results are available. Submissions from both industry and universities are encouraged. Posters will be evaluated based on technical merit and innovation as well as their potential to stimulate interesting discussions and exchange of ideas. Accepted posters will appear in Sensys proceedings. 
Important dates: Demo and poster submission deadline: July 25th, 2008 Notification of acceptance: August 8th, 2008 Camera-ready abstract: August 22nd, 2008 Topics of Interest: - Sensor network architecture and protocols - Rich sensor systems leveraging RFID, mobile devices (e.g., cell phones), cameras, robotics, etc. - Analysis of real-world systems and fundamental limits - Sensor network planning, provisioning, calibration and deployment - Deployment experience and testbeds - Experimental methods, including measurement, simulation, and emulation infrastructure - Programming methodology - Operating systems - Sensor network algorithms such as localization, routing, time synchronization, clustering, topology control, and coverage control algorithms - Failure resilience and fault isolation - Energy management - Data, information, and signal processing - Data storage and management - Distributed actuation and control - Applications - Security and privacy - Integration with back-end systems such as web-based information systems, process control, and Enterprise software Submission Instructions for Demonstrations: Demo abstracts must be formatted according to the ACM conference proceedings style with a font size of at least 10 points in PDF format. Style format is identical to main conference submissions. Demo abstracts may not exceed 2 pages and should be submitted by email to whitehouse AT cs . Virginia . edu with a subject line reading SENSYS 2008 DEMO SUBMISSION Demo Co-Chairs: Kamin Whitehouse (UVa) Yunhao Liu (HKUST) Submission Instructions for Posters: Poster abstracts must be formatted according to the ACM conference proceedings style with a font size of at least 10 points in PDF format. Style format is identical to main conference submissions. Poster abstracts may not exceed 2 pages and should be submitted by email to tianhe @ cs . umn . edu with a subject line reading SENSYS 2008 POSTER SUBMISSION Poster Co-Chairs: Philippe Bonnet (U. 
Copenhagen) Tian He (UMN) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20080529/14dfe512/attachment.html