[e2e] Layering vs. modularization

James Kempf kempf at docomolabs-usa.com
Thu May 15 11:14:27 PDT 2008


Layering has problems in wireless, ad hoc networks. There is a whole 
collection of cross-layer design papers that show substantial improvements 
when layers are broken down. For example:

- Chen & Hsia, 2004: 1000% gain in e2e SNR using joint channel coding (PHY) 
and compression (APP) for video over wireless
- Chiang, 2005: 82% improvement in throughput per watt from joint TCP/PHY 
optimization.
- Pursley, 2002: 50-400% improvement in various delay, throughput, and 
efficiency metrics relative to min-hop routing for cross-layer MAC/NET
- Bougard et al., 2004: 50-90% reduction in energy (depending on channel 
conditions) while meeting user requirements, through joint APP/MAC/PHY 
design
...etc.

The list is courtesy of Chris Ramming, former program director for the DARPA 
MARCONI project.

The layers in the traditional IP stack often don't match well with the 
concerns relevant to ad hoc wireless networks. Energy use, for example, is a 
big concern in ad hoc (and, for that matter, any) wireless networking, but 
optimizing energy use wasn't a big issue when the IP stack was finalized in 
the '80s. So it is not reasonable to expect the IP stack to do a good job of 
addressing it.

The DARPA MARCONI project looked at using network utility maximization (NUM) 
to rearrange the layers (see the review paper on NUM by Chiang, Low, 
Calderbank, and Doyle, 2007), with promising results. NUM basically 
formulates a network architecture as the maximization of a utility function 
over a constraint set. The architecture, including protocol layers and 
intermediate network entities (middleboxes, if you will), falls out as a 
solution to the optimization problem. My take on the NUM work, which 
originated in the analysis of TCP, is that, although promising, it is still 
unclear a) how to take an unstructured problem and formulate it as a NUM 
problem, and b) once the NUM solution is available, how to map it into a 
network architecture and protocol design. Both steps seem to me to need more 
work, especially in areas outside of traditional transport concerns, such as 
security, which Chiang et al. call "externalities". Until these are solved, 
I think it will be difficult to use NUM for general architectural synthesis 
(i.e., you need to be an optimal control theorist to do a good job with it). 
Solving them is going to take some collaborative work between control 
theorists such as Calderbank et al. and the networking types who hang out on 
this list.
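
To make the formulation concrete, here is a minimal sketch (my own toy 
illustration, not taken from the MARCONI work) of the canonical NUM 
problem: maximize the sum of per-source log utilities subject to link 
capacity constraints, solved by dual decomposition with a subgradient 
update on the link prices. The topology, capacities, and step size are 
made up for the example.

    # Toy NUM: maximize sum_s log(x_s)  subject to  R x <= c,
    # solved by projected subgradient ascent on the dual (link prices).
    # The routing matrix, capacities, and step size are hypothetical.
    import numpy as np

    R = np.array([[1, 1, 0],      # R[l, s] = 1 if source s crosses link l
                  [1, 0, 1]])     # (2 links, 3 sources)
    c = np.array([1.0, 2.0])      # link capacities
    prices = np.zeros(2)          # dual variables, one per link
    step = 0.05

    for _ in range(5000):
        # Source subproblem: max log(x_s) - x_s * (price along s's path)
        # has the closed-form solution x_s = 1 / path_price.
        path_price = R.T @ prices
        x = 1.0 / np.maximum(path_price, 1e-6)
        x = np.minimum(x, 10.0)   # keep rates finite while prices are ~0
        # Link subproblem: raise the price on overloaded links, lower it
        # (but never below zero) on underused ones.
        prices = np.maximum(prices + step * (R @ x - c), 0.0)

    print("rates:", np.round(x, 3), "prices:", np.round(prices, 3))

The interesting part, in Chiang et al.'s reading, is that the familiar 
layered protocols (sources adjusting rates, links feeding back congestion 
signals) are just one distributed way of running such an iteration, and 
cross-layer designs amount to choosing a different decomposition of the 
same underlying problem.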

               jak


----- Original Message ----- 
From: "John Day" <day at std.com>
To: "S. Keshav" <keshav at uwaterloo.ca>; <end2end-interest at postel.org>
Sent: Wednesday, May 14, 2008 8:12 PM
Subject: Re: [e2e] Layering vs. modularization


At 20:55 -0400 2008/05/14, S. Keshav wrote:
>This note addresses the recent discussion on layering as a form of
>modularization.
>
>Layering is one particular (but not very good) form of
>modularization. Modularization, as in programming, allows separation
>of concerns and clean interface design. Layering goes well beyond,
>insisting on (a)
>progressively higher levels of abstraction, i.e. an enforced
>conceptual hierarchy, (b) a progressively larger topological scope
>along this hierarchy, and (c) a single path through the set of
>modules. None of the three is strictly necessary, and, in the case of
>wireless networks for example, each is broken.

Gee, the only layering I have ever seen that had problems was layering
done badly, as in the Internet and OSI.  In my experience, if you do it
right, it actually helps rather than gets in the way.  I know the first
two conditions hold for properly layered systems, but I am not sure I
understand the third.  Hmmm, guess I am missing something.  I also have
to admit I never quite understood how wireless caused problems.

>Jon's message pointed to several previous designs, notably x-kernel,
>that took a different cut. In recent work (blatant
>self-promotion alert), we tried to formalize these approaches in our
>Sigcomm 2007 paper called "An Axiomatic Basis for Communication."
>
>Interestingly, our approach only addressed the data plane. When we
>move to the control plane, as Jon hinted, things get very hard very
>fast.  Essentially, the problem is that of race conditions: the same
>state variable can be touched by different entities in different
>ways (think of routing updates), and so it becomes hard to tell what
>the data plane is going to actually do. In fact, given a
>sufficiently large network, some chunk of the network is always
>going to be in an inconsistent state. So, even
>eventual-always-convergence becomes hard to achieve or prove.
>Nevertheless, this line of attack does give some insights into
>alternatives to layer-ism.

The solution to that then is to not let the network get too large!
Simple.  ;-)

Looking at your paper, it seems to tend toward the beads-on-a-string
model that the phone companies have always favored. Those never had
very good scaling properties.  It is not clear how differences of
scope are accommodated.  But then accommodating differences of scope
sort of requires some sort of layer, doesn't it?  So how does this
architecture facilitate scaling?

Still confused,  ;-)

John Day



