[e2e] end of interest

David P. Reed dpreed at reed.com
Sat May 10 09:18:32 PDT 2008


Perhaps my main mission today can shed some light on the question of 
"overlay vs. rethink".

My main mission is about interoperation among multi-radio communications 
systems.  About 60 years ago, the separation between the physics of 
(take your pick) long-wavelength photonics or electromagnetic wave 
dynamics and the engineering of radio systems was formalized and 
hardened into practice - creating the concepts of "link" and 
"channel".  These were characterized probabilistically using 
information theory, eliminating any notion of "shared medium" from 
systems design above the antenna layer.
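To make that concrete: the canonical form of that probabilistic 
characterization is the Shannon-Hartley capacity of a point-to-point 
link over additive white Gaussian noise,

    C = B \log_2\!\left( 1 + \frac{S}{N} \right) \quad \text{bits per second,}

where B is the bandwidth and S/N the signal-to-noise ratio at the 
receiver.  Every other transmitter sharing the medium gets folded 
into the noise term N - the "shared medium" literally vanishes from 
the model.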

Now the "hot" areas of radio networking are focused where the drunk 
looks - under the lamppost of WiFi chipsets that are easily accessed (at 
least in Linux), and in trying to map the problems of networking into 
creating stable long-term relationships that look like IP Autonomous 
Systems and a teeny bit of mobility modeled on cellular roaming.

It's too hard (for a researcher who wants a quick hit to keep the 
funding spigot turned on) to look at the other attributes of 
electromagnetic systems: the lack of source and channel coding 
separation (spelled out below), the physics of four-dimensional 
propagation spaces, the constraints of antennas, and the complexity 
of synchronizing clocks and the other means of overcoming the 
inherent cost of constantly sensing the electromagnetic coupling 
between distinct systems.
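To spell out the first of those: Shannon's separation theorem says 
that over a single point-to-point link, compressing the source and 
coding for the channel as independent layers loses nothing 
asymptotically - a source with rate-distortion function R(D) can be 
delivered at distortion D over a channel of capacity C whenever

    R(D) < C.

That guarantee is specific to the point-to-point case.  In 
multi-terminal settings - broadcast, interference, and relay 
channels, i.e., real radio environments - joint source-channel 
designs are known to strictly beat separated ones in examples such 
as correlated sources over a multiple-access channel, which is why 
the layered abstraction quietly breaks down there.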

So "overlays" on an existing, but very corroded, creaky, and bad 
underlying system called "radio engineering" is all that information 
theorists can manage, all that network theorists can manage, etc.

This would not be bad if radio networking were a small, unimportant 
aspect of communications today and tomorrow.  But that assumption is 
fundamentally wrong.  Radio is *the* greenfield of communications 
opportunity, and most humans expect it to be where a large part of 
the future lies; if we include fiber along with radio - all photons 
- we can cover the entire future.

Huge aspects of that future depend on getting the low-level 
abstractions right (in the sense that they match physical reality), 
and at the same time on constructing a stack of abstractions that 
works to maximize the utility of radio.

Looking back at the abstractions we invented in 1977 or so for the 
Internet is useful, because one can easily see that we did not 
consider an all-radio, all-dynamic, adaptive model of communications.

That said, we still need to avoid what we avoided in 1977 - thinking 
that "optimization" and elimination of layers would be a good thing 
because squeezing every bit of performance out of an arbitrarily chosen 
benchmark was going to help us in the future.

The problem wasn't that we made the Internet inefficient.  It was 
that NEW problems - like today's intense interest in radio and our 
ability to process it cheaply (SDR, DSP, pervasive radios, 
reconfigurable antennas) - were not anticipated.

It's foolish to focus on the light under the lamppost (as most of the 
field does today, trying to "optimize" networks we already operate 
pretty damn well).  Instead, look out into the darkness a bit more, 
and build tools to navigate it - rough and ready, but learning all 
the time.

