[e2e] Re: crippled Internet

Dennis Ferguson dennis at juniper.net
Sat Apr 28 09:41:16 PDT 2001


Sean,

> | (2) if it turns out that some characteristic of the infrastructure is a
> |     bad match for the application, figure out what causes the problem
> |     characteristic; and
> |
> | (3) fix that problem, then go back to (1)
> 
> Under what conditions do we fix (2) by adjusting the application,
> rather than by adjusting the infrastructure?   It seems to me that
> while the Internet infrastructure IS NOT LIKE TDM today, there are
> a vast number of applications which are quite happy with it.  

If the world of applications could be cleanly divided into those that
are clearly "TDM" and those that are clearly "Internet" then life would
be very simple indeed.  To tell the truth I like TDM networks (the kind
you might use when, for example, you need 1.536 Mbps of full duplex user
payload between point A and point B for a long time) a whole lot.  The
equipment needed to support this can be dumb as a rock, which means there
is a high probability that this network will end up being both reliable
and scalable.  The equipment is often expensive for no good reason that
I can ascertain, but this seems more like a market opportunity waiting
to be addressed rather than anything fundamental.

The problem is that there are lots of applications which don't fall
cleanly into either category.  Voice seems like one of these.  It
requires a routing and switching infrastructure of considerable
complexity on top of the TDM network to support broad connectivity.
The Internet provides the same (or essentially the same, if you
squint).  Voice conventionally occupies 64 kbps full duplex channels,
a level of overprovisioning which matches or exceeds the backbone
overprovisioning of even the most conservative providers, for two
reasons.  First,
conversations are normally half duplex, with only one end talking
at a time, but there is no way to recover the half of the bandwidth
which goes unused.  Second, compression, which is now cheap as dirt,
reduces the bandwidth requirement further, but inherently
wants to produce variable bit-rate output since the redundancy in the
data stream it can squeeze out varies.  The codecs which produce a
constant bit-rate output are actually performing an unnatural act
useful only in a TDM infrastructure (i.e. adjusting the application
to fit the infrastructure) where you can accommodate variable
bit-rate data only by provisioning for the worst case.  In the middle
of an Internet service serving lots of such applications, however,
you can provision for the average, half-duplex bandwidth and not be
wrong very often.  TDM voice works by substantial overprovisioning
using a complex, dedicated purpose infrastructure.  Moving the
application to your complex, general purpose Internet infrastructure
has some advantages if it can be done.
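The bandwidth arithmetic above can be sketched with a small simulation.
All numbers here are illustrative assumptions, not from the original
text: a 16 kbps compressed talking rate and a 50% talk-spurt probability
per direction are plausible but hypothetical figures.

```python
import random

random.seed(1)

N = 1000          # simultaneous calls (one direction each)
ACTIVE_P = 0.5    # assumed fraction of time a direction is talking
TDM_RATE = 64.0   # kbps: a conventional full-rate TDM voice channel
VBR_RATE = 16.0   # kbps: assumed average compressed rate while talking

# TDM-style worst-case provisioning: every channel gets 64 kbps.
tdm_capacity = N * TDM_RATE

# Internet-style provisioning: average active load plus 15% headroom.
provisioned = N * ACTIVE_P * VBR_RATE * 1.15

# Estimate how often the aggregate offered load exceeds the
# average-based provisioning, by sampling who is talking.
trials = 10_000
over = 0
for _ in range(trials):
    talkers = sum(random.random() < ACTIVE_P for _ in range(N))
    if talkers * VBR_RATE > provisioned:
        over += 1

print(f"TDM capacity:        {tdm_capacity:.0f} kbps")
print(f"Average provisioned: {provisioned:.0f} kbps")
print(f"Overflow fraction:   {over / trials:.4f}")
```

Under these assumptions the average-based provisioning is a small
fraction of the TDM worst-case capacity, and the aggregate load almost
never exceeds it -- which is the "provision for the average and not be
wrong very often" point in the text.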

So the question still remains, can it be done?  I didn't mean to
imply above that all problems are fixable, but you have to know
what the problem is in some detail before you can determine the
relative difficulty of making it better.  If the problem is
reliability then we need to understand what it is which causes
the unreliability before we know what to fix, or can conclude
that there is something fundamental about the problem which makes
it hard to fix.  If some of the existing infrastructure is perfect
for the application except for some occasional delay excursions, I'd
like to know exactly what it is that is causing the delays rather
than either assuming the cause, or just assuming they are inherent
and unavoidable.  I've accepted (and defended) more than a few
problems as fundamental and unfixable, and seen them go unfixed
because everyone else did too, only to discover that when someone
got around to actually understanding the real life circumstances
of the problem the root cause was a very dumb, mundane bug that
everyone just got used to looking at.  Finding and fixing the problems
which are low hanging fruit makes the infrastructure better for
everything, and so has broad benefit even if you can never convince
yourself that you've got it good enough for voice.

So no, I don't believe that every application that has been or
will be invented can necessarily be run over the Internet, and
I accept that some applications which can't be accommodated by
the Internet may be important enough to justify separate
infrastructure.  I just don't want to preclude any such application
from the Internet until there is a fundamental understanding of
why it is that the characteristics of the Internet which the
application objects to are inherently hard to change, and why
it is that the application can't be adjusted to accommodate the
hard-to-fix characteristics.  For voice it seems like we're still
at the surface-scratching phase of gaining this understanding
and, while I'm aware of lots of a priori pronouncements on what
can and can't be done and what it will take, to tell the truth I
don't trust any of that nearly as much as I value measurements and
observations from real infrastructure.

Note that the hardest part of building anything is having a crystal
clear view of the problems you need to solve by building it, and the
problems you don't need to tackle.  People who own infrastructure
used in real life have it good because, if they've cared to observe,
they know stuff about real life that others can only guess at.  I know
this because I used to know stuff too, stuff that other people found
surprising.  The problem was, and is, getting this knowledge quantified
sufficiently so others can understand it.  While I don't doubt that
your intuition that some of this stuff won't coexist has a good
chance of being correct, I'd prefer to see some numbers so that I
could understand it too.

Dennis Ferguson
