[e2e] experimenting on customers
Jon.Crowcroft at cl.cam.ac.uk
Thu Apr 24 04:59:09 PDT 2008
it is crucial to realize that the regulation of experiments of any kind is a
post hoc rationalisation rather than any kind of actual model of what
ought to be the outcome
almost all new successfully deployed
protocols in the last 15-20 years have been ahead of any curve
to do with IETF processes, ISPs planning, provisioning,
legal system comprehension...
end users download and run stuff
(even if they don't compromise their OS, they
in the millions download facebook apps daily that compromise their privacy and
potentially break laws in some parts of the world)
they upgrade or don't upgrade their end systems and their home xDSL/wifi routers
every one of these may be a controlled experiment when conducted in isolation,
and with full support of the software developer, but in combination
they are clearly not
we don't know the emergent properties of these things until people notice them
(in nanog, in court, in government, or, occasionally, by doing measurement)
frankly, even within a single node, i remember roger needham explaining over 10
years ago that it had become impossible for microsoft to run regression testing
across all combinations of devices and drivers and OS versions because the
numbers had just Got Too Big already (2-3 new devices per day etc etc)
so now do that with networked interactions...
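the point about regression testing can be put in arithmetic terms: the number of configurations to test is the product of the options along each independent axis, so it grows multiplicatively. a minimal sketch (the axis counts below are made-up illustrative numbers, not figures from needham or microsoft):

```python
# Illustrative only: configuration count = product of options per axis.
# The axis sizes here are hypothetical, chosen just to show the explosion.
from math import prod

def total_configs(options_per_axis):
    """Number of distinct test configurations across independent axes."""
    return prod(options_per_axis)

# e.g. 500 device models x 3 driver versions each x 10 OS builds
print(total_configs([500, 3, 10]))   # 15000 configurations
# add one more axis (say 4 firmware revisions) and it quadruples
print(total_configs([500, 3, 10, 4]))  # 60000
```

with a few new devices arriving per day, the first axis alone keeps growing, and every addition multiplies rather than adds to the test matrix — and that is before any networked interaction between nodes is considered.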
all experiments by net customers are experiments on net customers...
of course, the one thing we can't do with the one true internet (since it is now
holy critical infrastructure) is proper destructive testing
(we can't even figure out LD50:)
In missive <480F296F.7020807 at reed.com>, "David P. Reed" typed:
>>Dave - as I mentioned earlier, there is a huge difference between
>>experimenting on customers and letting customers experiment.
>>Your post equates the two. I suggest that the distinction is crucial.
>>And that is the point of the end-to-end argument, at least 80% of it,
>>when applied to modularity between carriers and users, or the modularity
>>between systems vendors and users, or the modularity between companies
>>that would hope to support innovative research and researchers.
>>Dave Crocker wrote:
>>> Jon Crowcroft wrote:
>>>> I dont understand all this - most software in the last 30
>>>> years is an experiment on customers - the internet as a whole
>>>> is an experiment
>>>> so if we are honest, we'd admit this and say
>>>> what we need is a pharma model of informed consent
>>>> yeah, even discounts
>>> In looking over the many postings in this thread, the word
>>> "experiment" provides the most leverage both for insight and for confusion.
>>> Experiments come in very different flavors, notably with very
>>> different risk.
>>> When talking about something on the scale of the current, public
>>> Internet, or American democracy, or global jet travel, the term
>>> "experiment" reminds us that we do not fully understand impact. But the
>>> term also denotes a risk of failure which cannot reasonably apply for
>>> these grander uses. (After a few hundred years, if a civilization dies
>>> off, is it a "failure", even though we label it an experiment?) In
>>> other words, we use the word "experiment" here in a non-technical way,
>>> connoting the unknown, rather than denoting manipulation, diligent
>>> study and incremental refinement.
>>> So, some of the complaints about being unable to experiment on the open
>>> Internet simply do not make sense, any more than "testing" a radically
>>> new concrete -- with no use experience -- on a freeway bridge would
>>> make sense. Risk is obviously too high; in fact, failure early in the
>>> lifecycle of a new
>>> technology is typically guaranteed. Would you drive over that sucker?
>>> Or under it?
>>> So if someone is going to express concerns about barriers to adoption,
>>> such as
>>> a lack of flexibility by providers or product companies, they need to
>>> accompany it with a compelling adoption case that shows sufficiently
>>> low risk
>>> and sufficiently high benefit. Typically, that needs to come from real
>>> experimentation, meaning early-stage development, real testing, and pilot
>>> deployment. (Quite nicely this has the not-so-minor side benefit of
>>> grooming an increasingly significant constituency that wants the
>>> technology adopted.)
>>> Businesses do not deploy real experiments in their products and services.
>>> Costs and risks are both far too high. What they deploy are features
>>> that provide relatively assured benefits.
>>> As for "blocking" experiments by others, think again of the bridge.
>>> Collateral damage requires that public infrastructure services be
>>> particularly conservative in permitting change.
>>> In the early 90s, when new routing protocols were being defined and
>>> debated, it was also noted that there was no 'laboratory' large enough
>>> to test the protocols to scale, prior to deployment in the open
>>> Internet. One thought was to couple smaller test networks, via
>>> tunnels across the Internet. I suppose Internet II counts as a modern
>>> alternative. In other words, real experiments need real laboratories.
>>> The other challenge for this particular thread is that the term
>>> end-to-end is
>>> treated as a rigid absolute, but never has actually been that. It is
>>> a term
>>> of relativity defined by two boundary points. The modern Internet
>>> has added more complexity between the points, as others have noted.
>>> Rather than a simplistic host-net dichotomy we have layers of
>>> intermediary nets, and often layers of application hosts. (Thank you,
>>> akamai...) We also have layers outside of what we typically call
>>> end-points, such as services that treat email as underlying
>>> infrastructure, rather than "the" application.
>>> And we have layers of trust (tussles).
>>> And, and, and...
>>> So when claiming to need end-to-end, the question is which of many
>>> possible ends?
>>> And, for what purpose, or one might say... to what end?