[e2e] experimenting on customers
dhc2 at dcrocker.net
Tue Apr 22 20:55:08 PDT 2008
Jon Crowcroft wrote:
> I dont understand all this - most software in the last 30
> years is an experiment on customers - the internet as a whole
> is an experiment
> so if we are honest, we'd admit this and say
> what we need is a pharma model of informed consent
> yeah, even discounts
In looking over the many postings in this thread, the word "experiment"
provides the most leverage both for insight and for confusion.
Experiments come in very different flavors, notably with very different risks.
When talking about something on the scale of the current, public Internet,
or American democracy or global jet travel, the term "experiment" reminds us
that we do not fully understand impact. But the term also denotes a risk of
failure which cannot reasonably apply for these grander uses. (After a few
hundred years, if a civilization dies off, is it a "failure", even though we
label it an experiment?) In other words, we use the word "experiment" here in
a non-technical way, connoting the unknown, rather than denoting controlled
manipulation, diligent study, and incremental refinement.
So, some of the complaints about being unable to experiment on the open
Internet simply do not make sense, any more than "testing" a radically new
concrete -- with no use experience -- on a freeway bridge would make sense.
Risk is obviously too high; in fact, failure early in the lifecycle of a new
technology is typically guaranteed. Would you drive over that sucker?
So if someone is going to express concerns about barriers to adoption, such as
a lack of flexibility by providers or product companies, they need to
accompany it with a compelling adoption case that shows sufficiently low risk
and sufficiently high benefit. Typically, that needs to come from real
experimentation, meaning early-stage development, real testing, and pilot
deployment. (Quite nicely, this has the not-so-minor side benefit of grooming
an increasingly significant constituency that wants the technology adopted.)
Businesses do not deploy real experiments in their products and services.
Costs and risks are both far too high. What they deploy are features that
provide relatively assured benefits.
As for "blocking" experiments by others, think again of the bridge. Collateral
damage requires that public infrastructure services be particularly
conservative in permitting change.
In the early 90s, when new routing protocols were being defined and debated,
it was also noted that there was no 'laboratory' large enough to test the
protocols to scale, prior to deployment in the open Internet. One thought was
to couple smaller test networks, via tunnels across the Internet. I suppose
Internet II counts as a modern alternative. In other words, real experiments
need real laboratories.
The other challenge for this particular thread is that the term end-to-end is
treated as a rigid absolute, but never has actually been that. It is a term
of relativity, defined by two boundary points. The modern Internet has
added more complexity between the points, as others have noted. Rather than a
simplistic host-net dichotomy we have layers of intermediary nets, and often
layers of application hosts. (Thank you, akamai...) We also have layers
outside of what we typically call end-points, such as services that treat
email as underlying infrastructure, rather than "the" application.
And we have layers of trust (tussles).
And, and, and...
So when claiming to need end-to-end, the question is which of the many possible
end-points are meant. And, for what purpose, or one might say... to what end?