[e2e] experimenting on customers

David P. Reed dpreed at reed.com
Fri Apr 25 14:21:03 PDT 2008


There is no doubt that individuals and small groups can have big 
effects.  Ken Thompson and a couple of others created Unix, which 
disentangled the idea of an operating system from particular hardware, 
in a few months of inspired hacking; Dan Bricklin and Bob Frankston 
launched the business use of personal computing with a few months of 
coding in Bob's attic; and 30-40 grad students across the Northern 
Hemisphere launched the Internet as an overlay network that changed the world.

Such small experiments, which scale by explicitly exploiting others' 
technology without needing to "ask permission", should be encouraged, not 
discouraged.

Encouraged even though they "gore the oxen" of vendors of the exploited 
technologies, and threaten to exceed the bounds of control desired by 
control-freak governments.  Such fears are promoted by those who dislike 
new ideas and unpredictability.

We NEED fast, cheap, and out-of-control inventions.   Those inventions 
are the only source of results that can deal with the equally dangerous 
exponential growth of populations, pollution, climate risks, etc.

With all the risks, and the need for mindfulness on the part of the 
inventors themselves, that attend such powerful agencies of change.

Jon Crowcroft wrote:
> the point is that very very small people can do very very big experiments - there
> was some controversy about this, for example, at NSDI last year when the
> BitTyrant people revealed that they had released their variant of the torrent
> tool with a modified incentive algorithm to see what would happen with a lot of
> users - as with all good psycho-science (and some anthropology:), the users
> can't know you are doing the experiment, coz that might interfere with the
> validity (sounds like Asimov's psychohistory;).....
> but of course that has interesting ethical impact...
>
> but that's not my main point, which is:
>
> something as small as a few lines' change in a p2p program 
> which is then run by 1000s or millions of users, 
> has a MASSIVE potential (and actual) impact on the traffic pattern, 
> which has a massive impact on the ISPs (infrastructure) 
> which has a massive impact on the other users.
>
> so just because you cannot alter an IP header or a TCP option, since
> a) the middleboxes get in the way,
> and
> b) the vendors won't put it in the OS stack for you anyhow,
> does NOT mean you cannot do BIG Network Science
> one bit. not at all.
>
> oh no
>
>
> In missive <48109844.2040209 at reed.com>, "David P. Reed" typed:
>
>  >>So what's the point you're making Jon?   Users' experiments impact other 
>  >>users?  What does that have to do with vendors experimenting on their 
>  >>users?  The cases are different qualitatively and in kind, and the risks 
>  >>and liabilities are different, as in any multiparty system of 
>  >>constraints and desires.
>  >>
>  >>Unless, of course, you think that there is no difference at all between 
>  >>the King and his subjects, or between the President and 
>  >>commander-in-chief and a homeless person.
>  >>
>  >>Experiments by the powerful upon the weak/dependent are quite different 
>  >>from experiments with limited impact and scale taking place in a vast 
>  >>ocean of relatively unaffected people.
>  >>
>  >>There is no binary "good vs. evil" logic here.  There is no black and 
>  >>white.  But the difference is plain unless you abstract away every 
>  >>aspect of reality other than the term "experiment".
>  >>
>  >>Jon Crowcroft wrote:
>  >>> it is crucial to realize that the regulation of experiments of any kind is a 
>  >>> post hoc rationalisation rather than any kind of actual model of what 
>  >>> ought to be the outcome
>  >>>
>  >>> almost all new successfully deployed
>  >>> protocols in the last 15-20 years have been ahead of any curve 
>  >>> to do with IETF processes, ISPs' planning, provisioning,
>  >>> legal system comprehension...
>  >>>
>  >>> end users download and run stuff 
>  >>> (even if they don't compromise their OS, they
>  >>> in the millions download facebook apps daily that compromise their privacy and
>  >>> potentially break laws in some parts of the world)
>  >>>
>  >>> they upgrade or don't upgrade their end systems and their home xDSL/wifi routers'
>  >>> firmware
>  >>>
>  >>> every one of these may be a controlled experiment when conducted in isolation,
>  >>> and with full support of the software developer, but in combination
>  >>> they are clear
>  >>> blue sky...
>  >>>
>  >>> we don't know the emergent properties of these things until people notice them
>  >>> (in nanog, in court, in government, or, occasionally, by doing measurement
>  >>> experiments...)
>  >>>
>  >>> frankly, even within a single node, i remember roger needham explaining over 10
>  >>> years ago that it had become impossible for microsoft to run regression testing
>  >>> across all combinations of devices and drivers and OS versions because the
>  >>> numbers had just Got Too Big already (2-3 new devices per day etc etc)
>  >>> so now do that with networked interactions...
>  >>>
>  >>> ergo:
>  >>> all experiments by net customers are experiments on net customers...
>  >>>
>  >>> of course, the one thing we can't do with the one true internet (since it is now
>  >>> holy critical infrastructure) is proper destructive testing 
>  >>> (we can't even figure out LD50:)
>  >>>
>  >>> In missive <480F296F.7020807 at reed.com>, "David P. Reed" typed:
>  >>>
>  >>>  >>Dave - as I mentioned earlier, there is a huge difference between 
>  >>>  >>experimenting on customers and letting customers experiment.
>  >>>  >>
>  >>>  >>Your post equates the two.  I suggest that the distinction is crucial.  
>  >>>  >>And that is the point of the end-to-end argument, at least 80% of it, 
>  >>>  >>when applied to modularity between carriers and users, or the modularity 
>  >>>  >>between systems vendors and users, or the modularity between companies 
>  >>>  >>that would hope to support innovative research and researchers.
>  >>>  >>
>  >>>  >>
>  >>>  >>
>  >>>  >>Dave Crocker wrote:
>  >>>  >>>
>  >>>  >>>
>  >>>  >>> Jon Crowcroft wrote:
>  >>>  >>>> I don't understand all this - most software in the last 30
>  >>>  >>>> years is an experiment on customers - the internet as a whole
>  >>>  >>>> is an experiment 
>  >>>  >>> ...
>  >>>  >>>> so if we are honest, we'd admit this and say
>  >>>  >>>> what we need is a pharma model of informed consent
>  >>>  >>>> yeah, even discounts
>  >>>  >>>
>  >>>  >>>
>  >>>  >>> In looking over the many postings in this thread, the word 
>  >>>  >>> "experiment" provides the most leverage both for insight and for 
>  >>>  >>> confusion.
>  >>>  >>>
>  >>>  >>> Experiments come in very different flavors, notably with very 
>  >>>  >>> different risk.
>  >>>  >>> When talking about something on the scale of the current, public 
>  >>>  >>> Internet,
>  >>>  >>> or American democracy or global jet travel, the term "experiment" 
>  >>>  >>> reminds us
>  >>>  >>> that we do not fully understand impact. But the term also denotes a 
>  >>>  >>> risk of
>  >>>  >>> failure which cannot reasonably apply to these grander uses.  (After 
>  >>>  >>> a few
>  >>>  >>> hundred years, if a civilization dies off, is it a "failure", even 
>  >>>  >>> though we
>  >>>  >>> label it an experiment?) In other words, we use the word "experiment" 
>  >>>  >>> here in
>  >>>  >>> a non-technical way, connoting the unknown, rather than denoting 
>  >>>  >>> controlled
>  >>>  >>> manipulation, diligent study and incremental refinement.
>  >>>  >>>
>  >>>  >>> So, some of the complaints about being unable to experiment on the open
>  >>>  >>> Internet simply do not make sense, any more than "testing" a radically 
>  >>>  >>> new concrete -- with no use experience -- on a freeway bridge would 
>  >>>  >>> make sense. Risk is obviously too high; in fact, failure early in the 
>  >>>  >>> lifecycle of a new
>  >>>  >>> technology is typically guaranteed.  Would you drive over that sucker? 
>  >>>  >>> Or under it?
>  >>>  >>>
>  >>>  >>> So if someone is going to express concerns about barriers to adoption, 
>  >>>  >>> such as
>  >>>  >>> a lack of flexibility by providers or product companies, they need to
>  >>>  >>> accompany it with a compelling adoption case that shows sufficiently 
>  >>>  >>> low risk
>  >>>  >>> and sufficiently high benefit.  Typically, that needs to come from real
>  >>>  >>> experimentation, meaning early-stage development, real testing, and pilot
>  >>>  >>> deployment.  (Quite nicely this has the not-so-minor side benefit of 
>  >>>  >>> grooming an increasingly significant constituency that wants the 
>  >>>  >>> technology adopted.)
>  >>>  >>>
>  >>>  >>> Businesses do not deploy real experiments in their products and services.
>  >>>  >>> Costs and risks are both far too high.  What they deploy are features 
>  >>>  >>> that provide relatively assured benefits.
>  >>>  >>>
>  >>>  >>> As for "blocking" experiments by others, think again of the bridge. 
>  >>>  >>> Collateral damage requires that public infrastructure services be 
>  >>>  >>> particularly conservative in permitting change.
>  >>>  >>>
>  >>>  >>> In the early 90s, when new routing protocols were being defined and 
>  >>>  >>> debated, it was also noted that there was no 'laboratory' large enough 
>  >>>  >>> to test the protocols at scale, prior to deployment in the open 
>  >>>  >>> Internet.  One thought was to couple smaller test networks, via 
>  >>>  >>> tunnels across the Internet.  I suppose Internet2 counts as a modern 
>  >>>  >>> alternative.  In other words, real experiments need real laboratories.
>  >>>  >>>
>  >>>  >>> The other challenge for this particular thread is that the term 
>  >>>  >>> end-to-end is
>  >>>  >>> treated as a rigid absolute, but never has actually been that.  It is 
>  >>>  >>> a term
>  >>>  >>> of relativity defined by two boundary points.  The modern Internet 
>  >>>  >>> has added more complexity between the points, as others have noted.  
>  >>>  >>> Rather than a simplistic host-net dichotomy we have layers of 
>  >>>  >>> intermediary nets, and often layers of application hosts.  (Thank you, 
>  >>>  >>> akamai...) We also have layers outside of what we typically call 
>  >>>  >>> end-points, such as services that treat email as underlying 
>  >>>  >>> infrastructure, rather than "the" application.
>  >>>  >>>
>  >>>  >>> And we have layers of trust (tussles).
>  >>>  >>>
>  >>>  >>> And, and, and...
>  >>>  >>>
>  >>>  >>> So when claiming to need end-to-end, the question is which of many 
>  >>>  >>> possible ends?
>  >>>  >>>
>  >>>  >>> And, for what purpose, or one might say... to what end?
>  >>>  >>>
>  >>>  >>> d/
>  >>>  >>>
>  >>>
>  >>>  cheers
>  >>>
>  >>>    jon
>  >>>
>  >>>
>  >>>   
>
>  cheers
>
>    jon
>
>
>   


More information about the end2end-interest mailing list