[e2e] was double blind, now reproduceable results

Jon Crowcroft Jon.Crowcroft at cl.cam.ac.uk
Thu May 20 09:34:40 PDT 2004

I completely agree with your message here

So two things prompted me to write the original note:

1/ a message from an HCI person at a UK university who was bemoaning problems of a similar nature, and

2/ an experience we had around 4 years back with a 1st-year PhD student who was (is) very, very good and tried to work on
TCP over DiffServ. As an exercise, we took around 100 papers (there are a lot more) that had simulated TCP variants
over DiffServ variants and tried simply to understand them by running the ns code that (almost all of) them provided
for their TCP or their DiffServ router/queue-management algorithm,
together with as much as we could work out of the scripts, configuration, and analysis that they almost all did not provide.

From memory, the differences in results were explained roughly as follows:
- in about 40 cases, the results could not (initially) be reproduced at all; on close examination,
in about 15 of those cases one could get the same results if one took the 1st run ONLY of the simulation
(didn't any of them read Raj Jain's book?);
- in some cases they were running a buggy variant of a TCP (actually I only found this out for a few cases when someone
else shared a similar experience, and said they had found the bug, which was caused by failing to bind
a Tcl variable to the C one, so when tracing the Tcl variable and plotting it, effectively random plots were
generated :-) let's just say the variable might or might not have been cwnd;
- in some cases there wasn't an explanation that could easily be found.
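The "1st run ONLY" failure is exactly what Jain's book warns about: one simulation run is a single sample from a random process, so a reproducible result has to be reported as a mean over independently seeded runs with some confidence interval. A minimal sketch of the idea in Python (the toy simulate_throughput function is hypothetical, standing in for one ns run; the interval uses a normal approximation rather than the t-quantiles a small-sample analysis would use):

```python
import random
import statistics

def simulate_throughput(seed):
    # Stand-in for one simulation run: output depends on the RNG seed,
    # like a TCP throughput measurement from one ns run.
    rng = random.Random(seed)
    return 1.0 + rng.gauss(0, 0.1)

def mean_with_ci(samples, z=1.96):
    # Sample mean and a ~95% confidence half-width (normal approximation;
    # for few runs, a t-distribution quantile is more appropriate).
    m = statistics.mean(samples)
    s = statistics.stdev(samples)
    half = z * s / len(samples) ** 0.5
    return m, half

# One run proves nothing; report the spread across independent seeds.
runs = [simulate_throughput(seed) for seed in range(30)]
m, half = mean_with_ci(runs)
print(f"throughput = {m:.3f} +/- {half:.3f} (95% CI, 30 runs)")
```

Taking the first run only amounts to reporting a single sample and discarding the variance, which is why such results look reproducible only under the original seed.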

So notice this argues possibly AGAINST sharing code with papers as well as FOR it - along the lines of Ross Anderson's
recent results (and others') suggesting that reporting exploits on open source, and keeping exploits secret on
proprietary source, are about as effective statistically at preventing further problems :-)

The above ns/TCP/DiffServ example is probably made up to illustrate that this argument isn't simple, but it could also be
true -

In missive <20040520152621.GD13154 at lcs.mit.edu>, "David G. Andersen" typed:

 >>On Thu, May 20, 2004 at 09:23:02AM -0500, Saad Biaz scribed:
 >>> Reproducible results, checking, .... It sounds to me that academic
 >>> integrity is dead. Worse, it seems that we accept that it is dead.
 >>> Now, we want to set a batterie of checks... This is Tom Ridge's approach,
 >>> forgetting the source of the problem...
 >>Most reasons to attempt to reproduce earlier results have nothing
 >>to do with integrity:
 >>  * Unstated assumptions
 >>  * Refining or correcting hypotheses
 >>  * Bugs/etc.  (How much code do we all write w/in 24 hours of the deadline?)
 >>  * Measurement error
 >>  * Re-testing previous results in a somewhat different environment
 >>One of the really nice things about being forced to make your experiments
 >>as reproducible as possible is that you have to go through and examine
 >>all of your assumptions and carefully describe your experimental process.
 >>This is probably as revealing to the authors as it is to the readers
 >>in terms of understanding what it is you've actually done.
 >>  "The most exciting phrase to hear in science, the one that heralds
 >>   new discoveries, is not 'Eureka!' (I found it!) but 'That's funny...'"
 >>     --Isaac Asimov
 >>(thanks to Mike Dahlin's page for that one).  Finding results that 
 >>contradict earlier results is an opportunity to dig down and find
 >>a better answer.
 >>> of thought, and ..... particularly because we hated becoming a
 >>> salesperson. Here we are that now where many researchers must behave
 >>> like a salesperson using hype, deceit, unreproducible results....
 >>There are undoubtedly people in any community who will fudge their
 >>results in some way (witness very high profile examples with discovery
 >>of new chemical elements).  But this is not due to pressure to
 >>publish happy results - it's because they were able to obtain {fame,
 >>funding, tenure, etc.} by doing it.  It's an unavoidable problem
 >>in any circumstances.  Does computer science / networking have more
 >>of a problem with intentional dishonesty than any other field?
 >>Being a salesperson is the reality for all scientists.  Why give a
 >>good conference talk?  Why announce your cool new system on
 >>e2e-interest?  Why make your grad students read Strunk & White and
 >>spend hours editing their papers?  Because you're justifiably proud
 >>of your work and want to make sure it has impact.  Don't confound
 >>sales and dishonesty.  We should encourage one (to the proper degree!)
 >>and strongly, strongly discourage the other.
 >>  -Dave
 >>work: dga at lcs.mit.edu                          me:  dga at pobox.com
 >>      MIT Laboratory for Computer Science           http://www.angio.net/


