[e2e] was double blind, now reproducible results
Jon.Crowcroft at cl.cam.ac.uk
Wed May 19 14:42:39 PDT 2004
so in fact I wasn't particularly aiming at the internet measurement
community - in fact, looking at IMW, PAM etc, over the last 5-10
years they have amassed a set of
i) measurement archives
ii) analytical approaches
and related things that put the rest of us to shame
the point is that when things work, they give the evidence that IF we
were handed the same system circumstances, we would have got the same
results - and, more generally, that interpolation and extrapolation from
the set of results in a genre fit together in some fashion that leads
towards improvement in our confidence in a dominant hypothesis
on the other hand a LOT of stuff is submitted to conferences
that one can make no such claim about -
my goal is of a piece with my goal of thinking about public exposure of
papers - if we encourage submission of work only when the tools and
circumstances are also subject to public scrutiny, we will create a
situation where people will stop submitting so much half-baked stuff -
thus increasing the quality of work and decreasing the workload of
program committees and editors - a classic win-win -
and the number of papers needed to get tenure will, on average, be
reduced too, since the impact of each work will increase.
who loses by this?
In missive <20040519161946.9B77B77AA5C at guns.icir.org>, Mark Allman typed:
>>> Jon - there are a large class of experimental results that can never
>>> be reproduced again (measurements of network at a point in time).
>>Let me push back for a moment ...
>> + One can argue that in the strictest sense Internet measurements can
>>   never be reproduced. There are too many variables in play to ever
>>   even hope to exactly gather the same data twice. (In labs one can
>>   obtain more reproducibility, of course. But not on live, production
>>   networks.)
>> + That said, this argues for sharing of raw measurement results so
>> that various researchers can verify the analysis, at least. Or,
>> compare various schemes for measuring some piece of information on
>> the same dataset (which is not necessarily about reproducibility,
>> but certainly within the realm of conducting good science).
>> + And, just because we cannot reproduce numbers doesn't mean we cannot
>> gather a subsequent set of data and reproduce the insights gained.
>> I.e., "reproducible" can be defined in more than one way.
>> For instance, we have a few claimed invariants in networks (ideally,
>> we should have more), such as that FTP data transfers follow some
>> particular distribution. We have evidence that the distribution is
>> the same across networks, even if the mean of the distribution is
>> different. So, someone can reproduce the insight, but maybe not the
>> exact same numbers that were found at some other point in the
>> network (and in time). Or, alternatively, if someone found that the
>> distribution didn't hold then the claimed invariant wouldn't be.
>> That's valuable input, too.
>>So, I think that I agree with you that we cannot strictly reproduce
>>Internet measurements. However, I think there are things we can do to
>>cope and to improve our overall methodology to make it more scientific
>>(carefully not stringing the words "scientific method" together, you'll
>>notice).
>>Mark Allman -- ICIR -- http://www.icir.org/mallman/
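Allman's point - that one can reproduce the insight (the shape of a distribution) without reproducing the exact numbers - can be sketched with a two-sample Kolmogorov-Smirnov comparison. This is a synthetic illustration only: the "transfer size" samples below are randomly generated, not real measurements, and the normalize-by-the-mean step is just one plausible way to factor out scale.

```python
import bisect
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the largest gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    def ecdf(sample, x):
        # fraction of the sample that is <= x
        return bisect.bisect_right(sample, x) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

random.seed(42)
# Two synthetic "transfer size" samples: the same log-normal shape,
# but one network's transfers are 2.5x larger on average.
site_a = [random.lognormvariate(8.0, 1.5) for _ in range(2000)]
site_b = [2.5 * random.lognormvariate(8.0, 1.5) for _ in range(2000)]

# The raw numbers clearly differ between the two "measurements" ...
raw_gap = ks_statistic(site_a, site_b)

# ... but once each sample is normalized by its own mean,
# the distributional shape (the claimed "invariant") agrees.
norm_a = [x / (sum(site_a) / len(site_a)) for x in site_a]
norm_b = [x / (sum(site_b) / len(site_b)) for x in site_b]
shape_gap = ks_statistic(norm_a, norm_b)

print(f"KS distance, raw sizes:       {raw_gap:.3f}")
print(f"KS distance, mean-normalized: {shape_gap:.3f}")
```

The raw samples give a large KS distance (the exact numbers don't reproduce), while the mean-normalized samples give a small one (the shape does) - the sense in which an invariant, rather than a measurement, is reproducible.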