[e2e] Re: [Tsvwg] Really End-to-end or CRC vs everything else?

Jonathan Stone jonathan at DSG.Stanford.EDU
Mon Jun 11 15:02:30 PDT 2001


In message <5.1.0.14.2.20010611172743.04607430 at mail.reed.com>,
"David P. Reed" writes:


>At 04:48 PM 6/11/01 -0400, Craig Partridge wrote:
>>I think you've missed the point.  In a prior note, you suggested a line of
>>thinking of assume an adversary.  Implicitly, that's an error model.

>When you are trying to think through what kinds of errors software and 
>hardware design flaws might introduce, it helps to classify them by 
>computational complexity - it takes resources to screw stuff up in a 
>complicated way.

*Almost*.  I think what you are reaching for here is more the
Kolmogorov-Chaitin information in the errors.  But even
that isn't quite right, for error detection.

The problem is that for error-detection, what counts as
``complicated'' is determined by the error-detection function.  Look
at the 16-bit deletions we caught on a LAN at Stanford.  They have a
*very* complicated description if we have to describe them as
error polynomials.  If we are allowed to describe them as deletions,
they have very short descriptions.

The trouble is that this very short description doesn't tell you
how "complex" the error appears to your error-check function.
And that's what you care about.
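
Here's a quick Python sketch (my own illustration; the frame contents
and the deletion offset are made up) of the gap between the two
descriptions: the deletion takes a few words to state, but the
additive error pattern the check function actually sees smears across
the whole tail of the frame.

    # "Delete two bytes at offset k" is a short description; the error
    # pattern the check function sees is not.
    import os

    frame = os.urandom(64)             # an arbitrary 64-byte frame
    k = 10                             # hypothetical deletion offset
    damaged = frame[:k] + frame[k+2:]  # drop 16 bits (two bytes)

    # Pad the damaged frame back to length and XOR against the original:
    # this is the "error polynomial" view of the same event.
    padded = damaged + b"\x00\x00"
    error_pattern = bytes(a ^ b for a, b in zip(frame, padded))

    changed = sum(1 for b in error_pattern if b)
    print("deletion description: drop 2 bytes at offset", k)
    print("bytes differing in error pattern:", changed, "of", len(frame))

On random data nearly every byte after the deletion point differs,
even though the event itself is describable in one short sentence.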



>The computational complexity needed to screw up cryptographic hash is known 
>to be high,

Not quite.  What's believed to be computationally intractable about
one-way hashes is coming up with a message which matches some
pre-specified hash, or a pair of inputs which collide on the same
hash.  It doesn't say that computing H1 = H(M1) for a given M1
is fiendishly hard; your argument about MD5 was, after all, that this
was not much worse than CRCs to compute in software, right?
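
To make that concrete, here's a minimal timing sketch (mine, not from
the earlier mail) of the forward direction: hashing a given message is
cheap, in the same ballpark as a software CRC; the hard part is
inverting the hash or finding collisions.

    # Forward computation of a one-way hash vs. a software CRC.
    import hashlib, os, timeit, zlib

    buf = os.urandom(1 << 20)   # 1 MiB of arbitrary data

    crc_t = timeit.timeit(lambda: zlib.crc32(buf), number=100)
    md5_t = timeit.timeit(lambda: hashlib.md5(buf).digest(), number=100)

    print("CRC-32, 100 x 1 MiB: %.3fs" % crc_t)
    print("MD5,    100 x 1 MiB: %.3fs" % md5_t)

Exact numbers depend on the machine and the implementations, but
nothing about one-wayness makes the forward computation expensive.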

We can extend this to shared-secret cryptographic systems, but it
seems to me that your motivation for doing so is not detecting errors
(per se) at all.


> whereas there are computationally simple functions that can 
>screw up  simple checksums.

It really doesn't matter.  Over the space of all possible errors,
they both do equally well, given equivalent numbers of `bits'.
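
A toy enumeration (my own, over a deliberately tiny message space)
shows why: an n-bit check can only sort messages into 2^n buckets, so
over *all* errors it passes about a 2^-n fraction, no matter how
clever the function is.

    # Compare a 4-bit additive checksum against 4 bits of MD5, over
    # every possible corruption of a 16-bit message.
    import hashlib

    def simple_check(msg):
        return sum(msg) % 16                      # 4-bit additive checksum

    def hash_check(msg):
        return hashlib.md5(msg).digest()[0] % 16  # 4 bits of a crypto hash

    original = b"\x12\x34"
    for name, check in (("checksum", simple_check), ("md5/4", hash_check)):
        ref = check(original)
        undetected = 0
        for v in range(1 << 16):
            m = v.to_bytes(2, "big")
            if m != original and check(m) == ref:
                undetected += 1
        print(name, undetected, "undetected of", (1 << 16) - 1)

Both come out within a hair of 4095/65535: about 2^-4 of all possible
errors slip through either check.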


If you are trying to argue that, *in practice*, errors with higher
Kolmogorov-Chaitin information are less likely than errors with low
information....  well, we both know what white-noise looks like, right?
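
To put that last quip in concrete terms (my own aside): white noise is
precisely the high-information case, in that a noise burst has no
description much shorter than itself, which a general-purpose
compressor makes visible.

    # Noise is incompressible; simply-described errors are not.
    import os, zlib

    noise_burst = os.urandom(1024)    # white-noise-like error burst
    simple_burst = b"\xff" * 1024     # "1024 bytes of all-ones"

    print(len(zlib.compress(noise_burst)))   # slightly *larger* than 1024
    print(len(zlib.compress(simple_burst)))  # a couple of dozen bytes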


