[e2e] Re: [Tsvwg] Really End-to-end or CRC vs everything else?

David P. Reed dpreed at reed.com
Mon Jun 11 15:14:19 PDT 2001


Not to belabor this, but just passing the check-function test is not 
sufficient.  A datagram must also pass a variety of other tests to be accepted 
by the protocol (correct sequence number, correct address, ...).

So the simple combinatorial argument you sketch below doesn't quite 
work.  You can't synthesize just any datagram that passes the checksum.  It 
also has to meet the other constraints.
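
For concreteness, the kind of acceptance test I mean looks roughly like the
sketch below (Python; the field names are invented for illustration, and
sequence-number wraparound is ignored):

    def accept_segment(checksum_ok, src_addr, expected_addr, seq, rcv_nxt, rcv_wnd):
        # Passing the check function is only the first gate.
        if not checksum_ok:
            return False
        # Correct address: the datagram has to belong to this conversation.
        if src_addr != expected_addr:
            return False
        # Correct sequence number: it has to land in the receive window
        # (wraparound ignored for brevity).
        if not (rcv_nxt <= seq < rcv_nxt + rcv_wnd):
            return False
        return True

A forged or corrupted datagram that happens to satisfy the check function
still has to clear every other gate before it is accepted.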

At 03:02 PM 6/11/01 -0700, Jonathan Stone wrote:
>In message <5.1.0.14.2.20010611172743.04607430 at mail.reed.com>,
>"David P. Reed" writes:
>
>
> >At 04:48 PM 6/11/01 -0400, Craig Partridge wrote:
> >>I think you've missed the point.  In a prior note, you suggested a line of
> >>thinking: assume an adversary.  Implicitly, that's an error model.
>
> >When you are trying to think through what kinds of errors software and
> >hardware design flaws might introduce, it helps to classify them by
> >computational complexity - it takes resources to screw stuff up in a
> >complicated way.
>
>*Almost*.  I think what you are reaching for here is more the
>Kolmogorov-Chaitin information in the errors.  But even
>that isn't quite right, for error detection.
>
>The problem is that for error-detection, what counts as
>``complicated'' is determined by the error-detection function.  Look
>at the 16-bit deletions we caught on a LAN at Stanford.  They have a
>*very* complicated description, if we have to describe them as
>error polynomials.  If we are allowed to describe them as deletions,
>they have very short descriptions.
>
>The problem is that this very short description doesn't tell you
>how "complex" the error appears to your error-check function.
>And that's what you care about.
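
A toy illustration of that contrast, with made-up bytes rather than the
Stanford traces; only the shape of the difference matters here:

    original = bytes(range(1, 33))              # a 32-byte toy packet
    k = 10                                      # drop the 16-bit word at byte offset 10
    damaged = original[:k] + original[k + 2:]   # the whole event, described as a deletion

    # Described instead as an additive error pattern -- roughly the view a CRC's
    # error polynomial takes, ignoring even the length change -- the same event
    # touches nearly every byte past the deletion point, because everything
    # after it has shifted:
    diff = bytes(a ^ b for a, b in zip(original, damaged))
    print(diff.hex())          # zeros up to offset 10, then a dense nonzero tail
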
>
>
>
> >The computational complexity needed to screw up a cryptographic hash is known
> >to be high,
>
>Not quite.  What's believed to be computationally intractable about
>one-way hashes is coming up with a message which matches some
>pre-specified hash, or a pair of inputs which collide on the same
>hash. It doesn't say that coming up with M1:H1 such that H1 == H(M1)
>is fiendishly hard; your argument about MD5 was, after all, that this
>was not much worse than CRCs to compute in software, right?
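
That relative-cost claim is easy enough to sanity-check on whatever machine is
handy; the sketch below just times Python's library routines (both implemented
in C underneath), so treat the output as a ballpark, not a verdict:

    import hashlib, timeit, zlib

    buf = bytes(1500)        # one Ethernet-sized payload of zeros
    n = 100_000

    t_md5 = timeit.timeit(lambda: hashlib.md5(buf).digest(), number=n)
    t_crc = timeit.timeit(lambda: zlib.crc32(buf), number=n)
    print("MD5   : %.2f us per 1500-byte buffer" % (t_md5 / n * 1e6))
    print("CRC-32: %.2f us per 1500-byte buffer" % (t_crc / n * 1e6))
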
>
>We can extend this to shared-secret cryptographic systems, but it
>seems to me that your motivation for doing so is not detecting errors
>(per se) at all.
>
>
> > whereas there are computationally simple functions that can
> >screw up  simple checksums.
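
One concrete instance of such a simple function: swap two 16-bit words.  The
Internet checksum is an order-independent sum, so it cannot see the swap,
while a CRC does.  Toy packet, purely illustrative:

    import zlib

    def inet_checksum(data):
        # RFC 1071 Internet checksum: one's-complement sum of 16-bit words.
        if len(data) % 2:
            data += b'\x00'
        s = 0
        for i in range(0, len(data), 2):
            s += (data[i] << 8) | data[i + 1]
        while s > 0xFFFF:
            s = (s & 0xFFFF) + (s >> 16)
        return (~s) & 0xFFFF

    pkt = bytes(range(16, 48))                         # a 32-byte toy packet
    swapped = pkt[:4] + pkt[6:8] + pkt[4:6] + pkt[8:]  # swap the words at offsets 4 and 6

    print(inet_checksum(pkt) == inet_checksum(swapped))   # True: the checksum is fooled
    print(zlib.crc32(pkt) == zlib.crc32(swapped))          # False: the CRC sees the reordering
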
>
>It really doesn't matter.  Over the space of all possible errors,
>they both do equally well, given equivalent numbers of `bits'.
>
>
>If you are trying to argue that, *in practice*, errors with higher
>Kolmogorov-Chaitin information are less likely than errors with low
>information...  well, we both know what white-noise looks like, right?
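
The white-noise case is easy to simulate.  In the sketch below the low 16 bits
of CRC-32 stand in for a true 16-bit CRC, and a deliberately naive 16-bit word
sum stands in for the simple end of the spectrum; against packets replaced by
random bits, both let through roughly 1 in 2**16:

    import os, zlib

    def word_sum16(data):
        # A deliberately naive check: 16-bit words summed mod 2**16.
        return sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2)) & 0xFFFF

    original = os.urandom(20)
    ref_crc, ref_sum = zlib.crc32(original) & 0xFFFF, word_sum16(original)

    trials = 1_000_000
    miss_crc = miss_sum = 0
    for _ in range(trials):
        noise = os.urandom(20)                 # the packet replaced by white noise
        miss_crc += (zlib.crc32(noise) & 0xFFFF) == ref_crc
        miss_sum += word_sum16(noise) == ref_sum

    print("CRC-16 stand-in missed :", miss_crc, "of", trials)
    print("naive 16-bit sum missed:", miss_sum, "of", trials)   # both near trials / 65536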

- David
--------------------------------------------
WWW Page: http://www.reed.com/dpr.html
