[e2e] Re: [Tsvwg] Really End-to-end or CRC vs everything else?

Jonathan Stone jonathan at DSG.Stanford.EDU
Mon Jun 11 13:00:24 PDT 2001


In message <5.1.0.14.2.20010611143202.0462bec0 at mail.reed.com>
"David P. Reed" writes:


>At 11:22 AM 6/11/01 -0700, Jonathan Stone wrote:

[...]


>I think I side with Vernon Schryver (as I understand his 
>point).  Checksumming is going to be done in outboard boxes even if it is 
>cheap, 

Yes, it will be done there. Yes, it is a cheap hardware speedup.

But outboard checksumming is a source of additional uncaught errors:
errors which would be caught by a software implementation of a
checksum but not by an outboard implementation of the same checksum,
because the actual cause of the error lies between the hardware
checksum engine and the in-memory buffer.
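
For concreteness, here is a rough sketch (my own illustration, not
code lifted from any particular stack) of the standard Internet
ones'-complement checksum computed in software over the in-memory
buffer:

#include <stddef.h>
#include <stdint.h>

/* 16-bit ones'-complement (Internet) checksum over an in-memory
 * buffer.  Sketch only; the name is mine. */
uint16_t
in_cksum(const uint8_t *buf, size_t len)
{
        uint32_t sum = 0;

        /* Sum the buffer as 16-bit big-endian words. */
        while (len > 1) {
                sum += ((uint32_t)buf[0] << 8) | buf[1];
                buf += 2;
                len -= 2;
        }
        if (len == 1)                   /* pad a trailing odd byte */
                sum += (uint32_t)buf[0] << 8;

        /* Fold the carries back in: ones'-complement addition. */
        while (sum >> 16)
                sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;
}

If that same sum is instead computed on the NIC, any damage done
between the in-memory buffer and the NIC (on the bus, in DMA, in
device memory) happens before the check bits exist, so the sum is
computed over already-corrupted data and can never flag it.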

You and Vern seem to be assuming that is a rare case. The data I have
indicates that, amongst the end-hosts we could directly monitor and
which we caught sending errored packets, it is a common case.

For the end-hosts we could directly observe, the conditional
probability that an error damaged a packet between the end-host's IP
checksum computation and the link-level CRC computation, given that
an error happened at all, was significant.

Whether that is an artifact of the local environment or a general
observation, I can't say.


>I did, but it does not make sense.  Detecting the same fraction is not the 
>same as having a good probability of detecting the errors that happen in 
>fact.  The constant function is particularly bad on this front.  

Then I failed to get the point across.  The proof Craig mentioned is
*not* saying that all functions are equivalent in practice.  It says
that if we have no information about the actual, empirical
distribution of errors, we have no basis at all for saying that one
function is stronger than another.  How, then, can we rank or compare
checksums, given that we don't know a lot about the actual
distribution of errors (and that the link-level models are clearly
inapplicable)?


The approach I took was that, in the absence of information about
what kinds of errors happen now (or will happen in the future, since
the Internet is not statistically stationary), we should use
error-check functions which minimize the worst-case damage a blind,
uninformed adversary can do.  The constant function is weak on that
score, because the adversary might happen never to hit the constant
check bits.  (Even though, averaged over all possible errors, the
constant function is just as good as any other.)

That's a minimax argument: assume there is some error process, with
some distribution of actual errors, but that the error process is not
smart enough to damage bits and then recompute a valid checksum.
Under those conditions, how do we minimize the worst-case or expected
rate of undetected errors?
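
To make both points concrete, here is a small, self-contained
simulation sketch.  It is my own toy construction, with made-up
parameters: it checksums random 64-byte packets, corrupts them under
two blind error processes (garbage sprayed uniformly over the whole
packet, check field included, versus flips confined to the data
bytes), and counts undetected errors for two check functions, the
constant function and the 16-bit ones'-complement sum:

/* Toy simulation sketch: undetected-error rates for two check
 * functions under two blind error processes.  Parameters are
 * arbitrary; this is an illustration, not measured data. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define DATA_LEN 64             /* data bytes per toy packet */
#define TRIALS   5000000L

/* 16-bit ones'-complement (Internet) checksum, as in the earlier
 * sketch, compacted. */
static uint16_t
in_cksum(const uint8_t *p, size_t n)
{
        uint32_t s = 0;

        while (n > 1) { s += ((uint32_t)p[0] << 8) | p[1]; p += 2; n -= 2; }
        if (n) s += (uint32_t)p[0] << 8;
        while (s >> 16) s = (s & 0xffff) + (s >> 16);
        return (uint16_t)~s;
}

int
main(void)
{
        uint8_t data[DATA_LEN], rdata[DATA_LEN];
        uint16_t good_sum, rsum, rconst;
        long undet_const_any = 0, undet_sum_any = 0;
        long undet_const_data = 0, undet_sum_data = 0;
        long i, j;

        srandom(1);
        for (i = 0; i < TRIALS; i++) {
                for (j = 0; j < DATA_LEN; j++)
                        data[j] = random() & 0xff;
                good_sum = in_cksum(data, DATA_LEN);

                /* Error process 1: the received packet, data and check
                 * field alike, is replaced by uniformly random garbage. */
                for (j = 0; j < DATA_LEN; j++)
                        rdata[j] = random() & 0xff;
                rsum = random() & 0xffff;       /* received sum field */
                rconst = random() & 0xffff;     /* received constant field */
                if (rconst == 0)                /* 0 is "the" constant */
                        undet_const_any++;
                if (rsum == in_cksum(rdata, DATA_LEN))
                        undet_sum_any++;

                /* Error process 2: a blind adversary flips bits in one
                 * random data byte but never touches the check field. */
                memcpy(rdata, data, DATA_LEN);
                rdata[random() % DATA_LEN] ^= 1 + random() % 255;
                undet_const_data++;     /* untouched constant still matches */
                if (good_sum == in_cksum(rdata, DATA_LEN))
                        undet_sum_data++;
        }

        printf("uniform corruption:   constant %.2e  sum %.2e\n",
            (double)undet_const_any / TRIALS,
            (double)undet_sum_any / TRIALS);
        printf("data-only corruption: constant %.2e  sum %.2e\n",
            (double)undet_const_data / TRIALS,
            (double)undet_sum_data / TRIALS);
        return 0;
}

Under the first error process both functions should come out missing
about 2^-16 of the errors, which is the equivalence result above.
Under the second, the constant function misses every single error,
while the sum misses essentially none of these single-byte flips:
that is the minimax worry.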

That's where I argue for maximizing the Shannon information content
of the error-check bits.  That minimizes the likelihood that a `blind'
error process will stumble into errors which happen to preserve the
value of the error-check function we are actually using.

Unless we know a lot more about the distribution of actual,
in-the-wild errors, I don't think any better metric for error-check
functions is possible.  The good news for you is that cryptographic
functions do well on that metric.
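
One way to make that metric operational, again purely as a sketch of
my own (not a standard tool): feed random data to a candidate check
function and estimate the Shannon entropy of the distribution of
check values it produces.  The constant function scores zero bits; a
reasonable 16-bit checksum should come out close to 16 bits:

/* Sketch: estimate the Shannon entropy (in bits) of a 16-bit check
 * function's output over random input packets.  My own toy harness,
 * for illustration only. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define DATA_LEN 64
#define SAMPLES  (1L << 21)

/* 16-bit ones'-complement (Internet) checksum, compacted. */
static uint16_t
in_cksum(const uint8_t *p, size_t n)
{
        uint32_t s = 0;

        while (n > 1) { s += ((uint32_t)p[0] << 8) | p[1]; p += 2; n -= 2; }
        if (n) s += (uint32_t)p[0] << 8;
        while (s >> 16) s = (s & 0xffff) + (s >> 16);
        return (uint16_t)~s;
}

/* The constant check function: same 16 bits regardless of the data. */
static uint16_t
constant_check(const uint8_t *p, size_t n)
{
        (void)p; (void)n;
        return 0;
}

/* Estimate the Shannon entropy, in bits, of a check function's output
 * over uniformly random input packets. */
static double
check_entropy(uint16_t (*check)(const uint8_t *, size_t))
{
        static long hist[65536];
        uint8_t data[DATA_LEN];
        double h = 0.0, p;
        long i, j;

        for (i = 0; i < 65536; i++)
                hist[i] = 0;
        for (i = 0; i < SAMPLES; i++) {
                for (j = 0; j < DATA_LEN; j++)
                        data[j] = random() & 0xff;
                hist[check(data, DATA_LEN)]++;
        }
        for (i = 0; i < 65536; i++) {
                if (hist[i] == 0)
                        continue;
                p = (double)hist[i] / SAMPLES;
                h -= p * log(p) / log(2.0);
        }
        return h;
}

int
main(void)          /* link with -lm */
{
        srandom(1);
        printf("constant function:    %.2f bits\n",
            check_entropy(constant_check));
        printf("ones'-complement sum: %.2f bits\n",
            check_entropy(in_cksum));
        return 0;
}

Zero bits of information in the check field is exactly why a blind
error process which happens to avoid that field walks straight past
the constant function; and the near-uniform outputs of cryptographic
hash functions are why they score well here.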


>But by 
>using a keyed class of cryptographically based functions, you increase the 
>potential set of errors that will be detected eventually if they are repeated.


I don't follow this at all.  From what I think I understand, I believe
I can construct a counter-example, if you let me count the key size
as part of the check bits.  Can we maybe take this one off-line?



>>Last, a question. I've suggested an adversary model with `blind'
>>adversaries which can't exploit the structure of an error-check
>>function. You proposed informed, intelligent, malicious adversaries --
>>an intruder model which suggests cryptographic approaches.
>>We already have protocols which address those latter models.
>
>But they aren't standard.  If they were standardly used, we wouldn't need 
>checksums.


But are they standardly needed, enough so to force the cost onto all
packets, from all hosts?

[...]



