[e2e] Re: [Tsvwg] Really End-to-end or CRC vs everything else?

Craig Partridge craig at aland.bbn.com
Mon Jun 11 06:35:12 PDT 2001


You just changed the problem from one of checksum (where checksum is defined
as "a cost-effective error check") to cryptographic signature ("a check
against adversaries").  I think that's redefining the problem :-).  It may
turn out that the solution is to declare, "OK, checksums were a bad idea" -- but
I'm not there yet.

Going back to Dennis' question (one I've pondered many nights over the
past several weeks): I think there's a broad error model that we might try
to use.  The transport and network layer protocols deal with an environment
where (we expect) link layer errors have been filtered out.

So the checksum doesn't have to worry about bit-serial types of errors.

The checksum must deal with the kinds of errors that a mix of hardware and
software (exact mix to reflect your preference for hardware or software)
tend to suffer in receiving, buffering, and transmitting packets.  Thought
about this way, you get a few distinct classes of errors:

    * buffers get mixed
    * data gets transposed
    * buffers get truncated
    * byte or word erasures
    * bit errors (which we could probably stomp out if we forced folks to use
      parity on their memories... but we don't get to do that)
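As an illustration of why this list matters (a sketch, not anything from the
original discussion), the standard Internet checksum -- the RFC 1071
ones'-complement sum used by TCP and UDP -- is blind to at least two of the
error classes above: 16-bit words can be transposed, and trailing all-zero
words can be added or truncated, without changing the sum at all:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

pkt     = b"\xde\xad\xbe\xef\x12\x34"
swapped = b"\x12\x34\xbe\xef\xde\xad"   # same 16-bit words, transposed
padded  = pkt + b"\x00\x00"             # truncating this back is invisible

# transposed words: same checksum (ones'-complement addition is commutative)
assert internet_checksum(pkt) == internet_checksum(swapped)
# a trailing zero word added (or truncated): same checksum
assert internet_checksum(pkt) == internet_checksum(padded)
```

A CRC, being position-dependent, would generally catch both cases; that
positional sensitivity is much of what a stronger transport checksum buys.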

There are probably more; Jonathan Stone has the complete list of what has
been seen, and it may not quite match the list above. I've tried to
extrapolate from what has been seen to the kinds of errors we're likely to
see.

That list is a small fraction of the kinds of errors that a motivated adversary
could cause.  And we could imagine designing checksums to be efficacious
against the errors on that list, without pretending to resist an adversary.
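To make the error-check/adversary gap concrete (a hypothetical sketch with
arbitrary byte values, not a design anyone proposed): against the
ones'-complement sum an adversary doesn't even need to recompute the checksum,
because any pair of compensating word changes leaves the sum fixed:

```python
def ones_complement_sum(data: bytes) -> int:
    """RFC 1071-style checksum over even-length data."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF

pkt = bytearray(b"\x12\x34\x56\x78\x9a\xbc")   # arbitrary example packet
before = ones_complement_sum(bytes(pkt))

# adversary adds 1 to one word and subtracts 1 from another:
pkt[1] += 1   # word 0: 0x1234 -> 0x1235
pkt[3] -= 1   # word 1: 0x5678 -> 0x5677
assert ones_complement_sum(bytes(pkt)) == before  # the checksum never notices
```

That is exactly why "cost-effective error check" and "check against
adversaries" are different problems with different tools.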
