[e2e] Re: [Tsvwg] Really End-to-end or CRC vs everything else?
dennis at juniper.net
Tue Jun 12 11:26:58 PDT 2001
> At 04:52 PM 6/10/01 -0700, Dennis Ferguson wrote:
> >Choices which are really good under some sets of circumstances are really
> >bad under others. For link-by-link protection from transmission errors we
> >have the luxury of crafting the error detection to match the characteristics
> >of the link, and even of changing it if the original choice is proven wrong,
> >but end-to-end checksums are long-lived and are supposed to be able to treat
> >the network stuff in between the ends as an ever-changing black box. I
> >have no idea how you design for this.
> The traditional way to deal with this kind of non-statistical uncertainty
> is pretty straightforward. What you do is assume that the error process is
> an *adversary* that knows everything about the protocol that can be known,
> and who *wins* the game if they can corrupt the data with a high enough
> probability.
> The result of this thinking is that cryptographic message authentication is
> the appropriate answer. A "key" chosen out of sight of the adversary, at
> random, is used to select from a range of functional transformations which
> are diverse enough so that without knowledge of the key, one cannot
> transform the datagram into an acceptable one.
> For this, you don't need an "error model".
Actually what you seem to be doing is changing the problem from error
detection to something a bit different. If we keep the focus on the
error detection problem, however, it is still the case that you've
suggested a solution for end-to-end error detection which I shouldn't
have precluded, namely brute force.
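The keyed-transformation idea in the quoted text can be sketched with a
standard HMAC construction (illustrative Python; HMAC is one well-known
keyed message-authentication scheme, used here only as an example of the
general approach):

```python
import hashlib
import hmac
import os

key = os.urandom(16)        # secret key, chosen out of sight of the adversary
msg = b"datagram payload"

# Sender attaches a keyed tag to the datagram; HMAC-MD5 purely for illustration.
tag = hmac.new(key, msg, hashlib.md5).digest()

# Receiver recomputes the tag over what arrived and compares.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.md5).digest())

# A modified datagram fails verification without knowledge of the key.
bad = hmac.compare_digest(tag, hmac.new(key, msg + b"!", hashlib.md5).digest())
print(ok, bad)  # True False
```

The point is that without the key the "error process" cannot transform
the datagram into another acceptable one, whatever its statistics.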
If MD5 is a good example, then the two properties which make a good
cryptographic checksum seem to be
(1) they carry around a lot of bits in the packet, more than we're
usually willing to spend for error detection, and
(2) the computation of the checksum has a certain irreducible complexity
(to avoid brute force attack even with custom hardware), and the
algorithm is analytically opaque.
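For a concrete sense of property (1), compare the digest widths involved
(Python sketch; the choice of CRC-32 as the comparison point is mine):

```python
import hashlib
import zlib

data = b"example datagram payload"

# MD5 spends 128 bits in the packet on the check.
md5_bits = hashlib.md5(data).digest_size * 8

# A typical link-layer CRC, e.g. zlib's CRC-32, spends 32 bits.
crc_bits = 32
crc = zlib.crc32(data)

print(md5_bits, crc_bits)  # 128 32
```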
There is no reason to expect a cryptographic checksum to be any better
at detecting (unintentional) errors of any particular nature than any
other checksum occupying the same number of bits in the packet, in fact
it is reasonable to suspect it is worse at some things since for a
cryptographic checksum this is only a secondary property. The real
reason you can get by without an "error model" (which you couldn't use
anyway because of (2) above) is that whether the 128-bit cryptographic
checksum fails to detect 1 in 2**128 of the errors which occur, or only
1 in 2**110, doesn't really matter much since both of those are really
small fractions. The primary property that makes MD5 a good error
detection algorithm is sufficient brute force to make one comfortable
that it should always be strong enough.
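The scaling here can be illustrated with a small simulation (a hypothetical
8-bit XOR checksum, chosen only because its miss fraction, about 1 in 2**8,
is large enough to measure; a 128-bit check behaves the same way with an
astronomically smaller fraction):

```python
import random

def checksum8(data: bytes) -> int:
    # Hypothetical 8-bit XOR checksum, a stand-in for any 8-bit check.
    c = 0
    for b in data:
        c ^= b
    return c

random.seed(1)
orig = bytes(random.randrange(256) for _ in range(64))
good = checksum8(orig)

trials = 100_000
undetected = 0
for _ in range(trials):
    corrupted = bytearray(orig)
    for _ in range(random.randrange(1, 8)):  # corrupt a few random bytes
        corrupted[random.randrange(len(corrupted))] = random.randrange(256)
    if bytes(corrupted) != orig and checksum8(bytes(corrupted)) == good:
        undetected += 1

print(undetected / trials)  # roughly 1/256, about 0.004
```

An 8-bit check misses a measurable fraction of random corruptions; at 128
bits the analogous fraction is negligible no matter what the error process
looks like.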
So while (2) is, in fact, usually considered an undesirable property for
an error detection algorithm, (1) is always good if you have the luxury
of ignoring cost and MD5 uses enough bits that everything else matters
less. Since error detection is usually considered an unfortunately
necessary overhead we usually try hard to do more with less, but if
you are willing to throw packet bits and compute cycles at it you can
always be assured of getting satisfactory performance even if it isn't
optimal for error detection in any theoretical sense.
With respect to the later arguments about computation cost and moving
checksums off-board, however, in a strange way I think you've been arguing
the wrong side of that dispute with Craig since the nature of MD5 might
actually decrease the probability that moving the checksum to off-board
hardware would be desirable. MD5 is computationally complex (see (2))
but looks good despite this when you compare it to other things in CPU
benchmarks since it requires what current CPUs are good at; it does a lot
of computing per memory reference. Off-board silicon is usually pretty
crappy compared to the way CPUs are designed; you get speed from it,
compared to a CPU, only on problems where the computation is fundamentally
bit-diddling (e.g. a CRC), or a datapath problem, or a problem where
you can find a lot of things to compute in parallel. MD5 really wants
a high clock rate and fast math, which general-purpose CPUs are still
king-of-the-hill at providing, so the bang-for-the-buck of moving it
off-board is probably minimized.
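A crude way to see the compute-per-byte difference on a general-purpose
CPU is a microbenchmark (illustrative Python; the absolute numbers depend
entirely on the machine, and both routines run as native C under the hood,
so take them only as an indication of relative cost):

```python
import hashlib
import time
import zlib

buf = b"\x00" * (1 << 20)  # 1 MiB of data

def rate_mb_s(fn, reps=50):
    """Approximate throughput of fn over buf, in MB/s."""
    t0 = time.perf_counter()
    for _ in range(reps):
        fn(buf)
    return (reps * len(buf)) / (time.perf_counter() - t0) / 1e6

md5_rate = rate_mb_s(lambda b: hashlib.md5(b).digest())
crc_rate = rate_mb_s(zlib.crc32)
print(f"MD5   ~{md5_rate:.0f} MB/s")
print(f"CRC32 ~{crc_rate:.0f} MB/s")
```

The CRC's bit-diddling is cheap per byte, which is also why it maps well
onto dedicated silicon, while MD5's cost is dominated by arithmetic the
CPU already does well.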
In any case, I was wrong. If cost is no object you can always design
an acceptable end-to-end error check.