[e2e] Protocols breaking the end-to-end argument
dga+ at cs.cmu.edu
Sat Oct 24 18:56:13 PDT 2009
Hi, Richard, and everyone -
Having read the e2e argument paper a number of times (I, like half the
networking faculty I know, use it as a discussion point in my graduate
networks and distributed systems courses), I think it's worth
stepping back a bit from this debate, which has clearly extended
beyond the realm of the purely technical.
(The preceding paragraph may be taken, correctly, as a doubtless
unsuccessful plea to abandon the ad hominem attacks on all sides of
this argument in favor of what is, actually, a fun discussion.)
In particular, I believe that your argument is taking a (perhaps
deliberately) overly strong interpretation of what was actually
written in the e2e paper, which makes a somewhat subtle argument that
is devoid of absolute black and white insistence about the placement
of functionality. This seems like a very common misinterpretation --
it even shows up in the subject of this thread: "breaking" the end-to-
end argument, as if it were a law writ in stone.
DPR's representation of the e2e argument in this discussion is
entirely in keeping with the text of the paper, which included no
mention of TCP providing reliability. In fact, it's become
increasingly clear over time that a careful file transfer system -- or
a careful storage system -- probably has to implement exactly the type
of strong error checking alluded to in the e2e paper. See, for
instance, a lot of recent work on long-term digital data preservation,
or, more commercially, the content-hash based storage systems now
provided by major vendors such as EMC.
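To make the "careful file transfer" idea concrete, here is a minimal sketch (mine, not from the paper) of the application-level check the e2e paper alludes to: the endpoints verify a digest of the file themselves, regardless of what TCP checksums or the disk controller guarantee along the way. The function names and chunk size are my own choices for illustration.

```python
# Sketch of an end-to-end integrity check for "careful file transfer".
# The *application* recomputes a digest over the stored bytes at both
# ends, so the check covers the whole path: source disk -> network ->
# destination disk, independent of any lower-level error detection.
import hashlib

def digest(path: str) -> str:
    """SHA-256 of a file, read back from storage in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def careful_copy_ok(src: str, dst: str) -> bool:
    """True iff the received copy matches the source, end to end."""
    return digest(src) == digest(dst)
```

In a real transfer the sender would ship `digest(src)` alongside the file and the receiver would compare it against `digest(dst)`; the point is only that this final comparison happens at the application endpoints, not in TCP or on an interface card.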
TCP offload does not particularly seem to disagree with the point of
the e2e argument, as it was stated:
"It would be too simplistic to conclude that the lower levels
should play no part in obtaining reliability."
and the argument in the paper makes it very clear that there are
legitimate performance reasons for performing functions -- even
duplicating them -- at lower levels; one should simply make such
optimizations with full awareness of the consequences. Offload is a
great example: It's a very useful performance enhancement with
today's 10GE networks, and it may cause you some difficulties if you
want to take advantage of later enhancements to TCP. It's an
engineering tradeoff, pure and simple, which is all the "argument" is
about. So are 802.11 retransmissions, with which you're undoubtedly
familiar. They're a great, much-needed performance optimization, and
an excellent illustration of the "when you _should_ use the middle"
aspect of the paper.
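A back-of-the-envelope model (mine, with made-up numbers, not from the thread) shows why a link-layer retry like 802.11's ARQ is such a useful "middle" optimization: a few local retransmissions turn a lossy hop into a nearly reliable one, sparing the endpoints most of the expensive end-to-end recoveries.

```python
# Rough model of link-layer retransmission (e.g. 802.11 ARQ).
# All numbers below are illustrative assumptions, not measurements.

def link_delivery(p_loss: float, tries: int) -> float:
    """Probability a frame crosses one link within `tries` attempts,
    assuming independent losses."""
    return 1.0 - p_loss ** tries

def path_delivery(p_loss: float, tries: int, hops: int) -> float:
    """Probability a packet crosses every hop of the path."""
    return link_delivery(p_loss, tries) ** hops

# A lossy wireless hop (30% frame loss) repeated over a 4-hop path:
no_arq   = path_delivery(0.3, tries=1, hops=4)  # ~0.24 delivered
with_arq = path_delivery(0.3, tries=3, hops=4)  # ~0.90 delivered
```

The end-to-end argument doesn't forbid this; it only says the retries improve performance and that the endpoints must still perform the final reliability check themselves.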
On Oct 24, 2009, at 7:50 PM, Richard Bennett wrote:
> There's no doubt about the fact that Saltzer was and still is
> regarded as one of the brightest lights in the system architecture
> firmament, and that in particular his seminal paper on naming and
> addressing was one of the most cogent pieces of its kind ever
> written. It's unfortunate that the structure of Saltzer's thinking
> isn't reflected in the organization of Internet protocols, naming,
> and addressing and that he wasn't able to pass his brilliance along
> to all of his students.
> David P. Reed wrote:
>> I don't know why I waste my time explaining to Richard Bennett what
>> he misreads, but here goes:
>> On 10/24/2009 03:54 PM, Richard Bennett wrote:
>>> Noel Chiappa wrote:
>>>> > From: Richard Bennett <richard at bennett.com>
>>>> > Moors shows that the Saltzer, Reed, and Clark argument for
>>>> > placement is both circular and inconsistent with the FTP
>>>> example that
>>>> > is supposed to demonstrate it.
>>>> I didn't see that at all.
>>> Moors points out that TCP error detection and recovery is an end-
>>> system function, but not really an endpoint function in the file
>>> transfer example. The file transfer *application* is the endpoint,
>>> so placing the error detection and recovery function in TCP is
>>> actually putting it in an intermediate system level. This becomes
>>> clear when we recognize that TCP is often implemented in hardware
>>> or in firmware running on a CPU that lives on an interface card.
>>> The paper goes to great lengths to show that host-based TCP is
>>> immune to a problem induced at MIT by a bad 1822 interface card, but
>>> it was very common engineering practice in the mid-80s to
>>> implement TCP on an interface card that had the same vulnerability
>>> as the 1822 card. Excelan and Ungermann-Bass built these systems
>>> and they were very popular. They designed in a competent level of
>>> data integrity at the bus interface, so it wasn't necessary to
>>> rely on software to detect bus problems. So it's at least ironic
>>> that the end-to-end argument on the data integrity basis was
>>> mooted by practice by the time the 1984 version of the paper was
>>> published.
>>> Because the file transfer program doesn't do its own data
>>> integrity checking but relies on TCP to do it, it's not really an
>>> example of endpoint placement at all; in fact, it's a "partial
>> OK. This is incredibly simple to understand. In the end-to-end
>> argument paper, we describe a program called "careful file
>> transfer", whose goal is to ensure that the file received is a
>> proper copy of the source. We use this "careful file transfer"
>> example as a pedagogical device.
>> The paper carefully does not claim that TCP or FTP over TCP satisfy
>> the end-to-end argument required for the function "careful file
>> transfer". There was a reason: FTP/TCP does not do so.
>> Now, RB claims that Moors's paper somehow says the argument is
>> inconsistent with the FTP example. Well, no. It is consistent
>> with the actual example we use, which is not FTP/TCP.
>> Bennett may have joined this particular discussion late. If so, he
>> missed my earlier posting that said that the end-to-end argument
>> did not say "TCP is best". It was not a defense of TCP at all
>> (unless you accept his mind-reading of the authors' intent to
>> somehow write the paper to be part of some fight that Bennett
>> imagines was going on).
>> The end-to-end argument paper was not a paper about TCP or IP or
>> any particular implementation of any protocol, except insofar as it
>> was inspired by architectural discussions in the design, and was
>> cited quite frequently by IETF architects later as they considered
>> designs happening afterwards. It was about a way to think about
>> architectural questions - one that was used frequently and heavily
>> in the original TCP and IP design process, and as noted in the
>> paper, in a number of other processes we were aware of and had been
>> involved in.
> Richard Bennett
> Research Fellow
> Information Technology and Innovation Foundation
> Washington, DC