[e2e] Re: [Tsvwg] Really End-to-end or CRC vs everything else?

Hilarie Orman HORMAN at volera.com
Mon Jun 11 14:30:56 PDT 2001


The fact that something was introduced as a short-term solution isn't
a strong argument against it being a long-term solution.  The world
doesn't always correspond to an inventor's viewpoint.

In the cases of NAT, firewalls, and transparent caches, you get
a huge increase in scalability.  This isn't merely "selling well";
it's also scaling well.  It is an engineering reality worthy of notice.

HTTP caches are an interesting case.  Unlike many other applications,
a large and important subset of the application semantics can be
interpreted and satisfied by intermediaries.  The fact that some get it
wrong and need to be fixed isn't an end-to-end argument at all; lots of
web servers make application-layer mistakes and need fixing.
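
To make that concrete, here is a minimal sketch (Python, with simplified
rules; the names are illustrative, not taken from any particular proxy) of
the freshness decision HTTP lets an intermediary make on its own, driven
by the Cache-Control and Expires headers:

    import time
    from email.utils import parsedate_to_datetime

    def _cache_control(headers):
        # Parse "Cache-Control: public, max-age=60" into a dict.
        directives = {}
        for part in headers.get("Cache-Control", "").split(","):
            part = part.strip().lower()
            if part:
                name, _, value = part.partition("=")
                directives[name] = value
        return directives

    def can_serve_from_cache(headers, stored_at, now=None):
        # May an intermediary answer from its stored copy without
        # contacting the origin server?  (Simplified HTTP/1.1 rules.)
        now = time.time() if now is None else now
        cc = _cache_control(headers)
        if "no-store" in cc or "no-cache" in cc:
            return False                  # origin said: don't reuse / revalidate
        if "max-age" in cc:
            return now - stored_at < int(cc["max-age"])
        if "Expires" in headers:
            expires = parsedate_to_datetime(headers["Expires"]).timestamp()
            return now < expires
        return False                      # no freshness info: ask the origin

    hdrs = {"Cache-Control": "public, max-age=300"}
    print(can_serve_from_cache(hdrs, stored_at=time.time() - 60))   # True

Everything the sketch looks at is addressed to intermediaries by the
protocol itself; a cache that honors it is implementing the application
semantics, not violating them.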

When the application semantics meet certain non-stateful
criteria, the transport addressing can be satisfied by "anycast", 
and the end-to-end advice, important as it is in other cases, 
just doesn't apply.
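
A toy illustration of the non-stateful criterion (Python, entirely
hypothetical): if the reply is a pure function of the request, every
replica behind a shared anycast address computes the same answer, so it
cannot matter which instance the routing system happens to pick.

    import hashlib

    def handle(request: bytes) -> bytes:
        # Stateless: the reply depends only on the request itself,
        # not on which replica runs it or on any earlier exchange.
        return hashlib.sha256(request).hexdigest().encode()

    # Three "replicas" behind one anycast address are interchangeable:
    replicas = [handle, handle, handle]
    request = b"GET /object/42"
    assert len({r(request) for r in replicas}) == 1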

Hilarie

>>> "David P. Reed" <dpreed at reed.com> 06/11/01 11:47AM >>>
At 01:13 PM 6/11/01 -0400, Melinda Shore wrote:

>I'm going to assume that by "application messages" you mean
>"anything after the transport header."

Not exactly.  That's a network definition based on some assumption of the 
"wire format".  The "transport header" is not visible to the application at 
all, and has no role in the application semantics.  What I mean is that the 
network protocol (say TCP or SCTP or whatever) has a specification, which 
says that the data that goes in comes out unchanged at the specified other 
end (with a certain level of assurance).
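
That "certain level of assurance" is where the CRC question in this
thread's subject comes in.  As a minimal sketch (Python; not any
particular protocol's wire format), an application can carry its own
end-to-end check rather than trusting the transport's:

    import zlib

    def seal(payload: bytes) -> bytes:
        # Sender side: append a CRC-32 computed over the application data.
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def unseal(message: bytes) -> bytes:
        # Receiver side: recompute and compare.  A middlebox that altered
        # the data and fixed up the transport checksum still fails here.
        payload, crc = message[:-4], int.from_bytes(message[-4:], "big")
        if zlib.crc32(payload) != crc:
            raise ValueError("end-to-end check failed: data altered in transit")
        return payload

    assert unseal(seal(b"hello")) == b"hello"

Note that a CRC only catches accidental damage; an intermediary that
deliberately rewrites the data can recompute the CRC, so detecting
intentional modification takes a keyed digest.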

>   Anyway, to some extent
>it's already the case that there's an implicit rejection
>mechanism, in that some things already fail when certain kinds
>of middleboxes are introduced (firewalls break session-oriented
>protocols, NATs break integrity protection).

This is good?  When introduced, firewalls were justified as a short-term, 
temporary patch because UNIX and other systems hadn't developed proper 
security and authentication (see Cheswick and Bellovin's book for a very 
clear statement that firewalls were a pragmatic stopgap, not the "right 
answer").  Also when introduced, the NAT concept was proposed as a 
short-term, temporary solution to a scarcity of 32-bit host IDs.  See the 
original NAT RFCs.

I understand the pragmatic need, short-term (being up to one's a** in 
alligators is a tough position).  But they have never been good protocol 
design approaches, and they don't achieve the function ascribed to 
them.  And worse, they make the semantics of the lower network layers 
horrendously complex and difficult to change.  And then Check Point 
"invented" "stateful inspection", and network administrators started to 
inject their own personal "intelligence" into applications they knew 
nothing about.

Just because something "sells well" doesn't mean it is good design.

>Operators often
>don't want to make network topology known to applications, and
>it's difficult to find agreement, in practice, that transport-
>layer middleboxes should be detectable by applications.

If I can detect a transport-layer middlebox because it tampers with packet 
contents (inserting HTTP headers, spoofing web servers from "transparent" 
caches that are out of date, ...), then the only way you are going to stop 
me from detecting it is to make it illegal to look at what happens to a 
packet.
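
For example (a hypothetical sketch, in Python): if I know the digest of
what the origin actually serves, comparing it with what arrives over the
suspect path is enough; any mismatch is direct evidence of in-path
modification.

    import hashlib

    def modified_in_transit(expected_sha256, received_body):
        # True if the bytes that arrived differ from what the origin served.
        return hashlib.sha256(received_body).hexdigest() != expected_sha256

    origin_body = b"<html>v2 (current)</html>"
    expected = hashlib.sha256(origin_body).hexdigest()

    stale_copy = b"<html>v1 (stale)</html>"    # what a "transparent" cache sent
    assert modified_in_transit(expected, stale_copy)        # caught
    assert not modified_in_transit(expected, origin_body)   # clean path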

That kind of attitude (the idea that a network operator is supposed to be 
able to interfere in end-to-end communications at will) seems like the 
attitude of a company that doesn't fear losing business (gov't-sanctioned 
monopoly carriers and PTTs come to mind).
