[e2e] e2e principle..where??....

Hilarie Orman HORMAN at volera.com
Sat Jun 2 10:08:16 PDT 2001


Note that sometimes the URL generation is done not by
the server but by a reverse proxy that dynamically rewrites
the original URL.  In practice, HTTP semantics are freely delegated
to proxies in a wide variety of circumstances.
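
For concreteness, the rewriting step such a proxy might perform
could look like this rough sketch (the hostnames are made up, and
real proxies rewrite far more carefully than a string replace):

```python
# A minimal sketch of a reverse proxy rewriting an HTML response on its
# way to the client, so embedded links point at a cache rather than the
# origin.  ORIGIN and CACHE are hypothetical hostnames.
ORIGIN = "www.example.com"
CACHE = "cache1.example-cdn.net"

def rewrite_response_body(html: str) -> str:
    """Point embedded links at the cache instead of the origin server."""
    return html.replace("http://" + ORIGIN, "http://" + CACHE)

page = '<img src="http://www.example.com/logo.gif">'
print(rewrite_response_body(page))
# -> <img src="http://cache1.example-cdn.net/logo.gif">
```

Note that neither the server nor the client asked for this rewriting;
it happens inside the network, which is exactly the delegation of HTTP
semantics described above.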

If the application is written with semantics that can be
interpreted and satisfied ubiquitously, then the application can be
handled by transparent proxies and the end-to-end argument
has no sway.

Hilarie Orman

>>> "David P. Reed" <dpreed at reed.com> 06/02/01 09:09AM >>>
Just a thought.  The end-to-end argument does not argue against designing 
an application to use caching as an optimization.  Since caching of the 
Akamai sort (just to use the typical model in use these days) is NOT 
transparent to either end of the application (the server side explicitly 
generates URLs that point to the cache, and the client fetches explicitly 
from the cache), this seems like a protocol where the lower networking 
layers (the "Net transport" layer) provide no caching functionality 
whatsoever.  Thus, the Akamai approach is an end-to-end design - there was 
no need to modify the existing network protocols or spoof them to make it work.
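
To illustrate the server-side half of that design (this is a toy
sketch, not Akamai's actual URL scheme - the CDN hostname and path
convention here are invented):

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical cache hostname; a real CDN's naming is more involved.
CDN_HOST = "cache.example-cdn.net"

def rewrite_to_cache(url: str) -> str:
    """Rewrite an origin URL so the client explicitly fetches from the cache.

    The origin host is folded into the path so the cache can locate the
    source.  Both ends see the cache: the server emits the rewritten URL
    and the client fetches it directly -- nothing in the network is spoofed.
    """
    parts = urlparse(url)
    new_path = "/" + parts.netloc + parts.path
    return urlunparse((parts.scheme, CDN_HOST, new_path, "", parts.query, ""))

print(rewrite_to_cache("http://www.example.com/img/logo.gif"))
# -> http://cache.example-cdn.net/www.example.com/img/logo.gif
```

The point is that the cache server becomes another explicit "end" of
the application protocol, known to both sides.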

The "transparent cache" approach (pioneered by @Home) where the network 
itself spoofs HTTP and tries to second-guess what the user wants - that is 
not end-to-end, and creates huge secondary problems (since the server and 
client don't know that caching is going on, they may assume the client is 
getting up-to-date information, whereas the out-of-date cached information 
is being fed).
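
The staleness problem can be shown with a toy model (not any
particular vendor's cache - the classes and the never-revalidate
policy are simplifying assumptions):

```python
class Origin:
    """The server end: holds the authoritative, current document."""
    def __init__(self):
        self.content = "price: $10"
    def get(self, url):
        return self.content

class TransparentCache:
    """An in-network cache neither end knows about: it answers from its
    store and never revalidates against the origin."""
    def __init__(self, origin):
        self.origin = origin
        self.store = {}
    def get(self, url):
        if url not in self.store:
            self.store[url] = self.origin.get(url)  # first fetch fills the cache
        return self.store[url]  # later fetches never recheck the origin

origin = Origin()
net = TransparentCache(origin)

first = net.get("/catalog")     # "price: $10" -- fresh
origin.content = "price: $12"   # the server updates the document
second = net.get("/catalog")    # still "price: $10" -- stale, and the
                                # client has no way to know
```

Because the caching happens below the ends, neither side can detect
or correct the stale answer - the secondary problem described above.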

What the end-to-end argument does *not* say is that applications must be 
designed without optimizations that suit their needs - and caching at the 
application level is an optimization that just includes more "ends" in the 
application layer protocol (the cache servers).

At 08:45 PM 6/1/01 -0400, Manish Karir wrote:

>with the vast majority of traffic these days being web traffic
>(thought I saw some stats somewhere measuring 90+% to be http traffic)
>AND combined with the extremely high web cache deployment rate (ever
>hear of an ISP who does not use a web cache to maximize the number
>of users he can support),
>one wonders what the e2e principle really means these days.....???
>
>
>manish karir
>(ducking and running for cover...)
>
>
>
>
>On Thu, 24 May 2001, Lloyd Wood wrote:
>
> > I'm rapidly coming to the conclusion that the end-to-end principle is
> > the IETF's very own Third Reich in usenet's Godwin's law.
> >
> > Once the end-to-end principle is invoked in a workgroup, useful
> > conversation becomes impossible.
> >
> > tschuess,
> >
> > L.
> >
> > Godwin's Law prov. [Usenet] "As a Usenet discussion grows longer, the
> > probability of a comparison involving Nazis or Hitler approaches one."
> >
> > <L.Wood at surrey.ac.uk>PGP<http://www.ee.surrey.ac.uk/Personal/L.Wood/>
> >
> >

More information about the end2end-interest mailing list