[e2e] Protocols breaking the end-to-end argument
David P. Reed
dpreed at reed.com
Fri Oct 23 07:20:47 PDT 2009
I'd reframe the statement, just because I would actually like the term
"end-to-end argument" to continue to mean what we defined it to mean,
rather than what some people have extended it to mean.
So I think what you are looking for is a set of examples that
demonstrate functions that are "best done inside the network".
If you read the original paper, there is no claim whatever that says either:
1) that all functions should be done at the edges. (This radical
proposition, however, is one that guides some of my personal
interests in researching how far one can go. But that's a "Reed
research guideline", not an architecture argument.)
2) that one should never include, in the network, optimizations of
functions that must (to be correct) be done at the edges.
Yet each interpretation above (and some others) is used occasionally.
Here's an example that challenges 2) and 1) but not the original
argument: where should congestion measurement be done, in order to
support congestion control?
Congestion *exists* only inside the network, by definition. So it must
be measured in the network.
However, where should *control* of congestion happen? That's a very
different story. It can't happen at the places where it is measured...
because congestion is an emergent phenomenon that depends on details at
the edges AND on routing decisions (and, at slower rates of change, on
traffic engineering and investment decisions as well). The answer
would be easy if there were one perfect place to do it. Of course, the
network itself makes that hard.
Today's Internet offers a variety of measures of congestion: measured
changes of RTT end-to-end at each of the hosts that share a bottleneck
subpath for active traffic, packet drops, packet-pair tests, marks such
as ECN, SNMP-if-it-had-a-MIB, ...
It also offers a variety of ways to mitigate congestion: get one or more
senders to slow down, get the sender to recode using more compression,
force some of the traffic to an alternate path, etc.
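As a toy illustration of that split between measurement and control: a hypothetical sender could infer queueing from RTT inflation (in the spirit of delay-based schemes such as TCP Vegas) and then adjust its rate at the edge. Every name and threshold below is invented for illustration, not taken from any real stack.

```python
# Hypothetical sketch: the *measurement* (RTT inflation) reflects queues
# inside the network; the *control* (a rate change) happens at the edge.

def congestion_signal(rtt_samples, base_rtt, threshold=1.5):
    """Return True if recent RTTs suggest queueing along the path."""
    recent = rtt_samples[-4:]
    return sum(recent) / len(recent) > threshold * base_rtt

def adjust_rate(rate, congested, backoff=0.5, probe=1.05):
    """Multiplicative decrease on congestion, gentle probing otherwise."""
    return rate * backoff if congested else rate * probe

# RTT samples in ms: the path's base RTT is ~100 ms, then queues build.
samples = [100.0, 102.0, 180.0, 190.0, 185.0]
congested = congestion_signal(samples, base_rtt=100.0)
rate = adjust_rate(10.0, congested)  # Mb/s
```

The point of the sketch is only the division of labor: nothing inside the network tells the sender what to do; the edge decides, based on a signal that only the network's queues could have produced.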
Choices of how to implement the congestion management function (which
includes traffic engineering as a subroutine) can be informed by the
"end-to-end argument" if you break the function down into subfunctions.
But this is not a problem with the "end-to-end argument". It is a
problem with TCP, RTP, and other protocols over IP, and with the routers
we deploy.
We have, for example, ECN as a tool implemented by routers. Turning it
on would probably help a reasonable amount. ECN itself is a solution to
congestion *measurement*, not mitigation. Measurement in the router,
communicated by ECN to all who share the bottleneck path, is clearly a
function "in the network". And yet it satisfies the end-to-end argument!
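For concreteness, a minimal sketch of what that in-network measurement could look like: RED-style probabilistic marking, where the marking probability rises with queue occupancy and endpoints decide how to respond. Function names and thresholds here are illustrative, not taken from any real router implementation.

```python
# Illustrative sketch (not real router code): the router *measures*
# congestion via its queue depth and marks packets; responding to the
# marks remains an edge function.
import random

def mark_probability(queue_len, min_th=5, max_th=15, max_p=0.1):
    """Marking probability: zero below min_th, certain at max_th,
    rising linearly with queue length in between (RED-style)."""
    if queue_len <= min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    return max_p * (queue_len - min_th) / (max_th - min_th)

def maybe_mark(packet, queue_len, rng=random.random):
    """Set a congestion-experienced flag on the packet (a plain dict
    here) with the computed probability."""
    if rng() < mark_probability(queue_len):
        packet["ce"] = True
    return packet
```

Note that nothing in the sketch mitigates anything: the router only reports what it alone can observe, which is exactly why this in-network function is compatible with the end-to-end argument.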
Lest we think that congestion control is the only area where *careful
thinking* about function placement is informed by end-to-end arguments,
there are many others that fit the original argument. Blocking hostile DDOS
attacks is another. It's hard to imagine that anyone could argue that
DDOS against a target could be prevented solely outside the network.
However, *prosecution* of the offenders is clearly not a function that
can be done inside the network. Similarly, it would be silly to burden
a router with the job of collecting evidence for the prosecutor. There
are actually two kinds of DDOS attacks:
1) against the network itself,
2) against a particular end host (or hosts).
The former can be detected reliably by the network elements involved.
The latter must be defined by the host itself... since it is the host
who desires or doesn't desire a lot of traffic aimed at it.
Let's look only at the latter. It would be silly for the operator of
the network to have to inspect packets flowing to a web server to detect
that many SYNs are sent but the third step of the handshake is never
completed. The server is the only reliable place to verify that its
time is being wasted by many half-open connections.
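A hypothetical sketch of that server-side check: track handshakes that arrive but never complete, and flag when the half-open count exceeds a limit. The class name, methods, and threshold are all invented for illustration; a real stack keeps this state in its TCP implementation.

```python
# Illustrative sketch: only the server can define "attack" here, because
# only the server knows which handshakes it expected to complete.
import time

class HalfOpenTracker:
    def __init__(self, limit=100):
        self.pending = {}    # source address -> time the SYN was seen
        self.limit = limit

    def syn_received(self, src):
        """Record a new half-open connection."""
        self.pending[src] = time.monotonic()

    def ack_received(self, src):
        """Third step of the handshake arrived: no longer half-open."""
        self.pending.pop(src, None)

    def under_attack(self):
        """Too many handshakes stuck at step two?"""
        return len(self.pending) > self.limit
```

The detection lives at the edge; what the sketch cannot do, as noted below, is disconnect the offending sources, which is necessarily a network function.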
Yet responding to the DDOS attack may be helped by disconnecting the
sources. This has to be a network function on a large scale. And
tracing back to the source may be a network function.
On 10/23/2009 07:47 AM, L.Wood at surrey.ac.uk wrote:
> Hey, I wrote a chapter of that book...
> Do look into the Bundle Protocol, which ignores the end-to-end
> principle and control loops in its design. See our 'Bundle of
> Problems' paper for more on this:
> The Bundle Protocol has similar problems/oversights as LTP-T.
> Carlo Caini's group has drawn parallels between
> DTN work and TCP PEPs, pointing out that what TCP PEPs do
> on the quiet (break the end-to-end control loop into separate
> loops) is what things like bundle hops + convergence layers
> or http proxy caches do more explicitly and visibly. See e.g.
> his IWSSC'09 paper:
> "TCP, PEP and DTN Performance on Disruptive Satellite Channels."
> <http://www.ee.surrey.ac.uk/Personal/L.Wood/><L.Wood at surrey.ac.uk>
> -----Original Message-----
> From: end2end-interest-bounces at postel.org on behalf of Jaime Mateos
> Sent: Fri 2009-10-23 11:26
> To: end2end-interest at postel.org
> Subject: [e2e] Protocols breaking the end-to-end argument
> I'm working on a project about the current challenges the Internet is
> presenting to the end-to-end argument. I'd be interested to know about
> any protocols, currently in use, that break the end-to-end principle and
> the context where they are used. So far the only one I've found is TCP
> PEP that seems to be in use in satellite networks (Internetworking and
> computing over satellite networks, Yongguang Zhang -
> There also seem to be a number of research projects such as Split TCP
> and LTP-T that I've come across. I'm also interested in these but not to
> the same degree as in protocols that are currently in use today.
> Jaime Mateos