[e2e] New approach to diffserv...

Sean Doran smd at ab.use.net
Mon Jun 17 05:00:19 PDT 2002


| again (for example, but i have more if you keep resisting:-):
| 1/ middleboxes are not in the middle because by definition they cannot
| perform at the warp factor 10 core speed necessary. if they could we'd
| just have a net made of web proxy caches and smtp relays.

Well, I ignored this because I thought you were being tongue-in-cheek
about an issue which is fundamentally about the name of the device
rather than the substance of the question.   "middle" is really any
point between a pair of communicating end systems, but implies
functionality beyond what is normally associated with an
intermediate system.

Vadim Antonov's arguments about the fundamental amenability of large
data streams to parallel processing aside, I imagine that most middle
boxes are going to sit no closer to the core than the border between
a site and its immediate provider.

| this means that i have an incentive to buy e2e solutions as soon as i
| need high speed access...which is a LOT of people

Well, site X is going to upgrade its primary router as it moves
more traffic to and from the rest of the Internet; the functions
beyond merely forwarding IP could be upgraded at the same time.

| 2/ middleboxes dont solve any real security problems, they introduce
| more.

The second clause is very nearly a truism which applies to virtually
ANY new equipment of ANY variety in the path.   Users should not trust
all the Internet's equipment to be non-hostile.

However, middleboxes can exacerbate the ultimate problem which
sits between chair and keyboard, viz. that the risk analysis
done by the user of any given application is believed by many smart
people to be done in an ill-informed or lazy fashion, or both.
This isn't a good reason to ban middleboxes (or users) from the 
Internet's architecture.

|  >>	demonstrate the greater utility for the greater number of people
| IP does this (conter to your stat mux point)

conter?  Uhm, we are probably in agreement on most of this.
Packets are great things, but they'd been done before (otherwise
why would Padlipsky have railed at X.25 so much?).
IP is a really simple service -- so simple, that few people
actually use raw IP datagrams to do much more than carry
TCP around anyway, other than talking to DNS servers, 
doing the occasional bit of multicast, and reinventing
most of the service TCP provides.   IP's value isn't
in statmux gain, but rather in removing the need to
negotiate and renegotiate paths across lots of intermediate
systems controlled by different parties.

Lots of the stuff in the end2end argument (duplicate suppression,
error recovery, and delivery acknowledgement -- the whole
tower of "at-most-once" service) got pulled into TCP, along
with congestion avoidance.   The congestion avoidance in
a primarily TCP/RFC2001 network, even with hordes of mice, is so
much better than the congestion control mechanisms done
in other packet-based networks, that bottlenecks can
retain stupid FIFO behaviour across 5 orders of magnitude 
worth of bottleneck bandwidth under almost any traffic
pattern TCP senders in aggregate can manage.   All this has
made the Internet useful at reasonable cost, and has allowed
bottlenecks to be individually handled (upgrade, leave alone, 
whatever), by eliminating the *need* to coordinate bottleneck
resources among intermediate systems.   If congestion
avoiding senders were in the minority, this aspect of
the Internet would almost certainly be radically different.
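
For concreteness, here is a minimal sketch (in Python, with
illustrative constants and a simplified loss signal -- not a full
TCP) of the RFC2001 sender behaviour I mean:

MSS = 1460.0  # sender maximum segment size, in bytes

def on_ack(cwnd, ssthresh):
    """Grow cwnd per returning ACK: exponentially below ssthresh
    (slow start), roughly one MSS per round trip above it
    (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd + MSS               # slow start: +1 MSS per ACK
    return cwnd + MSS * MSS / cwnd      # avoidance: ~+1 MSS per RTT

def on_timeout(cwnd):
    """On loss detected by timeout, remember half the flight size
    as the new ssthresh and restart from one segment."""
    ssthresh = max(cwnd / 2.0, 2.0 * MSS)
    return MSS, ssthresh                # (new cwnd, new ssthresh)

The point is the asymmetry: additive growth, multiplicative
backoff.  That asymmetry is what lets dumb FIFO bottlenecks
survive aggregates of these senders.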

The congestion-avoidance-specific bottleneck optimizations people
here have proposed have demonstrated improved link power with fairly
simple traffic-ignorant tweaks to the FIFO algorithm (RED) or some
less-simple ones (MDRR, SFQ, ...).  These do not help sufficiently
in the face of hugely aggressive senders, particularly when there
are lots of them.  
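
For the curious, a hedged sketch of the RED idea just mentioned
(parameter values are illustrative, and the original paper's
count-based probability adjustment is omitted for brevity):

import random

W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5.0, 15.0, 0.1

avg = 0.0  # EWMA of instantaneous queue depth, in packets

def red_drop(queue_len):
    """Return True if the arriving packet should be dropped."""
    global avg
    avg = (1.0 - W_Q) * avg + W_Q * queue_len
    if avg < MIN_TH:
        return False                    # short queue: never drop
    if avg >= MAX_TH:
        return True                     # long queue: always drop
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p          # in between: probabilistic drop

Note how traffic-ignorant this is: it only works because the senders
it signals are congestion-avoiding, which is exactly why it fails
against hugely aggressive senders.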

Now imagine if DDoS-style traffic implosions were considered normal
rather than attacks.   That is, throw away congestion avoidance
and put a bunch of hungry eyeballs behind a bottleneck, and
consider it normal that the senders would compete in whatever
fashion necessary to maximize only their own consumption of
the bottleneck's resources.

I'd bet that smaller internal bottlenecks would be protected
by devices which did such things as application-specific
flow control, up to outright termination and regeneration of
application data.   (It would have been interesting to see
what would happen if the {X.25,FR,ATM}-congestion-controlled-VC-to-
the-desktop crowd had much larger amounts of traffic, just to see
the responses to the selection pressure of hungry Internet
users...).

So, the interesting thing about the whole middlebox shouting
match is that it's not about whether bottleneck bandwidth
is usable, but rather about how a given point in the graph
happens to be used.   That is, middleboxes are usually argued
about in terms of policy mechanisms informing the fundamental
drop/no-drop decision alone, with lots of input parameters
going way beyond the IP header information.
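
A caricature of such a decision, in Python -- every field and the
policy itself are hypothetical assumptions for illustration, not any
real product's behaviour:

from dataclasses import dataclass

@dataclass
class PacketContext:
    src: str             # from the IP header...
    dst: str
    dscp: int
    app_proto: str       # ...and well beyond it:
    user: str            # authenticated identity, if any
    payload_ok: bool     # e.g. verdict of an application-layer scan

def should_drop(ctx, blocked_apps):
    """The fundamental drop/no-drop decision, fed by policy inputs."""
    if not ctx.payload_ok:
        return True
    if ctx.app_proto in blocked_apps:
        return True
    return False

ctx = PacketContext("10.0.0.7", "192.0.2.1", 0, "bittorrent",
                    "alice", True)
print(should_drop(ctx, {"bittorrent"}))   # True: app blocked by policy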

| imposition? no-one said much about imposition.....we're just debating
| a point - the only imposition i find is ports being arbitrarily
| blocked by annoying ISPs:-)

I think the well-known-ports decision to fix particular
services at particular TCP & UDP ports was a bad one.
I'd really really really like to see this go away
for lots of reasons, and an indirection is not the
hardest thing to do.  Something like adding an FQDN
argument to the traditional getservbyname call,
resulting in a set of lookups along the lines of:

	example.com -> MX 1 mailrelay.example.com
	mailrelay.example.com -> A 10.0.0.26
	smtp.mailrelay.example.com -> PORT TCP 25

would get rid of this complaint of yours, and would
make simple NAT easier, along with several other things
which vary a lot in perceived wholesomeness from person
to person.  Yay, indirection.
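
To make the indirection concrete: no PORT record type exists in real
DNS (SRV is the nearest deployed relative), so this toy Python
resolver fakes the zone data with a dictionary purely to show the
chain of lookups:

FAKE_ZONE = {
    ("example.com", "MX"): "mailrelay.example.com",
    ("mailrelay.example.com", "A"): "10.0.0.26",
    ("smtp.mailrelay.example.com", "PORT"): ("tcp", 25),
}

def getservbyname_fqdn(service, domain):
    """Resolve (address, protocol, port) for a service at a domain,
    following the three lookups sketched above."""
    host = FAKE_ZONE[(domain, "MX")]          # domain -> service host
    addr = FAKE_ZONE[(host, "A")]             # host -> address
    proto, port = FAKE_ZONE[(service + "." + host, "PORT")]
    return addr, proto, port

print(getservbyname_fqdn("smtp", "example.com"))
# -> ('10.0.0.26', 'tcp', 25)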

| I completely agree

And I completely agree with your private aside
that this research would be less CS than socio-psychology,
although I see that as a golden opportunity to breed in
the cross-disciplinary genes that the I* bodies strike me
as missing...

| by the way, here's a gedanken experiment:
| if middleboxes work well at the IP level, they ought to be a good idea
| at the HTTP/URL level. install one, and
| now try to build a search engine

Uhm, as in: given a URL http://www.internal.example.com/foobarbletch/,
something between the HTTP client and the HTTP server decides
whether the connection should take place?  That makes for a pair of
authorization decisions (one on the server, one in the middlebox).

Search engines *already* give back URLs which are perfectly valid
pointers to things that my browser cannot retrieve because of
authorization issues.

While the "if you don't know the name, you can't curse it"
philosophy drives the NAT-as-security-gateway approach,
the reverse is not true.  Even knowing the name of
something (whether it's a network-layer locator, or a URL)
doesn't mean your curse will get through.  The argument
is where the good juju should be placed -- in the 
routing system (i.e., "victim unreachable"), in the
forwarding path (i.e., drop drop drop), in a middlebox,
or in the ultimate target.  

The thing that interests me is how unacceptable
the answer "it depends" is to some people.

	Sean.



