[e2e] MTU - IP layer
David P. Reed
dpreed at reed.com
Fri Apr 22 06:54:09 PDT 2005
As a pragmatic architect, it seems to me that the pmtud effort is
focused on micro-optimizing whatever turns out to be its motivating
problem (FTP, I suspect), and worse yet, on binding in narrow assumptions
about the underlying *inter*network architecture (like the idea that
there is one path, that it changes slowly, and that packet structure is
preserved on the path rather than being tunable to manage
latency/jitter). We'll never be able to exploit concurrency in the
transport or link layers if we continue binding highly specific
low-level assumptions into high-level protocols (also known as
optimizing for the narrow domain of the present). So I offer this as a
suggestion...
It would seem to me that a small-packet network is free to implement
large packets by intra-AS fragmentation and reassembly, for example.
The objection to this was that reassembly was hard if packets took
different paths. But the PMTUD model implies they *don't*! Reductio
ad absurdum. So a much more practical separation of concerns would be
to use a small number of end-to-end maximum packet sizes, and perhaps
a notion of much simpler fragmentation/reassembly (f/r). To cope with
the long-term trend toward supporting larger and larger end-to-end
datagrams, why not allow any size datagram, but cut it only on
power-of-2 or power-of-4 boundaries (like the old "buddy" memory
allocator, which simplified the reassembly of "free blocks")?
Let reassembly occur wherever it is possible to do so (worst case at
the target). Make the end-to-end error check/error correction more
robust (perhaps an adapted erasure code implemented at the endpoint
would be effective at reducing round-trip overhead for fragment
recovery).
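To make the erasure-code idea concrete, here is a minimal sketch (my
own illustration, not anything proposed in the post): a single XOR
parity fragment over k equal-length data fragments lets the endpoint
rebuild any one lost fragment locally, with no round trip to request
retransmission. Real erasure codes (e.g. Reed-Solomon) tolerate more
losses; XOR parity is just the simplest instance.

```python
def xor_parity(frags):
    """Parity fragment: byte-wise XOR of equal-length data fragments."""
    parity = bytearray(len(frags[0]))
    for f in frags:
        for i, b in enumerate(f):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild at most one missing fragment (marked None) from parity.

    XORing the parity with all surviving fragments cancels them out,
    leaving exactly the bytes of the single lost fragment.
    """
    missing = [i for i, f in enumerate(received) if f is None]
    assert len(missing) <= 1, "XOR parity recovers only one erasure"
    if missing:
        rebuilt = bytearray(parity)
        for f in received:
            if f is not None:
                for i, b in enumerate(f):
                    rebuilt[i] ^= b
        received[missing[0]] = bytes(rebuilt)
    return received
```

The point of doing this at the endpoint, per the end-to-end argument,
is that the network need only deliver "enough" fragments, not any
particular one.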
Note that this *does* follow the end-to-end principle, making the
network simple and moving the work to the endpoints, while allowing the
underlying network to be simply specified.
This is only a proposal, as usual, sent in hopes of inspiring useful
research by grad-student architects and thoughtful systems designers who
need to simplify complex tradeoffs. Perhaps cleaning up f/r is a lot
more useful than making the "perfect" pmtud algorithm and then ruling
out network innovations that can't support it.
In anticipation of the usual fiery reaction to end-to-end proposals
from the cross-layer optimizers (routerheads) on this list, I'd ask
those of you who are allergic to such solutions to spout your
annoyance at me directly, rather than doing a Cannara-like blast of rage
and annoyance at past injustices and current bêtes noires to the whole
list.