[e2e] MTU - IP layer
touch at ISI.EDU
Fri Apr 22 15:53:45 PDT 2005
David P. Reed wrote:
> As a pragmatic architect, it seems to me that pmtud is focusing on
> micro-optimizing whatever problem turns out to be their motivating
> problem (FTP, I suspect), and worse yet, binding in narrow assumptions
> about the underlying *inter* network architecture (like the idea that
> there is one path, it is slowly changing, and that packet structure is
> preserved on the path, rather than being tunable to manage
> latency/jitter). We'll never be able to exploit concurrency in the
> transport or link layers if we continue binding highly specific low
> level assumptions into highlevel protocols (also known as optimizing for
> the narrow domain of the present).
FWIW, I agree completely. Much as the 'positive feedback of positive
evidence' variant of the new version of pmtud is a step in the right
direction, I disagree with the way the current proposal is entangled
with the transport layer. I would be more comfortable if it were just
part of the network layer - where the path necessarily lies - and where
current PMTUD is basically implemented.
> So I offer this as a suggestion...
> It would seem to me that a small-packet network is free to implement
> large packets by intra-AS fragmentation and reassembly, for example.
> The objection to same was that reassembly was hard if packets took
> different paths. But the PMTUD model implies they *Don't*! Reductio
> ad absurdum. So a much more practical separation of concerns would be
> to use a small number of end-to-end maximum packet sizes, and perhaps a
> notion of a much simpler f/r. To cope with the long-term trend towards
> supporting larger and larger end-to-end datagrams, why not allow any
> size datagram, but cut it only on power-of-2 or power-of-4 boundaries
> (like the old "buddy" memory allocator, which simplified the reassembly
> of "free blocks").
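Reed's buddy-allocator analogy can be made concrete with a small sketch: cut a datagram greedily into power-of-2 pieces, largest first, capped at the path's fragment limit. The function name, parameters, and the greedy split strategy are mine, not from the thread; a real design would also carry headers and alignment rules. One nice property falls out for free: every fragment's offset is a multiple of its size, so fragments pair back up exactly like free blocks in a buddy allocator.

```python
def pow2_fragments(length, max_frag):
    """Split `length` bytes into fragments whose sizes are powers of two,
    each no larger than max_frag (rounded down to a power of two).
    Greedy largest-first splitting guarantees each fragment's offset is
    a multiple of its size -- the buddy-allocator property that makes
    reassembly (coalescing) simple at any point along the path."""
    # round the fragment cap down to a power of two
    cap = 1
    while cap * 2 <= max_frag:
        cap *= 2
    frags = []          # list of (offset, size) pairs
    offset, remaining = 0, length
    while remaining > 0:
        # largest power-of-two piece that still fits
        size = cap
        while size > remaining:
            size //= 2
        frags.append((offset, size))
        offset += size
        remaining -= size
    return frags
```

For example, a 1500-byte datagram over a 576-byte limit splits into 512+512+256+128+64+16+8+4, each piece aligned to its own size.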
> Let reassembly occur wherever it is possible to do so (worst case at
> the target). Make the end-to-end error check/error correct more robust
> (perhaps an adapted erasure code implemented at the endpoint would be
> effective at reducing round-trip overhead for fragment recovery).
I agree that the basic idea should be that layers need not be aware of
each other beyond their direct interface - IP fragments based on the
link MTU, and TCP segments based only on IP datagram limits, not on how
IP fragments.
PMTUD is just an optimization, and it should never be the case that an
optimization disables functionality (as with the black holes caused by
current PMTUD's reliance on negative information - ICMP errors that may
never arrive).
One of the problems is that the optimization turns out to be
significant. The unit of loss in the network is an IP fragment, but the
unit of congestion control is a TCP MSS. When the two aren't the same,
things don't work as expected.
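The mismatch is easy to quantify under a simple independence assumption (my numbers, for illustration only): if a TCP segment is carried as k IP fragments and the network drops each fragment independently with probability p, the whole segment - the unit TCP's congestion control reacts to - is lost with probability 1-(1-p)^k.

```python
def segment_loss_prob(p_frag_loss, n_fragments):
    """Probability an entire TCP segment is lost (forcing a full
    retransmission and a congestion response) when any one of its IP
    fragments may be dropped, each independently with p_frag_loss."""
    return 1.0 - (1.0 - p_frag_loss) ** n_fragments

# e.g. a 1% per-fragment drop rate hits a 4-fragment segment
# roughly four times as hard as an unfragmented one
```

So a modest fragment drop rate is amplified into a much larger segment loss rate, which is exactly the "don't work as expected" behavior above.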
PMTUD (old or new) -does- move the work to the endpoints; new PMTUD even
more so, because it doesn't rely on ICMP errors from inside the network
but rather E2E feedback. Why isn't that consistent with the E2E
principle of making the network simpler while moving the work to the
endpoints?
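The positive-feedback style of probing can be sketched as a binary search over probe sizes, raising the floor only on end-to-end confirmation. This is my own minimal sketch, not the proposal's algorithm; `path_delivers` is a hypothetical stand-in for sending a probe and seeing it acknowledged at the transport layer, and the 1280/9000 bounds are illustrative defaults.

```python
def probe_pmtu(path_delivers, lo=1280, hi=9000):
    """Binary-search the largest probe size the path confirms end to
    end.  Only positive evidence (a delivered, acknowledged probe)
    raises the estimate; no ICMP from inside the network is consulted,
    so a silently dropped probe cannot black-hole the connection."""
    if not path_delivers(lo):
        return None          # even the floor fails; caller must handle
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if path_delivers(mid):
            lo = mid         # positive evidence: raise the floor
        else:
            hi = mid - 1     # probe lost or too big: lower the ceiling
    return lo
```

A failed probe here costs only a retransmission at a smaller size, never a stalled connection - which is the whole point of preferring positive evidence.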