[e2e] Is a non-TCP solution dead?

Mark Handley mjh at icir.org
Mon Mar 31 16:10:00 PST 2003


>If I had to choose between
>i)  optimise a L2 protocol for a particular transport and
>ii) optimise a transport protocol in order to cope with different L2
>protocols (not simultaneously)
>
>I would almost instinctively choose option ii)  (probably because I have
>convinced myself that if it is e2e it must be good :-)
>without suggesting that L2 protocols should not be "well-designed" (whatever
>this means)

It's not that e2e must be good - it's an evolvability issue.

There are a vast number of end-systems out there.  To a first
approximation, they all speak more or less the same end-to-end
transport protocols, and this is necessary for interoperability.  For
this reason, plus the more recent proliferation of firewalls, NATs,
etc, it's likely that the number of end-to-end transport protocols
will remain small, with most of the service evolution happening above
the transport layer.  While transport protocol evolution does happen,
I'd bet money on the set of widely used transport protocols being
similar in ten years' time.

There are many different link technologies out there.  Almost none of
them are the same link technologies that were around ten years ago.

Optimizing transport protocols for particular link technologies may
seem a good thing in the short term, but it harms the future.  It's
hard enough to get TCP right without link-layer dependencies in there.
And it's harder still if you have to optimize for arbitrary
concatenated sequences of link-layers.


On the other hand, if we can identify useful commonality between
link-layers, and if we can pass up hints to the transport to make
things work better without sacrificing generality or security, then
this seems a reasonable thing to me.  This has, however, proved rather
difficult every time it's been raised before.
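
To make that concrete, here is a purely illustrative sketch, with
invented names, of the sort of technology-neutral hint a link layer
might pass up to the transport (an assumption for discussion, not
anything proposed here):

    /* Illustrative only: a hypothetical, technology-neutral hint a
     * link layer could pass up to the transport.  The transport
     * treats it as advisory and ignores types it doesn't know. */
    #include <stdint.h>

    enum ll_hint_type {
        LL_HINT_NONE = 0,
        LL_HINT_CORRUPTION_LOSS, /* loss looked like bit errors, not congestion */
        LL_HINT_LINK_DOWN,       /* temporary outage, e.g. during handover */
        LL_HINT_RATE_CHANGE      /* nominal link rate has changed */
    };

    struct ll_hint {
        enum ll_hint_type type;
        uint32_t          value; /* interpretation depends on type,
                                    e.g. new rate in kbit/s */
    };

The point is that such a hint would describe an effect common to many
link technologies rather than the internals of any one of them, and the
transport would treat it as advisory: unknown types are ignored, and
nothing security-critical is decided on the strength of a hint alone.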

 - Mark



