[e2e] purpose of pseudo header in TCP checksum

Vadim Antonov avg at kotovnik.com
Tue Feb 15 18:20:20 PST 2005


The pseudo-headers, essentially, guard against an end-system
implementation which would accept packets not addressed to it.

This is just one of a zillion ways in which any realistic OS can be broken
such that it corrupts data.  And, given the simplicity of comparing a
packet's destination address to the list of the host's addresses, it is
quite an unlikely bug.

On the other hand, pseudo-headers restrict the implementation of the 
packet layer by enforcing some form of an address (of course, they can be 
computed over 4-byte hashes of the actual addresses, but this is not how 
it is specified).
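
For reference, the pseudo-header is just a few fields borrowed from the IP
layer and prepended to the segment before the one's-complement sum is taken
(which is exactly the inter-layer dependency at issue).  A minimal sketch in
Python for the IPv4 case - the helper names are mine, not from any standard
library:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header followed by the TCP segment."""
    pseudo = struct.pack("!4s4sBBH",
                         src_ip, dst_ip,
                         0,                   # zero byte
                         6,                   # protocol number for TCP
                         len(segment))        # TCP length (header + data)
    return internet_checksum(pseudo + segment)
```

Note that changing either IP address changes the checksum - which is the
whole (questionable) point, and also exactly why a NAT has to fix up TCP
checksums after rewriting addresses.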

Simply put, this is a non-solution to a non-existent problem, at the
cost of an unnecessary inter-layer dependency.

Useful security cannot be implemented at the IP or TCP layer, because
security is only as good as its weakest point. Any strong security
requires entity authentication (which means, pretty much, authenticating a
physical user, a server process, and determination of trustworthiness of
an execution environment), access authorization (meaning determination of
which entity is allowed to do what), privacy and integrity protection
(encryption and message authentication), intrusion detection and reaction
capabilities, and key management.

Network layer security is not aware of users or servers, cannot do access
authorization on any useful granularity level, and does not do much for
intrusion detection and reaction.  It follows that it is quite weak.  I
wouldn't call IPsec, SSL/TLS, etc., "secure" - they can be used as rather
trivial parts of much larger security architectures, but they provide only
a false sense of security when deployed stand-alone.

These weaknesses necessitate the use of manually-configured firewalls and
other kludges, which are nice and dandy if you only want protection
against script kiddies.  However, most attacks doing any significant
damage come from inside of security perimeters.  Again, IPsec and all
those "secure" VPNs do, essentially, nothing to protect against those
attacks, and many sysadmins discovered that their salesdroids' "secure"
laptops with "secure" VPN connections to the office get infected with
viruses downloaded from the Internet by means of unprotected home DSL
connections and happily proceed to spread the infection all over their
"secure" LANs with quite depressing regularity.

Essentially, the only way to build a reasonably secure system - one with
the elusive but quite important property called "compartmentalization" - is
to do it at the application/middleware level, and to treat networks as
always insecure.

This also means that attempts to drag security down to the network layer
are, at best, misguided.

The real problem with NAT traversal by encrypted comms is not the fact
that some broken-as-designed protocols don't work; there are protocols
which do work fine no matter how you chop & mix the bytestream in the
intermediate systems.  The problem is in the application-level protocols
which embed
transport-level addresses (notoriously FTP, and most of the VoIP signaling
crap). There is no excuse for doing that.  The only place where such
embedding is unavoidable is the name resolution protocol - and it does not
need to be encrypted, since end-host authentication is required anyway to
protect against man-in-the-middle attacks.
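
FTP's PORT command is the canonical offender: the client's IP and port go
into the control channel as ASCII, so a NAT must parse and rewrite the
text - impossible once the channel is encrypted.  A sketch of the encoding
(per RFC 959; the helper name is mine):

```python
def ftp_port_argument(ip: str, port: int) -> str:
    """Encode an address the way FTP's PORT command does (RFC 959):
    four decimal octets of the IP, then the port split into two bytes."""
    h1, h2, h3, h4 = ip.split(".")
    return ",".join([h1, h2, h3, h4, str(port >> 8), str(port & 0xFF)])

# A client behind a NAT advertises its *private* address on the control
# channel; unless the NAT rewrites this string in flight, the server will
# try to connect to an unreachable host.
print("PORT " + ftp_port_argument("192.168.1.10", 50000))
```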

A useful transport-level design would provide a special zone in the
packets for clear-text transport addresses which would be understood and
translated by NATs as packets go through.
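
One hypothetical layout for such a zone - a fixed-format clear-text field
that sits outside the encrypted payload and outside any end-to-end
integrity check, so middleboxes are allowed to rewrite it (the format and
names here are mine, purely illustrative):

```python
import struct

# src IP, src port, dst IP, dst port - in the clear, NAT-rewritable.
ADDR_ZONE = struct.Struct("!4sH4sH")

def pack_zone(src_ip: bytes, src_port: int,
              dst_ip: bytes, dst_port: int) -> bytes:
    return ADDR_ZONE.pack(src_ip, src_port, dst_ip, dst_port)

def rewrite_src(zone: bytes, new_ip: bytes, new_port: int) -> bytes:
    """What a NAT would do in-flight: rewrite only the clear-text zone,
    leaving the encrypted payload untouched."""
    _, _, dst_ip, dst_port = ADDR_ZONE.unpack(zone)
    return ADDR_ZONE.pack(new_ip, new_port, dst_ip, dst_port)
```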

On Tue, 15 Feb 2005, Noel Chiappa wrote:

> Briefly, if we got off our butts and added another namespace to the
> overall architecture, one intended for end-end naming (instead of trying
> to leech off the internetwork layer's routing names - which was pointed
> out as a mistake by Jerry Saltzer over 20 years ago), and changed TCP to
> use that in the pseudo-header, NAT problems become non-existent (except
> when doing an ICP to a legacy host).

I'm not sure that an extra level of indirection is good, as things can be
done with just two levels (names and routing addresses) if network
services register & deregister themselves dynamically with the name
service. The way I see it, there is no need to have the fixed host (port) /
network (IP address) boundary at all (the fixed network/host address
boundaries are already history).

There's even no need for strong consistency in the distributed name
caching - if your connection to an address fails with a "no such
end-point" error from the remote host, you can simply tell your name
resolver to invalidate the cache entry and request up-to-date information
from the source.  It helps if addresses (including ports) are never reused
faster than name service cache entries expire.  Fixed port numbers for
WKSes are a kludge, and deserve to die (the leftmost component of a domain
name can always be treated as the service name, as people are apt to do
anyway, with the ubiquitous www.blah.com-s and mail.blah.com-s).
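
The invalidate-and-retry loop is simple enough to sketch.  Assuming a
"no such end-point" error can be distinguished from other failures (all
class and hook names below are mine, not any real API):

```python
class NoSuchEndpoint(Exception):
    """Remote host says the address no longer names a live end-point."""

class NameCache:
    def __init__(self, authoritative):
        self._authoritative = authoritative    # callable: name -> address
        self._cache = {}

    def resolve(self, name):
        if name not in self._cache:
            self._cache[name] = self._authoritative(name)
        return self._cache[name]

    def invalidate(self, name):
        self._cache.pop(name, None)

def connect(cache, name, dial):
    """Dial a cached address; on NoSuchEndpoint, invalidate the stale
    entry, re-resolve from the source, and retry once."""
    try:
        return dial(cache.resolve(name))
    except NoSuchEndpoint:
        cache.invalidate(name)
        return dial(cache.resolve(name))
```

No consistency protocol between caches is needed; stale entries are
corrected lazily, on first use.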

The dynamic service registration takes care of renumbering, too, thus
making address block ownership a non-issue.  It is not hard to implement, 
and is, in fact, much simpler than the existing kludge^H^H^H^H^H^HDNS 
infrastructure.
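
Under such a scheme, renumbering is nothing more than a deregister of the
old address and a register of the new one under the same name; clients
that re-resolve pick up the change automatically.  A toy sketch (names
hypothetical):

```python
class Registrar:
    """A toy name service that services update dynamically."""
    def __init__(self):
        self._table = {}                       # name -> set of addresses

    def register(self, name, addr):
        self._table.setdefault(name, set()).add(addr)

    def deregister(self, name, addr):
        self._table.get(name, set()).discard(addr)

    def lookup(self, name):
        return sorted(self._table.get(name, set()))
```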

--vadim


