[e2e] UDP length field

David P. Reed dpreed at reed.com
Wed Apr 11 10:38:53 PDT 2001


A reasonable question.  The main reason, as I recall from the design work 
around the TCP/IP split decision, was that IP fragmentation makes it 
difficult to trust that the length of the underlying datagram was not 
corrupted.  If you did use the IP length to carry e2e length data, it 
would have to be part of the virtual (pseudo-) header covered by the 
checksum.
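Reed's point about covering the length with the checksum is in fact what RFC 768 does: the pseudo-header prepended for checksum computation repeats the source and destination addresses, the protocol number, and the UDP length, so a corrupted length shows up as a checksum failure at the receiver.  A minimal sketch (function names are mine, not from any standard library):

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit words with end-around carry (one's complement arithmetic)."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over the RFC 768 pseudo-header plus the UDP segment.

    The UDP length is covered twice: once inside the UDP header itself and
    once in the pseudo-header, so a corrupted length fails the check e2e.
    """
    pseudo = struct.pack("!4s4sBBH", src_ip, dst_ip, 0, 17, len(udp_segment))
    checksum = ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF
    return checksum or 0xFFFF                     # 0 is sent as all ones in UDP

# Example: a UDP header (ports, length, checksum field zeroed) plus 5 data bytes
src, dst = bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2])
segment = struct.pack("!HHHH", 1234, 5678, 8 + 5, 0) + b"hello"
print(hex(udp_checksum(src, dst, segment)))
```

The receiver verifies by summing the pseudo-header and the segment with the checksum field filled in; the one's complement sum must come out all ones.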

In any case, if the UDP or TCP length isn't less than or equal to the 
reconstructed IP datagram size, the datagram should be discarded by the UDP 
or TCP layer (the IP layer, of course, can't do this: it does not know the 
encodings of upper-level protocols).
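That discard rule can be sketched as a small receiver-side check (a hypothetical helper; `validate_udp_length` and its interface are my names, assuming the IP layer hands up the reassembled datagram payload):

```python
from typing import Optional
import struct

def validate_udp_length(ip_payload: bytes) -> Optional[bytes]:
    """Return the UDP data, or None if the segment must be discarded.

    ip_payload is the (possibly reassembled) payload the IP layer hands up;
    its size is the length that IP says was actually delivered.
    """
    if len(ip_payload) < 8:                       # shorter than a UDP header
        return None
    udp_len = struct.unpack("!H", ip_payload[4:6])[0]
    if udp_len < 8 or udp_len > len(ip_payload):  # length field can't be right
        return None                               # discard at the UDP layer
    return ip_payload[8:udp_len]                  # trailing padding is ignored
```

Note the asymmetry: a UDP length *smaller* than what IP delivered is legal (the excess is padding and is dropped), while a UDP length *larger* than the delivered datagram is impossible and forces a discard.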

Other reasons probably include padding out IP datagrams to mod-4 
boundaries, and higher-level protocols that transmit data units on bit 
boundaries rather than octet boundaries (we had machines with 7- and 9-bit 
bytes back then).


At 12:16 PM 4/11/01 -0400, Hari Balakrishnan wrote:

>One of my visiting students, Lars-Ake Larzon (who is involved with the 
>UDP-lite proposal), asked me a question that stumped me:
>
>         Why does UDP have a length field?
>
>His reasoning, which is reasonable, is that this information adds no value 
>and only serves to cause confusion when the IP and UDP length fields mismatch.
>
>         Does anyone have data on how often the two length fields mismatch 
>in real life?
>
>Finally, the RFCs seem silent on the following question:
>
>         What should an end-receiver do when the lengths mismatch?  Is 
>this up to the implementation?
>
>I'm sure someone on this list knows the answers.  And I apologize if this has
>been discussed on the list before; I did a very cursory search on the archive
>and couldn't find anything relevant.
>
>Thanks,
>Hari




More information about the end2end-interest mailing list