[e2e] How many transmission attempts should be done on wireless networks?

David P. Reed dpreed at reed.com
Fri Sep 18 08:38:50 PDT 2009

To me it comes down to a simple thing:

End-to-end packet latency matters to the higher layers,

Controlling the combined incoming rate of packets must be done at the 
endpoints, since they are the only place that can back off,

Internal bottlenecks cause large queueing delays to build up, and absent 
packet drops, those queues will drain only as fast as the outgoing links 
that service them can carry packets,

The *application* control loops that manage application-layer backoff 
share the same paths, and hence the same queueing delays, as all the 
rest of the packets. (This could be ameliorated by a separate flow 
control channel, but a wide range of applications would prefer to back 
off intelligently - e.g. use lower-rate codecs, or decide to go drink 
coffee and come back when the net is more lightly loaded - so the 
control channel cannot be based on some fixed theory that hides 
control from apps.)
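The endpoint backoff dynamic can be sketched in a few lines. This is a toy AIMD (additive-increase, multiplicative-decrease) loop of my own, not any particular TCP: two senders share a bottleneck, probe upward additively, and back off multiplicatively when the bottleneck signals overload by dropping. Only the endpoints adjust; the bottleneck just drops.

```python
# Toy AIMD sketch (illustration only, not from the post): two endpoint
# senders share a bottleneck of capacity CAPACITY. Each round they raise
# their rate additively; when the combined rate exceeds capacity (a drop
# signal), both halve. The endpoints are the only place that backs off.

CAPACITY = 100.0

def aimd(rounds=200, increase=1.0, decrease=0.5):
    rates = [10.0, 60.0]                # deliberately unfair starting rates
    for _ in range(rounds):
        if sum(rates) > CAPACITY:       # bottleneck overloaded: drop signal
            rates = [r * decrease for r in rates]   # multiplicative back-off
        else:
            rates = [r + increase for r in rates]   # additive probe upward
    return rates

final = aimd()
print(final)    # the two rates converge toward a fair share of CAPACITY
```

Note the classic AIMD property: additive increase preserves the gap between the two senders while multiplicative decrease halves it, so repeated congestion cycles drive the rates toward fairness.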

In any case, in today's TCP, we should be seeking Kleinrock's optimum 
steady state - a pipelining state such that no router *ever* has more 
than one packet in each outgoing queue (on average). On links that are 
not bottlenecks, the average is << 1 packet in the outgoing queue; on 
links that are bottlenecks, the combined flows traversing the link are 
such that the average queue length is 1 packet.

This is the "minimal latency" state that sustains the maximum overall 
achievable throughput of a steady-state network - analogous to a single 
queue being "double buffered", where a new packet arrives just as the 
old one completes service.
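A toy discrete-time queue (my own sketch, not Kleinrock's model) makes the bottleneck point concrete: when arrivals are paced to the service rate, the queue stays bounded and tiny; any sustained excess makes the queue - and hence latency - grow without bound until something drops.

```python
# Minimal discrete-time bottleneck sketch (illustration only). One link
# serves a fixed number of packets per tick. Paced arrivals keep the queue
# near zero; a 2x overload makes the backlog grow linearly with time.

def avg_queue(arrivals_per_tick, ticks=10_000, service_per_tick=1):
    q = 0
    total = 0
    for _ in range(ticks):
        q += arrivals_per_tick            # packets arriving this tick
        q = max(0, q - service_per_tick)  # bottleneck serves at a fixed rate
        total += q                        # accumulate end-of-tick queue length
    return total / ticks

print(avg_queue(1))   # paced to the link: average queue stays << 1
print(avg_queue(2))   # sustained 2x overload: queue grows without bound
```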

Because the control loops at the endpoints cannot measure without long 
delay, and because flows enter and exit and other non-steady-state 
things happen, we can in practice tolerate average queue lengths of a 
few packets (2-3), in the service of keeping throughput up in the worst 
case without resonances and other problems that stem from control 
instabilities.

It's not the size of the buffers that matters - bigger buffers merely 
absorb transients better. What matters is dropping packets feeding into 
a bottleneck aggressively enough to keep the outgoing queue drained, so 
it doesn't build up a sustained backlog of obsolete packets.
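One way to see the contrast is a sketch of my own (thresholds and names hypothetical, not any standardized AQM scheme): a bottleneck that drops arrivals once its outgoing queue exceeds a small threshold keeps queueing delay bounded, while a deep tail-drop buffer under the same overload just accumulates a standing backlog of stale packets.

```python
# Hedged sketch (illustration only): compare a small early-drop threshold
# against a deep tail-drop buffer at an overloaded bottleneck. Both carry
# the same goodput (one packet per tick); only the standing queue differs.

def run(buffer_limit, arrivals_per_tick=2, service_per_tick=1, ticks=1000):
    q = 0
    drops = 0
    total_q = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if q < buffer_limit:
                q += 1          # enqueue while under the limit
            else:
                drops += 1      # drop early: the loss signals the endpoints
        q = max(0, q - service_per_tick)
        total_q += q            # accumulate end-of-tick queue length
    return total_q / ticks, drops

print(run(buffer_limit=3))     # small limit: average queue stays ~2
print(run(buffer_limit=1000))  # deep buffer: standing backlog ~500 deep
```

The deep buffer drops almost nothing yet delivers no more throughput - it only converts the overload into persistent queueing delay.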

On 09/18/2009 09:59 AM, Detlef Bosau wrote:
> Hi to all,
> this debate appears here in the list now and then ;-) and, to my 
> understanding, the positions here are quite extreme.
> On the one hand, there are some standards that make a "heroic effort" 
> (credits to DPR ;-)) and allow configuring an SDU corruption ratio as 
> low as 10⁻⁹.
> On the other hand, there is e.g. DPR, who advocates hardly any effort 
> and says there should typically be no more than three transmission 
> attempts.
> I would like to understand these positions a bit better than I do 
> now, and perhaps there are some strong facts that support one view or 
> the other.
> Detlef