[e2e] why fair sharing? ( Are we doing sliding window in the Internet?)

Vadim Antonov avg at kotovnik.com
Fri Jan 12 17:38:58 PST 2007

On Fri, 12 Jan 2007, Sergey Gorinsky wrote:

>   Vadim,
> > How hard is it to turn the Fair Queueing knob to "on" on the gateways?
>   To put my 2 kopecks in... First, since an application can masquerade as 
> multiple flows, fairness enforcement with FQ is not effective.

Doing FQ on src/dst address pairs (rather than on address+port flows)
would in any case be a lot better than TCP's per-flow fairness.
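
The point above - keying fairness on host pairs so that opening many parallel flows gains nothing - can be sketched with deficit round robin. This is a minimal illustration, not anyone's actual gateway code; the class name, the 1500-byte quantum, and the simplified one-packet-per-visit service are all my assumptions.

```python
from collections import defaultdict, deque

QUANTUM = 1500  # bytes of credit granted per visit to a backlogged pair (assumed)

class HostPairDRR:
    """Deficit-round-robin fair queueing keyed on (src, dst) addresses,
    ignoring ports, so masquerading as many flows buys no extra share."""
    def __init__(self):
        self.queues = defaultdict(deque)   # (src, dst) -> queued packets
        self.deficit = defaultdict(int)    # (src, dst) -> byte credit
        self.active = deque()              # round-robin order of backlogged pairs

    def enqueue(self, src, dst, packet):
        key = (src, dst)
        if not self.queues[key]:
            self.active.append(key)        # pair just became backlogged
        self.queues[key].append(packet)

    def dequeue(self):
        """Return the next packet to transmit, or None if all queues are empty."""
        while self.active:
            key = self.active.popleft()
            self.deficit[key] += QUANTUM
            q = self.queues[key]
            pkt = None
            if q and len(q[0]) <= self.deficit[key]:
                pkt = q.popleft()
                self.deficit[key] -= len(pkt)
            if q:
                self.active.append(key)    # still backlogged: stays in the round
            else:
                self.deficit[key] = 0      # idle pairs don't bank credit
            if pkt is not None:
                return pkt
        return None
```

Two pairs with equal-sized backlogs get served alternately no matter how many TCP "flows" either of them is split into.
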

> To lend 
> itself to meaningful enforcement, fairness should be defined not in terms 
> of flows or even hosts/processes generating them. Instead, fairness 
> should be linked to humans behind the communications but this requires 
> a very different network architecture.  

That is pretty much what I'm saying. Fairness is an economic concept, not
a technical one.  Basically, I'd venture to guess that the share of network
capacity allocated should be roughly proportional to the payments.
That means the routing system should be augmented with some way to announce
weights for the fairness enforcement.

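
"Share proportional to payment" is just a weighted split of link capacity. A trivial sketch of what an announced weight would buy (the function name, customer names, and numbers are made up for illustration):

```python
def capacity_shares(link_capacity_bps, payments):
    """Split link capacity in proportion to each customer's payment -
    the 'weight' that the routing system would have to carry."""
    total = sum(payments.values())
    return {cust: link_capacity_bps * pay / total
            for cust, pay in payments.items()}

# Hypothetical customers paying in a 50/30/20 ratio on a 1 Gb/s link:
shares = capacity_shares(1_000_000_000, {"isp_a": 50, "isp_b": 30, "isp_c": 20})
```

The enforcement (e.g. weighted FQ with these numbers as per-class weights) is a separate problem; this only shows what the announced weights mean.
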
>  Second, packet-by-packet FQ and end-to-end TCP strive to approximate 
> instantaneous PS (Processor Sharing) which is not a good fit for any 
> natural application. Multimedia streams need a minimal rate, not a fair 
> share.

This is a common misconception. Multimedia streams are either
pre-recorded, lag-insensitive content, in which case they are, basically,
file transfers (that accounts for 99% of the "streams", incidentally); or
real-time content, which is quite elastic in its bandwidth requirements -
especially video (audio bandwidth is not an issue nowadays, anyway). You
can reduce the frame rate, reduce the color & luminosity bit depth, reduce
the horizontal & vertical resolution, or just increase compression - for a
TV-quality stream that yields about two orders of magnitude of acceptable
degradation, bandwidth-wise.  This is more than you can typically get from
TCP congestion control, and more than the common bandwidth oversubscription
ratio.
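
The "two orders of magnitude" claim is easy to sanity-check with back-of-the-envelope numbers. The figures below are my own rough assumptions (PAL-ish TV baseline, plausible degradation factors), not measurements from the post:

```python
# Rough, illustrative numbers only.
baseline = {
    "frame_rate": 25,       # fps, TV-quality
    "width": 720,
    "height": 576,
    "bits_per_pixel": 0.2,  # effective bits/pixel after baseline compression
}

degraded = {
    "frame_rate": 10,       # lower frame rate          (/2.5)
    "width": 180,           # quarter each dimension    (/16 in pixels)
    "height": 144,
    "bits_per_pixel": 0.08, # harsher compression       (/2.5)
}

def bitrate(p):
    """Rough stream bitrate in bits/s: pixels per frame x fps x bits/pixel."""
    return p["frame_rate"] * p["width"] * p["height"] * p["bits_per_pixel"]

print(round(bitrate(baseline) / bitrate(degraded)))  # -> 100
```

Stacking modest reductions multiplies out to roughly a 100x range of acceptable operating points.
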

What can be done for real-time streams is deadline scheduling on the
output queues - tossing away packets which are past their deadline.  That'd
require accurate timing (millisecond resolution) on gateways, but it's
quite feasible.
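
An earliest-deadline-first output queue with expiry-on-dequeue is only a few lines. A minimal sketch (class and method names are mine; a real gateway would do this per output interface):

```python
import heapq
import time

class DeadlineQueue:
    """Output queue that serves the most urgent packet first and
    silently tosses packets whose deadline has already passed."""
    def __init__(self):
        self._heap = []   # (deadline, seq, packet); seq breaks ties stably
        self._seq = 0

    def enqueue(self, packet, deadline):
        heapq.heappush(self._heap, (deadline, self._seq, packet))
        self._seq += 1

    def dequeue(self, now=None):
        """Pop the earliest-deadline packet still worth sending, or None."""
        if now is None:
            now = time.monotonic()
        while self._heap:
            deadline, _, packet = heapq.heappop(self._heap)
            if deadline >= now:      # still in time: transmit it
                return packet
            # past deadline: drop - delivering it late only wastes bandwidth
        return None
```

Dropping expired packets at the bottleneck is exactly the behavior argued for above: late real-time data has no value, so don't forward it.
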

Instead of the bandwidth reservation nonsense (which screws up dynamic
routing) we'd be much better served by the introduction of a
millisecond-resolution TTL field. Or simply by adding a bit which changes
the meaning of the TTL field from hops/seconds to milliseconds (255 ms one
way should be enough, I guess, at least on this planet :) - it would also
be backwards compatible with the existing equipment.
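
Per-hop handling of such a millisecond TTL would look much like today's hop-count decrement, just charged in time. A sketch under my own assumptions (the function, the 1 ms default link delay, and the "burn at least 1 ms per hop" rule mirroring classic TTL are all illustrative):

```python
def forward(ms_ttl, queueing_delay_ms, link_delay_ms=1):
    """Per-hop handling of a hypothetical millisecond-resolution TTL:
    charge the packet for time spent at this hop, drop it once the
    budget is exhausted. Returns the remaining TTL, or None if expired."""
    # Burn at least 1 ms per hop, analogous to the hop-count TTL rule.
    spent = max(1, round(queueing_delay_ms + link_delay_ms))
    remaining = ms_ttl - spent
    if remaining <= 0:
        return None   # expired in transit: drop rather than deliver late
    return remaining

# A packet starting with the maximum 255 ms budget crossing three hops:
ttl = 255
for delay in (3, 40, 10):      # per-hop queueing delays in ms (made up)
    ttl = forward(ttl, delay)
# ttl is now 255 - 4 - 41 - 11 = 199
```

A congested hop eats the budget quickly, so hopelessly delayed packets die in the network instead of arriving useless.
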

> Elastic applications are not well served by PS either, because 
> average message delay is much larger than under SRPT (Shortest Remaining 
> Processing Time), in agreement with Internet experience, where deviations 
> from short-term fair sharing improve overall efficiency.

Yep. But you still need to enforce fairness between *users*.  So it must
be some combination of FQ across end-points and deadline-based packet
ordering and dropping.

Thanks for the references!

I'm not insisting that FQ is the best way to do things - it's just that
it is already implemented and addresses the most obvious problems: short
sessions, parallel-session cheats, point-origin flooding, etc. - including
overly aggressive or poorly tested TCP stacks, which was the point of the
original discussion.

More information about the end2end-interest mailing list