[e2e] Once again PF scheduling and TCP

Detlef Bosau detlef.bosau at web.de
Fri Apr 6 17:34:42 PDT 2007


Hi.


To my understanding, there are basically two possible approaches to 
proportional fair scheduling in wireless networks, which differ in how 
the EWMA filter that estimates a flow's average throughput is updated.

The one approach updates the average based on the service rate a user is 
_offered_; the second approach updates the average based on the service 
rate a user actually _uses_, i.e. if the user does not get a time slot, 
the average is updated with 0.
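
To make sure we mean the same thing, here is a minimal sketch in Python 
of one scheduling slot with both update variants (names and parameters 
are my own, not taken from any particular paper):

    # Hypothetical sketch of one PF scheduling slot with the two EWMA
    # update variants described above.
    # rates[i] : rate user i's channel would support in this slot
    # R[i]     : EWMA throughput estimate for user i
    #            (assumed initialized to small positive values)
    # alpha    : filter constant, e.g. 1/1000

    def pf_slot(rates, R, alpha, variant=2):
        # PF criterion: serve the user with the largest rates[i] / R[i]
        chosen = max(range(len(rates)), key=lambda i: rates[i] / R[i])

        for i in range(len(rates)):
            if variant == 1:
                # variant 1: average the rate the user is _offered_
                sample = rates[i]
            else:
                # variant 2: average the rate the user actually _uses_,
                # i.e. 0 whenever the user does not get the slot
                sample = rates[i] if i == chosen else 0.0
            R[i] = (1 - alpha) * R[i] + alpha * sample

        return chosen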

As far as I can see, the second approach is the one widely used.

Does anyone happen to know the reason for this?

In addition, the constant alpha used in the EWMA filter R(i+1) = 
(1-alpha) * R(i) + alpha * r(i), where r(i) is the rate the user gets in 
slot i, is set to 1/1000 in all the papers I have read so far. I'm not 
quite sure whether this is due to some divine inspiration or whether 
1/1000 is some magic number ;-)

However, when alpha is set to 1, the average rate simply tracks the 
actual rate, and PF scheduling therefore has no effect.
On the other hand, when alpha is chosen extremely small, sudden changes 
in the channel quality can lead to large delay spikes for a user.
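
Just to quantify that (my own back-of-the-envelope reasoning; the 
1.67 ms slot duration is only an assumption, roughly HDR-like): a 
sample's weight in the filter decays like (1-alpha)^n, so the filter's 
memory is roughly 1/alpha slots:

    # Rough memory of the EWMA: a sample's weight after n slots is
    # alpha * (1 - alpha)**n, which drops to ~1/e after ~1/alpha slots.
    slot_duration = 1.67e-3          # seconds per slot (an assumption)

    for alpha in (1e-2, 1e-3, 1e-4):
        memory_slots = 1.0 / alpha
        print("alpha = %g : memory ~ %.0f slots ~ %.2f s"
              % (alpha, memory_slots, memory_slots * slot_duration))

If that reasoning is right, alpha = 1/10000 would give the scheduler a 
notion of "average" that reaches back more than ten seconds.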
In some simulations I ran during the last few days, I considered only 
two terminals attached to the same base station and used a constant 
BLER. In one simulation I started the TCP flows for both terminals 
synchronously, and in another with a temporal offset. When I started 
both flows with a large temporal gap, which may well occur in reality, 
and when I chose alpha very small (1/10000), the TCP flows did not 
achieve the same throughput in the long run, but converged to the same 
_cumulative_ _goodput_.
The flow that started earlier suffered large delay spikes when the 
second flow started, and even in the recovery phase both flows suffered 
larger delay spikes than with first come first serve scheduling.

I intentionally leave out all the simulation details here, because I'm 
primarily interested in two questions:

1.: Why is the second of the two update variants typically used? (Or am 
I wrong there?)
2.: What's the rationale for the constant alpha = 1/1000, which is 
obviously chosen without any consideration of bandwidth, round trip 
time, buffer capacity etc.?

It's interesting that, assuming I don't have a mistake in my 
simulations, in an environment where two users suffer from a constant 
_and_ _identical_ (!) error rate, proportional fair and first come 
first serve scheduling lead to dramatically different results. Ideally, 
both scheduling algorithms should behave identically under these 
circumstances. In particular, the notion of fairness appears to be 
different: FCFS leads to rate fairness, PF leads to fair cumulative 
goodput.

Detlef



