[e2e] Once again: Detlef and Latencies :-)

Detlef Bosau detlef.bosau at web.de
Mon Jan 7 03:21:55 PST 2008


Same procedure as every year :-)

During the last few days I ran some simulations on HSDPA packet latencies.
I used the EURANE package for this purpose.

An overview of some results, which is still under construction, can be 
found here: http://www.detlef-bosau.de/eurane_results.html

What I'm curious about are the latencies.

1.: The latencies seem to be independent of the scheduler in use. This 
is in strong contradiction to the results given in 
http://www.ikr.uni-stuttgart.de/Content/Publications/View/Standalone/FullRecord.html?36518

And in particular, it does not make sense.

The intention of opportunistic scheduling is to send a packet when
channel conditions are favourable - and so to decrease the number of
necessary retransmissions. The result should therefore be that the
transport latency decreases. So it simply appears to be nonsense that a
round robin scheduler yields the same latency distribution as a
proportional fair scheduler.
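To make the contrast concrete, here is a toy sketch of the two selection
rules (my own illustration, not code from EURANE or the IKR paper; the
rate numbers are made up): round robin ignores channel state, while
proportional fair serves the user whose instantaneous rate is high
relative to its own average throughput.

```python
# Toy comparison of round-robin vs. proportional fair user selection.
# Illustrative sketch only; rates and averages are invented numbers.

def round_robin_pick(slot, n_users):
    """Round robin: serve users in fixed order, ignoring channel state."""
    return slot % n_users

def proportional_fair_pick(inst_rates, avg_rates):
    """Proportional fair: serve the user maximising
    instantaneous rate / long-term average throughput."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_rates[i])

# User 1 currently sees an unusually good channel relative to its
# average, so PF serves it, while round robin in slot 0 serves user 0.
print(proportional_fair_pick([10.0, 3.0], [5.0, 1.0]))  # -> 1
print(round_robin_pick(0, 2))                           # -> 0
```

If the two rules nevertheless produce identical latency distributions
in a simulation, either the channel model gives PF nothing to exploit
or the scheduler hook is not actually being exercised.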

2.: The latency distributions are extremely asymmetric. There is some
constant bias due to wireline latencies in my simulation script;
nevertheless, the distributions appear to be extremely heavy tailed.

And this perfectly fits into the e2e discussion. According to my
results - which differ from the IKR results particularly in this
respect, which makes the question even more important - the vast
majority of delays is _below_, say, 50 ms. And there are quite a few
_extreme_ outliers which are not depicted due to the width of the
diagram but are sometimes even larger than 100 seconds.

So, the questions are:

1: Which results are true? Shall I believe the IKR results or the EURANE 
results?
2: Are the latency distributions really so extreme that, say, the 0.5
quantile is 30 ms, the 0.8 quantile is 50 ms and the 0.99 quantile is
one hour or more?
3: If so: What is the reason for this behaviour? (O.k., I will read the
EURANE sources once more quite carefully, that's in fact the best
documentation available ;-)) Is this due to the algorithms themselves
or due to the implementation?
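Question 2 is easy to check against the simulation traces with a simple
nearest-rank empirical quantile. The sample values below are synthetic,
chosen only to mimic a heavy-tailed latency distribution, not taken
from my EURANE runs:

```python
# Nearest-rank empirical quantile; the latency samples are synthetic,
# invented to mimic a heavy-tailed distribution for illustration.

def quantile(samples, q):
    """Return the empirical q-quantile (nearest rank) of samples."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

# 98 "normal" latencies around 30-50 ms plus two extreme outliers
# (seconds): the bulk sits low, the tail dominates the high quantiles.
latencies = [0.030] * 60 + [0.050] * 38 + [120.0, 3600.0]
print(quantile(latencies, 0.5))   # -> 0.03
print(quantile(latencies, 0.8))   # -> 0.05
print(quantile(latencies, 0.99))  # -> 3600.0
```

Running this kind of check over the actual per-packet delays would
settle whether the 0.99 quantile really is in the range of hours.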

And if the distribution is in fact that extreme, I tend to say: this
does not really make sense. I tend to modify the L2 code in such a way
that anything which can be delivered from the base station to the
terminal within, say, 50 ms is delivered, and we are happy with that.
And anything which cannot be delivered within that period of time is
simply dropped. My concern is that extreme outliers in transport
latencies cause more grief to upper layers (RTO determination, spurious
timeouts and retransmissions; if we do loss detection by multiple ACKs
we would need extremely large congestion windows, etc.) than they bring
benefit. So, in this particular case, I would tend to follow "the"
paper on end-to-end system design, which suggests not spending too much
effort on local error recovery.
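The grief for RTO determination is easy to quantify with the standard
SRTT/RTTVAR update (RFC 6298 style; the sample values below are
hypothetical): a single 100 s outlier in a stream of 30 ms samples
inflates the sender's RTO to over 100 seconds.

```python
# RFC 6298 style RTO update (Jacobson/Karels). The latency samples are
# hypothetical, chosen to show the effect of one extreme outlier.

def rto_update(srtt, rttvar, sample, alpha=0.125, beta=0.25):
    """One SRTT/RTTVAR update step; returns (srtt, rttvar, rto)."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    return srtt, rttvar, srtt + 4 * rttvar

srtt, rttvar = 0.030, 0.001   # steady state after many ~30 ms samples
srtt, rttvar, rto = rto_update(srtt, rttvar, 100.0)  # one 100 s outlier
print(round(rto, 1))  # -> 112.5: a single outlier blows up the RTO
```

After such an outlier the connection is effectively dead for minutes,
which is exactly the argument for dropping at L2 after a deadline
instead of retransmitting indefinitely.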

Detlef

-- 
Detlef Bosau                          Mail:  detlef.bosau at web.de
Galileistrasse 30                     Web:   http://www.detlef-bosau.de
70565 Stuttgart                       Skype: detlef.bosau
Mobile: +49 172 681 9937
