[e2e] Opportunistic Scheduling: Good? Or Evil?

Detlef Bosau detlef.bosau at web.de
Sun Mar 11 05:22:20 PDT 2007


David P. Reed wrote:
> Detlef - If still focused on why GPRS has long delays, I think you are 
> looking at the wrong layer.   

And if I were still focusing only on GPRS, I would be focusing on the 
wrong network ;-)

GPRS is intended to be a "migration technology". A migration technology 
has three aspects:
- the scientific one: GPRS is irrelevant in the long run, however we 
can gain experience from it.
- the intention: GPRS is stillborn. It was buried before it was born, 
and the network operators expected high revenues from the burial ;-)
- reality: As we all know, "migration" is a state. Not a project. ;-)


> The end-to-end delay in a GPRS network has nothing to do with the MAC 
> layer.
Hm. I'm always being told what the reasons for the long delays are 
_not_. I'm about to run out of candidates here =8-)

At the moment, I'm looking for an unbiased understanding of the delays 
here. So far, I see the following delay sources:
1. Physical bandwidth of the wireless network, length of a timeslot, 
payload per timeslot. This depends heavily on the technology in use. A 
few days ago I found it extremely helpful that a paper gave all of its 
delays in timeslot units (see the small sketch below).
2. Recovery. How often are corrupted RLP frames retransmitted? I well 
remember that you wrote that at most one retransmission makes sense, 
and that anything else should be left to the upper layers.
3. MAC.
4. Processing. I'm often told that GPRS has a lot of inefficiencies in 
its protocol stack.
5. Queueing.

I think that, for a scientific consideration, only points 2 and 3 are 
important.
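
Just to make point 1 a bit more concrete anyway, here is a minimal 
back-of-the-envelope sketch (in Python; all numbers are hypothetical 
placeholders, not taken from any standard):

import math

# Serialization delay of one IP packet, expressed in timeslot units.
# payload_per_slot and slot_duration_ms are hypothetical placeholders.
def serialization_delay_slots(packet_bytes, payload_per_slot_bytes):
    return math.ceil(packet_bytes / payload_per_slot_bytes)

slot_duration_ms = 0.6   # hypothetical slot length in milliseconds
payload_per_slot = 30    # hypothetical payload bytes per slot
ip_packet = 576          # a common small IP MTU

slots = serialization_delay_slots(ip_packet, payload_per_slot)
print(f"{slots} slots ~= {slots * slot_duration_ms:.1f} ms "
      f"on an otherwise idle link")

With placeholders like these, the pure serialization delay is on the 
order of a dozen milliseconds, nowhere near the multi-second delays in 
question, which is one more reason to set point 1 aside.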

WRT point 2, I tend to disagree with you: if an IP packet consists of, 
e.g., 20 RLP frames and each RLP frame suffers a corruption rate of, 
e.g., 0.3 or 0.4, the whole packet suffers an error rate close to 1. So 
in such a case, error recovery per RLP frame makes sense. And if RLP 
frames suffer a corruption rate of 0.3 or 0.4, it would make sense to 
allow more than one retransmission in order to keep the overall packet 
corruption rate acceptably small.
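
To put numbers on that argument, here is a quick sketch (Python; the 20 
frames and the BLER of 0.3 are just the example figures from above, and 
frame losses are assumed independent):

def residual_frame_loss(p, k):
    # probability that a frame is still lost after k retransmissions
    return p ** (k + 1)

def packet_error_rate(p, n, k):
    # probability that at least one of the n frames never gets through
    return 1 - (1 - residual_frame_loss(p, k)) ** n

def mean_transmissions(p, k):
    # expected transmissions per frame, i.e. the delay cost of recovery
    return sum(p ** i for i in range(k + 1))

n, p = 20, 0.3
for k in (0, 1, 2, 3):
    print(f"k={k}: packet error rate {packet_error_rate(p, n, k):.3f}, "
          f"mean tx/frame {mean_transmissions(p, k):.2f}")

With p = 0.3, a single retransmission still leaves roughly 85 % of the 
packets corrupted, and even three retries leave about 15 %, at the 
price of a rising mean transmission count (and hence delay) per frame.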

However, and perhaps this is what you have in mind, the important 
question is: what are the effective packet delay and the packet 
corruption rate, i.e. basically the "throughput", as perceived by the 
user?

Does it make sense to run TCP, and applications built on top of it, in 
a "noisy environment"?

You may well argue that running network applications in the presence of 
a BLER of 0.3 or 0.4 is simply inappropriate, not to say stupid.
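
As a rough sanity check, one can plug the residual packet loss rate 
that TCP actually sees (after whatever link-layer recovery is done) 
into the well-known Mathis et al. approximation, rate ~= MSS * 1.22 / 
(RTT * sqrt(p)). The MSS and RTT below are purely illustrative, and the 
formula only shows the trend; at high loss rates, where timeouts 
dominate, it is no longer valid.

from math import sqrt

def tcp_rate_bps(mss_bytes, rtt_s, loss):
    # Mathis et al. (1997) steady-state estimate; C ~ 1.22
    C = sqrt(3.0 / 2.0)
    return 8 * mss_bytes * C / (rtt_s * sqrt(loss))

mss, rtt = 536, 1.0   # bytes, seconds; illustrative values only
for loss in (0.01, 0.1, 0.4):
    print(f"p={loss}: ~{tcp_rate_bps(mss, rtt, loss) / 1000:.0f} kbit/s")

Even this optimistic estimate collapses quickly once the residual loss 
rate climbs towards the raw BLER figures above.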

Some years ago, I had to deal with "adaptation". The task was to build 
"adaptive multimedia applications" for mobile networks.

Simply put, this is like Mrs. Jones, who is out of coffee.
So, Mrs. Jones has two possibilities. First: Go to the store and buy 
some. Second: Drink tea instead.
(I'm not Mrs. Jones. When I'm out of tea, I have two possibilities. 
First: Go to the store and buy some. Second: Be in an extremely bad mood.)

However, in our project, we had another possibility: Simply adapt Mrs. 
Jones :-)


>   You can tell because of the order of magnitude of the actual delays 
> observed (seconds) compared with the time constants of the MAC layers 
> involved (one or two digit milliseconds).   That's why I said to look 
> at buffering in queues above the PHY and Medium Access layers - the 
> network transport layers.

Absolutely. However, queues don't pile up by themselves. That's why I'm 
currently building a simulator component, so I can simulate, and 
hopefully understand, what's happening here.

>
> Regarding Aloha et al., it's clear that the point of Aloha is to be 
> memoryless.   Every new packet comes along and pretends that no memory 
> of the past is relevant, no knowledge of the demand by other users and 
> terminals is relevant, and no knowledge of the propagation environment 
> is relevant.   This is true only if devices are built without sensing, 
> without memory, etc. OR if the environment is so unpredictable as to 
> benefit not at all from memory and state.
>

Does this mean that it basically doesn't matter in a memoryless 
environment whether I use a PCF or a DCF (in WLAN terms)? At least as 
long as I don't have any QoS requirements (which I totally ignore for 
the moment).

> Aloha makes only a trivial assumption (short-term channel state 
> coherence) and only requires trivial algorithmic state (the backoff 
> time).   Nothing especially good about that, other than being somewhat 
> universal and awfully cheap to implement.
>

Just WRT ALOHA: I have often found references to the Abramson paper 
that introduced ALOHA. Unfortunately, I have never been able to get 
hold of it. I would appreciate it if someone could send me a copy of 
this work. It's just one of the networking classics :-)
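
On the "trivial algorithmic state" point, here is a toy slotted-ALOHA 
simulation (Python; the node count and per-slot transmission 
probability are arbitrary). Every node is assumed permanently 
backlogged and transmits in each slot with a fixed probability, keeping 
no memory of past collisions, and the throughput still comes out close 
to the classical 1/e ~ 0.37 optimum when the offered load is tuned 
right:

import random

def slotted_aloha(nodes=50, p_tx=1.0 / 50, slots=100_000, seed=1):
    random.seed(seed)
    successes = 0
    for _ in range(slots):
        # a slot succeeds iff exactly one node transmits in it
        senders = sum(1 for _ in range(nodes) if random.random() < p_tx)
        if senders == 1:
            successes += 1
    return successes / slots

print(f"throughput ~ {slotted_aloha():.2f} packets/slot")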

> That said, protocols that build low level circuits by TDM or whatever 
> are making an implicit assumption of coherence of channel state, 
> stability of demand (constant rate), and totally anticipated future.   
> Nothing especially good about that.
>
> Everything else is detail.   Most of the detail added to either 
> approach is usually got wrong.   KISS (Keep it simple, stupid) is a 
> good engineering maxim here.

May I quote this one:

>   Complexity is what engineers do when they don't have a clue what 
> they are designing for, and want to impress their friends.
>
>
in my signature? :-)





