[e2e] Opportunistic Scheduling: Good? Or Evil?

David P. Reed dpreed at reed.com
Sat Mar 10 15:22:51 PST 2007


Detlef - If you are still focused on why GPRS has long delays, I think you are 
looking at the wrong layer.   The end-to-end delay in a GPRS network has 
nothing to do with the MAC layer.   You can tell because of the order of 
magnitude of the actual delays observed (seconds) compared with the time 
constants of the MAC layers involved (one or two digit milliseconds).   
That's why I said to look at buffering in queues above the PHY and 
Medium Access layers - the network transport layers.
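
To calibrate: a single bottleneck queue above the MAC is enough to 
produce those delays. With made-up but plausible numbers (neither 
value is a measurement from a real network):

    # Back-of-the-envelope: standing delay contributed by one full queue.
    # Both numbers are illustrative assumptions, not measurements.
    buffer_bytes = 50_000   # assumed depth of a downlink queue, in bytes
    link_bps = 40_000       # assumed GPRS goodput, roughly 40 kbit/s

    print(buffer_bytes * 8 / link_bps)   # 10.0 -> ten seconds of delay

Ten seconds of standing delay from buffering alone, while the MAC 
contributes milliseconds.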

Regarding Aloha et al., it's clear that the point of Aloha is to be 
memoryless.   Every new packet comes along and pretends that no memory 
of the past is relevant, no knowledge of the demand by other users and 
terminals is relevant, and no knowledge of the propagation environment 
is relevant.   This is true only if devices are built without sensing, 
without memory, etc. OR if the environment is so unpredictable as to 
benefit not at all from memory and state.

Aloha makes only a trivial assumption (short-term channel state 
coherence) and only requires trivial algorithmic state (the backoff 
time).   Nothing especially good about that, other than being somewhat 
universal and awfully cheap to implement.
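
The whole protocol fits in a few lines. Here is a sketch of the 
transmit side with binary exponential backoff; the channel object and 
its three methods are placeholders, and the cap on the window is an 
arbitrary choice:

    import random

    MAX_BACKOFF_EXP = 10   # arbitrary cap on the contention window

    def aloha_send(frame, channel):
        """Slotted-Aloha transmit loop.  The only state carried across
        attempts is the width of the backoff window - no carrier
        sensing, no memory of other users, no channel history."""
        exp = 0
        while True:
            channel.transmit(frame)            # just send
            if channel.got_ack():
                return
            exp = min(exp + 1, MAX_BACKOFF_EXP)
            channel.sleep_slots(random.randrange(2 ** exp))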

That said, protocols that build low level circuits by TDM or whatever 
are making an implicit assumption of coherence of channel state, 
stability of demand (constant rate), and totally anticipated future.   
Nothing especially good about that.

Everything else is detail.   Most of the detail added to either approach 
is usually gotten wrong.   KISS (Keep it simple, stupid) is a good 
engineering maxim here.   Complexity is what engineers do when they 
don't have a clue what they are designing for, and want to impress their 
friends.



Detlef Bosau wrote:
> David P. Reed wrote:
>> The delays come from buffering and retrying for long times, on the 
>> basic theory that the underlying network is trying to be "smart" and 
>> guarantee delivery in the face of congestion or errors. 
> Because I still do not understand the opportunistic scheduling stuff 
> in mobile networks, I'm actually trying to simulate this stuff myself.
> I'm finished with an extremely simple, yet perhaps not that bad, RLP 
> simulator which does not really simulate RLP data transfer but the 
> transport delays caused by RLP. And I added a component for MAC 
> scheduling. (This is all trivial stuff, in sum less than perhaps 
> 300 lines.)
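
A delay-only model like that can indeed be tiny. Here is a sketch of 
the core idea - the delay RLP adds to a frame is just (number of 
transmissions) x (frame time). The loss probability, frame time, and 
retry cap below are made-up parameters, not values from any RLP spec:

    import random

    # Illustrative parameters - not taken from any RLP specification.
    FRAME_TIME_S = 0.020    # assumed time on the air per RLP frame
    LOSS_PROB = 0.1         # assumed per-frame loss probability
    MAX_RETRANS = 6         # assumed retry cap before giving up

    def rlp_frame_delay():
        """Delay RLP adds to one frame: a (truncated) geometric number
        of transmissions, each costing one frame time."""
        attempts = 1
        while random.random() < LOSS_PROB and attempts < MAX_RETRANS:
            attempts += 1
        return attempts * FRAME_TIME_S

    n = 100_000
    print(sum(rlp_frame_delay() for _ in range(n)) / n)  # mean added delay
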
>
> Media access in mobile networks immediately leads to the question: Why 
> do we use dynamic time slot allocation at all? Why don't we use a 
> simple collision scheme or CSMA/CA or something like that?
>
> To my understanding, a reason can be that ALOHA flavours (and the 
> aforementioned techniques are such flavours) do not exploit a 
> channel's capacity well. So, when small bandwidth is a concern, dynamic 
> time slot allocation can lead to better channel utilization than 
> ALOHA flavours. Do you agree here? Or am I wrong?
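
You are right, and the classical numbers quantify it: pure ALOHA peaks 
at 1/(2e), about 18% of channel capacity, and slotted ALOHA at 1/e, 
about 37%, while a TDMA schedule can in principle fill nearly every 
slot. The textbook throughput curves S(G), evaluated at their maxima:

    import math

    # Throughput S as a function of offered load G (frames per frame time).
    pure = lambda G: G * math.exp(-2 * G)     # pure ALOHA
    slotted = lambda G: G * math.exp(-G)      # slotted ALOHA

    print(pure(0.5))     # 0.1839... = 1/(2e), maximum of pure ALOHA
    print(slotted(1.0))  # 0.3678... = 1/e, maximum of slotted ALOHA
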
>
> Now, there must be a coordination function which actually does the 
> dynamic TSA. Fine.
>
> So, the next question is: Why don't we simply use "first come first 
> serve"? It's simple, it's pretty stupid - and pretty attractive ;-)
> In fact, my own scheduler actually does "first come first serve", 
> because I have not yet implemented the scheduling priority functions 
> which are necessary for proportional fair scheduling and the like.
>
> However: At least with greedy sources, a simple FCFS scheme leads to a 
> perfectly fair distribution of (time slot) resources between the 
> actually sending (or receiving) mobiles of a cell, and I don't see 
> where this should lead to a problem with TCP.
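
For concreteness, such a slot scheduler is only a few lines; the 
Mobile objects with has_data() and send_one_slot() methods are 
placeholders of the sketch:

    from collections import deque

    def fcfs_schedule(n_slots, mobiles):
        """Give each slot to the mobile at the head of the queue; a
        mobile that is still backlogged re-queues at the tail."""
        queue = deque(m for m in mobiles if m.has_data())
        allocation = []                  # which mobile got each slot
        for _ in range(n_slots):
            if not queue:
                break
            m = queue.popleft()
            m.send_one_slot()
            allocation.append(m)
            if m.has_data():             # still backlogged
                queue.append(m)
        return allocation

With greedy (always-backlogged) sources this degenerates into round 
robin - equal slot shares - which is the perfect fairness you describe.
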
>
> O.k., I know the rationale that proportional fair scheduling increases 
> the overall throughput. But I do not yet know at what expense this 
> is accomplished. I know of several papers which consider fairness and 
> throughput issues with a mix of applications.
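
The priority function you have not implemented yet is short, at least 
in its usual HDR/EV-DO form: each slot goes to the user maximizing the 
ratio of the rate the channel supports right now to an exponentially 
smoothed past throughput. A sketch, where the per-user rate feedback 
and the smoothing window are assumed inputs:

    T_C = 1000.0   # smoothing window in slots (an assumed tuning value)

    def pf_step(rates, avg_tput):
        """One slot of proportional fair scheduling.
        rates[k]:    rate user k's channel supports this slot (feedback)
        avg_tput[k]: smoothed past throughput; must start > 0, e.g. 1e-6
        Returns the index of the user served."""
        k = max(range(len(rates)), key=lambda i: rates[i] / avg_tput[i])
        for i in range(len(rates)):
            served = rates[i] if i == k else 0.0
            avg_tput[i] += (served - avg_tput[i]) / T_C
        return k

Note that every user's smoothed throughput enters the comparison. That 
is exactly where the cross-channel side effects you ask about below 
come from: one user's fade shifts slots toward everyone else and 
changes their averages in turn.
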
>
> What I do _not_ yet know are papers which study opportunistic 
> scheduling under varying error scenarios. And I'm particularly not 
> interested in the whole "soft" models which model fading channels with 
> an attenuation proportional to the square of the BS-mobile distance 
> and the like.
>
> What will happen when there are "noise spikes" or a channel suffers 
> from sudden and drastic quality changes due to multipath fading or 
> whatever?
>
> When we use a simple FCFS scheme, changes to a channel will only affect 
> this one channel. When we use opportunistic scheduling, there will be 
> side effects on other channels.
>
> Of course: on the link layer, we will optimize the channel 
> utilization. But how does this affect TCP? Does your statement that 
> the underlying network is being "too smart" still hold here?
>
> I would appreciate any hint on this one. And perhaps, my simulations 
> will give a clue here.
>
> Thanks.
>
> Detlef
>
>

