[e2e] Resource Fairness or Throughput Fairness, was Re: Opportunistic Scheduling.

David P. Reed dpreed at reed.com
Wed Jul 25 07:18:13 PDT 2007


Hear, hear, Detlef!

Yours is indeed a very reasonable critique - that stability is assumed 
but never validated.   Therefore, it is revolutionary.   I too have been 
sad to see that 99.99% of the papers in IEEE Trans on Networking focused 
on problems that are iatrogenic - problems that exist only because the 
physician is trained never to question the received wisdom of the 
"networking community" - that layers are good, so more layers are 
better, that FTP speed is the right measure, so we should optimize for 
FTP in static environments, that centralized control is good when we 
know everything in advance, so we have to protect centralized 
controllers from DOS attacks, etc.  All of these are subject to 
critiques, and few are crazy enough to critique the assumptions, because 
they are taken as given, and never questioned.

That said, a great Bob Dylan quote is: "to live outside the law you must 
be honest".   What he meant was that a revolutionary (which includes 
revolutionaries in science and engineering) must be far more careful, 
far more self-critical, far more thoughtful than those who coast through 
life believing things are so because "everyone knows that".

QoS, for example, which you skewer below, is one of those "everyone 
knows" things.  Of course you can smooth out the unpredictability of the 
"channel" - just assume that you can, and demand that a lower layer do 
just that.   You'll look brilliant.  :-)   And you can get hired by 
Verizon, because their PR says that they deserve a monopoly on cellular 
data from the government because they "assure" quality of service - and 
you will be the engineer who sounds like they really can turn toxic 
sludge into food.  Just utter a few TLAs like QoS, etc.

I wish you luck - it's tough to build systems that work well in real 
worlds.   It's a lot easier to build systems that work well if you can 
choose the assumptions, designing your own fantasy world of isolated 
radio links in free space, a network entirely owned by one company, 
applications restricted to FTP, etc.

Detlef Bosau wrote:
> Dave Eckhardt wrote:
>>> One difficulty was - and over the last years I have learned that
>>> this is perhaps the most basic reason why adaptation of multimedia
>>> documents in mobile networks is doomed to fail before it is even
>>> started - that there is no serious way to make a long-term or even
>>> medium-term prediction of a wireless channel's properties.
>>>     
>>
>> As far as I can tell, this is indeed fiendishly difficult. 
>
> With particular regard to my own experience from 2000 to 2002: Is it 
> _difficult_? Or is it _hopeless_?
> One professor told me: "Why don't you do channel identification? 
> That's a nice challenge!"
>>  A couple of
>> times people asked for my bit-level traces in order to fit some sort of
>> model to them, but nobody who did so was ever heard from again... this
>>   
>
> I know about Rayleigh channel models. And to my understanding, these 
> models are simply necessary, e.g. to make UMTS work.
>
> However, this is a typical misconception between CS and EE. Rayleigh 
> channel models do not attempt any kind of prediction or forecast. They 
> attempt to identify the actual channel state. First of all, the 
> temporal perspective is "the next timeslot", i.e. about 10 ms in UMTS 
> and about 2 ms in HSDPA. Second, an estimate for the next timeslot may 
> fail; if so, we have a problem for one timeslot - no longer. What I 
> needed / was expected to do / however we want to call it, was 
> prediction over a much longer period of time, e.g. 10 or 20 seconds. 
> And I really think this is hopeless.
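A back-of-the-envelope sketch of why those two horizons are worlds
apart - a minimal illustration, assuming the standard Clarke-model rule
of thumb T_c ~ 0.423/f_d for coherence time, with made-up scenario
numbers:

    # Rough channel coherence time under the classic Clarke/Jakes model.
    # T_c ~ 0.423 / f_d, where f_d = v * f_c / c is the max Doppler shift.
    def coherence_time_s(speed_mps: float, carrier_hz: float) -> float:
        f_doppler = speed_mps * carrier_hz / 3e8  # 3e8 m/s: speed of light
        return 0.423 / f_doppler

    for label, kmh in [("pedestrian, 3 km/h", 3), ("vehicular, 100 km/h", 100)]:
        tc = coherence_time_s(kmh / 3.6, 2e9)  # 2 GHz carrier, assumed
        print(f"{label}: coherence time ~ {tc * 1e3:.1f} ms")

Even the benign pedestrian case decorrelates in well under 100 ms, so
estimating the next 2-10 ms timeslot is plausible, while a 10-20 second
forecast lies thousands of coherence times in the future.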
>
>> is one reason why my scheduling approach was essentially reactive rather
>> than predictive, and works without needing to measure error rates.  It
>>   
>
> I think we cannot be "predictive". We can, seriously speaking, only 
> be reactive.
>
>> would be easy enough to plug in an oracle if one were available, of 
>> course.
>>
>>   
>
> :-)
>
> That's another reason why I see a difference between data and media. 
> An often-used buzzword is "adaptivity". And when we talk about 
> "adaptivity" in mobile networks, everybody tries to "adapt" 
> applications etc. In 2000, I was pointed to approaches like Odyssey 
> (Brian Noble) or the SMIL standard.
>
> What I think now is that we should perhaps rather talk about 
> robustness when we talk about media. (The adaptivity vs. robustness 
> discussion is a stone-aged one, e.g. in electrical engineering and 
> systems theory.) Of course we can - and actually do - talk about 
> adaptation for data flows. HSDPA, for instance, does coding-scheme 
> adaptation on layer 2 in every time slot. However, the user's 
> perception of data flows and media flows, and the user's requirements, 
> are different.
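To make the HSDPA remark concrete, here is a toy sketch of per-timeslot
link adaptation; the 2 ms TTI and the CQI-driven selection are real
HSDPA features, but the thresholds and schemes in the table below are
invented for illustration, not 3GPP values:

    # Toy per-TTI link adaptation, HSDPA-style: the terminal reports a
    # channel quality indicator (CQI) each 2 ms TTI and the base station
    # picks a modulation and coding scheme (MCS) for the next TTI.
    # Table entries are illustrative only, not 3GPP values.
    MCS_TABLE = [
        (0,  "QPSK",  1 / 3),   # (min CQI, modulation, code rate)
        (10, "QPSK",  1 / 2),
        (16, "16QAM", 1 / 2),
        (22, "16QAM", 3 / 4),
    ]

    def select_mcs(cqi: int):
        """Pick the most aggressive MCS whose CQI threshold is met."""
        modulation, rate = MCS_TABLE[0][1], MCS_TABLE[0][2]
        for min_cqi, mod, r in MCS_TABLE:
            if cqi >= min_cqi:
                modulation, rate = mod, r
        return modulation, rate

    print(select_mcs(18))  # -> ('16QAM', 0.5)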
>
> And of course, there is such a weird thing as "QoS mapping" that 
> attempts to find a correlation or relationship or whatever between 
> (basically _informal_ and _non-technical_) requirements based upon the 
> user's perception and (basically _formal_ and _technical_) 
> specifications in networking.
>
> Which bit error rate corresponds to "pleasant to look at"? Which 
> transport delay jitter corresponds to "acceptable for a phone call"? 
> And which average bit rate corresponds to "pleasant to listen to"? And 
> how would Beethoven have answered this question in his youth? And how 
> in his later life, when he was deaf?
>
>>> But when I read your paper, I saw two TCP flows and one audio flow
>>> and one video flow.  And then I saw something on throughput, which
>>> in that case necessarily compares apples and oranges, because no one
>>> is interested in TCP throughput.  One is typically interested in TCP
>>> _goodput_.  And that of course has to take TCP retransmissions into
>>> account and can "slightly" differ from any kind of L2 throughput in
>>> faulty networks.
>>>     
>>
>> I have never been a fan of the word "goodput".  One layer's "goodput" is
>>   
>
> But isn't the goodput, particularly of a TCP flow, what a user perceives?
>> just the "throughput" of the next layer up, after all--if the higher
>> layer is thrashing, your "goodput" isn't any good, and you have no way
>> of knowing that.  Since there are pre-existing words for "effort" and
>> "outcome", it makes sense to me to use them.
>>   
>
> A provoking question: How many layers do we need? IIRC, in the 
> Internet we typically think of four layers:
> subnet, network, transport, application. The seven OSI layers are 
> often too complex. When I look at mobile networks, GPRS, UMTS and the 
> like, we pile up layers again.
>
> I spent much of my time during the last years in discussions and in 
> thinking about the layers in these networks, and I always have this 
> question in mind: "Do we inevitably need this layer?" Does an 
> additional layer make a system clearer and simpler? Or does it only 
> add complexity? Terrible examples are the numerous "convergence 
> layers" in mobile networks, where perhaps old and grey-haired engineers 
> attempt to salvage the fruits of their youth.
> "80 years ago, when George V was still king of the United Kingdom, we 
> used something like X.25, and therefore we must have a convergence 
> layer which abstracts the link layer to a generic link layer, and then 
> a convergence layer which encapsulates IP and X.25 in a generic 
> convergence layer, which is then placed between X.25 and IP and the 
> abstract convergence layer" etc. etc. etc.
>
> Whenever I see these terrible architecture diagrams which gather tons 
> of layers and protocols, my hair turns as grey as that of these old 
> engineers :-)
>
> And each layer redefines its own meaning of effort and outcome. And 
> with each layer you have another "QoS mapping".
>
> In the end, the lowest layer has no idea what the user basically 
> intended to achieve.
>
> Perhaps part of my problem eight years ago was that I never was a 
> multimedia guy. But when I think of the whole pile of papers I read 
> about layers and QoS mapping in multimedia systems, I'm much less a 
> multimedia guy today than I was eight years ago.
>
> I was confronted with strange "QoS profiles" and the like, and when 
> you attempt to make adaptation decisions, as e.g. in SMIL, you deal 
> exactly with those values - and I found it extremely difficult to 
> maintain the relationship between these technical parameters and the 
> most important question in networking of all: What does the user want 
> to do?
> What are the user's needs? What does the user perceive? What is 
> acceptable to the user? And what, particularly in adaptation, is 
> simply annoying? And I was never convinced we should bother an 
> innocent user with slide bars and parameter-tuning knobs and profiles 
> etc. he will never deal with. This is perhaps because I worked for 
> many years at user help desks and in direct contact with users, and so 
> I know from my own experience that users are simply completely 
> overwhelmed by these knobs and slide bars and bells and whistles and 
> don't know how to handle them - and so we have basically two classes 
> of users: the one class simply ignores this stuff, and the other class 
> plays around with it and does more harm than good.
>
>> What we were trying to accomplish was conceptualizing the scheduling of
>> high-error wireless links in terms of effort-fair vs. outcome-fair,
>> arguing that a hybrid is frequently desirable, and demonstrating a basic
>> implementation.
>>
>> It's fine with me if you wish to argue that for data outcome should be
>> measured as "100%-correct packet bytes with latency below 250 ms" but
>> that for voice outcome should be measured in terms of "85%-correct
>> packet bytes with latency below 50 ms".  And I wouldn't object if you
>>   
>
> From my COMCAR experience, I first of all miss any possibility to 
> model / define / implement "85% correct" - see the discussion with Noel.
>
> The second concern is that we still have to map this onto a user's 
> perspective. For data it's easy: If you check your bank account via 
> home banking, you obviously don't want to be cheated by faulty data. 
> And if you edit a document which is stored on a file server, you don't 
> want to corrupt it more and more each time you read and write it.
>
> O.k., at second glance it's not as easy as it seems: If you download 
> some new installation CD for your Linux installation, you perhaps want 
> this download to complete within your remaining lifetime :-)
>
> But what is acceptable to the user when it comes to media / multimedia 
> systems / multimedia documents?
>
>
>> wanted to argue that effort should be measured in watt-hz-seconds or
>> some other measure of how much spectrum resource is expended.
>>
>> But I believe that in a high-error environment it *does* make sense to
>> integrate scheduling of disparate flow types according to a tradeoff
>> between effort and outcome (and we were arguing for a particular model
>> very different from utility curves).
>>
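To see the effort/outcome tradeoff concretely, here is a minimal toy
sketch of a tunable hybrid - my own illustration, not the algorithm
from the paper; the flows, loss rates and the single knob alpha are all
assumptions:

    # Toy hybrid of effort-fair and outcome-fair link scheduling.
    # alpha=1 is pure effort fairness (equal airtime), alpha=0 is pure
    # outcome fairness (equal delivered packets); values in between blend.
    import random

    class Flow:
        def __init__(self, name: str, success_prob: float):
            self.name = name
            self.success_prob = success_prob  # per-slot delivery probability
            self.effort = 0.0                 # airtime slots spent on this flow
            self.outcome = 0.0                # packets actually delivered

    def schedule(flows, alpha: float, slots: int = 10_000) -> None:
        for _ in range(slots):
            # Serve the flow that is furthest behind under the blended metric.
            f = min(flows, key=lambda fl: alpha * fl.effort
                                          + (1 - alpha) * fl.outcome)
            f.effort += 1.0
            if random.random() < f.success_prob:  # lossy link: effort may be wasted
                f.outcome += 1.0

    flows = [Flow("good-channel", 0.95), Flow("bad-channel", 0.30)]
    schedule(flows, alpha=0.5)
    for f in flows:
        print(f.name, f"airtime={f.effort:.0f}", f"delivered={f.outcome:.0f}")

With alpha=1 both flows get equal airtime and the bad channel delivers
little; with alpha=0 the scheduler burns most of the airtime equalizing
delivered packets; intermediate values trade one against the other.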
>> Note that a couple messages back my motivating example for cell phones
>> was that an operator may be able to very slightly degrade the voice
>> quality of some customers in order to "unfairly" boost the experience
>> of another customer in a "dead spot", and that this might keep the
>> customer talking
>>
>
> This is the well-known idea of "graceful degradation".
> Up to that point it sounds fine.
> In the next step, you define degradation paths.
> And from that point on it becomes fiendish.
>
> There are many technical concepts for how we can do this.
>
> Is any of them accepted by the users?
>
>> instead of hanging up.  No part of that example depends on TCP,
>> "goodput", persistent ARQ, etc.  The key issue is the notion of fairness.
>>
>> I don't think we know "the story" on running voice over data-centric
>> networks versus running data over voice-centric networks or whether
>> there is a neutral ground.  Last time I looked Real Audio was mostly
>> running over TCP, not UDP...
>>   
>
> Interesting!
>
> But what's the reason? One reason is that nobody uses Real Audio 
> conversationally. So in the end the reason is: data for Real Audio is 
> downloaded to the user's site and then played back - from disk or memory.
>
> So we have data transport, not media streaming.
>
> O.k., sometimes the user is cheated with large buffers and preloading 
> and pseudo-live streams. And depending on the network quality, the 
> stuff frequently hangs - until it's eventually hung up by the user.
>
> (I don't know whether you are blessed with Bushisms via podcast in the 
> U.S.; here in Germany the real patriot listens to the podcast speeches 
> of Sancta Angela. But this is no contradiction to my remark that the 
> podcast is finally hung up by an exasperated listener.)
>> let alone anything involving link-level options to deliver
>> partially-mangled packets.  And initially GSM was kind of dubious for
>> data because of the voice-centric deep interleaving, right?
>
> Admittedly, I don't know whether GSM was really that bad for data.
>
> I read tons of scientific papers about this and how disastrous it was. 
> Now, as I mentioned before, I worked as a user help desk guy for many 
> years of my life. And there were many users who didn't know that GSM 
> would not work with data - so they used it and were fine. That's 
> similar to the old story about the bumblebee. Every engineering 
> student is told that the bumblebee cannot fly. Only the bumblebee does 
> not know this - and so it happily flies.
>
> I know that there are tons of papers and even PhD theses which claim 
> huge difficulties with GSM and data.
> Now, GSM provided a data rate comparable to old telephone modems, and 
> users worked with it successfully and without difficulties. In fact, 
> from about 1995 on I did user support, among others, for users who 
> accessed their "Intranet" data via GSM or checked their mail via GSM, 
> and they did so for years. And all the guys to whom I mentioned the 
> papers I read from 2000 on, which described the huge difficulties with 
> GSM and data, looked at me as if I had fallen to earth from another 
> planet.
>
> That's basically one reason why I have been looking into difficulties 
> with data and wireless networking for nearly eight years now - and as 
> you correctly guess from the sentences above, I am in no way convinced 
> by many parts of the scientific literature I have read so far. I think 
> a huge number of so-called scientific papers and even PhD theses on 
> these topics are, drastically put, simply urban legends. And I think - 
> and I'm somewhat bitter about this - that we should be highly 
> self-critical about our claims, and that we must not write tons of 
> papers about spurious timeouts and loss differentiation problems etc. 
> just in order to achieve "scientific honour" or a PhD or something 
> like that - while the public ridicules our work, and some years later 
> we get papers like the Hasenleitner paper, which simply debunked the 
> spurious-timeout legend as pure nonsense. (And by the way: a look into 
> Edge's original work would have done even that. It's undergraduate 
> level that there is hardly anything as robust as Chebyshev's 
> inequality when it comes to confidence intervals and the like. So I 
> wonder why this topic was discussed at all.)
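For reference, the Chebyshev argument being invoked, written out. The
inequality itself is textbook material; reading TCP's retransmission
timer RTO = SRTT + 4*RTTVAR through it is only a heuristic gloss, since
RTTVAR tracks a smoothed mean deviation rather than a true standard
deviation:

    % Chebyshev: for any distribution with mean \mu and variance \sigma^2,
    P(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}

    % so a timeout threshold of \mu + 4\sigma yields, for any RTT
    % distribution with finite variance,
    P(\mathrm{RTT} > \mu + 4\sigma)
        \le P(|\mathrm{RTT} - \mu| \ge 4\sigma) \le \frac{1}{16}

A timer set four deviations above the mean thus fires spuriously with
probability at most about 6%, with no assumptions on the shape of the
RTT distribution - presumably the robustness meant here.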
>
> O.k. That was digressive.
>
>>  I think there are plenty of open
>> questions.
>>   
>
>> But I haven't yet seen anything to convince me that the concepts of
>> effort-fair and outcome-fair don't make sense or that either one is
>> better than a tunable hybrid.
>>
>>   
>
> O.k. For me, it would be nice to restrict the discussion a bit. What I 
> have in mind when I talk about this problem is in fact the multi-user 
> diversity debate. There are lots of papers on this issue as well, and 
> there has been some hype about this topic during the last ten or 
> twelve years. And there is still some hype around writing 
> sophisticated schedulers which exploit multi-user diversity and 
> adaptive channel coding and the like.
>
> Perhaps I'm about to see another failure of my own work, and perhaps 
> I'm going to be severely disappointed here as well.
>
> Basically, there are two concerns.
>
> First: We claim that we exploit multi-user diversity and by doing so 
> increase spectral efficiency etc. etc.
> Do we really? Or do we merely hope so?
>
> Second: What I have seen so far in the lower layers of actual mobile 
> wireless networks is highly complex and sophisticated. And I have yet 
> to understand most of the details. However, I wonder how terms like 
> "rate" are interpreted differently by CS and EE guys, and I wonder why 
> flow control issues, which are typical end-to-end issues, are dealt 
> with locally, and why so many techniques are integrated into the lower 
> layers whose ramifications for the end-to-end behaviour of the system 
> are not yet clear.
>
> Therefore, the question of whether we should pursue resource fairness 
> or throughput fairness is primarily: which kind of fairness, if 
> pursued on a wireless link, fits best into the current end-to-end 
> Internet design?
> And what is the real purpose of that "fairness"? To my knowledge, the 
> original idea, introduced by Knopp and Humblet and further discussed 
> by Tse et al., simply wants to exploit multi-user diversity, and for 
> this purpose some systems introduce sophisticated schedulers into the 
> downlink channel.
> - Do these systems really achieve what they want to achieve?
> - Do these systems have ramifications for upper layers?
> - Do these systems maintain the intended end-to-end behaviour of 
> protocols / applications etc.?
>
> Or is the multi-user diversity debate, as it is conducted at the 
> moment, just another hype?
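For concreteness: the canonical scheduler in that debate is the
proportional-fair rule popularized by Tse for CDMA downlinks - each
slot, serve the user with the largest ratio of instantaneous feasible
rate to smoothed average throughput. A minimal sketch; the synthetic
rate model and the averaging window are my own assumptions:

    # Minimal proportional-fair downlink scheduler (Tse-style): each slot,
    # serve the user maximizing instantaneous_rate / average_rate. Users
    # are served near their own channel peaks (multi-user diversity) while
    # long-run shares stay comparable.
    import random

    def proportional_fair(num_users: int = 4, slots: int = 10_000,
                          tc: float = 1000.0):
        avg = [1e-6] * num_users   # smoothed per-user throughput (avoid /0)
        served = [0] * num_users   # slots granted to each user
        for _ in range(slots):
            # Synthetic per-slot feasible rates, standing in for CQI
            # feedback; user i has mean rate i+1.
            rates = [random.expovariate(1.0 / (i + 1))
                     for i in range(num_users)]
            k = max(range(num_users), key=lambda i: rates[i] / avg[i])
            served[k] += 1
            for i in range(num_users):
                r = rates[i] if i == k else 0.0
                avg[i] += (r - avg[i]) / tc  # EWMA over roughly tc slots
        return served

    print(proportional_fair())  # time shares come out roughly equal

Whether the per-slot rate swings such a channel-driven choice induces
play well with TCP's end-to-end control loop is exactly the open
question above.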
>
> That's my original intention.
>
> Regards
>
> Detlef
>
>
>> Dave Eckhardt
>>   
>
>

