[e2e] Resource Fairness or Throughput Fairness, was Re: Opportunistic Scheduling.

Detlef Bosau detlef.bosau at web.de
Mon Jul 23 14:22:28 PDT 2007


Noel Chiappa wrote:
>     > From: Detlef Bosau <detlef.bosau at web.de>
>
>     > - Data flows are typically asynchronous and sensitive against data
>     > corruption.
>     > - Media flows are typically synchronous / isochronous and at least to a
>     > certain degree robust against data corruption.
>
> "sensitive to data corruption" would probably be a better way to phrase that
> first one...
>
>   

Thanks. (And now you all know: I'm not a native speaker. Prepositions 
and the like are always a problem. And we Germans often strike back with 
articles :-))

>     > IP or any other packet switched protocol must not be used for media
>     > streaming in mobile networks.
>
> Well, at an architectural level, the internetworking model is supposed to
> work for media applications (i.e. error-tolerant applications) on networks
> with some errors.
>
> That's the whole reason IP and TCP were split apart to begin with, because
> the media applications (packet voice was the one they were actually doing)
> didn't need the high robustness of TCP, and the data stream delays caused by
> retransmissions were a bad case of "cure is worse than the disease". That
> allowed applications which didn't need the data robustness of TCP to be built
> directly on an unreliable datagram service.
>   

That points in the direction of having header data "better protected" 
than the payload.

However: is this actually implemented in mobile wireless networks, 
particularly on "data channels"? To the best of my knowledge, IP 
packets typically _are_ conveyed via "data channels".
In the mentioned project, I was advised to ignore the checksum in UDP 
packets. If we ignore possible problems with the header information, 
this is exactly what you write: respect the (IP) header checksum and 
ignore the rest.
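For what it's worth, the receiver-side rules we are discussing - a UDP 
checksum field of zero means "no checksum" in IPv4, while the IPv4 header 
checksum must always verify - can be sketched in a few lines of Python. 
This is only an illustration of the RFC 791 / RFC 768 checksum rules; the 
header bytes in the usage example below are made up:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum over 16-bit words (odd length padded)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return (~total) & 0xFFFF

def ipv4_header_ok(header: bytes) -> bool:
    # A receiver verifies by summing the header *including* the checksum
    # field; for an undamaged header the result is zero.
    return internet_checksum(header) == 0

def udp_checksum_present(udp_header: bytes) -> bool:
    # In IPv4, a UDP checksum field of 0 means "no checksum supplied" -
    # the "data perfection is not critical" case.
    return (udp_header[6] << 8) | udp_header[7] != 0
```

A network element that "respects the IP header checksum and ignores the 
rest" would forward a packet whenever `ipv4_header_ok` holds, regardless 
of the payload's condition.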

However: what happens at the link layer? The link layer specifications 
for data channels I have read so far frequently contain a data recovery 
mechanism which simply tries to recover the data packets and which 
checks the correctness of a packet with _one_ CRC sum for the whole 
packet, or for the individual "radio blocks" respectively. (I never 
happened to read a standard where header data is given better 
protection than payload, although I often heard about it.) This in 
turn means nothing else than that the whole packet is either error 
free or will be dropped.
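The all-or-nothing behaviour of a single whole-packet CRC is easy to 
make concrete. A sketch, using CRC-32 as a stand-in for whatever 
checksum a real link layer actually uses:

```python
import zlib

def frame(packet: bytes) -> bytes:
    """Append one CRC-32 over the whole packet: the all-or-nothing scheme."""
    return packet + zlib.crc32(packet).to_bytes(4, "big")

def deframe(block: bytes):
    """Return the packet if the single CRC matches, else None (dropped)."""
    packet, crc = block[:-4], int.from_bytes(block[-4:], "big")
    return packet if zlib.crc32(packet) == crc else None
```

With one CRC over the whole frame, a single bit error anywhere - even 
deep in a payload the application would happily tolerate - loses the 
entire packet, headers and all.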


> Similarly, UDP has a distinguished value of the checksum field for "no
> checksum", for applications for which data perfection is not critical.
>   

Surely it has! But how do I tell the link layer not to care about 
correctness? Where is the switch where I can tell the link layer: 
"Dear Link Layer! I will never have a look at the UDP checksum; I 
always leave it alone!" and where I don't get the answer: "So, if you 
don't care about the checksum, you surely won't mind me caring about a 
correct checksum, will you?"

Some months ago, Dave Reed wrote that mobile networks' link layers are 
often simply "too smart"! ;-)

I'm not quite sure whether we have header checksums specifically for 
UDP and RTP, or perhaps for UDP and RTP combined. If we did, we could 
tell the link layer that it shall care for the IP/UDP/RTP headers to 
be correct - and _must_ _not_ care about the rest. I'm curious whether 
this is implemented in any wireless mobile network.


> IPv4 has a separate header checksum which has to be correct; the thinking
> being that if the internetwork header is damaged, you might as well toss the
> packet, since it's quite possibly damaged past the point of being useful
> (e.g. source or destination address is damaged).
>
>   

Yes. And when the checksum is correct, the network shall be well 
behaved and damn stupid and convey the packet.
However, some of these networks, like GPRS, UMTS and (I'm not yet sure 
about) HSDPA, are "know it all" networks and even toss the packet when 
it's perfectly acceptable to (for? Prepositions are terrible ;-) ) the 
application.
That's a typical end-to-end argument: only the application knows 
whether the packet is o.k. or not. (However, "smart" networks think 
they know better ;-))

>
> The obvious engineering response (especially on low bandwidth networks!),
> knowing that if the header suffers a bit error, the bandwidth used to send
> the packet will have been wasted, is to use an internetwork<->network
> adaption layer that takes special measures to protect the header. E.g. an FEC
> or CRC - something that can *correct*, not just *detect*, errors - over that
> part of the packet, or something like that.
>
> Does any high-error, low-datarate network used for media applications
> actually do this?
>
>   
That's exactly my concern! I have heard rumours that this was 
intended, but I never saw it done in practice.

(And it's only a concern in _packet_ switching, as in "_line_ 
switching" / TDM networks the "header information" is put into the 
schedule, and therefore the problem does not exist there.)
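Just to make the suggested adaptation layer concrete: a scheme that can 
*correct* header errors could, in its simplest (and very wasteful) form, 
be a repetition code over the header bytes, leaving the payload 
unprotected. This is a hypothetical sketch for illustration - not 
something any real link layer I know of does; a practical design would 
use a proper FEC instead of triplication:

```python
def protect_header(packet: bytes, hdr_len: int) -> bytes:
    # Send every header byte three times; the payload goes out as-is.
    header, payload = packet[:hdr_len], packet[hdr_len:]
    return bytes(b for byte in header for b in (byte, byte, byte)) + payload

def recover(block: bytes, hdr_len: int) -> bytes:
    # A bitwise majority vote per header byte corrects any single
    # corrupted copy; payload errors pass through untouched, for the
    # application to judge.
    header = bytearray()
    for i in range(hdr_len):
        a, b, c = block[3 * i : 3 * i + 3]
        header.append((a & b) | (a & c) | (b & c))  # bitwise majority
    return bytes(header) + block[3 * hdr_len:]
```

The point of the exercise: the header survives a bit error that would 
have cost the whole packet under a single whole-frame CRC, while the 
payload is handed to the application undamaged or not - which is exactly 
the end-to-end division of labour.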

> The next obvious response is one that requires a little bit of cooperation
> from the internetwork level, which is to realize that for data applications,
> which are sensitive to data error (such as TCP), it's worth turning on
> network-level reliability mechanisms (such as a CRC over the entire packet).
> Obviously, the internetwork layer ought to have a bit to say that; the
> adaption layer shouldn't have to look at the protocol type.
>
>   
That's the "switch" I missed above.

> Interestingly, again, IPv4 originally did include such a mechanism: the ToS
> bits did include a "high reliability" bit, but I'm not sure we fully
> understood the potential application I am speaking of here.
>   
Tempora mutantur, nos et mutamur in illis ("times change, and we 
change with them") - or how the use of the ToS bits changed over the 
years ;-)

> (Also, interestingly, speaking of IPv4, IPv6 doesn't have a separate header
> checksum; IIRC the thinking was it was duplicative of the end-end checksum.
> That's why UDPv6 mandates the checksum. However, it seems to me that this,
> plus the UDP checksum point, means that in a very critical area, IPv6 is
> actually less suitable for media applications than IPv4. But I digress..)
>
>   

Is this really a digression?

Interestingly, some people within the COMCAR project talked about IPv6 
in newspaper interviews and emphasized its suitability for media 
applications in mobile networks =8-)


>     > .. what would be a little flaw for a properly implemented line switching
>     > is reliably turned into a disaster, when you use packet switching.
>
> Now this raises another interesting point (although not the one you were
> making, which I think I have answered above); which is that circuit switching
> is potentially inherently better for transmission systems which have high
> BER.
>
> One possible way this could be true is that the application will perform
> poorly, and that damaged packets potentially waste bandwidth. However, I
> think this can be mostly avoided through use of CRC's, etc, as discussed
> above. 

Yes. However, the current IPv6 discussion points in a different 
direction, and I can well think of projects which recommend IPv6 even 
for mobile networks. Considering your points above, this requires some 
thought. One question is whether we _can_ use IPv6 for media 
transportation in packet switched networks at all when your points are 
taken into account. The other is whether the implementations are 
actually _done_ that way.

> Yes, that makes for a slightly more complex design, but in this day
> and age (we're not back in the age of 74-series TTL anymore) the extra
> computing power to do a CRC is 'no big deal'.
>
> Another point might be the per-packet header overhead, but this is obviously
> an old debate, one where the other advantages of the datagram model (such as
> the lack of a setup delay) have carried the day - to the point that even
> notionally circuit-based systems such as ATM have had datagram features,
> along with per-packet headers...
>
>
>   
And we both know about the long-term success of ATM :-)

> I think the original IPv4 design did a reasonable (not perfect, of course -
> we had nothing like the experience we have now) job on these things. Perhaps
> the understanding of those points was not spread as widely as it should have
> been, though (through not being prominently discussed, as the other points of
> the datagram design philosophy were).
>
> The system you worked on (which apparently used an all-or-nothing approach to
> data integrity - could they even repair damage, or just detect it?) didn't do
> that, but I don't think it proved it can't be done (any more than the Titanic
> proved you can't make ships which don't kill passengers by the thousand :-).
>
>   

The system I worked on is spilled milk, and it was not my job to 
change the IP implementation; I had to work above IP. In addition, I 
did not see all these details: I started the whole work with little, 
not to say no, knowledge about mobile networks.

It's spilled milk. And perhaps I will learn to accept this one day.

Detlef
> 	Noel
>   


-- 
Detlef Bosau                          Mail:  detlef.bosau at web.de
Galileistrasse 30                     Web:   http://www.detlef-bosau.de
70565 Stuttgart                       Skype: detlef.bosau
Mobile: +49 172 681 9937




