[e2e] Resource Fairness or Throughput Fairness, was Re: Opportunistic Scheduling.

Noel Chiappa jnc at mercury.lcs.mit.edu
Mon Jul 23 10:34:20 PDT 2007


    > From: Detlef Bosau <detlef.bosau at web.de>

    > - Data flows are typically asynchronous and sensitive against data
    > corruption.
    > - Media flows are typically synchronous / isochronous and at least to a
    > certain degree robust against data corruption.

"sensitive to data corruption" would probably be a better way to phrase that
first one...


    > you inevitably have a recovery layer in these systems: ... simply has
    > no means to accept "blocks with one percent bit error rate" ... block
    > is either corrupt or intact.
    > ...
    > .. in networks with high error rates, i.e. particularly mobile wireless
    > networks, IP is absolutely ill suited for any kind of media streaming.
    > ...
    > IP or any other packet switched protocol must not be used for media
    > streaming in mobile networks.

Well, at an architectural level, the internetworking model is supposed to
work for media applications (i.e. error-tolerant applications) on networks
with some errors.

That's the whole reason IP and TCP were split apart to begin with: the media
applications (packet voice was the one they were actually doing) didn't need
the high robustness of TCP, and the data stream delays caused by
retransmissions were a bad case of "the cure is worse than the disease". The
split allowed applications which didn't need TCP's data robustness to be
built directly on an unreliable datagram service.

Similarly, UDP has a distinguished value of the checksum field for "no
checksum", for applications for which data perfection is not critical.
(Although on checking the IPv6 specification for a point below, I notice
that IPv6 doesn't have this capability. More on this below...)
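
Just to make that concrete, here is roughly what it amounts to on the wire
(a sketch only - field layout per RFC 768, not code from any real stack):

    /* UDP over IPv4: a transmitted checksum of zero means "no checksum was
     * computed" - the distinguished value mentioned above.  Over IPv6 the
     * UDP checksum is mandatory, so this shortcut isn't available there. */
    #include <arpa/inet.h>   /* htons() */
    #include <stdint.h>

    struct udp_header {
        uint16_t src_port;
        uint16_t dst_port;
        uint16_t length;     /* header + payload, in octets */
        uint16_t checksum;   /* 0 = "no checksum" (UDP over IPv4 only) */
    };

    static void fill_udp_header_no_checksum(struct udp_header *h,
                                            uint16_t sport, uint16_t dport,
                                            uint16_t payload_len)
    {
        h->src_port = htons(sport);
        h->dst_port = htons(dport);
        h->length   = htons((uint16_t)(sizeof *h + payload_len));
        h->checksum = 0;     /* receiver skips checksum verification */
    }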

But you do have one (actually several) interesting points below here:


    > your media flow may well tolerate 1 percent BER, if the one corrupted
    > bit is in the time stamp in the RTP header, you can throw away your
    > whole RTP packet which may consist of say 1 kByte, roughly spoken:
    > 10.000 bits.

IPv4 has a separate header checksum which has to be correct; the thinking
being that if the internetwork header is damaged, you might as well toss the
packet, since it's quite possibly damaged past the point of being useful
(e.g. source or destination address is damaged).
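
(For reference, that header checksum is just the Internet checksum of RFC
1071 - a ones'-complement sum over the header's 16-bit words, with the
checksum field zeroed while computing; a sketch of mine, not lifted from any
stack:)

    #include <stddef.h>
    #include <stdint.h>

    /* RFC 1071 Internet checksum, as used over the IPv4 header.  A receiver
     * recomputes it and tosses the packet on a mismatch - the case described
     * above. */
    static uint16_t internet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {                    /* sum 16-bit big-endian words */
            sum += (uint32_t)p[0] << 8 | p[1];
            p += 2;
            len -= 2;
        }
        if (len == 1)                        /* odd trailing byte, zero-padded */
            sum += (uint32_t)p[0] << 8;

        while (sum >> 16)                    /* fold the carries back in */
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;               /* ones' complement of the sum */
    }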

However, on high BER networks with a simplistic internetwork<->network
adaption layer (important!), there's a good chance (especially with the
smaller packets that some media applications produce) that in any packet with
error(s), an error will have occurred in the header, and the packet will have
to be discarded. 
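
A quick back-of-the-envelope calculation shows how bad this gets for small
media packets (my own numbers - a 40-byte IPv4/UDP/RTP header, a 20 ms G.711
payload, and independent bit errors assumed):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double ber          = 1e-3;       /* a fairly hostile wireless link */
        double header_bits  = 40 * 8;     /* IPv4 + UDP + RTP, no options */
        double payload_bits = 160 * 8;    /* 20 ms of G.711 voice */
        double total_bits   = header_bits + payload_bits;

        double p_pkt = 1.0 - pow(1.0 - ber, total_bits);   /* any bit hit */
        double p_hdr = 1.0 - pow(1.0 - ber, header_bits);  /* header bit hit */

        printf("P(packet corrupted)       = %.3f\n", p_pkt);
        printf("P(header hit | corrupted) = %.3f\n", p_hdr / p_pkt);
        return 0;
    }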

The obvious engineering response (especially on low-bandwidth networks!),
knowing that if the header suffers a bit error, the bandwidth used to send
the packet will have been wasted, is to use an internetwork<->network
adaption layer that takes special measures to protect the header. E.g. an
FEC - something that can *correct* errors, not just *detect* them as a CRC
does - over that part of the packet, or something like that.

Does any high-error, low-datarate network used for media applications
actually do this?
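
(To illustrate the *correct*-versus-*detect* distinction, here is a toy
Hamming(7,4) coder - about the simplest FEC there is; an adaption layer
protecting headers would use something much stronger, but the principle is
the same:)

    #include <stdint.h>
    #include <stdio.h>

    /* Encode 4 data bits (low nibble of d) into a 7-bit codeword with three
     * parity bits at codeword positions 1, 2 and 4. */
    static uint8_t hamming74_encode(uint8_t d)
    {
        uint8_t d1 = d & 1, d2 = (d >> 1) & 1;
        uint8_t d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
        uint8_t p1 = d1 ^ d2 ^ d4;   /* covers codeword bits 1,3,5,7 */
        uint8_t p2 = d1 ^ d3 ^ d4;   /* covers codeword bits 2,3,6,7 */
        uint8_t p3 = d2 ^ d3 ^ d4;   /* covers codeword bits 4,5,6,7 */
        /* codeword positions 1..7 carry p1 p2 d1 p3 d2 d3 d4 */
        return (uint8_t)(p1 | (p2 << 1) | (d1 << 2) | (p3 << 3)
                            | (d2 << 4) | (d3 << 5) | (d4 << 6));
    }

    /* Decode a 7-bit codeword, *correcting* a single flipped bit if present. */
    static uint8_t hamming74_decode(uint8_t c)
    {
        uint8_t b[8];
        for (int i = 1; i <= 7; i++)
            b[i] = (c >> (i - 1)) & 1;
        int syndrome = (b[1] ^ b[3] ^ b[5] ^ b[7])
                     | (b[2] ^ b[3] ^ b[6] ^ b[7]) << 1
                     | (b[4] ^ b[5] ^ b[6] ^ b[7]) << 2;
        if (syndrome)              /* non-zero syndrome = position of bad bit */
            b[syndrome] ^= 1;
        return b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3);
    }

    int main(void)
    {
        uint8_t cw = hamming74_encode(0xB);        /* data nibble 1011 */
        cw ^= 1 << 4;                              /* flip a bit in transit */
        printf("recovered: 0x%X\n", (unsigned)hamming74_decode(cw)); /* 0xB */
        return 0;
    }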


The next obvious response is one that requires a little bit of cooperation
from the internetwork level, which is to realize that for data applications,
which are sensitive to data errors (such as anything over TCP), it's worth
turning on network-level reliability mechanisms (such as a CRC over the
entire packet). Obviously, the internetwork layer ought to have a bit that
says so; the adaption layer shouldn't have to look at the protocol type.

For one, on low-bandwidth networks, this makes sure that the bandwidth isn't
wasted. More importantly, TCP responds poorly when a high percentage of
packets are dropped: throughput goes to the floor (because modern TCPs
interpret the losses as a signal of congestion), and worse, the connection
may close.
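
To put a rough number on "goes to the floor", here is a toy model (mine -
pure additive increase with halving on every loss, nothing like a complete
TCP implementation):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double loss_rates[] = { 0.0001, 0.001, 0.01, 0.1 };
        srand(1);

        for (int i = 0; i < 4; i++) {
            double p = loss_rates[i], cwnd = 1.0, total = 0.0;
            long segments = 200000;

            for (long s = 0; s < segments; s++) {
                if ((double)rand() / RAND_MAX < p)
                    /* loss, read as congestion: halve the window */
                    cwnd = cwnd > 2.0 ? cwnd / 2.0 : 1.0;
                else
                    cwnd += 1.0 / cwnd;   /* acked: additive increase */
                total += cwnd;
            }
            printf("loss rate %.4f -> average window %5.1f segments\n",
                   p, total / segments);
        }
        return 0;
    }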

Interestingly, again, IPv4 originally included such a mechanism: the ToS
field has a "high reliability" bit - though I'm not sure we fully understood,
at the time, the application I am describing here.
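
A sketch of what that might look like (enable_link_protection() is a hook
I'm inventing purely for illustration; the bit itself is the "R" bit of the
RFC 791 ToS octet, mask 0x04):

    #include <stdbool.h>
    #include <stdint.h>

    #define IPTOS_RELIABILITY 0x04   /* RFC 791 ToS octet: "high reliability" */

    /* Hypothetical adaption-layer hook: full-packet CRC/ARQ on or off. */
    static void enable_link_protection(bool on) { (void)on; /* stub */ }

    static void adaption_layer_send(const uint8_t *ipv4_packet)
    {
        uint8_t tos = ipv4_packet[1];    /* second octet of the IPv4 header */

        /* Reliability requested (e.g. a TCP data flow): protect the whole
         * packet.  Not requested (e.g. loss-tolerant media): protect only
         * the header, as sketched above, and let payload errors through. */
        enable_link_protection((tos & IPTOS_RELIABILITY) != 0);

        /* ... queue the packet on the link ... */
    }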


(Also, interestingly, speaking of that IPv4 header checksum: IPv6 doesn't
have one; IIRC the thinking was that it duplicated the end-end checksum.
That's why UDPv6 mandates the checksum. However, it seems to me that this,
plus the UDP checksum point, means that in a very critical area, IPv6 is
actually less suitable for media applications than IPv4. But I digress...)


    > .. what would be a little flaw for a properly implemented line switching
    > is reliably turned into a disaster, when you use packet switching.

Now this raises another interesting point (although not the one you were
making, which I think I have answered above), which is that circuit switching
is potentially inherently better for transmission systems which have high
BER.

One possible reason this could be true is that, with packets, the application
will perform poorly and damaged packets potentially waste bandwidth. However,
I think this can be mostly avoided through the use of CRCs, etc, as discussed
above. Yes, that makes for a slightly more complex design, but in this day
and age (we're not back in the age of 74-series TTL anymore) the extra
computing power to do a CRC is 'no big deal'.
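
(For scale, here is the entire computation in its slowest, bit-at-a-time
form, using the Ethernet/zlib CRC-32 polynomial; table-driven and hardware
versions are faster still:)

    #include <stddef.h>
    #include <stdint.h>

    /* Bitwise CRC-32 (reflected polynomial 0xEDB88320). */
    static uint32_t crc32_bitwise(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }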

Another point might be the per-packet header overhead, but this is obviously
an old debate, one where the other advantages of the datagram model (such as
the lack of a setup delay) have carried the day - to the point that even
notionally circuit-based systems such as ATM have had datagram features,
along with per-packet headers...


But I have seriously tried to think of an inherent advantage that circuit
switched networks *necessarily* have over a well-designed packet switching
system, and I can't think of one.

The key, of course, is that "well-designed"; the entire system (especially
the internetwork<->network adaption layer) has to be designed with both these
applications (data-damage-insensitive) and also these networks (high BER) in
mind.

I think the original IPv4 design did a reasonable (not perfect, of course -
we had nothing like the experience we have now) job on these things. Perhaps
the understanding of those points was not spread as widely as it should have
been, though, since they were not as prominently discussed as the other
points of the datagram design philosophy.

The system you worked on (which apparently used an all-or-nothing approach to
data integrity - could they even repair damage, or just detect it?) didn't do
that, but I don't think it proved it can't be done (any more than the Titanic
proved you can't make ships which don't kill passengers by the thousand :-).

	Noel

