[e2e] Satellite Data Rates

Jon Crowcroft Jon.Crowcroft at cl.cam.ac.uk
Sun Dec 5 08:31:01 PST 2004


um, i missed a bit of the context on this, so this comment may be awry
w.r.t. the satellite data rate discussion,
but as regards the general efficiency of rateless codes,
methinks you need to re-read about Luby codes/Tornado/turbo codes,
and rateless codes in general, where you don't need a priori knowledge
of the channel error characteristics to choose the code,
etc... the overhead is nothing like the level you assert here...
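to make the point concrete, here's a minimal sketch of my own (an illustration only, not Luby's actual LT construction - real LT/Raptor codes use a sparse soliton degree distribution to get linear-time belief-propagation decoding): a dense random linear fountain over GF(2), where the receiver typically recovers all n source blocks after n plus only a handful of extra packets, with no a priori knowledge of the loss rate:

```python
import random

def encode(blocks, seed):
    """One fountain packet: the XOR of a random subset of source blocks.
    (Dense random linear fountain over GF(2) -- illustrative sketch;
    real LT/Raptor codes use sparse soliton degree distributions.)"""
    rng = random.Random(seed)
    subset = [i for i in range(len(blocks)) if rng.random() < 0.5]
    if not subset:                       # a degree-0 packet carries nothing
        subset = [rng.randrange(len(blocks))]
    payload = 0
    for i in subset:
        payload ^= blocks[i]
    return subset, payload

def decode(n, packets):
    """Gaussian elimination over GF(2); returns the n source blocks,
    or None if the received packets don't yet have full rank."""
    pivots = {}                          # pivot bit -> (mask, payload)
    for subset, payload in packets:
        mask = 0
        for i in subset:
            mask |= 1 << i
        while mask:                      # reduce row by existing pivots
            p = mask.bit_length() - 1
            if p not in pivots:
                pivots[p] = (mask, payload)
                break
            pm, pp = pivots[p]
            mask ^= pm                   # redundant rows reduce to zero
            payload ^= pp
    if len(pivots) < n:
        return None                      # need more (innovative) packets
    blocks = [0] * n
    for p in sorted(pivots):             # back-substitute, low bits first
        mask, payload = pivots[p]
        for q in range(p):
            if mask >> q & 1:
                payload ^= blocks[q]
        blocks[p] = payload
    return blocks

n = 8
source = [random.Random(100 + i).getrandbits(32) for i in range(n)]
received, seed, result = [], 0, None
while result is None:                    # keep receiving until decodable
    received.append(encode(source, seed))
    seed += 1
    result = decode(n, received)
print(f"recovered {n} blocks from {len(received)} packets")
```

(the expected overhead of a dense random binary fountain is about two extra packets regardless of n, which is why the "efficiency tends to zero" framing is misleading: the sender stops once the receiver signals success.)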

yes, you are right that people have started looking at FEC in P2P

e.g. upcoming Infocom 2005 paper
C. Gkantsidis, P. Rodriguez, "Network Coding for Large Scale Content
Distribution", IEEE INFOCOM 2005, Miami, March 2005. (Also available
as Microsoft Research Technical Report MSR-TR-2004-80.)
at:
http://www.rodriguezrodriguez.com/files/My_Research.html

a tutorial about the ideas...
http://research.microsoft.com/~pachou/pubs/ChouWJ04.ppt

but the reasons are to reduce channel inefficiency... i.e. reduce
greediness... also, it seems to me that a p2p system is
far _less_ greedy than a system that encourages repeated downloads from
the "master" site, such as iTunes or other classical 70s server-based
technologies... but that's a whole other debate...

In missive <Pine.GSO.4.50.0412051506330.7623-100000 at argos.ee.surrey.ac.uk>, Lloyd Wood typed:

 >>On Mon, 29 Nov 2004, David P. Reed wrote:
 >>
 >>> Actually, Lloyd, I was proposing something based on the simplest, most
 >>> basic digital fountain idea.
 >>>
 >>> If you remember Luby's original papers, the basic idea is that the
 >>> digital fountain stream is an infinite sequence of packets that encodes
 >>> a source file that is N packets long, such that if you receive ANY
 >>> subset of N packets correctly, you can reconstruct the original file.
 >>> I.e., it's pretty close to the information theoretic efficiency of the
 >>> packet-at-a-time channel.   You may also remember that the first N
 >>> packets of the fountain are just those of the source file, while
 >>> subsequent packets are combinations of the preceding frames.
 >>
 >>Thus, after 2N packets are sent, redundantly sending every symbol
 >>(original and resend) you have achieved a rate of N/2N, or 1/2. And
 >>from then on, with further packets, you are down to rate 1/3, then
 >>1/4... as your infinite sequence continues. This is called 'rateless'
 >>because it holds to no defined rate to attempt to get the packet
 >>across. And obviously, as the infinite series continues (because you
 >>often can't be sure when the receiver has all it needs), the
 >>efficiency of the channel drops towards zero.
 >>
 >>[..]
 >>> This is analogous to an adaptive FEC, and becomes one, but based on a
 >>> somewhat different theory and loss model, and with a quite different
 >>> optimization goal
 >>
 >>One where tending towards zero efficiency is somehow seen as tending
 >>towards maximum theoretical throughput.
 >>
 >>Really, it's no coincidence that erasure codes have found a home in
 >>greedy P2P applications, whose concept of channel efficiency is
 >>somewhat fuzzy at best, but where the redundancy of encoding can best
 >>be exploited by multiple hosts.

actually, i don't think it's fair to say the concept of channel
efficiency is fuzzy - it is well defined - the place to look for this
definition is (again) in radio nets; for example, look at the definitions
of the info-capacity of a channel by Gupta & Kumar - the info-theoretic
limits explored in that work are for multiple "hosts", so they carry over
nicely to P2P... (or the internet in general...)
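(quoting the headline results from memory, so do check the original: with
n nodes each able to transmit at W bits/sec, Gupta & Kumar show per-node
throughput of

    Theta( W / sqrt(n log n) )    for randomly placed nodes
    Theta( W / sqrt(n) )          under optimal placement and scheduling

i.e. each host's share of channel capacity is precisely quantified,
nothing fuzzy about it...)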

 cheers

   jon


