[e2e] Technology marches forward at the expense of the net?

David P. Reed dpreed at reed.com
Thu Dec 13 14:43:40 PST 2001


I know a little about Digital Fountain's technological basis.  Essentially, 
it uses erasure coding to transmit a file in such a way that any random 
sample of its coded stream containing roughly as many bits as the original 
file (plus a small coding overhead) can be used to reconstruct the file.

So they can send a file by "blasting" at whatever rate they want, and any 
packets that happen to get through will contribute to the final 
file.  (This works, of course, only because the code covers the whole 
file as a unit.)
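
To make the coding idea concrete, here is a toy sketch in Python (my own 
illustration, not DF's actual code): each transmitted symbol is the XOR 
of a random subset of the source blocks, and the receiver can rebuild 
the file from roughly any k symbols that happen to arrive.  DF's real 
codes (Tornado-style) use carefully chosen degree distributions to 
decode in linear time; plain Gaussian elimination is just the simplest 
correct way to show the principle:

    import random

    def encode_symbol(blocks, rng):
        """One coded packet: the XOR of a random nonempty subset of blocks."""
        k = len(blocks)
        mask = 0
        while mask == 0:
            mask = rng.getrandbits(k)       # which source blocks went in
        val = 0
        for i in range(k):
            if mask >> i & 1:
                val ^= blocks[i]
        return mask, val

    def decode(symbols, k):
        """Gaussian elimination over GF(2); returns None until solvable."""
        rows = {}                           # pivot bit -> reduced (mask, val)
        for mask, val in symbols:
            while mask:
                piv = mask.bit_length() - 1
                if piv not in rows:
                    rows[piv] = (mask, val)
                    break
                pmask, pval = rows[piv]
                mask, val = mask ^ pmask, val ^ pval
        if len(rows) < k:
            return None
        out = [0] * k                       # back-substitute, low pivots first
        for piv in sorted(rows):
            mask, val = rows[piv]
            for j in range(piv):
                if mask >> j & 1:
                    val ^= out[j]
            out[piv] = val
        return out

    rng = random.Random(0)
    blocks = [rng.getrandbits(64) for _ in range(8)]    # the "file"
    received = []
    while True:
        sym = encode_symbol(blocks, rng)
        if rng.random() < 0.3:              # 30% of the "blast" is lost
            continue
        received.append(sym)
        out = decode(received, len(blocks))
        if out is not None:
            break
    assert out == blocks    # roughly any 8 surviving packets rebuild it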

This has the nice property of separating congestion control from 
retransmission control, so the two are independent.

They can still use "congestion control" to adjust the rate of the "blast" 
in a closed-loop way - ECN bits, for example, would work fine, and if you 
send acks for every arriving packet, you can use "drops" (and RED, if you 
want) to measure congestion.  But the key difference is that you don't 
have to resend the same data in order to provoke acknowledgements.

So you can be "TCP compatible" (in a fairness sense) by using the same 
closed-loop feedback on rate.
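
To make that concrete, here is a minimal sketch of equation-based rate 
control (my illustration, using the simplified TCP throughput model of 
Mathis et al., not DF's actual controller): the measured loss/mark 
fraction sets the sending rate, and nothing else.

    import math

    def tcp_friendly_rate(pkt_size, rtt, loss_frac, max_rate):
        """Bytes/sec one TCP would average: pkt_size/(rtt*sqrt(2p/3))."""
        if loss_frac <= 0:
            return max_rate                 # no congestion signal seen yet
        return min(max_rate,
                   pkt_size / (rtt * math.sqrt(2.0 * loss_frac / 3.0)))

    # 1500-byte packets, 100 ms RTT, 1% of packets marked or dropped:
    print(tcp_friendly_rate(1500, 0.100, 0.01, 12e6))   # ~184 KB/s

    # The send loop then just paces *fresh* coded symbols at that rate;
    # congestion feedback changes the rate, never the data:
    #
    #   while not done:
    #       p = marked_or_lost_fraction()   # hypothetical feedback hook
    #       rate = tcp_friendly_rate(1500, rtt, p, link_cap)
    #       transmit(next_coded_symbol())   # never a retransmission
    #       wait(1500 / rate)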

But the performance is better, because you never have to retransmit the 
same data packets when there is a loss (this duplication increases in TCP 
as the RTT increases).  Instead, in the DF approach, every packet you 
send carries fresh information.
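
A back-of-envelope way to see it (numbers invented for illustration): at 
loss rate p, a fountain sender delivers k packets' worth of data in about 
k/(1-p) transmissions no matter how large the RTT is, because every 
surviving packet is useful; TCP pays the same loss plus the duplicated 
retransmissions, and the duplication grows with the window.

    k, p = 1000, 0.02
    print(k / (1 - p))      # ~1020: 2% loss costs only ~2% extra packets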

I can explain more if anyone wants.

BTW, the neat thing about DF's technology is that it is inherently 
multicast - that is, the same stream can be fanned out, and even if one 
path has more congestion than another, each endpoint will still get the 
file in a time inversely proportional to its particular bottleneck 
bandwidth vis-a-vis the source.
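
A toy simulation of that fan-out (loss rates invented for illustration): 
the source sends one shared coded stream, and each receiver finishes as 
soon as it alone has collected enough symbols, regardless of how lossy 
the other paths are.

    import random

    rng = random.Random(1)
    k = 1000                        # symbols needed (overhead ignored)
    loss = {"A": 0.05, "B": 0.50}   # per-receiver loss on its own path
    have = {r: 0 for r in loss}
    done_at = {}

    t = 0
    while len(done_at) < len(loss):
        t += 1                      # source multicasts symbol number t
        for r, p in loss.items():
            if r not in done_at and rng.random() >= p:
                have[r] += 1
                if have[r] >= k:
                    done_at[r] = t

    print(done_at)      # A: ~1050 (k/0.95), B: ~2000 (k/0.50)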


At 04:18 PM 12/13/2001 -0500, Vishal Misra wrote:
>On Thu, 13 Dec 2001, Mark Boolootian wrote:
>
> > It claims
> >
> >    throughput can be set to any desired fraction of available link
> >    bandwidth up to 95%; for high bandwidth links, this provides much
> >    higher throughput than is possible with TCP, enabling fast download
> >    of very large files even over very long WAN links.
> >
> >    a congestion control feature makes file transfers share bandwidth
> >    resources fairly with other network traffic
> >
>
>Aren't the two statements above inherently contradictory? If the "other
>network traffic" is TCP, then how are the bandwidth resources shared
>"fairly"? While in no way advocating the TCP way of doing things, I don't
>see how this scheme could be max-min/proportionally/tcp fair if there is
>other non-MetaContent traffic on the network.
>
>-Vishal

- David
--------------------------------------------
WWW Page: http://www.reed.com/dpr.html