[e2e] Technology marches forward at the expense of the net?
wenyu at cs.columbia.edu
Thu Dec 13 13:43:02 PST 2001
My guess from their front-page summary is that there is
mission-critical data that needs to be transferred reliably ASAP. I
haven't had time to look through it, but it may be based on some
FEC-like channel coding scheme.
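To make the guess concrete: the simplest FEC-style erasure code is a single
XOR parity block per group, which lets the receiver rebuild any one lost
block without a retransmission. This is just an illustration of the general
idea, not MetaContent's actual (unpublished) scheme:

```python
# Toy single-parity erasure code: k data blocks plus one XOR parity
# block; any ONE lost block of the k+1 can be reconstructed.
# (Illustrative sketch only -- not MetaContent's real algorithm.)

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def encode(data_blocks):
    """Append one parity block: the XOR of all data blocks."""
    return data_blocks + [xor_blocks(data_blocks)]

def decode(codeword, lost_index):
    """Rebuild the single missing block by XORing the survivors
    (works because the XOR of all k+1 blocks is zero)."""
    return xor_blocks([b for i, b in enumerate(codeword)
                       if i != lost_index])

data = [b"abcd", b"efgh", b"ijkl"]
cw = encode(data)
print(decode(cw, 1))  # -> b'efgh', recovered without retransmission
```

Real schemes (Reed-Solomon, Tornado codes, etc.) tolerate more than one
loss per group, but the bandwidth-for-latency trade is the same.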
Of course, if that's the assumption, then one can say: fairness is not
as important if different types of sessions have different priorities.
And a natural question everyone would ask is: what if everybody is using
MetaContent? Probably very bad... "Mission-critical" sounds like a
military term, so the military will probably like it and block everyone
else from using it :), just kidding.
However, redundancy schemes are still quite sensible in a wireless or
noisy-channel environment, where most of the time you are fighting
static or channel fading rather than other connections.
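The noisy-channel case can be made quantitative. Assuming independent
random loss with probability p (characteristic of channel noise, unlike
the correlated loss congestion produces), one parity block per k data
blocks already recovers most loss events:

```python
# Sketch under an assumed independent-loss model: a group of k data
# blocks plus 1 parity block decodes iff at most one of the k+1
# packets is lost.

def group_survival(k, p):
    """Probability a (k data + 1 parity) group is decodable when each
    packet is independently lost with probability p."""
    q = 1 - p
    # P(zero losses) + P(exactly one loss among the k+1 packets)
    return q ** (k + 1) + (k + 1) * p * q ** k

# e.g. k=4, 1% random loss: survival is roughly 0.999, versus ~0.95
# for sending the 5 packets with no parity at all.
print(group_survival(4, 0.01))
```

Against congestion loss the same math misleads, since losses arrive in
bursts and often take out several packets of one group, which is exactly
the fairness worry in the quoted exchange below.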
My 2 cents.
> > It claims
> > throughput can be set to any desired fraction of available link bandwidth
> > up to 95%; for high bandwidth links, this provides much higher throughput
> > than is possible with TCP, enabling fast download of very large files
> > even over very long WAN links.
> > a congestion control feature makes file transfers share bandwidth resources
> > fairly with other network traffic
> Aren't the two statements above inherently contradictory? If the "other
> network traffic" is TCP, then how are the bandwidth resources shared
> "fairly"? While in no way advocating the TCP way of doing things, I don't
> see how this scheme could be max-min/proportionally/tcp fair if there is
> other non-MetaContent traffic on the network.
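A back-of-the-envelope model shows why the two quoted claims are hard to
reconcile. Assume (my assumption, not from the product's docs) one
rate-based flow that pins itself to fraction f of link capacity C while
n TCP flows back off and split the remainder:

```python
# Toy bandwidth-share model: one fixed-rate flow at fraction f of
# capacity C, n TCP flows dividing the leftover equally.
# (A deliberately crude sketch of the fairness objection.)

def shares(C, f, n):
    fixed = f * C                 # the rate-based flow's throughput
    per_tcp = (1 - f) * C / n     # each TCP flow's leftover share
    fair = C / (n + 1)            # the max-min fair share, for comparison
    return fixed, per_tcp, fair

# f = 0.95 on a 100 Mb/s link with 4 TCP flows:
print(shares(100, 0.95, 4))  # -> (95.0, 1.25, 20.0)
```

So at the advertised 95% setting each TCP flow gets 1.25 Mb/s where the
fair share is 20 Mb/s; "share fairly with other traffic" can only hold if
the congestion-control feature abandons the 95% target under load.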