[e2e] new network architecture idea -
David P. Reed
dpreed at reed.com
Mon May 22 02:05:00 PDT 2006
"shouting in a vacuum" is probably the wrong metaphor.
To make Jon's vision practical, the act of "offering to serve content"
needs to impose no costs on others. Thus it cannot include, as an
equivalent action, "advertising" - since "advertising" is "sending".
The concept of "advertising" is like "shouting": it imposes a cost on
everyone within earshot, whether or not they asked to listen.
Jon's vision in its extreme may prove unrealizable. But it's worth
thinking through how you would bootstrap such an extreme idea.
A simple start would be to say that one could never receive more packets
than one authorizes to be sent to oneself. This rule, in and of
itself, is not the normal network communications rule, which starts off
by allowing any packet to be sent.
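As a toy illustration (all names hypothetical, not any real protocol), the rule can be modeled as a credit scheme: a receiver grants each sender a budget of packets, and anything beyond that budget is simply dropped, so unauthorized senders impose no accepted cost.

```python
# A minimal sketch of the receiver-authorization rule: a node accepts a
# packet only if it previously granted the sender credit to transmit.
# Unauthorized or over-budget packets are silently dropped.

class Receiver:
    def __init__(self):
        self.credits = {}  # sender id -> packets still authorized

    def authorize(self, sender, n_packets):
        """Grant a sender permission to transmit up to n_packets."""
        self.credits[sender] = self.credits.get(sender, 0) + n_packets

    def on_packet(self, sender, payload):
        """Deliver only packets covered by prior authorization."""
        if self.credits.get(sender, 0) > 0:
            self.credits[sender] -= 1
            return payload   # delivered: the receiver asked for this
        return None          # dropped: no cost accepted

r = Receiver()
r.authorize("alice", 2)
assert r.on_packet("alice", "p1") == "p1"
assert r.on_packet("alice", "p2") == "p2"
assert r.on_packet("alice", "p3") is None      # credit exhausted
assert r.on_packet("mallory", "spam") is None  # never authorized
```

Note how this inverts the usual default: in the normal model every packet is accepted until something filters it, whereas here nothing is accepted until the receiver says so.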
It does not prevent intermediate nodes from deciding to request packets
on a receiver's behalf, before the receiver requests them. But in such
a case, it is clear that the intermediate node takes on its own
shoulders the burden of holding the packets, and cannot blame the
receiver for the imposed cost of anticipating demand. In other words,
it is acting like a "retailer" that stocks goods in anticipation of the
needs in its neighborhood for certain goods. In return, it gets to
charge a profit for the shortened latency in delivering those goods,
balanced against the risk that those goods are not requested.
In this way, swarms can be built.
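The retailer analogy above can be sketched as a toy cache (hypothetical names, not a real system): the intermediary prefetches on its own authority and bears the storage risk itself, while receivers still only ever get what they requested.

```python
# A toy model of the "retailer" intermediary: it stocks goods in
# anticipation of demand at its own risk, and profits (here, a cache hit)
# only when a receiver actually asks for what it stocked.

class Retailer:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # callable: key -> data (the producer)
        self.stock = {}                   # goods held in anticipation of demand
        self.hits = 0                     # prefetches that paid off
        self.misses = 0                   # requests the stock did not anticipate

    def prefetch(self, key):
        """Stock an item before any receiver asks: the retailer's own risk."""
        self.stock[key] = self.origin_fetch(key)

    def request(self, key):
        """Serve a receiver-initiated request, from stock when possible."""
        if key in self.stock:
            self.hits += 1                # shortened latency: the risk paid off
            return self.stock[key]
        self.misses += 1                  # fall back to the origin
        return self.origin_fetch(key)

origin = {"news": "today's news", "video": "cat video"}
shop = Retailer(origin.__getitem__)
shop.prefetch("news")                            # anticipated demand
assert shop.request("news") == "today's news"    # served from local stock
assert shop.request("video") == "cat video"      # not anticipated; fetched
assert (shop.hits, shop.misses) == (1, 1)
```

The key property is that the receiver's accounting never changes: whether the retailer guessed right or wrong, the cost of anticipation sits on the retailer's books, not the receiver's.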
But the key thing is that costs are not imposed merely by the production
of packets. The producer assumes the risk, and it can encourage
intermediaries to assume risks, but it cannot transfer the cost to
receivers who never requested the packets.
DNS is constructed in almost this way. It does not "advertise", and it
is not the source of "spam", nor is it the recipient of "spam" at the
packet level. However, the one way in which DNS does not work this way
is that it is obligated to serve *any* requestor. So to make DNS fit
this model, one would have to arrange that DNS servers could not handle
requests from any place that did not have prior authorization.
In a practical sense, one could arrange this to be the case by
representing authorization via an encryption protocol: only those nodes
who have prior authorization can construct valid requests to any
particular DNS server.
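One way such an authorization protocol could look (a sketch only, with hypothetical names): authorized clients hold a pre-shared key and tag each query with a MAC, so the server does no work for anyone who cannot produce a valid tag. A real protocol would also need nonces or timestamps to prevent replay, and a less clumsy key-distribution story than a single shared secret.

```python
# A minimal sketch of authorization via a MAC: only clients holding a
# pre-shared key can construct a valid request, so unauthorized
# requestors cost the server nothing beyond a tag comparison.

import hmac
import hashlib

def sign_request(name: str, key: bytes) -> str:
    """An authorized client tags its query with an HMAC over the name."""
    return hmac.new(key, name.encode(), hashlib.sha256).hexdigest()

class AuthorizedResolver:
    def __init__(self, key: bytes, records: dict):
        self.key = key
        self.records = records

    def resolve(self, name: str, tag: str):
        """Answer only requests bearing a valid tag; drop the rest."""
        expected = hmac.new(self.key, name.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            return None   # unauthorized requestor: no obligation to serve
        return self.records.get(name)

KEY = b"pre-shared-with-authorized-clients"   # hypothetical key distribution
server = AuthorizedResolver(KEY, {"example.com": "93.184.216.34"})
assert server.resolve("example.com",
                      sign_request("example.com", KEY)) == "93.184.216.34"
assert server.resolve("example.com", "forged-tag") is None
```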
Since this is a thought experiment, and a riff on Jon's proposal, I'll
leave it at that.
Jon Crowcroft wrote:
> In missive <70C6EFCDFC8AAD418EF7063CD132D064BA06A3 at WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>, "Christian Huitema" typed:
> >>Well, you trade DDOS for the Sybil attack. The problem is that in most
> >>P2P systems there is little "barrier to entry". Each zombie can manifest
> >>itself as multiple nodes, virtual nodes if you want. They can
> >>potentially have enough virtual nodes to represent 1/3rd of the
> >>population. If you don't believe that's possible, consider that 70% of
> >>e-mail is spam...
> in my conjectured architecture, most nodes collaborate to eliminate Sybils by witnessing the source
> and attesting to its uniqueness and authenticity - since there's no _destination_, the spammer
> (if your application is unsolicited content)
> is shouting in a vacuum
> >>> swarming systems also have a variety of mechanisms built into the
> >>> analogy of a "routing" substrate, that match incentives for
> >>> download/receiver, versus forwarding,
> >>> which make it hard for a zombie farm to dent the system unless there
> >>> is a significant fraction of nodes subverted (significant being >33%
> >>> or so, typically depending on the algorithm) - frankly, a system with
> >>> 1/3 or more nodes subverted is
> >>> so badly infiltrated that I have no idea why the bad guys are still
> >>> in it :)
> >>> the other thing with swarms is that not only is it hard to overload
> >>> the service (as it isn't a _point_ service)
> >>> but it's also hard to do topological attacks
> >>> packet swarming - an idea whose time has come from...
> >>> In missive <70C6EFCDFC8AAD418EF7063CD132D064BA0671 at WIN-MSG-
> >>> 21.wingroup.windeploy.ntdev.microsoft.com>, "Christian Huitema" typed:
> >>> >>> When things go wrong (black holes, DDoS, ..., even spam and the
> >>> >>> blogosphere) is when activities are "sender driven" without
> >>> >>> regard for the wishes or needs of the receivers.
> >>> >>
> >>> >>You can definitely accomplish a receiver driven DDOS. Assume a
> >>> >>band of zombies, and instruct them to all receive a large set of
> >>> >>pages from the target server. Pretty soon, the server's sending
> >>> >>capacity will be saturated. Voila, receiver driven DDOS.
> >>> >>
> >>> >>In Jon's proposal, the principle that prevents DOS is swarming.
> >>> >>Swarming allows the data to be served from any valid copy, not just
> >>> >>the initial publisher. In my example, if swarming worked, each
> >>> >>zombie would become a potential surrogate for the server, and the
> >>> >>server's content would remain available. I suspect however that the
> >>> >>zombies may try to not fully cooperate with the swarming...
> >>> >>
> >>> >>-- Christian Huitema
> >>> cheers
> >>> jon