[e2e] Changing dynamics

Pekka Nikander pekka.nikander at nomadiclab.com
Sat Feb 21 23:31:03 PST 2009


> Most (not all) of these ideas seem to reflect the idea that we  
> should operate the net with a lot of internal buffering.

I would rather say the idea is to change how we view the buffering in  
the network.

> For example, if it were actually a frequent benefit to search   
> partway back ... then what would that mean?  It would mean that the  
> packets are not transiting the network in the US with little or no  
> delay ...

But that overlooks the fact that not all traffic requires short delay.
AFAICS, the reason why TCP requires short delay is built into TCP
itself; given a different control loop structure (see Lloyd's and
Christian's messages) there could easily be transport protocols that
do not require such short e2e delays.

Taking a higher-layer view, only a small fraction of traffic requires
50..200 ms e2e delay, basically games and "interactive
multimedia" (aka voice :-).  Most transaction-like apps (IM, web apps,
etc.) could tolerate 2000..4000 ms delays, to say nothing of bulk
transfers.
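
To make the delay-budget point concrete, here is a toy sketch (purely
illustrative; the class names and budget figures are mine, not taken
from any real stack) of a sender that derives its per-try patience from
an application-declared e2e delay budget instead of assuming short RTTs:

# Toy illustration: a sender whose patience is set by the application's
# declared delay budget, not by an assumption of a short e2e RTT.
# All names and the example budgets are hypothetical.

from dataclasses import dataclass

@dataclass
class DelayBudget:
    app_class: str
    budget_ms: int      # how long the app is willing to wait end to end

BUDGETS = [
    DelayBudget("interactive multimedia / games", 200),
    DelayBudget("transaction-like (IM, web apps)", 3000),
    DelayBudget("bulk transfer", 60000),
]

def retransmit_timeout_ms(budget: DelayBudget, max_tries: int = 4) -> float:
    # Spread the budget over a few attempts instead of insisting on
    # the shortest possible per-packet delay.
    return budget.budget_ms / max_tries

for b in BUDGETS:
    print(f"{b.app_class:35s} budget={b.budget_ms:6d} ms  "
          f"per-try timeout ~ {retransmit_timeout_ms(b):7.0f} ms")

Nothing deep, but once the budget is 2000..4000 ms rather than 200 ms,
a few in-network detours or cache lookups per packet are no longer out
of the question.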

> I do think there is a virtue to moving replicated content closer to  
> the endpoints.  But that is a different thing, and has nothing to do  
> with routers and e2e protocols.

I sort of thought so, too, in the beginning.

Then I realised that changing the way the network handles information  
will change the system dynamics, affecting transport and e2e protocols.

> That thing has to do with what we were debating a few weeks ago:  
> what Van Jacobson calls "content centric networks" or what Akamai  
> does at the app layer, or my point about communications not having  
> to be about information that begins with the assumption that  
> information is in "one place".

Indeed.  That was the starting point.  But once you also take a look
at the trends in component price/performance, the landscape starts to
change.

As a Gedankenexperiment, what if the probability of the data still
being opportunistically cached in the forwarding nodes were higher
than the probability of the "sending" end-node still being up?  (Also
consider a "stateless" data source, or a source that has no direct
material interest in knowing whether the sink has received the data or
not.)
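
To put rough numbers on the thought experiment (the figures below are
invented for illustration, not measured), compare the chance that at
least one on-path node still holds an opportunistically cached copy
with the chance that the original sender is still reachable:

# Back-of-envelope version of the Gedankenexperiment; all probabilities
# are invented for illustration, not measured.

def p_cached_on_path(per_hop_cache_prob: float, hops: int) -> float:
    """Probability that at least one on-path node still caches the data,
    assuming independent per-hop caches."""
    return 1.0 - (1.0 - per_hop_cache_prob) ** hops

p_source_up = 0.90   # chance the "sending" end-node is still up
per_hop     = 0.25   # chance any single forwarding node has a copy
for hops in (2, 5, 10):
    p_cache = p_cached_on_path(per_hop, hops)
    better = "cache wins" if p_cache > p_source_up else "source wins"
    print(f"{hops:2d} hops: P(cached copy) = {p_cache:.2f}  "
          f"vs P(source up) = {p_source_up:.2f}  -> {better}")

With enough hops, even a modest per-hop caching probability overtakes a
fairly reliable source.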

Or take another example, which I think you brought forward a few weeks
ago: what if the information in question doesn't have any well-defined
"location" but is "smeared" over the network, e.g. due to coding or
because it is the answer to a question?  I presume that would also
have quite a large effect on how routers and transport protocols are
built.  In such a world it is hard to speak about e2e; where is the
other end?


Taking a shorter-term perspective, haven't we learned anything from
the so-called wireless TCP accelerators?  There we saw the effects of
the interaction of two (or three) control loops.  Now, if the
development of technology makes it a much stronger requirement than it
is today to prefer local communication over long-haul communication,
presumably changing the dynamics (the number of interacting control
loops), I surmise that this will have its effects on transport
protocols, too.
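
A crude toy model of that kind of interaction (nothing like a real
accelerator; deliberately simplified AIMD-style loops with made-up
parameters) already shows the structural point: the end-to-end loop
ends up reacting to the proxy's buffer rather than to the actual
wireless path:

# Crude discrete-time toy: a "split" connection with two AIMD-ish loops,
# one sender->proxy and one proxy->wireless receiver.  The only point is
# that the first loop reacts to the proxy buffer, not to the real path.
# All parameters are invented for illustration.

import random
random.seed(1)

WIRELESS_CAP  = 10.0   # pkts per tick the wireless hop can carry
PROXY_BUF_MAX = 100.0  # proxy buffer size (pkts)
LOSS_P        = 0.02   # random wireless loss probability per tick

w1 = w2 = 1.0          # "windows" (sending rates) of the two loops
buf = 0.0              # proxy buffer occupancy

for tick in range(200):
    # Loop 2: proxy -> receiver over the lossy wireless hop.
    if random.random() < LOSS_P:
        w2 = max(1.0, w2 / 2)       # multiplicative decrease on loss
    else:
        w2 += 1.0                   # additive increase
    drained = min(buf, min(w2, WIRELESS_CAP))

    # Loop 1: sender -> proxy; it only sees the proxy buffer filling up.
    buf += w1 - drained
    if buf > PROXY_BUF_MAX:         # proxy pushes back -> looks like loss
        buf = PROXY_BUF_MAX
        w1 = max(1.0, w1 / 2)
    else:
        w1 += 1.0

    if tick % 40 == 0:
        print(f"t={tick:3d}  w1={w1:5.1f}  w2={w2:5.1f}  buf={buf:6.1f}")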

--Pekka


