[e2e] TCP Loss Differentiation

David P. Reed dpreed at reed.com
Sat Mar 14 08:51:20 PDT 2009

The primary causality I was trying to reflect by that choice of words is 
that in all systems there are large-scale causal relationships 
(application layer or user layer, in most cases) that break the 
fundamental assumption that the system is a collection of memoryless 
processes.  These are really significant in most networks, yet 
ignored by most models:

- users behave differently when networks get slow (they go to the cafe, 
or panic and start hitting keys harder and faster). 

- wireless network transmissions are a primary source of noise to other 
transmissions, so any correlations or dependence can get amplified: 
noise causes retransmission by other nodes, causing more noise, and so 
on.  The probability is high that wireless networks have modes of highly 
synchronizing "resonances" that correlate rather than decorrelate 
signals.  That correlation can even be exploited to increase SNR by 
techniques like analog network coding (ZigZag, for example), once you 
realize that the phenomenon is not a noise process at all, since it adds 
no uncertainty.

It's easy to create models full of independent, memoryless processes 
that *appear* to the student (or full professor) to be valid, because 
the normal human reaction is to think that complexity is best modeled by 
lots of randomness.  Complex systems are not random merely because they 
are complex.  PRNGs, for example, are perfectly non-random.  It's a form 
of mysticism to treat them as equivalent to true random processes 
outside a very narrow domain of applicability, where they can add 
insight.  This is where most engineering math practitioners go wrong: 
not knowing when your modeling approach fails, because you use the 
"religion of your peers" (the NS2 wireless modeling toolkit, for 
example, which teaches nothing about actual propagation and dependence 
of noise).
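To make the PRNG point concrete, here is a minimal sketch (using Python's standard random module, purely as an illustration; the seed value is arbitrary): two identically seeded generators emit bit-for-bit identical streams, so the "random" process is fully deterministic and adds no uncertainty at all.

```python
import random

# Two generators seeded identically produce the same "random" stream:
# the process is fully deterministic, with no uncertainty at all.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]

assert seq_a == seq_b  # bit-for-bit identical
```

A PRNG only stands in for a true random process when the analysis never probes its internal determinism; outside that narrow domain, treating the two as interchangeable is exactly the mysticism above.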

Lachlan Andrew wrote:
> Greetings Detlef,
> 2009/3/13 Detlef Bosau <detlef.bosau at web.de>:
>> David P. Reed wrote:
>>> My main point was that these loss processes are not characterizable by a
>>> "link loss rate".  They are not like Poisson losses at all, which are
>>> statistically a single parameter (called "rate"), memoryless distribution.
>>>  They are causal, correlated, memory-full processes.  And more importantly,
>>> one end or the other of the relevant link experiences a directly sensed
>>> "loss of connectivity" event.
>> Does anybody happen to have some good reference for this one? Something
>> like "The failure of Poisson modelling of mobile wireless links" or
>> something similar?
>> What I have seen so far, simply assumes the contrary and uses Gilbert Markov
>> Models and the like.
> Although the Gilbert model is far from perfect, it is very much better
> than a Poisson model for wireless.  They are correlated and
> "memory-full", and have the notion of "loss of connectivity" (i.e.,
> being in the bad state).  It certainly can model the case you describe
> of periods of a broken connection interleaved with excellent
> connectivity.
> Although your work may need better wireless models, I think that for
> most people on this list the law of diminishing returns means that
> going from Poisson to Gilbert is enough.
> David, could you explain what it means for a stochastic process to be
> "causal"?  My understanding was that a filtration on a random process
> is always causal, in the sense of being increasing in one direction,
> while the time reverse of the underlying random process is always
> another valid random process, albeit a different process except in the
> case of reversible processes.
> Cheers,
> Lachlan
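For what it's worth, the contrast Lachlan describes is easy to see numerically. The following is a minimal sketch, not a calibrated model: the transition probabilities p_gb and p_bg, the seeds, and the assumption that the bad state drops every packet are illustrative choices, not measurements. It simulates a two-state Gilbert channel and a memoryless (Bernoulli) channel with the same average loss rate, then compares their loss-burst lengths.

```python
import random

def gilbert_losses(n, p_gb, p_bg, loss_good=0.0, loss_bad=1.0, seed=1):
    """Simulate n packets through a two-state Gilbert channel.

    p_gb: P(good -> bad) per packet; p_bg: P(bad -> good).
    Returns a list of booleans, True meaning the packet was lost.
    """
    rng = random.Random(seed)
    state_bad = False
    losses = []
    for _ in range(n):
        # State transition first, then a loss draw in the current state.
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        elif rng.random() < p_gb:
            state_bad = True
        p_loss = loss_bad if state_bad else loss_good
        losses.append(rng.random() < p_loss)
    return losses

def mean_burst(losses):
    """Average length of a run of consecutive losses."""
    bursts, run = [], 0
    for lost in losses:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return sum(bursts) / len(bursts) if bursts else 0.0

n = 100_000
gilbert = gilbert_losses(n, p_gb=0.01, p_bg=0.2)
rate = sum(gilbert) / n  # long-run loss rate ~ p_gb/(p_gb+p_bg), ~5% here

# A memoryless Bernoulli channel with the *same* average loss rate.
rng = random.Random(2)
bernoulli = [rng.random() < rate for _ in range(n)]

# Same single-parameter "loss rate", very different burst structure:
# Gilbert bursts average roughly 1/p_bg packets, Bernoulli close to 1.
print(mean_burst(gilbert), mean_burst(bernoulli))
```

The single number "loss rate" is identical for both channels, yet the Gilbert channel concentrates its losses into bursts roughly 1/p_bg packets long (the "loss of connectivity" periods), which is David's point: one parameter cannot characterize a memory-full process.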
