[e2e] Expected latency for a single hop

David P. Reed dpreed at reed.com
Thu Aug 4 08:19:08 PDT 2005


Detlef - Though it seems simple, your statement is about as complex as 
a problem can be.
This is the kind of problem statement that creates the definitional 
trap I was referring to in earlier discussions.   By construing the 
"latency" as a property of the "link" rather than of the network as a 
whole, the statement acquires a misleading simplicity.

The latency is only well defined for real packets that actually arrive 
and traverse the link.   Expectation and variance are properties of 
distributions, not of individual packets.
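
To make the distinction concrete, here is a rough Python sketch (my own 
construction, with invented timestamps): the per-hop latency t2 - t1 
exists only for the packets that actually showed up at n2, and the 
expectation and variance are computed over that collection, never read 
off a single packet.

import statistics

# (t1, t2) pairs in seconds; None marks a packet that never arrived at n2.
# All numbers here are invented for illustration.
observations = [(0.000, 0.013), (0.010, 0.021), (0.020, None),
                (0.030, 0.047), (0.040, 0.052), (0.050, None)]

# The latency is defined only for packets that actually traversed the link.
latencies = [t2 - t1 for (t1, t2) in observations if t2 is not None]

# Expectation and variance are properties of this collection, not of any
# single packet.
print("arrived packets:", len(latencies))
print("sample mean:     %.4f s" % statistics.mean(latencies))
print("sample variance: %.6f s^2" % statistics.variance(latencies))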

There is no random process at all on the link itself (at least in the 
common case - there are links where the link itself has a random delay, 
but that usually arises where the link's physical characteristics vary 
faster and more widely than the queue management and link pacing 
mechanisms).  The random process is the network environment that 
provides the competing packets.  So the randomness in the latency comes 
from everywhere but the link itself.
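
A toy simulation makes the point (the numbers are invented and the 
queue discipline is plain FIFO): the link below serves every packet in 
exactly the same time, yet the latencies it delivers form a whole 
distribution, because the arrivals of the competing packets are random.

import random

random.seed(1)

SERVICE = 0.001        # fixed per-packet service time on the link (s)
CROSS_RATE = 700.0     # competing packets per second (utilization ~0.7)
N = 20000

def sojourn_times(rate, service, n):
    """FIFO link with deterministic service: all variability is in the arrivals."""
    t = 0.0
    free_at = 0.0      # time at which the link finishes its current backlog
    delays = []
    for _ in range(n):
        t += random.expovariate(rate)      # next competing arrival
        start = max(t, free_at)            # wait behind whatever is queued
        free_at = start + service
        delays.append(free_at - t)         # queueing + transmission
    return delays

d = sojourn_times(CROSS_RATE, SERVICE, N)
mean = sum(d) / len(d)
var = sum((x - mean) ** 2 for x in d) / (len(d) - 1)
print("fixed link service time: %.4f s" % SERVICE)
print("observed mean latency:   %.4f s" % mean)
print("observed variance:       %.8f s^2" % var)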

The other issue is that prediction is more reliable over a collection 
of packets, but a sufficient collection cannot be gathered in an 
instant.
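
A quick way to see that, continuing with the same kind of toy numbers: 
the spread of the sample mean over a handful of consecutive packets is 
far larger than over hundreds, so a trustworthy estimate simply takes 
time to accumulate.

import random
import statistics

random.seed(2)

# Stand-in one-hop latency samples (s); any recorded t2 - t1 trace would do.
samples = [0.001 + random.expovariate(1000.0) for _ in range(20000)]

def window_means(data, w):
    """Sample means over disjoint windows of w consecutive packets."""
    return [statistics.mean(data[i:i + w]) for i in range(0, len(data) - w + 1, w)]

for w in (5, 50, 500):
    spread = statistics.pstdev(window_means(samples, w))
    print("window of %4d packets -> spread of the estimate: %.5f s" % (w, spread))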

The first-order predictor is the queue size at the entry to the link.   
That's a very reliable predictor of latency for the next event.   But 
it provides very little information about the variance (which depends 
entirely on packets arriving from elsewhere at "light speed").
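
In other words, something like the following back-of-the-envelope 
prediction (the link rate and propagation delay are made up here): 
drain whatever is already queued, add the packet's own transmission 
time, add the propagation delay - and accept that the variance is out 
of reach.

LINK_RATE_BPS = 10_000_000   # assumed 10 Mbit/s line rate for link l
PROP_DELAY_S = 0.002         # assumed propagation delay of link l

def predicted_latency(backlog_bytes, packet_bytes):
    """First-order prediction of t2 - t1 from the queue seen at n1.

    Says nothing about the variance, which is governed by packets that
    have not yet arrived from elsewhere.
    """
    drain = backlog_bytes * 8 / LINK_RATE_BPS    # clear the existing backlog
    own_tx = packet_bytes * 8 / LINK_RATE_BPS    # transmit this packet itself
    return drain + own_tx + PROP_DELAY_S

# Example: 15 kB already queued ahead of a 1500-byte packet.
print("predicted t2 - t1: %.4f s" % predicted_latency(15_000, 1_500))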

I think there might be a much better (i.e. less complex to state) 
approach in NOT trying to start with the link and proceed by induction 
to the multilink case.   Instead, perhaps start with an end-to-end flow 
(over a path) and reason about what happens as you add flows that 
superpose themselves on the existing paths.
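
As a sketch of that reasoning style only - under textbook assumptions 
(Poisson superposition, fixed packet size, independent FIFO hops) that 
real traffic certainly does not honor - the Pollaczek-Khinchine mean 
wait lets you write the path latency as a sum over hops and ask what 
adding one more flow does to each hop it shares:

SERVICE = 0.001   # fixed per-packet service time at every hop (s)

def hop_delay(rate_pps):
    """Mean delay at one M/D/1 hop: service plus the Pollaczek-Khinchine wait."""
    rho = rate_pps * SERVICE                     # utilization of the hop
    assert rho < 1.0, "hop is overloaded"
    return SERVICE + rho * SERVICE / (2 * (1 - rho))

def path_delay(per_hop_rates):
    """Mean end-to-end latency as the sum over the hops of the path."""
    return sum(hop_delay(r) for r in per_hop_rates)

base = [600.0, 800.0, 500.0]                     # packets/s offered at each hop
print("base path latency:        %.4f s" % path_delay(base))

# Superpose one more 100 pkt/s flow that shares only the first two hops.
added = [600.0 + 100.0, 800.0 + 100.0, 500.0]
print("after superposing a flow: %.4f s" % path_delay(added))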

Detlef Bosau wrote:

> I posted this in another context yesterday, but perhaps, I should 
> isolate the problem to state it more clearly.
>
> Consider an arbitrary packet-switching network.
>
> Consider two adjacent nodes n1, n2 with link l in between
>
> n1--------------------------n2
>                l
>
>
> Consider a packet traveling the network; its path shall contain n1 
> and n2 in sequence.
>
> Now, let
>  t1: packet's arrival time at n1.
>  t2: packet's arrival time at n2.
>
> Can we forecast the expectation and variance (if only for the _near_ 
> future!) of the "one-hop latency" t2 - t1?
>
> I explicitly focus on a "best effort" context.
>
> For link l I assume that expectation and variance of the transport 
> latency exist.
>
> Is there any work in this direction?
>
>


