[e2e] Latency Variation and Contention.

Sireen Habib Malik s.malik at tuhh.de
Wed Aug 17 02:36:41 PDT 2005


Hi,


 >>what you are aiming at is SNR, i.e., 10log10(RTT/dRTT)

So we are getting somewhere now :-)

Right. SNR is the signal strength normalized to the noise strength. For
dRTT = 0, SNR = f(RTT/dRTT) is infinite.

I considered "congestion" as the noise strength normalized to the signal
strength. For dRTT = 0, a congestion signal based upon dRTT/RTT, i.e.
f(dRTT/RTT), is zero: no congestion.

So I reckon a congestion signal that looks like 1/(10log10(RTT/dRTT)) 
should do the trick.
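
To make the arithmetic concrete, here is a small Python sketch of such a
signal (my own illustration, not code from the paper or from anyone on
this thread; using the mean of a window of recent RTTs as RTT and their
standard deviation as dRTT is an assumption):

    # Hypothetical sketch: congestion signal 1/(10*log10(RTT/dRTT))
    # computed over a window of recent RTT samples.
    import math
    import statistics

    def congestion_signal(rtt_samples):
        """Near 0 for steady RTTs, growing with RTT jitter."""
        rtt = statistics.mean(rtt_samples)       # "signal strength"
        drtt = statistics.pstdev(rtt_samples)    # "noise strength"
        if drtt == 0:
            return 0.0   # SNR infinite -> no congestion indicated
        snr_db = 10 * math.log10(rtt / drtt)
        return 1.0 / snr_db                      # only meaningful while dRTT < RTT

    print(congestion_signal([100.0, 100.0, 100.0, 100.0]))      # 0.0
    print(congestion_signal([80.0, 120.0, 95.0, 140.0, 90.0]))  # ~0.15

Steady RTTs give a signal near zero; jittery RTTs push it up, which is
exactly the behaviour described above.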

--
Sireen





Joe Touch wrote:

>Sireen Habib Malik wrote:
>  
>
>>Hi,
>>
>>I have not read the paper; however, I think that if,
>>
>>RTT = Round Trip Time, and
>>dRTT = variations in RTT,
>>
>>then "dRTT" is a weak/poor indicator of congestion.
>>    
>>
>
>but a good indicator that congestion control will be hard to compute ;-)
>
>stability is f(dRTT), not f(RTT)
>
>RTT is a function of distance, in general
>dRTT is a function of the number of hops, in general
>
>Changes in the two - relative or absolute - don't seem to tell you much
>more than that, though.
>
>  
>
>>A congestion signal based upon "dRTT/RTT" would give a much better idea,
>>relatively speaking.
>>    
>>
>
>relative variance = variance/mean
>
>but noise is more closely correlated to variance than to relative
>variance, which makes sense if dRTT = variance
>
>what you are aiming at is SNR, i.e., 10log10(RTT/dRTT)
>
>Joe
>
>  
>
>>-- 
>>Sireen
>>
>>Detlef Bosau wrote:
>>
>>    
>>
>>>Hi to all.
>>>
>>>Recently, I found the following paper by Sherif M. ElRakabawy,
>>>Alexander Klemm and Christoph Lindemann:
>>>
>>>http://mobicom.cs.uni-dortmund.de/publications/TCP-AP_MobiHoc05.pdf
>>>
>>>The paper proposes a congestion control algorithm for ad hoc networks.
>>>Perhaps, this paper is interesting within the context of our latency
>>>discussion.
>>>
>>>However, I'm not yet convinced by this work.
>>>
>>>If I leave out some pages, some simulations and many words, the paper
>>>basically assumes that in ad hoc networks a TCP sender can measure the
>>>degree of network contention using the variance of (recently seen)
>>>round trip times:
>>>
>>>- If the variance is close to zero, the network is hardly loaded.
>>>- If the variance is "high" (of course "high" is to be defined), there
>>>is a high degree of contention in this network.
>>>
>>>Afterwards the authors propose a sender pacing scheme in which a TCP
>>>flow's rate is decreased according to this measured "degree of
>>>contention".
>>>
>>>What I do not yet understand is the basic assumption: variance 0 <=>
>>>no load; high variance <=> heavy load.
>>>
>>>Perhaps the main difficulty is that I believed this myself for years
>>>and it was an admittedly difficult task to convince me that I was
>>>wrong %-)
>>>However,
>>>
>>>    @article{martin,
>>>    journal = "IEEE/ACM Transactions on Networking",
>>>    volume = "11",
>>>    number = "3",
>>>    month = "June",
>>>    year = "2003",
>>>    title = "Delay-Based Congestion Avoidance for TCP",
>>>    author = "Jim Martin and Arne Nilsson and Injong Rhee",
>>>    }
>>>eventually did the job.
>>>
>>>More precisely, I looked at the latencies themselves, not the variances.
>>>
>>>
>>>Let's consider a simple example.
>>>
>>>          A --- network --- B
>>>
>>>"network" is some shared media packet switching network.
>>>Let´s place a TCP sender on A and the according sink on B.
>>>
>>>The simple question is (and I thought about this years ago without
>>>really reaching a conclusion - I'm afraid I didn't want to):
>>>
>>>Is a variance close to zero really equivalent to a low-load situation?
>>>And does increasing variance indicate increasing load?
>>>
>>>Isn't it possible that a variance close to zero is a consequence of a
>>>fully loaded network, and that _decreasing_ the load in that situation
>>>would cause the latencies to vary?
>>>
>>>If we could reliably identify a low-load situation from a variance
>>>close to zero, we could use the latencies themselves as a load
>>>indicator, because we could reliably identify a "no-load latency" and
>>>thus could identify imminent congestion by latency observation.
>>>
>>>One could even think of a "latency-congestion scale" which is
>>>calibrated first by variance observation to get the "unloaded" mark,
>>>and second by drop observation and some loss differentiation technique
>>>to get the "imminent congestion" mark.
>>>
>>>To my knowledge, this was extensively discussed in the literature -
>>>until Martin, Nilsson and Rhee found the results mentioned above.
>>>
>>>Now, back to my example and the basic question: does the assumption
>>>that latency variations indicate the degree of contention in an ad hoc
>>>network really hold?
>>>
>>>I admit, I personally do not yet see any evidence for this.
>>>
>>>Detlef
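
For concreteness, here is a toy Python sketch of the kind of
variance-based pacing Detlef describes above (my own illustration, not
the algorithm from the TCP-AP paper; the RTT window, the use of the
coefficient of variation, and the scaling constant k are all
assumptions):

    # Toy pacer: inter-packet delay grows with the coefficient of
    # variation (stddev/mean) of recently seen RTTs, i.e. with the
    # assumed "degree of contention".
    import statistics
    from collections import deque

    class VariancePacer:
        def __init__(self, window=16, k=4.0):
            self.rtts = deque(maxlen=window)  # recent RTT samples (seconds)
            self.k = k                        # how strongly jitter slows sending

        def on_rtt_sample(self, rtt):
            self.rtts.append(rtt)

        def send_interval(self, base_interval):
            """Base packet spacing, inflated by observed RTT jitter."""
            if len(self.rtts) < 2:
                return base_interval
            mean = statistics.mean(self.rtts)
            cov = statistics.pstdev(self.rtts) / mean
            return base_interval * (1.0 + self.k * cov)

    p = VariancePacer()
    for r in (0.10, 0.10, 0.10, 0.10):
        p.on_rtt_sample(r)
    print(p.send_interval(0.01))   # ~0.01: steady RTTs, no slow-down
    for r in (0.08, 0.14, 0.09, 0.15):
        p.on_rtt_sample(r)
    print(p.send_interval(0.01))   # larger: jitter throttles the sender

Whether such a pacer reacts to contention or merely to jitter is, of
course, exactly the question Detlef raises.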


-- 
M.Sc.-Ing. Sireen Malik

Communication Networks
Hamburg University of Technology
FSP 4-06 (room 5.012)
Schwarzenbergstrasse 95 (IVD)
21073-Hamburg, Deutschland

Tel: +49 (40) 42-878-3443
Fax: +49 (40) 42-878-2941
E-Mail: s.malik at tuhh.de

--Everything should be as simple as possible, but no simpler (Albert Einstein)






