[e2e] Latency Variation and Contention.

Sireen Habib Malik s.malik at tuhh.de
Tue Aug 16 03:22:55 PDT 2005


Hi,

I have not read the paper, but I think that if

RTT = round-trip time, and
dRTT = the variation in RTT,

then "dRTT" on its own is a weak indicator of congestion.

A congestion signal based on the relative variation "dRTT/RTT" would give a
much better picture.
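
To make that concrete, here is a rough sketch in Python (my own
illustration, nothing from the paper; the class name, the window size and
the use of the standard deviation are all made up) of the two signals
computed over a sliding window of RTT samples:

    from collections import deque
    from statistics import mean, pstdev

    class RttSignals:
        # Track absolute and relative RTT variation over a sliding window.

        def __init__(self, window=16):
            self.samples = deque(maxlen=window)

        def update(self, rtt):
            self.samples.append(rtt)
            if len(self.samples) < 2:
                return 0.0, 0.0
            d_rtt = pstdev(self.samples)        # "dRTT": absolute variation
            rel = d_rtt / mean(self.samples)    # "dRTT/RTT": relative variation
            return d_rtt, rel

The relative signal stays comparable across paths with very different base
RTTs, which is the point of dividing by RTT.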

--
Sireen


Detlef Bosau wrote:

> Hi to all.
>
> Recently, I found the following paper by Sherif M. ElRakabawy, 
> Alexander Klemm and Christoph Lindemann:
>
> http://mobicom.cs.uni-dortmund.de/publications/TCP-AP_MobiHoc05.pdf
>
> The paper proposes a congestion control algorithm for ad hoc networks.
> Perhaps, this paper is interesting within the context of our latency 
> discussion.
>
> However, I'm not yet convinced by this work.
>
> If I leave out some sheets of paper, some simulations and many words, 
> the paper basically assumes that in ad hoc networks a TCP sender can 
> measure the degree of network contention using the variance of 
> (recently seen) round trip times:
>
> - If the variance is close to zero, the network is hardly loaded.
> - If the variance is "high" (of course, "high" is to be defined), there 
> is a high degree of contention in the network.
>
> Afterwards, the authors propose a sender pacing scheme in which a TCP 
> flow's rate is decreased according to the measured "degree of 
> contention".
>
> What I do not yet understand is the basic assumption: variance close 
> to zero <=> no load; high variance <=> heavy load.
>
> Perhaps the main difficulty is that I believed this myself for years 
> and it was an admittedly difficult task to convince me that I was 
> wrong %-)
> However,
>
>     @article{martin,
>     author  = "Jim Martin and Arne Nilsson and Injong Rhee",
>     title   = "Delay-Based Congestion Avoidance for TCP",
>     journal = "IEEE/ACM Transactions on Networking",
>     volume  = "11",
>     number  = "3",
>     month   = "June",
>     year    = "2003",
>     }
> eventually did the job.
>
> More precisely, I looked at the latencies themselves, not the variances.
>
>
> Let's consider a simple example.
>
>           A ---- network ---- B
>
> "network" is some shared media packet switching network.
> Let´s place a TCP sender on A and the according sink on B.
>
> The simple question is (and I thought about this years ago without 
> really reaching a conclusion - I'm afraid I didn't want to):
>
> Is a variance close to zero really equivalent to a low-load situation?
> And does increasing variance indicate increasing load?
>
> Isn't it possible that a variance close to zero is a consequence of a 
> fully loaded network? And that _decreasing_ the load in that situation 
> would cause the latencies to vary?
>
> If we could reliably identify a low-load situation from a variance 
> close to zero, we could use the latencies themselves as a load 
> indicator, because we could reliably identify a "no-load latency" and 
> thus detect imminent congestion by observing latencies.
>
> One could even think of a "latency-congestion scale" which is 
> calibrated first by variance observation, to get the "unloaded" mark, 
> and second by drop observation together with some loss differentiation 
> technique, to get the "imminent congestion" mark.
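
A purely hypothetical sketch of such a scale (the names, the calibration
hooks and the linear interpolation are invented, and whether the "unloaded"
mark can really be identified from a near-zero variance is exactly what is
in question here):

    class LatencyScale:
        # Map an observed RTT onto a 0..1 "load" scale between two marks.

        def __init__(self):
            self.rtt_unloaded = None    # calibrated while RTT variance stays near zero
            self.rtt_congested = None   # calibrated from RTTs seen just before drops

        def mark_unloaded(self, rtt):
            self.rtt_unloaded = rtt

        def mark_congested(self, rtt):
            self.rtt_congested = rtt

        def load_estimate(self, rtt):
            if self.rtt_unloaded is None or self.rtt_congested is None:
                return None             # scale not calibrated yet
            span = self.rtt_congested - self.rtt_unloaded
            if span <= 0:
                return None
            return min(max((rtt - self.rtt_unloaded) / span, 0.0), 1.0)
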
>
> To my knowledge, this was extensively discussed in the literature - 
> until Martin, Nilsson and Rhee published the results mentioned above.
>
> Now, back to my example and the basic question: does the assumption 
> that latency variations indicate the degree of contention in an ad hoc 
> network really hold?
>
> I admit that I personally do not yet see any evidence for this.
>
> Detlef



-- 
M.Sc.-Ing. Sireen Malik

Communication Networks
Hamburg University of Technology
FSP 4-06 (room 5.012)
Schwarzenbergstrasse 95 (IVD)
21073-Hamburg, Deutschland

Tel: +49 (40) 42-878-3443
Fax: +49 (40) 42-878-2941
E-Mail: s.malik at tuhh.de

--Everything should be as simple as possible, but no simpler (Albert Einstein)
