[e2e] query reg improving TCP performance

query query.cdac at gmail.com
Thu Jul 5 23:32:54 PDT 2007


Thanks a lot, Andrew. It helped me to understand, and I feel that my tunings
are O.K.

> >    I was doing some bandwidth measurement tests on a 100 Mbps link with an
> >    RTT of about 70 ms. Based on that, I calculated the BDP as follows.
> >
> >           BDP = Bandwidth * RTT
> >               = 921600 bytes
> >
> >    I did the following adjustments. I increased the above calculated BDP by
> >    nearly half of the value. The TCP settings now look like this.
> >
> >            /proc/sys/net/core/rmem_max      175636
> >            /proc/sys/net/core/wmem_max      175636
> >            /proc/sys/net/ipv4/tcp_rmem      4096    87380   175636
> >            /proc/sys/net/ipv4/tcp_wmem      4096    87380   175636
> >
> >     After these settings, I find the link utilisation to be nearly 95 Mbps.
> >
> >     According to many papers that I read, I found that the BDP should be
> >     equal to the product of Bandwidth * RTT.
>
> The papers probably said that *router* buffers need to equal the
> bandwidth*RTT.  You are adjusting the sender/receiver buffers.  These
> need to be significantly larger, as you have found.


   The papers, or rather articles, are talking about sender and receiver
   buffers. Here is one such link where I found it:
   http://www.psc.edu/networking/projects/tcptune/



> In order to allow retransmissions, the sender buffer needs to be able
> to store all "packets in flight", which include both those in the
> router buffers and those "on the wire" (that is, in the nominal
> RTT of the link).
>
> In order to be able to provide in-order delivery, the receiver buffer
> needs to be able to hold even more packets.  If a packet is lost, it
> will receive an entire RTT (plus router buffer) worth of data before
> the first retransmission of that packet will arrive.  If the first
> retransmission is also lost, then it will need to store yet another
> RTT worth of data.
>
> The general rule-of-thumb for Reno is that the send buffer should be
> at least twice the bandwidth*RTT.  For BIC it is probably reduced to
> about 120% of the BDP (because it reduces its window by a smaller
> factor when there is a loss).  The receive buffer should still be at
> least equal to the BDP plus the router buffer.


   What I understand from your reply is that the TCP window need not be
   equal to the BDP in all cases. Had the router buffer size been equal to
   the BDP, then I think the link utilisation should have reached the
   capacity of the link. Since on the Internet it is not possible to know
   the router buffer size, the best one can do is to make the TCP window
   size twice the BDP, as you have suggested.
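
   To make sure I apply this correctly, here is roughly what I plan to set
   next. This is only a sketch: the 1843200 figure is simply 2 * 921600,
   i.e. twice my earlier BDP, and the first two tcp_rmem/tcp_wmem values
   are the ones I already had.

            # send side: at least 2 * BDP for Reno, per the rule of thumb
            echo 1843200 > /proc/sys/net/core/wmem_max
            echo "4096 87380 1843200" > /proc/sys/net/ipv4/tcp_wmem
            # receive side: at least BDP + router buffer; using 2 * BDP as a
            # safe upper bound since the router buffer size is unknown
            echo 1843200 > /proc/sys/net/core/rmem_max
            echo "4096 87380 1843200" > /proc/sys/net/ipv4/tcp_rmem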

  I am facing another problem.  The UDP transmission rate on that link has
  decreased.  I changed back to the default settings, but it shows exactly
  the same readings as after tuning.  It seems it is reading some fixed
  value from somewhere and transferring data based on that.

The readings are like this:

iperf  -u -c 192.168.60.62 -t 300 -l 1460  -i 2
------------------------------------------------------------
Client connecting to 192.168.60.62, UDP port 5001
Sending 1460 byte datagrams
UDP buffer size:  108 KByte (default)
------------------------------------------------------------
[  3] local 10.128.0.2 port 32785 connected with 192.168.60.62 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3] -0.0- 2.0 sec   257 KBytes  1.05 Mbits/sec
[  3]  2.0- 4.0 sec   257 KBytes  1.05 Mbits/sec
[  3]  4.0- 6.0 sec   255 KBytes  1.05 Mbits/sec
[  3]  6.0- 8.0 sec   257 KBytes  1.05 Mbits/sec
[  3]  8.0-10.0 sec   255 KBytes  1.05 Mbits/sec
[  3] 10.0-12.0 sec   257 KBytes  1.05 Mbits/sec
[  3] 12.0-14.0 sec   255 KBytes  1.05 Mbits/sec
[  3] 14.0-16.0 sec   257 KBytes  1.05 Mbits/sec
[  3] 16.0-18.0 sec   257 KBytes  1.05 Mbits/sec

   The result above is for the following tuning:
   net.core.rmem_default = 110592
   net.core.wmem_default = 110592

 After that I changed the tuning to
            net.core.rmem_default = 196608
            net.core.wmem_default = 196608
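
   (A sketch of how such defaults can be set with sysctl; writing the same
   values directly under /proc/sys/net/core/ is equivalent.)

            sysctl -w net.core.rmem_default=196608
            sysctl -w net.core.wmem_default=196608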

   The readings for that tuning are like this:
iperf  -u -c 192.168.60.62 -t 300 -l 1460  -i 2
------------------------------------------------------------
Client connecting to 192.168.60.62, UDP port 5001
Sending 1460 byte datagrams
UDP buffer size:  192 KByte (default)
------------------------------------------------------------
[  3] local 10.128.0.2 port 32785 connected with 192.168.60.62 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3] -0.0- 2.0 sec   257 KBytes  1.05 Mbits/sec
[  3]  2.0- 4.0 sec   257 KBytes  1.05 Mbits/sec
[  3]  4.0- 6.0 sec   255 KBytes  1.05 Mbits/sec
[  3]  6.0- 8.0 sec   257 KBytes  1.05 Mbits/sec
[  3]  8.0-10.0 sec   255 KBytes  1.05 Mbits/sec
[  3] 10.0-12.0 sec   257 KBytes  1.05 Mbits/sec
[  3] 12.0-14.0 sec   255 KBytes  1.05 Mbits/sec

 Kindly help me to rectify the problem. It is the same link on which I
 performed the TCP test.
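
 One thing I have not ruled out: the fixed rate might simply be iperf's own
 default UDP target bandwidth (about 1 Mbit/s) rather than anything in the
 kernel settings. If so, I would have to ask iperf for a higher rate
 explicitly, something like:

iperf -u -c 192.168.60.62 -t 300 -l 1460 -i 2 -b 90M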

  regards
  zaman

> I hope this helps,
> Lachlan
>
> --
> Lachlan Andrew  Dept of Computer Science, Caltech
> 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
> Phone: +1 (626) 395-8820    Fax: +1 (626) 568-3603
>

