[e2e] TCP Performance with Traffic Policing

Barry Constantine Barry.Constantine at jdsu.com
Fri Aug 12 12:16:52 PDT 2011


Thanks for answering this, Eli; very well said. 

In my experience, the buffering at the slower link allows TCP to adapt more gracefully. 

Also thanks to all on this list, my first time posting and the suggestions and information have been fantastic. 

Barry

Sent from my iPhone

On Aug 12, 2011, at 2:44 PM, "Eli Dart" <dart at es.net> wrote:

> 
> 
> On 8/12/11 9:32 AM, Alexandre Grojsgold wrote:
>> Is there a reason to consider X Mbps policing different from having an X Mbps link
>> midway between the source and destination?
> 
> In my experience, policing at rate X behaves like an interface of rate X 
> with no buffer.  This means a policer must drop if there is any 
> oversubscription at all, while an interface can provide some buffering.
> 
> This means that TCP sees loss more easily in policed environments, 
> especially if there is a large difference in bandwidth between the 
> policed rate and the host interface rate (at any instant in time, the 
> host is sending at wire-speed for its interface if it's got data to send 
> and available window, regardless of average rate on the time scale of 
> seconds).
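> 
> A toy way to see the difference (a back-of-the-envelope sketch in
> Python, not any vendor's actual implementation): model the policer as
> a token bucket with only about one packet of depth, and the interface
> as the same rate with a small FIFO in front of it, then feed both a
> burst of packets arriving at the host's wire speed.
> 
>     # Compare a (nearly) bufferless 10 Mbps policer with a 10 Mbps
>     # interface that has a small FIFO, when a host bursts 1500-byte
>     # packets back-to-back at 1 Gbps wire speed.
>     RATE = 10e6 / 8      # bottleneck rate, bytes/sec
>     WIRE = 1e9 / 8       # sender wire speed, bytes/sec
>     PKT  = 1500          # packet size, bytes
>     BURST = 40           # back-to-back packets in the burst
> 
>     def drops(bucket_bytes, queue_pkts):
>         """Packets dropped by a token bucket of depth bucket_bytes
>         backed by a FIFO of queue_pkts packets draining at RATE."""
>         tokens, queue, lost = bucket_bytes, 0.0, 0
>         dt = PKT / WIRE                      # inter-arrival time at wire speed
>         for _ in range(BURST):
>             tokens = min(bucket_bytes, tokens + RATE * dt)   # token refill
>             queue  = max(0.0, queue - RATE * dt / PKT)       # FIFO drain
>             if tokens >= PKT:
>                 tokens -= PKT                # conforms, forwarded
>             elif queue + 1 <= queue_pkts:
>                 queue += 1                   # buffered instead of dropped
>             else:
>                 lost += 1                    # no tokens, no buffer space
>         return lost
> 
>     print("policer, no buffer  :", drops(PKT, 0),  "of 40 dropped")
>     print("interface, 64 pkt q :", drops(PKT, 64), "of 40 dropped")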
> 
> Of course, different router vendors have different buffering defaults 
> (and different hardware capabilities), and some policers can be 
> configured with burst allowances.  However, many policers don't behave 
> in the ways that they say they do, even when configured with burst 
> allowances.  As another post indicated, it's quite a mess...
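> 
> To make the burst-allowance point concrete, here is what a policer's
> configured burst is supposed to do, in the same toy token-bucket
> terms: once the allowance covers the sender's back-to-back burst,
> nothing is dropped even though the long-term rate is unchanged.
> Again only a sketch; real platforms account for the burst in
> different ways.
> 
>     RATE = 10e6 / 8      # policed rate, bytes/sec
>     WIRE = 1e9 / 8       # sender wire speed, bytes/sec
>     PKT  = 1500          # packet size, bytes
>     BURST_PKTS = 40      # back-to-back packets from the sender
> 
>     for allowance in (1500, 15000, 60000):   # configured burst, bytes
>         tokens, lost = allowance, 0
>         for _ in range(BURST_PKTS):
>             tokens = min(allowance, tokens + RATE * PKT / WIRE)  # refill
>             if tokens >= PKT:
>                 tokens -= PKT        # conforms, forwarded
>             else:
>                 lost += 1            # exceeds the allowance, dropped
>         print("burst %6d bytes -> %2d of %d dropped"
>               % (allowance, lost, BURST_PKTS))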
> 
>        --eli
> 
> 
>> 
>> -- alg.
>> 
>> 
>> 
>> 
>> On 12-08-2011 12:48, rick jones wrote:
>>> On Aug 12, 2011, at 7:03 AM, Barry Constantine wrote:
>>> 
>>>> Hi,
>>>> 
>>>> I did some testing to compare the behavior of various TCP stacks in the presence of traffic policing.
>>>> 
>>>> It is common practice for a network provider to police traffic down to a subscriber's service level agreement (SLA) rate.
>>>> 
>>>> In the iperf testing I conducted, the following set-up was used:
>>>> 
>>>> Client ->   Delay (50ms RTT) ->   Cisco (with 10M Policing) ->   Server
>>>> 
>>>> The delay was induced using hardware-based commercial gear.
>>>> 
>>>> With a 50 msec RTT and a bottleneck bandwidth of 10 Mbps, the BDP was 62,500 bytes.
>>>> 
>>>> I ran Linux, Windows XP, and Windows 7 clients with 32 KB, 64 KB, and
>>>> 128 KB windows (knowing that policing would kick in at 64 KB).
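>>>> 
>>>> As a quick sanity check on those window sizes (a back-of-the-envelope
>>>> sketch in Python; the window-limited ceiling is just window/RTT):
>>>> 
>>>>     RTT  = 0.050                 # seconds
>>>>     LINK = 10e6                  # policed rate, bits/sec
>>>>     print("BDP = %.0f bytes" % (LINK * RTT / 8))          # 62500
>>>>     for w in (32 * 1024, 64 * 1024, 128 * 1024):          # window, bytes
>>>>         print("%3d KB window -> ceiling %.1f Mbps"
>>>>               % (w // 1024, w * 8 / RTT / 1e6))
>>>> 
>>>> That puts the 32 KB runs at a window-limited ceiling of about 5.2 Mbps,
>>>> below the policed rate, while the 64 KB and 128 KB windows can overrun
>>>> the 10 Mbps policer, so its drops come into play in those runs.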
>>>> 
>>>>              Throughput (Mbps) by TCP window size
>>>> 
>>>> Platform       32 KB      64 KB      128 KB
>>>> ---------------------------------------------
>>>> Linux           4.9        7.5        3.8
>>>> XP              5.8        6.6        5.2
>>>> Win7            5.3        3.4        0.44
>>>> 
>>> The folks in tcpm might be better able to help, but I'll point out one nit - "Linux" is not that much more specific than saying "Unix" - it would be goodness to get into the habit of including the kernel version.  And ID the server, since it takes two to TCP...
>>> 
>>> happy benchmarking,
>>> 
>>> rick jones
>>> Wisdom teeth are impacted, people are affected by the effects of events
>>> 
>> 
>> 
>> --
>> 
>> Alexandre L. Grojsgold <algold at rnp.br>
>> Director of Engineering and Operations
>> Rede Nacional de Ensino e Pesquisa
>> R. Lauro Muller 116 sala 1103
>> 22.290-906 - Rio de Janeiro RJ - Brasil
>> Tel: (21) 2102-9680 Cel: (21) 8136-2209
>> 
>> 
>> 
> 
> -- 
> Eli Dart                                            NOC: (510) 486-7600
> ESnet Network Engineering Group (AS293)                  (800) 333-7638
> Lawrence Berkeley National Laboratory
> PGP Key fingerprint = C970 F8D3 CFDD 8FFF 5486 343A 2D31 4478 5F82 B2B3


