[e2e] Linux TCP (was: Re: TCP Performance with Traffic Policing)
Hagen Paul Pfeifer
hagen at jauu.net
Fri Aug 19 07:49:17 PDT 2011
On Thu, 18 Aug 2011 23:09:32 -0400, Lars Eggert wrote:
> Neither BIC nor CUBIC are standards. How quickly an implementation can
> drop one non-standard CC default for another doesn't really matter much.
> How easily it moves beyond what the broader community has agreed on as a
> standard does.
>> 3) Linux implements a mechanism to PREVENT ordinary users from using
>> an unfair CC algorithm. This is far more than any other operating
>> system does! If you present some numbers where you spotted some
>> unfairness - fine, this can be discussed too!
> That's nice, but why has Linux chosen to enable a non-standard CC
> algorithm as the default? I'm all for giving knowledgeable folks the
> knobs they need, but a default is a default.
I am sure you know the answers to most of the questions in your email. I
cannot say definitively why Linux selected CUBIC (I cannot speak for
David). Maybe 95% of all users are never affected and would be fine with
NewReno. I will say more about the NewReno versus CUBIC decision later in
this email.
Your questions cannot be answered from a purely technical point of view.
Vendors (and Linux) do not act purely driven by standards. Operating
systems often differ because there is simply no standard, because the
standard does not fulfill the requirements of the customers, and sometimes
because the standard is crap.
You asked why Linux has chosen to enable a non-standard CC algorithm as
the default: because the non-standardized - but well analyzed - CUBIC IS a
fair and compatible CC algorithm with respect to NewReno and friends, AND
it simultaneously makes some customers happy (those with larger LFNs).
Everybody knows that CUBIC _could_ be the de facto "standard".
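As an aside, the relevant knobs can be inspected directly on any
reasonably recent Linux box; a minimal sketch (the /proc paths are the
standard sysctl locations, contents will vary per system):

```shell
# System-wide default CC algorithm (CUBIC on recent kernels)
cat /proc/sys/net/ipv4/tcp_congestion_control

# All currently loaded CC modules; root may select any of these
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# The subset ordinary (non-root) users may select per socket via
# setsockopt(TCP_CONGESTION) - this is the "prevention" mechanism
cat /proc/sys/net/ipv4/tcp_allowed_congestion_control
```

Anything outside the "allowed" list needs CAP_NET_ADMIN, which is how
ordinary users are kept away from unfair algorithms.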
In my opinion there will always be a race between standardization bodies
like the IETF and real-world networking stacks, with one important
constraint: operating systems must respect the limits imposed by
standardization bodies - like TCP fairness. If a standardization body
shows that there is a real fairness issue in Linux, then Linux will listen
and react - no doubt. Standardization bodies, on the other hand, should
align their work with real-life demands. The CUBIC I-D got stuck several
years ago - there is still no standardized CC algorithm that fulfills the
requirements of several customers. That's not good, and Eddy, in his
position as the new TCPM AD, should push this forward.
The story of CC algorithms is more complicated. CC algorithms are not
really vendor driven; "vendor lobbying" is low. There is no real
standardization pressure behind them - unlike, say, IW10, where Google and
larger HTTP providers have a real advantage. Vendors and people are
motivated to lobby for IW10 because it has a huge impact on the whole
ecosystem.
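For reference, an IW10-style initial window can already be configured per
route on Linux via iproute2 - a config sketch only; requires root, and the
gateway address and device name are placeholders:

```shell
# Raise the initial congestion window to 10 segments on the default
# route ("192.0.2.1" and "eth0" are example values, adjust to taste)
ip route change default via 192.0.2.1 dev eth0 initcwnd 10
```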
Lars, sometimes you are on netdev and follow IETF-relevant topics. People
like Ilpo, Alexander, Fernando and others are more involved. These people
- among others - are the IETF keepers, watching out for potentially
dangerous changes. I hope that this positive lobbying will be sufficient
to ensure that conformity and interoperability are not violated.