[e2e] TCP "experiments"
touch at isi.edu
Mon Jul 29 10:25:12 PDT 2013
On 7/27/2013 7:20 PM, Matt Mathis wrote:
> The real issue is the diversity of implementations in the Internet that
> allege to be standard IP and TCP NAT, but contain undocumented "features".
> No level of simulation has any hope of predicting how well a new protocol
> feature or congestion control algorithm will actually function in the real
> Internet - you have to measure it.
> Furthermore: given that Google gets most of its revenue from clicks, how
> much might it cost us to "deploy" a protocol feature that caused 0.01%
> failure rate? If you were Google management, how large of sample size
> would you want to have before you might be willing to actually deploy
> something globally?
I don't understand the logic above, summarized IMO as:
- big deployment is required to find issues
- issues are found by measurement
- Google doesn't want to deploy protocols that cause failures
Combine that with Google's current actions:
- Google management is comfortable deploying protocols
that are not instrumented
Do you see why the rest of us are concerned?
Besides, there are different kinds of failures. I doubt Google wants a
0.01% failure of its current base, but if the feature increases its base
by 0.02%, do you really think it wouldn't go ahead and deploy?
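To make that trade-off concrete, here is a back-of-the-envelope sketch; the user-base figure and function are purely hypothetical, for illustration only:

```python
# Hypothetical back-of-the-envelope: does a feature that breaks 0.01% of
# existing users' connections but grows the user base by 0.02% still come
# out ahead in raw numbers?
def net_user_change(base_users, failure_rate, growth_rate):
    """Return the net change in working users after deploying the feature."""
    lost = base_users * failure_rate    # users whose connections now fail
    gained = base_users * growth_rate   # new users attracted by the feature
    return gained - lost

base = 1_000_000_000  # hypothetical user base
print(net_user_change(base, 0.0001, 0.0002))  # positive net: 100000.0
```

In aggregate the gain outweighs the loss at any base size, which is exactly the worry: the arithmetic says "deploy", while the question of *whose* connections break goes unasked.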
And worse, what if the 0.01% failure didn't hit Google's connections, but a
competitor's? Or just the rest of the Internet's?
My view has been that we need to protect the Internet for *when* there
is no more Google. I don't trust Google (or any company) to do that.