[e2e] Google seeks to tweak TCP
detlef.bosau at web.de
Mon Feb 6 12:30:57 PST 2012
On 02/06/2012 08:29 PM, Daniel Havey wrote:
>> However, this is implementation and development and not
>> primarily research.
> IMHO, a complete research work would include the possibility of implementation. Even though the process may be long and arduous.
It may be a cultural difference here. But in Germany, we have a
distinction between research and development. Of course, development is
necessary as well. But the main focus of research should be to identify
and to solve the structural problems. Development is not only arduous,
it is - partly because it is arduous ;-) - expensive. And not everyone
who does fundamental research has the money to develop a product for
the market.
>> According to my experience, it is difficult to sell lossy
>> links in papers. However, being lossy and being slow are
>> often just two sides of the same mountain.
> True, because the MAC will twist itself into a pretzel before allowing a packet to drop.
Hang on here. I have considered this topic for several years now. I
started from a "pure end to end view" and thought corruption recovery
should be done only at the end points. I even wrote so in a paper of
mine. (I have not published very much so far, and the more I think
about this work, the more I put it into question.)
One point is that even recovery from corruption is moving a problem
around. In particular, it moves load along the network path. Suppose I,
sitting in Stuttgart, request a page from a WWW server in New York over
some wireless connection. Why on earth should a corruption on a
wireless link here in Stuttgart bother the whole world, the Tier 1
Internet links, and so on? So, we need convincing reasons to drop a
packet. Without giving the whole rationale here: we should choose
reasonable line codings and channel codings and then make one, perhaps
two, transmission attempts. In most cases, this will either be
successful - or even 100 attempts wouldn't help. In a situation like
that, we should drop the packet. However, if we do so, the change in
the service time is due to the coding and not due to retransmission.
And particularly the choice of appropriate codes - or even the decision
that a wireless connection is suspended for a moment - should be a
local one, because the information needed to make that decision is
readily available at the local link, and not at the communication
endpoints.
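To illustrate the "one, perhaps two transmission attempts, then drop" policy, here is a toy Monte Carlo sketch in Python (all names and the per-attempt success probabilities are my own illustrative assumptions, not from any real link layer):

```python
import random

def send_over_link(p_success, max_attempts=2, rng=random):
    """Try a local (link-layer) transmission up to max_attempts times.

    Returns the attempt number that succeeded, or None if the packet
    is dropped after exhausting all attempts.
    """
    for attempt in range(1, max_attempts + 1):
        if rng.random() < p_success:
            return attempt
    return None  # a convincing reason to drop the packet

random.seed(42)
# Reasonable channel: one or two attempts almost always suffice
# (delivery probability 1 - (1 - 0.9)^2 = 0.99).
good = sum(send_over_link(0.9) is not None for _ in range(10000)) / 10000
# Hopeless channel: "even 100 attempts wouldn't help"; further retries
# would only add delay, so dropping locally is the right call.
bad = sum(send_over_link(0.01) is not None for _ in range(10000)) / 10000
print(good, bad)  # roughly 0.99 and 0.02
```

With a well-chosen coding, almost all of the service-time variation comes from the coding itself, not from retransmission queues.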
> At the transport layer we will experience these losses as increased delay.
> We shouldn't ignore them because we can feel their effects, even if we don't see the actual loss.
> As conditions become poor, you will see the actual loss combined with the delay from trying to prevent that loss.
However, you may hardly see the actual loss, although this is quite
implementation dependent. In GSM, you may see actual loss because,
AFAIK, GSM has no means of channel assessment. In HSDPA, you will
hardly see a lost packet, because there _is_ a means of channel
assessment: a poor channel may be suspended for a certain time, or
(more sophisticated) a better channel will be preferred.
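The "prefer the better channel, suspend the poor one" idea can be sketched as a trivial per-TTI scheduler. This is a deliberately simplified illustration; the function name, the quality scale, and the threshold are my own stand-ins for what a real HSDPA scheduler does with CQI reports:

```python
def schedule_tti(channel_quality, threshold=0.2):
    """Pick the user with the best reported channel quality this TTI.

    channel_quality maps user -> quality in [0, 1] (a stand-in for a
    CQI report). Users below `threshold` are suspended rather than
    transmitted to, so a fading channel shows up at the transport
    layer as added delay, not as packet loss.
    """
    eligible = {u: q for u, q in channel_quality.items() if q >= threshold}
    if not eligible:
        return None  # everyone is in a deep fade: stay silent this TTI
    return max(eligible, key=eligible.get)

print(schedule_tti({"alice": 0.8, "bob": 0.05, "carol": 0.6}))  # alice
print(schedule_tti({"bob": 0.05}))  # None: bob's channel is suspended
```

The point is exactly the one above: with channel assessment, a poor channel produces delay rather than visible loss.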
> IMHO, these packets are probably not worth so much trouble. Reliability is expensive, and 100% reliability is even more expensive.
100% reliability is not realistic, especially in limited time.
> But, this is the viewpoint of a person who just wants to stream video.
Which is not reliable. And I still suffer from a failed project (10
years ago) where I was held responsible for the poor results. The very
reason was that the project partners wanted to do streaming over packet
switching and simply did not realize (at that time, I have to admit, me
too) that line switching and packet switching in mobile networks are
simply two completely different stories and in no way comparable.
Therefore, I make a very clear distinction between these two. And I'm
afraid I'm one of the very few to do so. Pigs will fly (and, please,
not overhead!) before I ever mix up these two again.
It took me years to understand this point.
> I don't really need "all" of the packets, just an adequate number of them and a few losses are really not a big deal for my applications.
And even that is highly technology dependent. My project was about
WWANs at that time, and I frequently heard the saying that, e.g., voice
is loss tolerant. Excuse me, but for WWANs this is not that simple. A
tar ball may survive loss; a .tgz file will not. When voice is conveyed
via a WWAN, it is compressed as much as possible, so it is nearly
inevitable that loss decreases the perceived quality of speech for the
listener. This is certainly no concern when sufficient throughput is
available and you can spare the aggressive compression. In data
transfer - and TCP is concerned with data transfer - the situation is
completely different.
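The tar-ball-versus-.tgz point can be demonstrated in a few lines of Python with the standard gzip module (the payload is just an illustrative stand-in for an uncompressed archive):

```python
import gzip

payload = b"some archive content " * 200  # stand-in for a tar ball

# Raw data: a single flipped byte damages exactly one byte;
# everything else survives.
raw = bytearray(payload)
raw[100] ^= 0xFF
damaged = sum(a != b for a, b in zip(raw, payload))
print(damaged)  # 1

# Compressed data: the same single-byte flip ruins the whole stream,
# because every later byte depends on the earlier ones.
packed = bytearray(gzip.compress(payload))
packed[len(packed) // 2] ^= 0xFF
try:
    gzip.decompress(bytes(packed))
except Exception as exc:
    print("decompression failed:", type(exc).__name__)
```

Heavily compressed voice behaves like the .tgz case: one lost or corrupted unit takes much more than itself down with it.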
> It sounds like I need a non-TCP solution, but, see above.
For streaming? By all means! For streaming, TCP is simply the wrong
choice. Moreover, in mobile networks you will need a packet switching
API for TCP and a line switching API for streaming.
This cannot be overemphasized. Any other approach is likely to fail.
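For contrast, here is a minimal loopback sketch of the datagram API that streaming would use instead of TCP's byte stream. Frame contents and sizes are invented for illustration; on loopback all datagrams arrive, but the API makes no such promise:

```python
import socket

# UDP preserves datagram boundaries and never retransmits: each frame
# stands alone, so a lost frame delays nothing that follows it. TCP,
# by contrast, stalls the whole byte stream until a loss is repaired.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))  # let the OS pick a free port
recv.settimeout(1.0)
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frames = [b"frame-0", b"frame-1", b"frame-2"]
for frame in frames:
    send.sendto(frame, addr)  # fire and forget: no ACK, no retransmit

received = [recv.recvfrom(1500)[0] for _ in frames]
print(received)

send.close()
recv.close()
```

Of course, as said above, even UDP is still packet switching; this only shows the API difference, not a line-switched service.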
> I just take it as a starting point for my research that transport must be TCP. Without this as a starting point then the work will not be "implementable".
I beg your pardon? Did you ever hear of (my apologies to the list, as
this is a "TCP list" according to Craig's historical remarks ;-) ) UDP?
Although even UDP is a packet switching protocol. And to my
understanding, packet switching is anything but well suited for doing
streaming.
>>> Such links have difficulty even reaching their tiny
>>> capacity with Reno. They do much better with Cubic.
>> What's the very reason for this behaviour? Is it because
>> Reno cannot deal well with losses?
> Haha! Yeah, good point. I don't actually know. My experiments just used Cubic as a baseline, because it is the default.
a convincing scientific rationale :-)
O.k., for a baseline you can take whatever you prefer. It's a baseline,
after all.
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
detlef.bosau at web.de http://www.detlef-bosau.de