[e2e] Are we doing sliding window in the Internet?
Michael.Welzl at uibk.ac.at
Sat Jan 5 03:59:45 PST 2008
Joe Touch wrote:
> Injong Rhee wrote:
> > I don't see the
> > penalty that users, ISPs, academics and Mr. Joe next door have paid
> > because of its use. I don't see the Internet is being crashed or
> > crumbling down because of CUBIC.
> You don't see it because nobody is measuring it or even looking for it.
> Deploying a protocol and not seeing a problem isn't proof that it's not
> causing harm now, and it's not proof it won't cause harm in certain
> deployment scenarios.
> If "works most of the time for most people" were sufficient, we could
> pare the TCP state machine down by 1/3, and cut out all options
> including SACK.
> Yes, not causing harm right now is a start, but it's nowhere near the end.
Indeed - and I'd like to connect this with the question
that David asked previously:
> Is that [using it in one's home system] any different
> than running CUBIC between PlanetLab nodes and users
> on PCs in academic computer networking research labs?
Yes, it probably is, but in both cases it's not a very
meaningful experiment:
Case 1, your home: the bottleneck will probably be your
access link, and the window will not be very large.
In this situation the difference between CUBIC and
standard TCP will not be significant, because CUBIC is
designed to behave like TCP in environments where the
BDP, or at least the RTT, is small.
If I'm wrong about the bottleneck in your home, it's
the same as case 2 below...
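To make the "CUBIC behaves like TCP when the BDP is small" claim in Case 1 concrete, here is a small sketch of CUBIC's window growth function against its TCP-friendly estimate, using the formulas from the CUBIC paper. The constants C = 0.4 and beta = 0.7 are the commonly cited defaults, and the window/RTT numbers are assumptions chosen for illustration:

```python
# Sketch: CUBIC's window after a loss vs. the TCP-friendly estimate.
# Formulas as in the CUBIC paper; C and BETA are the commonly cited
# defaults -- treat the exact constants as an illustrative assumption.

C = 0.4      # CUBIC scaling constant
BETA = 0.7   # window fraction kept after a loss event

def w_cubic(t, w_max):
    """CUBIC window (in segments) t seconds after a loss event."""
    k = (w_max * (1 - BETA) / C) ** (1.0 / 3.0)  # time to regain w_max
    return C * (t - k) ** 3 + w_max

def w_tcp_est(t, w_max, rtt):
    """Estimated window standard AIMD TCP would have after the same loss."""
    return w_max * BETA + 3 * (1 - BETA) / (1 + BETA) * (t / rtt)

# Small BDP, e.g. a home access link: w_max = 20 segments, RTT = 20 ms.
w_max, rtt = 20, 0.020
for t in (0.1, 0.5, 1.0):
    print(t, round(w_cubic(t, w_max), 1), round(w_tcp_est(t, w_max, rtt), 1))
```

With a 20-segment window and a 20 ms RTT, the TCP estimate dominates the cubic curve throughout, so CUBIC would run in its TCP-friendly mode, which is exactly the point of Case 1.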
Case 2, PlanetLab: We sent TCP traffic to PlanetLab
nodes from a Linux system with a Web100 kernel and
found that, even with standard TCP, the limitation is
almost always imposed by the receiver window.
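The receiver-window limitation puts a hard ceiling on throughput, rate <= rwnd / RTT, no matter which congestion control the sender runs. A back-of-envelope sketch (the RTT values are illustrative assumptions):

```python
# Throughput ceiling imposed by the receiver window: rate <= rwnd / RTT,
# independent of the sender's congestion control (CUBIC or otherwise).
# The RTT values below are illustrative assumptions.

def rwnd_cap_mbps(rwnd_bytes, rtt_s):
    """Upper bound on throughput, in Mbit/s, for a given rwnd and RTT."""
    return rwnd_bytes * 8 / rtt_s / 1e6

# Without window scaling, rwnd cannot exceed 64 KiB - 1 = 65535 bytes.
print(rwnd_cap_mbps(65535, 0.100))  # ~5.2 Mbit/s on a 100 ms path
print(rwnd_cap_mbps(65535, 0.010))  # ~52 Mbit/s on a 10 ms path
```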
To conclude: IMO, we can't regard the use of CUBIC in
Linux as a proper wide-scale experiment, because window
scaling is neither widely supported nor widely used in
the Internet.
I know we discussed this point in the past, but really,
I think that this is much more of an issue than everything
else when we talk about TCP. After all, who cares whether
end systems use Compound TCP, CUBIC or BIC, when all these
systems are fair towards TCP by design when windows are
small, and in fact windows never really get to be large?
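For concreteness, the window scaling arithmetic: the option (RFC 1323) left-shifts the 16-bit advertised window by a negotiated factor of up to 14, so the effective window can reach 65535 << 14, about 1 GiB. A sketch of the scale factor a path would need to keep its pipe full (the link speeds are illustrative assumptions):

```python
# Window scaling (RFC 1323): the 16-bit advertised window is left-shifted
# by a negotiated scale factor (0..14). A path needs an effective window
# of at least bandwidth * RTT (the BDP) to be filled by a single flow.
# The link parameters below are illustrative assumptions.

def required_scale(bw_bps, rtt_s):
    """Smallest window-scale factor whose window covers the path's BDP."""
    bdp_bytes = bw_bps / 8 * rtt_s        # bytes in flight to fill the pipe
    for scale in range(15):               # RFC 1323 limits the shift to 14
        if 65535 << scale >= bdp_bytes:
            return scale
    raise ValueError("BDP exceeds the maximum scalable window")

print(required_scale(100e6, 0.100))   # 100 Mbit/s, 100 ms RTT -> 5
print(required_scale(1e9, 0.100))     # 1 Gbit/s, 100 ms RTT -> 8
```

Without the option, only paths whose BDP fits in 64 KiB (scale 0) can ever see a large congestion window in the first place.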
Then again, I don't know how to solve the problem. One
possibility might be mapping a single TCP flow onto
multiple TCP connections (because the rwnd limit is
per-flow) and dynamically switching between them... but
I think that this doesn't make a lot of sense in the
light of the underlying
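For what it's worth, the arithmetic behind the multiple-connections idea is simple: since the rwnd cap is per connection, n parallel connections raise the aggregate ceiling to n * rwnd / RTT (a rough sketch; the numbers are illustrative assumptions):

```python
# Striping one logical flow across n TCP connections: each connection
# is capped at rwnd / RTT, so the aggregate ceiling scales with n.
# The rwnd and RTT values below are illustrative assumptions.

def striped_cap_mbps(n_flows, rwnd_bytes, rtt_s):
    """Aggregate throughput ceiling, in Mbit/s, over n parallel flows."""
    return n_flows * rwnd_bytes * 8 / rtt_s / 1e6

print(striped_cap_mbps(1, 65535, 0.100))  # ~5.2 Mbit/s, single flow
print(striped_cap_mbps(4, 65535, 0.100))  # ~21 Mbit/s over 4 flows
```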