[e2e] Multiple TCP-friendly Sessions and Cong. Control
falk at ISI.EDU
Wed Apr 10 16:23:04 PDT 2002
On Wed, 2002-04-10 at 15:38, Christian Huitema wrote:
> > Some applications do their own CC because they, rightly or wrongly,
> > perceive TCP as being too "heavyweight". Some just don't need
> > reliable transfer but want to be responsible citizens. This is all
> > part of the justification for DCP. Indeed, a good argument for not
> > putting CC in user space is that getting congestion control right is
> > not easy and applications can get it wrong.
> This is an often heard argument. It is also somewhat patronizing, and
> very probably wrong.
I'm not sure we're talking about the same thing. Either way, I don't
think I agree with all your assumptions.
> It would perhaps be useful to first check whether
> the TCP congestion control, as we know it today, is in fact the right
> solution. There are at least three arguments against it:
I'm advocating keeping congestion control in the kernel, regardless of
which algorithm is used.
> 1) A lot of evidence points to a bandwidth glut in the middle of the
> network.
Today, perhaps. Do you claim that this will continue to be true for the
indefinite future? Don't you think that the deployment of broadband or
the high speed applications you refer to below will affect this?
> It is not at all clear that we have to be scared by "the
> potential for non-congestion-controlled datagram flows to cause
> congestion collapse of the network."
It's also not at all clear that we can count on altruistic applications
that will sacrifice performance or the user "experience" in order to
share network resources.
> We may want to consider efficient
> sharing of bottlenecks such as narrowband "last mile" connections, but
> that is a very different problem than congestion collapse.
But bottlenecks move over time.
> 2) There is also some evidence that the current assumption equating
> packet losses with an indication of congestion is probably wrong. Packet
> losses can be caused by events unrelated to network congestion, e.g.
> noisy wireless links, routing events, and faulty hardware.
Wrong, as in 'wrong most of the time'?
> 3) The current TCP algorithm does not scale to high bandwidth. More
> precisely, it requires that the packet loss rate drops as the square of
> the desired bandwidth, which can be very hard to achieve in practice,
> and is not in any case required to achieve stability.
No argument here. However, DCP is designed to accommodate new
congestion control algorithms in the future.
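The quadratic relationship in point 3 can be made concrete. Using the well-known simplified steady-state model, TCP throughput is roughly BW = 1.22 * MSS / (RTT * sqrt(p)); solving for p shows the tolerable loss rate falls as the square of the target bandwidth. A minimal sketch (the function name and default packet size are illustrative, not from the original post):

```python
import math

def required_loss_rate(bandwidth_bps, rtt_s, mss_bits=12000.0):
    # Simplified steady-state TCP model: BW = C * MSS / (RTT * sqrt(p)),
    # with C = sqrt(3/2) (about 1.22).  Solving for p shows the loss rate
    # must fall as the *square* of the desired bandwidth.
    c = math.sqrt(3.0 / 2.0)
    return (c * mss_bits / (rtt_s * bandwidth_bps)) ** 2

# 1500-byte packets (12000 bits), 100 ms RTT
for bw in (1e8, 1e9, 1e10):  # 100 Mbps, 1 Gbps, 10 Gbps
    print(f"{bw:.0e} bps -> max tolerable loss rate {required_loss_rate(bw, 0.1):.2e}")
```

At 10 Gbps the required loss rate is on the order of one loss per several billion packets, which is Huitema's point about it being "very hard to achieve in practice."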
> I would argue that many of the RTP/UDP implementations use algorithms
> that are both more sophisticated than TCP, and also better suited to
> their task.
But the design decision of what kind of congestion control to use isn't
just about optimizing *their* task of providing service to the user,
it's also about fairly, safely, and effectively sharing network
resources.
So, none of your arguments support the statement that "getting
congestion control right is hard." In fact, if I accept your three
points, which I don't, they imply that congestion control is hard enough
that today's TCP has it wrong. Also, your three points don't justify
putting congestion control in user space, they motivate making
alternative congestion control algorithms available. DCP does this:
initially with just TFRC, but the hooks are there for future algorithms.
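For reference, TFRC paces a sender at the rate a TCP flow would see under the same loss conditions, using an equation-based model (the form later standardized in RFC 3448). A sketch, assuming the standard parameter choices (b = 1 packet acked per ACK, t_RTO = 4*RTT); the function name is my own:

```python
import math

def tfrc_rate(s_bytes, rtt_s, p, b=1, t_rto=None):
    # TFRC throughput equation: allowed send rate (bytes/sec) for packet
    # size s, round-trip time R, and loss event rate p.  The first denominator
    # term models TCP's congestion-avoidance rate; the second models the
    # impact of retransmission timeouts.
    if t_rto is None:
        t_rto = 4 * rtt_s  # recommended simplification
    denom = (rtt_s * math.sqrt(2 * b * p / 3)
             + t_rto * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))
    return s_bytes / denom

# 1460-byte packets, 100 ms RTT, 1% loss event rate
print(f"allowed rate: {tfrc_rate(1460, 0.1, 0.01):.0f} bytes/s")
```

The key property is that the rate falls smoothly as the loss event rate rises, rather than halving abruptly the way TCP's window does, which is why TFRC suits streaming media better than TCP's sawtooth.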
With the ease of deployment for new apps on the Internet, I'm very
uneasy with encouraging application designers to do their own thing in
congestion control over UDP. Unfortunately, non-reliable data flows
don't have any choice right now, which is why DCP is important.