[e2e] Question the other way round:

Detlef Bosau detlef.bosau at web.de
Sat Nov 16 02:34:53 PST 2013

On 16.11.2013 01:12, Vimal wrote:
> I think it depends on how you define congestion.

I don't think so. We don't need congestion - and we don't want congestion.
> For a simple single-link resource, if you define congestion as "long
> term utilisation of a link >= capacity" then in some sense we "need"
> congestion as we would like to fully utilise the network (i.e., we
> want utilisation = capacity).

So for a single link we need a mechanism which ensures full link
utilization. Correct?

That's a possible motivation for the sliding window scheme. However, do
we really need a sliding window to fully utilize the link?
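Whatever mechanism we pick, the amount of data that must be in flight for
full utilization is easy to state: it is the bandwidth-delay product. A
minimal sketch (the link figures are illustrative assumptions, not taken
from this thread):

```python
# Sketch: the window needed to keep a single link fully utilized is the
# bandwidth-delay product (BDP). The numbers below are illustrative.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bytes that must be in flight to fill the pipe: bandwidth * RTT."""
    return bandwidth_bps * rtt_s / 8.0

# A 100 Mbit/s link with a 20 ms round-trip time:
window = bdp_bytes(100e6, 0.020)
print(window)  # 250000.0 bytes, i.e. roughly 244 KiB
```

Any scheme that keeps at least that much data in flight keeps the link
busy; a sliding window is one way to achieve it, not the only one.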

> If you define congestion as "queue build up at all timescales" then
> congestion is inevitable at any load > 0 if arrivals are random.
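For reference, the quoted claim can be checked against the textbook M/M/1
formula for the expected number in the system (a sketch for illustration,
not part of the original exchange):

```python
# Sketch: the expected number in an M/M/1 system is N = rho / (1 - rho),
# where rho is the offered load (utilization). N > 0 for any rho > 0,
# so some queue build-up is present at every positive load.

def expected_occupancy(rho):
    assert 0 <= rho < 1, "a stable M/M/1 queue requires rho < 1"
    return rho / (1.0 - rho)

for rho in (0.1, 0.5, 0.9):
    print(rho, expected_occupancy(rho))  # grows without bound as rho -> 1
```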

Forget about queueing theory for the moment; I want to talk about real
systems. A few days ago, JC and I discussed where the workload actually
resides. As long as the workload resides on links, there is hardly any
problem - at least in wired networks. A wired link has a certain
throughput and a certain propagation delay, neither of which depends on
the actual load. Hence the only problem you may have is that you pay for
an underutilized link, and hence the argument is work conservation.

Once the workload resides in buffers, the situation becomes a bit
more complicated. At the latest when buffers are too large (whatever
that may mean - please have a look at the most recent "best current
practice RFC", which is presumably going to change on a monthly basis
;-)), buffers introduce both latency and cost. Both are bad.

You can hardly decrease propagation latency because you cannot speed up
light. But you should be careful not to introduce too much queueing
delay.

> There has been some work on defining the right operating point of the
> network for a certain metric.   For instance, it is known (due to
> Leonard Kleinrock as far as I can recall) that the "right" operating
> point for an M/M/1 queue to optimise the average throughput/delay^r
> for flows is to operate it at a point where the expected queue
> occupancy is exactly r.

That's the queueing theory stuff - which, in my opinion (and I'm
ready to take flames ;-) http://www.youtube.com/watch?v=H5yUnnH9nu8),
is simply misleading here.
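For completeness, the Kleinrock result quoted above can be reproduced
numerically. Assuming the usual definition of "power" as
throughput^r / delay for an M/M/1 queue (my reading of the result, not
spelled out in the thread), the load that maximizes power indeed leaves an
expected occupancy of exactly r:

```python
# Sketch: Kleinrock's generalized power for an M/M/1 queue with service
# rate mu and load rho: throughput lam = rho * mu, delay T = 1 / (mu - lam),
# power P = lam**r / T. Analytically, P peaks at rho = r / (1 + r),
# where the expected occupancy N = rho / (1 - rho) equals r.

def power(rho, r, mu=1.0):
    lam = rho * mu
    delay = 1.0 / (mu - lam)
    return lam ** r / delay

def occupancy(rho):
    return rho / (1.0 - rho)

def best_rho(r, steps=200000):
    # Brute-force grid search over (0, 1).
    return max(range(1, steps), key=lambda i: power(i / steps, r)) / steps

for r in (0.5, 1.0, 2.0):
    rho = best_rho(r)
    print(r, rho, occupancy(rho))  # occupancy at the optimum comes out as r
```

The disagreement here is then not about the formula, but about whether
driving a real network toward that point by deliberately inducing queueing
is a sound control strategy.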

In some special cases, particularly in wireless networks, buffers may
enhance average throughput. And if high average throughput is your
objective, you may consider using them.

In switching nodes you encounter asynchronism, which must be handled and
which may require a certain amount of buffering. (Actually, dealing with
variable throughput in mobile links is nothing other than dealing with
asynchronism; it is only more asynchronous than perhaps in an Ethernet.)

However, sometimes my impression is that we use "intended congestion" in
order to assess a system's capacity. And by and by I doubt whether this
is really the right way to go.

Back to Kleinrock: the problem is to assess the right workload for the
famous "knee". And as we cannot assess it, we offer so much load to the
network that it starts squirming, crying and vomiting - and eventually
we try to ease the situation.

I'm not fully convinced that this is the ideal way to go.

Detlef Bosau
Galileistraße 30   
70565 Stuttgart                            Tel.:   +49 711 5208031
                                           mobile: +49 172 6819937
                                           skype:     detlef.bosau
                                           ICQ:          566129673
detlef.bosau at web.de                     http://www.detlef-bosau.de
