# [e2e] Is a control theoretic approach sound?

Christian Huitema huitema at windows.microsoft.com
Tue Jul 29 09:46:43 PDT 2003

```
> This is a very interesting example :)  Let me follow this example:
>
> 1) If the cart is very small and light, and if every mouse pulls very
> hard and brakes very hard to chase the target speed, then the cart
> goes as you described.
> 2) If the cart is very heavy, and every mouse pulls slightly, then it
> takes a long time for the cart to reach the target speed, and the
> system may oscillate around the target speed slightly.
> 3) If the mice are a little bit cleverer, they consider the speed of
> the cart. If the speed is far from the target speed, they pull very
> hard or brake very hard; if the speed of the cart is close to the
> target speed, they pull slightly or brake slightly, or even stop
> pulling. Then there is hope that the system stays around the target
> speed :) Actually, even if 50 elephants come to pull the cart, they
> have to follow the same rule: pull or brake slightly when the speed is
> close to the target.

There is another dimension that we ought to consider, which is the
frequency of corrections. The traditional AIMD congestion control has a
nasty side effect: the frequency of correction decreases when the
average transmission speed increases. This is linked to the relation:
Throughput = O(1/SQRT(LossRate))
Which can also be written
1/LossRate = O(Throughput^2)
The average number of packets between losses is inversely proportional
to the loss rate:
PacketInterval = O(1/LossRate)
The time interval between losses is the number of packets between
losses, times the packet size, divided by the throughput:
TimeInterval = PacketInterval*PacketSize/Throughput
We can express that as a function of the throughput:
TimeInterval = O((1/LossRate)*(1/Throughput))
TimeInterval = O((Throughput^2)*(1/Throughput))
TimeInterval = O(Throughput)
In short, the faster you go, the less often you correct the situation.
From a control theory point of view, that could only be sustainable if
high speed networks were somehow much more stable than low speed
networks, for example if the rate of arrival and departure of new
connections somehow diminished as the connection speed increased.
There is no evidence that this is the case.

So, if we pursue that particular avenue of control theory, we have two
possible solutions. One possibility is to change the way TCP reacts to
packet losses, so that for high speed connections we end up with:
Throughput = O(1/LossRate)
TimeInterval = O(1)
Or we can try to obtain another source of control information than
packet losses, so we can apply more frequent corrections. Vegas and FAST
seem to explore this latter avenue, by using the packet transmission
delays as a source of information about the congestion. On one hand,
that is seductive, since we could get some limited amount of information
in every packet. On the other hand, if we believe a paper in the last
Transactions on Networking, there is only a weak correlation between
delays and congestion, and I wonder how the designers of FAST deal with
that.

-- Christian Huitema

```