[e2e] How shall we deal with servers with different bandwidths and a common bottleneck to the client?

Detlef Bosau detlef.bosau at web.de
Tue Dec 26 08:39:36 PST 2006


Agarwal, Anil wrote:
> Detlef,
>  
> In my earlier description, I had incorrectly assumed that link 2-3 was 
> at 10 Mbps. The nature of the problem is similar whether link 2-3 is 
> at 10 Mbps or 100 Mbps.

Admittedly, I didn't understand it yesterday...

However, at the end you say:
> Changing bandwidths a bit or introducing real-life factors such as 
> propagation delays, variable processing delays and/or variable 
> Ethernet switch delays will probably break this synchronized 
> relationship. RED will also help.
>  

In fact, the behaviour disappears when I randomize the delays.
> One can construct many other similar scenarios, where one connection 
> is selectively favored over another. Perhaps, one more reason to use RED.
>  
I'm not quite sure about the relationship to RED here. (In fact, I still 
have no personal opinion on RED; I do have numerous questions about RED, 
but I think that's not the question here.)

What I am basically trying to understand is whether the observed 
behaviour is an artifact or not.

It might just as well be a behaviour which happens only under rare 
circumstances; think of the capture effect in Ethernet.

In consequence, my basic doubt against all kinds of *mulation (simulation, 
emulation etc.) rises again. I personally make no distinction between 
simulation and emulation. To my knowledge, emulation is used 
synonymously for "real-time simulation" and as such is prone to the 
same artifacts and errors as any other kind of simulation. Particularly, 
the synchronicity between the two links 1-2 and 3-4 is basically 
artificial. Even with quartz-controlled timers I severely doubt that two 
NICs will ever run perfectly synchronously - and this is not even 
necessary, as long as data sent by one NIC is read error-free by the other.
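
To make this concrete, here is a toy queue model (plain Python, not NS2 
code; the one-packet waiting room, the 0.3 phase offset and all other 
numbers are invented purely for illustration). With perfectly periodic, 
phase-locked sources, one flow systematically loses the race for the 
shared buffer; a little random jitter spreads the losses over both flows:

import random

def run(jitter):
    """Two periodic flows share a bottleneck that holds one packet in
    service plus one waiting; returns packets accepted per flow."""
    period, service, horizon = 1.0, 0.6, 1000.0
    arrivals = []
    for flow, phase in ((0, 0.0), (1, 0.3)):          # fixed phase offset
        t = phase
        while t < horizon:
            arrivals.append((t + random.uniform(0.0, jitter), flow))
            t += period
    arrivals.sort()
    departures = []        # departure times of packets still in the system
    accepted = [0, 0]
    for t, flow in arrivals:
        departures = [d for d in departures if d > t]  # already served
        if len(departures) < 2:        # room: 1 in service + 1 waiting
            start = departures[-1] if departures else t
            departures.append(max(start, t) + service)
            accepted[flow] += 1
    return accepted

print("no jitter:  ", run(0.0))   # flow 1 loses systematically
print("with jitter:", run(0.5))   # losses spread over both flows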

Like all other kinds of *mulation, NS2 is nothing but a set of 
difference equations put in "some strange form".
It thus represents our hopes, fantasies and religious beliefs. 
Unfortunately, reality doesn't care about any of them  ;-)

However, I did not consider this scenario by pure chance. The question 
behind this scenario is a very precise one.

Let's draw a network, somewhat simpler this time.

Sender(i) ------(some network)------ Splitter ------(some network)------ Receiver


Sender(i) denotes several senders.

Now, an age-old question of mine arises:

Can it be guaranteed that there is no overload on either the network 
before the splitter or the network behind it?

In an off-list discussion last year, Mark Allman pointed out that 
overload on both network paths (before and behind the splitter) is 
prevented by TCP congestion control, and that in split connections the 
splitter prevents overload by TCP flow control.

E.g.: consider one sender, with the path before the splitter being a 
100 Mbps path and the path behind the splitter a 10 Mbps path.

In fact (and that's why I added an experimental flow control to TCP in 
my simulation), when the buffer at the splitter is sufficiently large, 
after some settling time the sender's window is curtailed in such a way 
that the sender achieves an average rate of 10 Mbps.

NB: I think I have to re-read the thesis by Rajiv Chakaravorthy on this 
issue, because the question is whether we need window clamping techniques 
at all. It seems that any necessary clamping can be achieved by the 
existing flow control and congestion control mechanisms of TCP.
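
A minimal sketch of that clamping effect (a time-stepped toy model in 
Python; the buffer size, the step width and the zero-RTT assumption are 
mine, and all other TCP dynamics are ignored). The sender is limited 
only by the window the splitter advertises, i.e. its free buffer space, 
while the splitter drains at the tail rate:

HEAD_RATE = 100e6 / 8      # head path: 100 Mbps, in bytes per second
TAIL_RATE = 10e6 / 8       # tail path: 10 Mbps
BUFFER = 64 * 1024         # splitter buffer in bytes (assumed value)
DT = 0.001                 # 1 ms time steps

occupancy, sent = 0.0, 0.0
for _ in range(10000):                       # 10 simulated seconds
    window = BUFFER - occupancy              # advertised receive window
    burst = min(HEAD_RATE * DT, window)      # sender never exceeds window
    occupancy += burst
    sent += burst
    occupancy = max(0.0, occupancy - TAIL_RATE * DT)   # tail drains buffer
print("average sender rate: %.2f Mbps" % (sent * 8 / 10.0 / 1e6))

Once the buffer has filled, the advertised window shrinks until the 
sender can only inject exactly what the tail drains, so the printed 
average settles at roughly 10 Mbps - flow control alone does the clamping.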

Now, if there is an array of senders, denoted as sender(i), we have 
basically three scenarios.

1.:  The common bottleneck is _before_ the splitter.

Perfect. The splitter is fed more slowly than it is served. We're lucky.

2.: The common bottleneck is _behind_ the splitter.

Perfect. The splitter assigns an equal share of bandwidth to each flow 
and throttles the senders by means of flow control if necessary.
We are lucky again. (I don't know why we need Christmas in the presence 
of so much luck.)

3.: There is no "common bottleneck".

I have to explain this, because at first glance it appears to be 
nonsense: either the path head, i.e. the part before the splitter, or 
the path tail, i.e. the part behind the splitter, should be the common 
bottleneck, if there is one.

Consider the path tail running with 10 Mbps. Consider sender(0) being 
capable of sending at 100 Mbps. Consider sender(1) being capable of 
sending at 2 Mbps.

If sender(0) runs alone, the bottleneck is the path tail. If sender(1) 
runs alone, the bottleneck is the path head.

Now consider both senders running in parallel. What will happen in the 
presence of a splitter?

Again we have two cases.

1.: The path tail runs TCP or some other window-controlled protocol 
which adapts to the available resources. Hopefully, the flow from 
sender(1) will achieve something around 2 Mbps and the flow from 
sender(0) will get the rest. I don't know; I'm actually trying to find out.

2.: The path tail runs some rate-controlled protocol, as would make 
sense e.g. on satellite connections, where the startup behaviour of TCP 
is extremely annoying. Now: how will this protocol distribute the 
available resources among the flows? One could give equal shares to 
them, i.e. 5 Mbps to each.
However, because the flow from sender(1) cannot send faster than 2 Mbps, 
3 Mbps would remain unused.
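
For comparison, here is a small Python sketch of max-min fair allocation 
("progressive filling") - the distribution one would hope for in case 1 - 
next to the blind equal split of case 2. The function name is mine; the 
demands and the 10 Mbps tail capacity are the numbers from above:

def max_min_share(capacity, demands):
    """Progressive filling: flows demanding less than the current fair
    share keep their demand; the leftover is split among the rest."""
    share, remaining, cap = {}, dict(demands), capacity
    while remaining:
        fair = cap / len(remaining)
        done = {f: d for f, d in remaining.items() if d <= fair}
        if not done:                  # everyone can use the fair share
            for f in remaining:
                share[f] = fair
            return share
        for f, d in done.items():
            share[f] = d              # demand-limited flow keeps its demand
            cap -= d
            del remaining[f]
    return share

print(max_min_share(10.0, {"sender(0)": 100.0, "sender(1)": 2.0}))
# -> {'sender(1)': 2.0, 'sender(0)': 8.0}; a blind equal split would
#    give 5 Mbps to each flow and strand 3 Mbps.

But computing this requires knowing the flows' demands, which is exactly 
what the splitter does not know.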

Particularly the latter scenario seems to be some kind of "end to end" 
problem: the splitter node does not know the end-to-end resource 
situation and thus has to leave the distribution of resources to the 
end nodes.

Am I going wrong here?

Any comments are highly appreciated.





