[e2e] Is a control theoretic approach sound?

Yunhong Gu ygu1 at cs.uic.edu
Wed Jul 30 18:00:55 PDT 2003


To be frank, I am not sure whether this simple scheme of a more aggressive
AI works in the private networks of particle physics laboratories.

We can suppose that the topology and link capacities are known to the
applications in such networks, and that there are only a small number of
data sources in the system.

Let's discuss the problem with an example: suppose we have a 1 Gbps link
between A and B with a 100 ms RTT. Standard TCP increases its cwnd by one
segment (MSS) per RTT, and the new "hacked" TCP increases it by 1000
segments per RTT (suppose this is the optimal value). This hacked TCP does
recover quickly from loss, but it also tends to cause more loss: if the
current sending rate is already 1 Gbps, then in the simplest model every
further increase to the cwnd will be lost.
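
As a back-of-the-envelope check, here is a small Python sketch (rough
arithmetic, not a simulation; the 1500-byte MSS and counting cwnd in
segments are my assumptions) of how long pure additive increase takes to
fill this pipe, and what happens once it is full:

LINK_BPS = 1e9           # 1 Gbps link
RTT = 0.100              # 100 ms round-trip time
MSS = 1500               # bytes per segment (assumed)
BDP_SEGMENTS = LINK_BPS * RTT / (8 * MSS)   # ~8333 segments in flight

def rtts_to_fill(increment, start=1):
    # RTT rounds of pure additive increase to reach the BDP.
    cwnd, rounds = start, 0
    while cwnd < BDP_SEGMENTS:
        cwnd += increment
        rounds += 1
    return rounds

for inc in (1, 1000):
    r = rtts_to_fill(inc)
    print("AI +%4d/RTT: %5d RTTs (%5.1f s) to reach %d segments"
          % (inc, r, r * RTT, BDP_SEGMENTS))

This prints about 8333 RTTs (833 s) for standard AI versus 9 RTTs (0.9 s)
for +1000/RTT. But once cwnd exceeds the BDP plus the router buffering,
every further +1000 per RTT is queued and then dropped, i.e. the
aggressive AI buys its fast recovery with a built-in loss rate.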

I have not tested this myself, though.

If applications can use flow control to limit the data generation rate
(as some video codecs do), this simple scheme may work better.
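
For concreteness, here is the kind of application-level pacing I have in
mind (a sketch only; the function name and the 900 Mbps target are made
up for illustration):

import time

TARGET_BPS = 900e6       # stay safely below the known 1 Gbps capacity
CHUNK = 64 * 1024        # bytes handed to the socket per send call

def paced_send(sock, data):
    # Never offer data faster than TARGET_BPS, so the transport never
    # has to probe far beyond the known link capacity. 'sock' is any
    # connected TCP socket.
    interval = CHUNK * 8 / TARGET_BPS    # seconds per chunk at target rate
    for off in range(0, len(data), CHUNK):
        start = time.monotonic()
        sock.sendall(data[off:off + CHUNK])
        delay = interval - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)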

We have not yet talked about the RTT bias of TCP, which can also cause
performance problems in distributed applications (e.g., joining two data
flows from a remote node and a local node, respectively).
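
The bias is easy to see from the simplified steady-state throughput model
of Mathis et al., rate ~ (MSS/RTT) * sqrt(3/(2p)); the 1% loss rate and
the two RTTs below are assumed for illustration:

from math import sqrt

MSS_BITS = 1500 * 8      # segment size in bits (assumed)
LOSS = 0.01              # same loss rate seen by both flows (assumed)

for name, rtt in (("local flow", 0.002), ("remote flow", 0.100)):
    rate = MSS_BITS / rtt * sqrt(3.0 / (2.0 * LOSS))
    print("%-11s RTT %5.1f ms -> ~%6.1f Mbps"
          % (name, rtt * 1e3, rate / 1e6))

With the same loss rate, the 2 ms flow gets about 50 times the throughput
of the 100 ms flow, so the join stalls waiting for the remote leg.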

IMHO it is still worthwhile to invent new approaches, which may not
completely replace TCP but can at least solve the problems in some
specific fields (e.g., particle physics laboratories) and remove the
burden of tuning TCP parameters from application programmers.

Just my two cents,
Gu


On Thu, 31 Jul 2003, Panos GEVROS wrote:

> 
> ----- Original Message -----
> From: "Yunhong Gu" <ygu1 at cs.uic.edu>
> Subject: Re: [e2e] Is a control theoretic approach sound?
> 
> > Well, I think deciding how "aggressive" the AI will be is not that
> > *simple* a problem :) It is not the case that the more aggressive, the
> > better (even if per-flow throughput is the only objective), right?
> 
> agreed, but only if you want to address the problem in its full generality
> ... if it is restricted to those areas of the (capacity, traffic) space
> where the packet loss is in the [0 ... 7-8%] range (and AIMD is indeed
> relevant, since out of this range timeouts start becoming the norm), then
> it is *fairly* straightforward to decide on AIMD parameters which provide
> specific outcomes (wrt individual connection performance, within limits
> obviously, and wrt capacity utilisation).
> 
> > > ... in their case they know pretty much that the links they are using
> > > are in the gigabit range and there are not many others using these
> > > links at the same time.
> > >
> >
> > But what if there is loss, especially continuous loss, during the bulk
> > data transfer? No matter how large the cwnd is initially, it can decrease
> > to 1 during the transfer, and then the problem arises again.
> 
> drastic measures (timeout, exponential backoff, etc.) will always need to
> be in place.
> I'm saying that (at least on a first attempt) it pays to be optimistic
> (this is the idea underlying slow start anyway), and in certain
> environments indeed more optimistic than the standard prescribes, since
> there is a priori knowledge of the network path characteristics and even
> of the traffic conditions - which is the case when considering OCxx links
> connecting particle physics laboratories.
> This approach seems to me a lot simpler and (most likely) equally
> effective compared to elaborate control schemes which try to do better
> while trying hard to remain "friendly" at the same time.
> 
> Panos



