Fairness of TCP (was Re: [e2e] a new paper on Adaptive RED)

Steven Low slow at caltech.edu
Thu Aug 9 22:05:43 PDT 2001


Wuwei,

We can regard TCP/AQM as carrying out a distributed computation over the
Internet to maximize the *sum* of source utilities:
	max_{x_i}   sum_i  U_i(x_i)             (1)
subject to capacity constraints.  There are several interesting points:

1) This represents a global measure of performance (social welfare).
However, the optimization can be distributed: each source optimizes its
own benefit, as you described below.  The role of "prices" is to
coordinate their actions and to align individual optimality with social
optimality.  Here, "prices" mean different things in different
protocols: loss probability with Reno and queueing delay with Vegas.

2) Any link (AQM) algorithm that stabilizes the queue implicitly solves
the dual problem of (1) and generates prices that are the Lagrange
multipliers.  This is because the gradient of the dual problem consists
of link capacity minus link flow rate.  A stable queue means link flow
rate = capacity, so matching rate to capacity drives the gradient of
the dual problem to zero.  Since the dual problem is convex, this
solves the dual problem.  This is explained in a preprint of ours,
"Internet Congestion Control: An Analytical Perspective", available on
our website: netlab.caltech.edu/pub.html
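As a toy illustration of points 1) and 2) (my own sketch, not taken from
the preprint): with log utilities U_i(x_i) = w_i log x_i and a single
link of capacity c, each source's problem max U_i(x_i) - p x_i gives
x_i = w_i / p, and the link's dual gradient step on the price is just
(flow rate - capacity):

```python
# Minimal sketch of the dual gradient algorithm for
#   max sum_i U_i(x_i)  s.t.  sum_i x_i <= c,
# with U_i(x) = w_i * log(x) (proportional fairness).  The link price p
# plays the role of the Lagrange multiplier; a stable price means flow
# rate matches capacity, i.e. the dual gradient is zero.

def dual_gradient(weights, capacity, step=0.01, iters=20000):
    p = 0.5  # initial link price
    rates = []
    for _ in range(iters):
        # Each source maximizes U_i(x_i) - p*x_i  =>  x_i = w_i / p
        rates = [w / p for w in weights]
        # Link update: price rises when demand exceeds capacity
        p = max(p + step * (sum(rates) - capacity), 1e-9)
    return p, rates

p, rates = dual_gradient([1.0, 2.0], capacity=3.0)
# At the optimum the rates sum to capacity and split in proportion
# to the weights, and p settles at the Lagrange multiplier.
```

The price here is abstract; as noted above, in Reno it manifests as loss
probability and in Vegas as queueing delay.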

3) Utility functions need *not* be the same across sources.  If all
sources are running TCP Reno, then all have the same utility (with
possibly different delays).  But if some sources run TCP Vegas, then
one can solve the above convex program with utility functions
representing Reno and Vegas to study their equilibrium rate vector
(fairness).  An example of this is given in our paper "A Duality Model
of TCP and Queue Management Algorithms" on the web.  Whether Vegas is
more or less "aggressive" than Reno really depends on the problem
instance; a general statement cannot be made.
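To make point 3) concrete, here is a hypothetical single-link example
(my own sketch; the utility forms are simplified stand-ins, not the
exact ones from the paper): U1(x) = -1/x for a Reno-like source and
U2(x) = log x for a Vegas-like source.  Each source equates its
marginal utility to the path price, and bisection on the price clears
the link:

```python
def equilibrium(capacity, lo=1e-6, hi=1e6, tol=1e-12):
    """Find the link price p at which total demand equals capacity.

    Source 1 (Reno-like):  U1'(x) = 1/x**2 = p  =>  x1 = p**-0.5
    Source 2 (Vegas-like): U2'(x) = 1/x    = p  =>  x2 = 1/p
    Total demand is strictly decreasing in p, so bisect on p.
    """
    def demand(p):
        return p ** -0.5 + 1.0 / p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) > capacity:
            lo = mid   # price too low: demand exceeds capacity
        else:
            hi = mid
    p = 0.5 * (lo + hi)
    return p ** -0.5, 1.0 / p

x1, x2 = equilibrium(capacity=1.0)  # here the Reno-like source gets more
y1, y2 = equilibrium(capacity=3.0)  # here the Vegas-like source gets more
```

The same pair of utilities gives opposite rankings at the two
capacities, which is the point: which protocol is more "aggressive"
depends on the problem instance.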

4) When the rate is high, or equivalently the loss is low, the arctan()
version of TCP Reno's utility is approximately the same as the -1/x_i
version (only their marginals matter).  The difference comes from how
one models the Additive Increase of Reno.  If one models it as (with
p_i(t) = loss probability in the path of source i):
	x_i(t) (1 - p_i(t)) / w_i(t)
then one gets the arctan() utility.  If one models it as:
	1/rtt_i
then one gets the -1/x_i utility.  The two are approximately equal when
p_i(t) is small.  This is also explained in the above preprint
"Internet Congestion Control: An Analytical Perspective".
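A quick numerical check of that approximation (my own sketch; the exact
constants in the two utility forms are my assumption, chosen so the
marginals are 1/(1 + (xT)^2/2) and 2/(xT)^2 for round-trip time T):

```python
T = 0.1  # assumed round-trip time in seconds (illustrative value)

def marginal_arctan(x):
    # derivative of the arctan-form utility (sqrt(2)/T) * arctan(x*T/sqrt(2))
    return 1.0 / (1.0 + (x * T) ** 2 / 2.0)

def marginal_inverse(x):
    # derivative of the -1/x-form utility -2 / (T**2 * x)
    return 2.0 / (x * T) ** 2

# Relative gap between the two marginals at increasing rates.  Since the
# equilibrium price equals the marginal, high rate means low loss, and
# the two models become indistinguishable.
gaps = [abs(marginal_arctan(x) - marginal_inverse(x)) / marginal_inverse(x)
        for x in (10.0, 100.0, 1000.0)]
```

The relative gap shrinks as the rate grows, consistent with the claim
that only the small-loss regime matters for the comparison.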

Steven


Wuwei wrote:
> 
> Hi, Steven Low
> 
> Yes, there has been much work in the context of flow optimization, such as
> Kelly's, Kunniyur and Srikant's, La and Anantharam's, Mo and Walrand's, and
> work at MS Research in London.  But I think these works focus on the case of
> a single utility function in a network, such as proportional fairness
> (U = log x) and, more generally, (p,alpha)-proportional fairness
> (U = x^(1-alpha)/(1-alpha)).  What is the case when different utility
> functions are used?  From my understanding, a user's utility function only
> determines his equilibrium rate by optimizing U(x_i) - p x_i (so
> x_i = U'^(-1)(p)), where p denotes the end-to-end price, which can be
> interpreted as the mark/loss rate.
> 
> Another question is how to describe non-elastic traffic with a "utility".
> Obviously, many such utilities will not be concave.  It seems that Shenker's
> Sigcomm98 paper made some preliminary progress on this, but the fairness
> issues have not been addressed.
> 
> BTW: I remember that Kunniyur and Srikant's Infocom2000 paper proved that
> TCP Reno et al. have the utility function U = -1/x.  I wonder whether it is
> U = -1/x_i or U = arctan(rtt_i x_i).
> 
> Thanks
> 
> Best Regards
> Wei Wu
> wuwei at sdp.ee.tsinghua.edu.cn
-- 
__________________________________________________________________
Steven Low, Assoc Prof of CS & EE 
slow at caltech.edu			netlab.caltech.edu
Tel: (626) 395-6767			Caltech MC256-80
Fax: (626) 792-4257			Pasadena CA 91125


