[e2e] "congestion" avoidance...

David P. Reed dpreed at reed.com
Tue Apr 17 03:14:53 PDT 2001


Interesting thoughts.  However, money or something like it needs to enter 
into the thinking, i.e. some notion of sharing responsibility for costs 
imposed on others.

That is: at a point of congestion, the "indirect channels" among competing flows 
provide a way of signalling (at some bitrate) for a bargaining scheme.

What range of bargaining schemes can be piggybacked on this signalling channel?

For example, what if a single (urgency) bit per packet (like the ECN flag, 
but provided by the source to the congested queue) could be modulated at 
the source, tracked in a state variable at a router queue, and coupled into 
a bit in each outgoing packet that controls rate the way ECN does?
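
A minimal sketch of how such an urgency bit might behave, purely to make the
mechanism concrete; the class, the EWMA state variable, and the marking rule
are my assumptions, not part of the proposal:

    class CongestedQueue:
        def __init__(self, alpha=0.05):
            self.alpha = alpha        # EWMA gain (assumed)
            self.mean_urgency = 0.0   # state variable tracking source urgency bits

        def on_packet(self, urgent_bit, congested):
            # track the source-modulated urgency bits in a state variable
            self.mean_urgency += self.alpha * (urgent_bit - self.mean_urgency)
            # couple into an outgoing ECN-like bit: under congestion, ask
            # the less-urgent-than-average traffic to slow down first
            return congested and urgent_bit < self.mean_urgency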


At 10:06 AM 4/17/01 +0100, Jon Crowcroft wrote:



>there have been some steps recently to look at a range of rate- and
>window-based mechanisms for sharing the net amongst a set of sources (or
>sinks, if we include receiver-based multicast schemes) - i was looking
>at these and wondering if it isn't time to revisit some of the
>congestion control and avoidance thinking
>
>some schemes have been proposed that smooth the adjustment so that
>over an RTT we creep up to the operating rate, and creep down, on a
>packet-by-packet (inter-packet delay adjustment) basis
>(RAP from Handley et al)
>
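As a concrete illustration of that kind of smoothing: an AIMD decision made
once per RTT, realized packet by packet as an inter-packet delay.  The
constants below are illustrative, not RAP's actual parameters.

    class RateAdapter:
        def __init__(self, rtt, pkt_size=1500):
            self.rtt = rtt
            self.pkt_size = pkt_size
            self.rate = pkt_size / rtt        # bytes/sec; start at ~1 pkt/RTT

        def gap(self):
            # the rate is enforced as a delay between successive packets
            return self.pkt_size / self.rate

        def per_rtt_update(self, loss_seen):
            if loss_seen:
                self.rate *= 0.5                       # multiplicative decrease
            else:
                self.rate += self.pkt_size / self.rtt  # creep up: +1 pkt per RTT
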
>other schemes have proposed different powers for the increase/decrease
>function (and assert that so long as we decrease proportionally to x^n, and
>increase proportionally to x^(n-1), we ought to be "ok" for some definition
>of ok)
>(binomial adjustment etc. from Bansal and Balakrishnan)
>
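A minimal sketch of the binomial family as usually stated (once per RTT, w
grows by alpha/w^k; per loss event, w shrinks by beta*w^l; TCP-friendly when
k + l = 1, so the decrease goes as w^n while the increase goes as w^(n-1)).
The alpha/beta values here are illustrative.

    def binomial_update(w, loss, k=0.5, l=0.5, alpha=1.0, beta=0.5):
        # TCP's AIMD is the special case k=0, l=1
        if loss:
            return max(1.0, w - beta * w**l)   # decrease ~ w**l
        return w + alpha / w**k                # per-RTT increase ~ w**(l-1)
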
>the TCP AIMD with fast retransmit scheme has several motivating factors,
>some intended, some lucky happenstance (serendipitous)...
>
>1/ sampling network conditions and eliminating noise:
>
>currently, this operates over the RTT timescale, but is memoryless
>after that....estimates for loss are effectively implicit in the AIMD
>operation, but the noise filter (number of dupacks) is somewhat
>rigid...
>
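For comparison, that filter is tiny and hard-coded; a sketch (the threshold
is TCP's usual three, everything else is illustrative):

    DUPACK_THRESHOLD = 3   # fixed in TCP fast retransmit; the point is it needn't be

    def infer_loss(dupacks_seen):
        # loss is declared only after a fixed run of duplicate ACKs,
        # regardless of the measured noise on the path
        return dupacks_seen >= DUPACK_THRESHOLD
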
>2/ safe/stable operation:
>given a feedback controller, it's reasonable to operate this over the
>feedback (RTT) timescale; packet conservation/self-clocking makes it smoother
>
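A sketch of what packet conservation / self-clocking means operationally
(names here are mine): a new packet enters the network only when an ACK says
one has left, so sending is paced by the bottleneck's own spacing of ACKs.

    class SelfClockedSender:
        def __init__(self, cwnd):
            self.cwnd = cwnd      # packets allowed in flight
            self.in_flight = 0

        def on_ack(self, send):
            self.in_flight -= 1   # one packet has left the network
            while self.in_flight < self.cwnd:
                send()            # transmissions are clocked out by ACK arrivals
                self.in_flight += 1
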
>3/ relating end-system rate adjustment timescale to buffer provisioning
>the AIMD scheme has a bandwidth-delay product's worth of network
>buffering as a necessary side effect - other adjustment schemes might
>need less (some might need more, but that almost certainly means they
>are trouble:-)
>
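To put a number on that side effect, with made-up link figures:

    # worked example of the bandwidth-delay product rule of thumb;
    # the link numbers are assumptions for illustration
    bandwidth = 10e6               # 10 Mbit/s bottleneck
    rtt = 0.100                    # 100 ms round trip
    bdp_bytes = bandwidth * rtt / 8
    print(bdp_bytes / 1024)        # ~122 KB of buffering implied by AIMD
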
>4/ social coupling - we have a target operating point which will be
>some fraction of a bottleneck link
>if we take a flow f, and a flux (sum of flows into a bottleneck) F,
>then the idea is that we get a share proportional to the _resource_ we
>use, which (approximately) includes 1/RTT as a factor (kelly et al, le
>boudec et al). the point is that a set of f's in an F are coupled by the
>loss or ECN feedback function, and by some reaction period being at
>least in the same ballpark....
>
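One concrete form of that 1/RTT factor is the familiar TCP-friendly rate
formula (Mathis et al.), rate ~ (S/RTT) * sqrt(3/(2p)): two flows seeing the
same loss rate p get shares inversely proportional to their RTTs.  The
numbers below are illustrative.

    from math import sqrt

    def tcp_friendly_rate(pkt_size, rtt, p):
        return (pkt_size / rtt) * sqrt(1.5 / p)   # bytes/sec

    # same loss rate, RTTs of 50 ms and 200 ms -> a 4:1 split of the shares
    print(tcp_friendly_rate(1500, 0.05, 0.01) / tcp_friendly_rate(1500, 0.2, 0.01))
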
>in fact, though, we don't have to have smooth functions at all, nor do
>we have to sample only the average loss rate, nor choose the sample
>rate to be an RTT - the RTT is a way of _loosely_ coupling things,
>but is perhaps too strong
>
>what if someone wanted a _rate_ that persisted for all (or a larger
>part) of a connection? how could we work out some sort of congestion
>model that accommodated both packet and connection timescales?
>
>at least one factor seems missing, and that is some estimate of the
>number (and rate of change of number) of flows....if we alter the
>sample period, and sample both the congestion feedback Mean, AND its
>variance, we might be able to (assuming the social coupling function
>was still "social") estimate this
>
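A heavily hedged sketch of why the variance helps: if the aggregate feedback
is (roughly) a sum of N independent per-flow processes with known per-flow
mean m and variance v (known because everyone runs the same "social"
function), the aggregate has mean N*m and variance N*v, so each moment gives
an estimate of N.  This is an illustration of the argument, not a proposed
estimator.

    def estimate_flows(agg_mean, agg_var, per_flow_mean, per_flow_var):
        n_from_mean = agg_mean / per_flow_mean
        n_from_var = agg_var / per_flow_var
        return (n_from_mean + n_from_var) / 2   # crude blend of the two moments
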
>obviously if people want to they can behave anti-socially (but that is
>and will always be true until we do pricing or other forms of
>admission control) - let's assume they behave "nicely"....
>
>could someone choose to operate a "very slow" congestion control
>scheme? why not? let's say i run a connection that takes 1/10 of the
>capacity, but there are 5 other connections; then why should i react
>to loss unless my longer-term loss (or ecn) rate tells me that
>there's now 9+ other flows? currently, if i run any adjustment
>scheme based just on the average, i have a chance of adjusting wrongly...
>
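A sketch of such a "very slow" controller, with invented thresholds: hold the
agreed share and move only when the long-term loss/ECN average says the
competitor count has genuinely shifted.

    class SlowController:
        def __init__(self, capacity, share=0.1, gain=0.01):
            self.rate = share * capacity   # e.g. 1/10 of the bottleneck
            self.loss_ewma = 0.0
            self.gain = gain               # small gain = long averaging period

        def on_feedback(self, loss_bit, expected_loss=0.01):
            self.loss_ewma += self.gain * (loss_bit - self.loss_ewma)
            # individual losses are ignored; only a sustained shift in the
            # long-term average (e.g. 9+ competitors now) triggers a move
            if self.loss_ewma > 2 * expected_loss:
                self.rate *= 0.9
            elif self.loss_ewma < 0.5 * expected_loss:
                self.rate *= 1.1
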
>second, and maybe more importantly: how about re-examining the social
>coupling function? why shouldn't ten people _agree_ a different congestion
>partition function (e.g. they have an application that requires n sources)?
>
>i guess this could be implemented via the Congestion Manager type API,
>but i am interested in the general family of functions that fit this
>more general model - for example, it seems to me that you can have
>radically different increase/decrease behaviour if you have
>a) a different sample period and a more accurate description of the
>evolution of the loss/load process over time (e.g. some sort of fancy
>bayesian predictor)
>b) a different share/social function - e.g. if we have 10 sources
>agreeing on a different load, then how do they distribute this
>information, and how do we make sure they aren't penalized by any
>extra fancy stuff people might later add!
>
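One way to picture the agreed-partition idea behind b) as a
Congestion-Manager-style interface (the names here are invented for
illustration, not the real CM API): a single aggregate controller holds the
window, and the n agreeing sources split it by whatever partition function
their application wants.

    class SharedBudget:
        def __init__(self, n_flows):
            self.cwnd = float(n_flows)            # aggregate window, in packets
            self.weights = [1.0 / n_flows] * n_flows

        def on_ack(self):
            self.cwnd += 1.0 / self.cwnd          # one AIMD controller for the aggregate

        def on_loss(self):
            self.cwnd = max(1.0, self.cwnd / 2)

        def share(self, flow_id):
            # any agreed partition works, not just equal weights
            return self.weights[flow_id] * self.cwnd
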
>j.



