[e2e] [Fwd: RED-->ECN]

Steven Low slow at caltech.edu
Fri Jan 26 16:31:47 PST 2001


> From: Steven Low <slow at ee.mu.OZ.AU>
> To: hari at lcs.mit.edu, slow at caltech.edu
> CC: cwcam at caltech.edu, ecn-interest at research.att.com, end2end at isi.edu, rliu at yak.ugcs.caltech.edu, siew at its.caltech.edu, wch at its.caltech.edu
>
> 
> Hi Hari,
> 
> I completely agree that there are unresolved issues with the
> third approach (drastically reduce buffer overflows so that
> losses become primarily due to wireless effects), and you
> nicely touch upon several of them.   But I'd like to make two
> philosophical points about ECN & congestion control first
> (which I hope belong to this list at least peripherally).
> 
> I think the approach of
> congesting the network in order to obtain congestion information
> as the current TCP does, which is necessary without ECN,
> becomes unnecessary with ECN.  With AQM, we can decouple
> congestion measure & feedback from performance measure such
> as loss, delay or queue length.   Then, 'congestion'
> means 'demand exceeds supply' and congestion signal curbs demands
> but the queue can be controlled in a way that maintains good
> performance such as low loss & delay.   Whether REM can succeed
> in doing this is a separate matter, but I think this is the approach
> we ought to take in designing our congestion control.   Alternatively,
> when we couple congestion measure with performance measure,
> 'congestion' necessarily means 'bad performance' such as high
> loss or delay, and we do not have the *option* (even if we have
> the means) of maintaining low loss & delay in times
> of congestion (i.e. when new sources join or capacity drops).
> In other words, when there are more sources, loss or delay must be
> increased in order to produce high enough signal intensity for
> sources to further cut back their rates; moreover signal intensity
> must stay high not only during the transient
> when new sources first start but also afterwards.
> 
> REM tries to fix this, not through the exponential form of its
> marking probability function, but by using a different congestion
> measure and update rule that maintain high utilization and low
> queue in equilibrium.   Again, there can be alternative ways of
> achieving this, but I think this is what we should aim for.
> And to achieve this it is critical that we decouple congestion
> measure from performance measure.
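[The REM link algorithm sketched below is this editor's rendering of the description above and of the REM paper cited later in the thread; the parameter values are illustrative, not REM's recommended settings.]

```python
# Sketch of REM's decoupled congestion measure: each link keeps a "price"
# driven by rate mismatch (demand minus supply) plus a queue term, and marks
# packets with probability exponential in that price.  Constants are
# illustrative assumptions, not values from the REM paper.
PHI = 1.001    # base of the exponential marking function, phi > 1
GAMMA = 0.001  # step size of the price update
ALPHA = 0.1    # weight on the queue term

def update_price(price, queue, input_rate, capacity):
    """One price update: price rises when demand exceeds supply or the
    queue is non-empty, falls otherwise, and is never negative."""
    return max(0.0, price + GAMMA * (ALPHA * queue + input_rate - capacity))

def mark_prob(price):
    """Marking probability 1 - phi^(-price)."""
    return 1.0 - PHI ** (-price)

# Because marking is exponential in the price, the probability that a
# packet is marked somewhere along a multi-link path depends only on the
# SUM of the link prices -- an end-to-end congestion measure that is
# independent of loss, delay, or queue length.
path_prob = 1.0 - (1.0 - mark_prob(3.0)) * (1.0 - mark_prob(4.0))
```

In equilibrium the update drives both terms toward zero, i.e. input rate matches capacity (high utilization) while the queue stays near empty (low loss and delay), which is exactly the decoupling argued for above.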
> 
> The second philosophical point is an interesting implication
> of the recent extensive work on heavy-tailed traffic and its
> origin.  It implies that the mice-elephant mix (i.e.
> most files are small but most packets belong to long files)
> that characterizes current traffic may be a permanent and
> essential feature, not an artifact of current applications or
> user behavior.   End-to-end congestion control, properly
> designed, can be an ideal mechanism in such an environment,
> where elephants (that care about bandwidth)
> are controlled to maximally utilize the network
> in such a way that leaves the queues close to empty, so that
> mice (that are delay sensitive) can fly through the network
> with little delay.   Again, this requires a new TCP/AQM strategy
> that achieves high utilization + low queue, and ECN (or
> explicit notification) helps.
> 
> A common objection to end-to-end congestion control is that
> most TCP connections are short and hence end-to-end congestion
> control is ineffective.  I believe the observation is correct
> but not the conclusion.  Since HTTP uses TCP and web files
> have a mice-elephant mix, most TCP connections are therefore mice,
> which indeed should not be the primary object for end to end
> control.  End to end control should target elephants, not mice.
> Mice suffer large delay currently, not because they are end to
> end controlled, but because the current congestion control (even
> just of elephants) can produce large queues in the path of mice.
> 
> So, with a TCP/AQM strategy that maintains high utilization
> and low queue in equilibrium (regardless of the number of sources),
> buffer is used *only* to absorb *transient* bursts.  This can be
> very different from a scheme that uses, say, queue length to
> measure congestion; with such a scheme, we do not have control
> on the equilibrium value of the queue - it is determined solely by
> the network topology and #sources and hence can be high depending
> on scenario.   When queues are always high, they do not have
> reserve to handle bursts.   But when queues are always low, I
> think bursts can be much better handled.
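[A toy fixed-point calculation, added by this editor to illustrate the claim above: when the marking probability is a function of queue length, the equilibrium queue is forced upward by the number of sources, because more sources need a higher mark rate to hold each at its smaller fair share. The demand model and constants are illustrative assumptions.]

```python
# Queue-length-based marking: m = queue / BUF.  Each of n identical sources
# demands X_MAX * (1 - m).  Equilibrium requires the aggregate rate to equal
# capacity, which pins the queue at whatever length produces that mark rate.
X_MAX = 2000.0     # per-source demand at zero marking (assumed)
CAPACITY = 1000.0  # link capacity (assumed)
BUF = 500.0        # queue length at which marking saturates (assumed)

def equilibrium_queue(n_sources):
    """Solve n * X_MAX * (1 - queue/BUF) = CAPACITY for the queue length."""
    m_star = 1.0 - CAPACITY / (n_sources * X_MAX)
    return BUF * m_star

# The equilibrium queue climbs toward the buffer limit as sources join,
# leaving no reserve to absorb transient bursts.
queues = [equilibrium_queue(n) for n in (1, 2, 4, 8)]
```

Under a price-based scheme like REM, by contrast, the marking probability is not tied to the queue, so the equilibrium queue can sit near zero for any number of sources.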
> 
> So much for such vague philosophical discussions.  Since this
> is getting too long, I'd defer discussion on the unresolved
> issues with the third approach to some other time (except to
> point out that one big challenge is the heterogeneity of
> routers during transition when some routers mark packets
> and some drop packets to indicate congestion).  Btw, I don't
> think the three approaches are mutually exclusive; they can
> complement each other.
> 
> Steven
> 
> ____________________________________________________________
> Steven Low,    Assoc Prof of CS and EE
> Caltech, Pasadena, CA91125
> Tel: +1 626 395 6767 Fax: +1 626 792 4257
> slow at caltech.edu
> netlab.caltech.edu
> 
> >From owner-ecn-interest at research.att.com  Sat Jan 27 08:02:12 2001
> >Delivered-To: ecn-interest at research.att.com
> >X-url: http://nms.lcs.mit.edu/~hari/
> >To: slow at caltech.edu
> >Cc: ecn-interest at research.att.com, cwcam at caltech.edu, wch at its.caltech.edu,
> >   siew at its.caltech.edu, rliu at yak.ugcs.caltech.edu
> >Subject: Re: RED-->ECN
> >Mime-Version: 1.0
> >Date: Fri, 26 Jan 2001 15:56:20 -0500
> >From: Hari Balakrishnan <hari at breeze.lcs.mit.edu>
> >
> >
> >Steven,
> >
> >REM is an interesting idea for using ECN, and I rather like it from a research
> >standpoint because it doesn't have discontinuities (cf. RED) that make analysis
> >harder.  However, I'm generally skeptical that any scheme can be shown to
> >eliminate essentially all buffer overflow losses under _all_ conditions
> >(offered load), and yet accommodate bursts and provide reasonably low delays...
> > especially when not all offered traffic is reacting or reacting in different
> >ways from multiplicative-decrease.  Even a small fraction of unresponsive
> >traffic may make life problematic.
> >
> >Some years ago, I found it pretty hard to tune RED for this, to enhance my ELN
> >scheme.  REM may be more promising, but my gut feeling (as a network engineer)
> >tells me that it wouldn't be prudent to rely on such implicit
> >deductions about loss causes in practice...
> >
> >Hari
> >
> >On Fri, 26 Jan 2001 12:34:52 PST, you wrote:
> >
> >> [Sorry for the previous broken msg...]
> >>
> >> Hi Saverio,
> >>
> >> Another point I'd like to add is that the addition of ECN
> >> may open up new opportunities for network control, some of
> >> which we may not even envision now.  Without ECN we are
> >> stuck with using loss (or delay) as the only means to
> >> communicate between network and TCP sources.
> >> Let me give an example.
> >>
> >> There are two types of losses in a wireless environment:
> >> 1. due to congestion (e.g. buffer overflow), and
> >> 2. due to wireless effects (handoffs, fast fading, interference etc).
> >> One problem with TCP over wireless links is that TCP cannot
> >> differentiate between the two: it essentially assumes all losses
> >> are due to congestion and reduces its rate.  Most of the current
> >> proposed solutions are based on two ideas.
> >>
> >> The first idea is to hide type 2 (wireless) losses from the TCP
> >> source, so it sees only congestion losses and reacts properly.
> >> Examples are various local recovery schemes, snoop, split TCP etc.
> >> The second idea is to inform the TCP source to which of the two
> >> types a loss belongs, so that TCP can react properly; e.g. ELN schemes.
> >>
> >> Availability of ECN allows a third approach: to eliminate type 1
> >> (congestion) losses, so that the TCP source only sees wireless losses
> >> and therefore knows how to react.   But we still need to measure and
> >> feedback 'congestion' so that sources know to reduce their rates
> >> when new sources join or capacity drops.   By 'congestion', I don't
> >> mean 'high loss', but simply 'demand exceeds supply' so everyone
> >> should reduce their demand.   Since buffer overflow is now eliminated
> >> (assume we can do this, see below), we need a different mechanism to
> >> measure and to feedback this 'congestion' information.  Here is
> >> where ECN comes in.   When we decouple the measure+feedback of
> >> congestion from loss, we must have ECN to provide the feedback.
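[A schematic of the sender reaction this third approach implies, sketched by this editor (this is not code from the thread, and real TCP/ECN behavior per RFC 3168 reacts to marks like losses; the point here is the hypothetical decoupled policy described above).]

```python
# Once congestion is signalled only via ECN marks, a packet loss can be
# attributed to the wireless link: retransmit it, but do not cut the rate.
def on_event(cwnd, event):
    """Return the new congestion window after one feedback event."""
    if event == "ecn_echo":
        # Network says demand exceeds supply: multiplicative decrease.
        return max(1.0, cwnd / 2.0)
    if event == "loss":
        # Buffer overflow assumed eliminated, so treat the loss as a
        # wireless effect: retransmit only, keep the window unchanged.
        return cwnd
    if event == "ack":
        # Additive increase, roughly one packet per round trip.
        return cwnd + 1.0 / cwnd
    return cwnd
```

The contrast with standard TCP is the `"loss"` branch: without ECN, loss is the only congestion signal and must trigger the decrease; with congestion fed back separately, the two causes get different reactions.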
> >>
> >> Now, can we possibly keep the queue small (greatly reduce buffer
> >> overflow) *yet highly utilized*?    Recent research work seems
> >> to provide reasons to be optimistic.   One example is in our paper:
> >>      REM: Active Queue Management
> >> on our website:
> >>      netlab.caltech.edu
> >>
> >> Steven
> >>
> >>
> >>
> >>
> >> >Date: Fri, 26 Jan 2001 12:35:44 -0500
> >> >From: "K. K. Ramakrishnan" <kk at teraoptic.com>
> >> >To: Saverio Mascolo <mascolo at poliba.it>
> >> >Cc: ecn-interest at research.att.com
> >> >
> >> >Saverio,
> >> >I am glad you see ECN as an evolution from RED.
> >> >This is our motivation also:
> >> >to have ECN incorporated and deployed with an
> >> >Active Queue Management scheme.
> >> >
> >> >However, it is difficult to agree with the other observations you make:
> >> >if congestion was only caused by "one packet" as you say, then
> >> >one might wonder why we need to actually do a whole lot to
> >> >either react to or avoid congestion. Unfortunately, that isn't the case.
> >> >
> >> >If you look at a congestion epoch, there are likely to be multiple
> >> >packets,
> >> >potentially from multiple flows, that are impacted:
> >> >either being marked or dropped.
> >> >ECN helps substantially in not dropping packet*s* - and especially
> >> >when the window size is small for those flows that have their packets
> >> >marked, it helps by not having them suffer a time-out.
> >> >
> >> >Further: the amount of complexity in either the router (especially
> >> >in the router) or the end-system is not significant. I know that
> >> >may be a matter of opinion.
> >> >But in a router, the change is so minimal, I haven't heard anyone
> >> >complain of complexity of implementation.
> >> >
> >> >There is no violation of layering, at all. I don't see why you say so.
> >> >
> >> >   K. K. Ramakrishnan
> >> >Saverio Mascolo wrote:
> >> >
> >> >> Hi All, I see ECN as an evolution of RED. Basically ECN saves one
> >> >> packet, which is the packet that
> >> >> RED would drop to signal congestion, by setting a bit in the header.
> >> >> Thus, to save just
> >> >> ONE PACKET, ECN introduces complexity in routers, in the
> >> >> protocol, violation of layering, etc... For these reasons I don't
> >> >> think that ECN would give an improvement over RED that is
> >> >> commensurate with its complexity. Is there any point that I miss?
> >> >> Saverio
> >> >>
> >> >> http://www-dee.poliba.it/dee-web/Personale/mascolo.html

-- 
__________________________________________________________________
Steven Low, Assoc Prof of CS & EE 
slow at caltech.edu			netlab.caltech.edu
Tel: (626) 395-6767			Caltech MC256-80
Fax: (626) 792-4257			Pasadena CA 91125


