R: R: [e2e] [Fwd: RED-->ECN]

Saverio Mascolo mascolo at poliba.it
Thu Feb 1 09:09:34 PST 2001


Could someone point me to papers comparing ECN to RED? Thanks
Saverio

----- Original Message -----
From: Jon Crowcroft <J.Crowcroft at cs.ucl.ac.uk>
To: end2end-interest <end2end-interest at postel.org>; ecn-interest
<ecn-interest at research.att.com>; <J.Crowcroft at cs.ucl.ac.uk>
Sent: Thursday, February 01, 2001 5:39 PM
Subject: Re: R: [e2e] [Fwd: RED-->ECN]


>
> In message <008c01c08c6b$4655eec0$723bccc1 at poliba.it>, Saverio Mascolo typed:
>
>  >>I would introduce another element of discussion... Explicit congestion
>  >>indication was proposed in the ATM community... after a while they
>  >>concluded that a richer feedback was necessary in order to obtain an
>  >>effective "regularization" of queue dynamics, utilization, etc... thus
>  >>explicit rate indication, such as the ERICA algorithm, was proposed.
>
> the difference between explicit rate feedback and binary feedback can
> be overstated -
> 1/ the bit can be set for a number of flows - if the number of flows is
> large, and they respond correctly, then the effect can be nearly the same
> as setting the "right" explicit rate per flow
>
> 2/ the rate of change of flows per "rtt" might be such that telling
> someone the "right" explicit rate half an rtt ago may be no better, and
> might be worse, than binary feedback per flow.
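
A minimal numerical sketch of point 1/ (synchronized AIMD flows and
illustrative constants; nothing here is from the thread): many flows reacting
to a single congestion bit keep their rates equal to one another and sawtooth
within a factor of two of the explicit fair share CAPACITY/N that a per-flow
rate computation would have handed out.

    # Toy model: N synchronized AIMD flows reacting to one feedback bit.
    N = 50                     # number of flows
    CAPACITY = 100.0           # link capacity (arbitrary rate units)
    ALPHA, BETA = 0.1, 0.5     # additive increase / multiplicative decrease

    rates = [CAPACITY / (2 * N)] * N   # start at half the fair share

    for _ in range(500):
        congested = sum(rates) >= 0.95 * CAPACITY   # the single feedback bit
        for i in range(N):
            rates[i] = rates[i] * BETA if congested else rates[i] + ALPHA

    fair_share = CAPACITY / N
    print(f"explicit fair share : {fair_share:.2f}")
    print(f"per-flow rates now  : {min(rates):.2f} .. {max(rates):.2f}")
    print(f"aggregate demand    : {sum(rates):.2f} of {CAPACITY:.2f}")
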
>
> in some of the discussion below, the distinction between the reason
> for drop tail (demand >= supply) and RED+ECN (demand is _approaching_
> supply and AQM triggers are set) is correctly made - the AQM mechanism
> can take into account a lot of other factors (number of current flows,
> rate of change of the number of flows, even relative RTT of flows (hard
> to measure though, esp. in today's asymmetric routed interdomain path
> internet :-))
>
> by the way, the idea of a "quantum mechanics" of queues had occurred
> to me - not to "perform magic" like quantum tunneling, but the use of
> quantum statistics to model the population of queues for very large
> systems might not be so far-fetched.. (yes, i know there are a few
> orders of magnitude of orders of magnitude difference in scale, but
> it's still fun to think of)
>
>  >>>    Thu, 1 Feb 2001 13:21:27 +0200
>  >>>    julian_satran at il.ibm.com
>  >>>
>  >>>    I would be very interested to understand how you see the
>  >>>    "virtual-measure" you are suggesting exists relate to accepted
>  >>>    theory.  It is usually thought that high utilization rates and
>  >>>    long queues are related (i.e., you have long queues when
>  >>>    utilization goes up)
>  >>>
>  >>> A quibble: Queue length is a function of variance (burstiness), not
>  >>> (just) of arrival rate.  If the input rate exceeds the output rate,
>  >>> queue length *becomes* unbounded ("infinite").  If the input rate is
>  >>> less than the output rate, then (independent of variations in
>  >>> inter-arrival time) the queue length is 0 -- always empty.  For an
>  >>> arbitrarily low *average* input rate, and a long enough interval, and
>  >>> an unbounded queue, I can construct an arrival schedule that will
>  >>> cause an arbitrarily high *average* queue length.
>  >>>
>  >>> However, what you say is _basically_ true in the real world because
>  >>> as the input rate approaches the output rate the queue length becomes
>  >>> much more sensitive to smaller and smaller variations (not to mention
>  >>> that max queue length *is* bounded), so queue length in practice is
>  >>> probably usually a reasonable indicator of load.  The control needed
>  >>> to smooth flows as arrival rates get higher is likely to be fragile,
>  >>> and is probably inconsistent with the philosophy of most people on
>  >>> end2end.  But Steven Low's statements are not immediately dismissable
>  >>> as being totally self-contradictory, or in the category of perpetual
>  >>> motion machines.  Not to say that there aren't plenty of other
>  >>> arguments to be made, but dismissing it out of hand on the grounds of
>  >>> ludicrousness isn't one of them.
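
A toy discrete-time illustration of that last claim (the queue model and the
numbers are made up for illustration): hold the average arrival rate fixed at
0.1 packets per slot, deliver the packets in ever larger bursts, and the
time-average queue length grows without bound even though the load never
changes.

    # Work-conserving queue served at 1 packet per slot; arrivals come in
    # bursts of size `burst`, one burst every burst/rate slots.
    def avg_queue(burst, rate=0.1, cycles=20):
        period = int(burst / rate)      # keeps the long-run load equal to `rate`
        q, total = 0, 0
        for _ in range(cycles):
            for t in range(period):
                if t == 0:
                    q += burst          # the whole burst arrives at once
                if q > 0:
                    q -= 1              # serve one packet per slot
                total += q
        return total / (cycles * period)

    for burst in (1, 10, 100, 1000):
        print(f"burst={burst:5d}  avg rate=0.1 pkt/slot  "
              f"avg queue={avg_queue(burst):8.2f}")
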
>  >>>
>  >>>    and this is the reason queue length is used as a measure of
>  >>>    congestion.  And this is true for all accepted queueing models.
>  >>>    Do you have in mind some "quantum statistics" for networks that we
>  >>>    are all unaware of?  Do those have some tunnel effects that allow
>  >>>    you to dream about high utilization AND short queues?  Is there
>  >>>    some mathematics that can support your statements?
>  >>>
>  >>>    Julo
>  >>>
>  >>>    Julian Satran
>  >>>    IBM Research Lab. at Haifa
>  >>>
>  >>>
>  >>>
>  >>>    Steven Low <slow at caltech.edu> on 01/02/2001 10:23:33
>  >>>
>  >>>    Please respond to slow at caltech.edu
>  >>>
>  >>>    To:   Alhussein Abouzeid <hussein at ee.washington.edu>
>  >>>    cc:   end2end-interest at postel.org, ecn-interest at research.att.com
>  >>>    Subject:  Re: [e2e] [Fwd: RED-->ECN]
>  >>>
>  >>>
>  >>>
>  >>>
>  >>>    Hi Alhussein
>  >>>
>  >>>    I should clarify what I meant by 'decoupling of congestion measure
>  >>>    from performance measure'.
>  >>>
>  >>>    I want to distinguish between the
>  >>>    *measure* of congestion and the *feedback* of congestion.  The
>  >>>    measure of congestion, vaguely, should track the number of users
>  >>>    or available capacity and this is the information to which users
>  >>>    react by adjusting their rates (windows).   For example, DropTail
>  >>>    measures congestion by loss rate (i.e. congestion = high loss),
>  >>>    the original RED measures it by average queue
>  >>>    (i.e. congestion = high queue).   Alternatively, you can measure
>  >>>    it by a number you compute, e.g., based on queue length and rate,
>  >>>    as you suggested.  Then, congestion can simply mean
>  >>>    'demand > supply', but it is possible to update the congestion
>  >>>    measure in a way that drives the queue or loss to a very low
>  >>>    value, and therefore the congestion measure can settle down to a
>  >>>    high value, telling sources to keep their rates low, while loss
>  >>>    and delay are kept low.  This is the key benefit, which cannot be
>  >>>    achieved without decoupling, for otherwise the congestion measure
>  >>>    (in this case, loss or queue) must be kept high to send the right
>  >>>    signal to users (unless one adapts RED parameters dynamically).
>  >>>    This is the first type of decoupling.
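
A rough sketch of the kind of 'price' update being described, in the spirit
of REM (the gains, the toy demand curve and the tick-based loop are
illustrative assumptions, not the algorithm from the REM paper): the price
rises while demand exceeds capacity or a backlog persists, and settles at the
positive value that matches demand to capacity while the backlog itself is
driven toward zero.

    GAMMA, ALPHA = 0.005, 0.1        # small gains (illustrative)
    CAPACITY = 100.0                 # link capacity, packets per tick

    def demand(price, sources=20):
        # Toy demand curve: each source sends less as the price rises.
        return sources * 10.0 / (1.0 + price)

    price, backlog = 0.0, 0.0
    for _ in range(20000):
        arrivals = demand(price)
        backlog = max(0.0, backlog + arrivals - CAPACITY)
        # The congestion measure reacts to backlog *and* to rate mismatch.
        price = max(0.0, price + GAMMA * (ALPHA * backlog + arrivals - CAPACITY))

    print(f"equilibrium price  : {price:.2f}")     # settles at a positive value
    print(f"equilibrium backlog: {backlog:.2f}")   # driven toward zero
    print(f"demand vs capacity : {demand(price):.1f} / {CAPACITY:.0f}")
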
>  >>>
>  >>>    Note that even with the first type of decoupling,
>  >>>    performance measures such as loss, queue, delay or rate,
>  >>>    are used to *update* the congestion measure, but the equilibrium
>  >>>    *value* of the congestion measure is determined "solely" (in an
>  >>>    idealized world) by the network topology, number of users, etc.,
>  >>>    not by the
>  >>>    flow control algorithm.
>  >>>
>  >>>    Now, how is the congestion measure fed back?   This can be done
>  >>>    implicitly through what can be observed at the source, e.g. loss
>  >>>    or delay.   Or it can be done explicitly using ECN or management
>  >>>    packets.   This is decoupling the *feedback* of congestion measure
>  >>>    from performance measure.
>  >>>
>  >>>    By "decoupling", I always meant the first kind of decoupling,
which
>  >>>    I think is more important in determining the equilibrium behavior
>  >>>    of congestion control.
>  >>>
>  >>>    I agree with you completely that ECN helps in the second type of
>  >>>    decoupling, but not the first type, which has to do with the
>  >>>    design of AQM (choice of congestion measure and its update rule).
>  >>>    In a wireless environment, however, because of the two kinds of
>  >>>    losses (congestion & random loss), the second type of decoupling
>  >>>    becomes useful as well.
>  >>>
>  >>>    Regarding your mice-elephant comment, a mice invasion may indeed
>  >>>    be a problem.   But that makes it all the more important that
>  >>>    elephants are controlled to leave queues empty (in the absence of
>  >>>    mice), and this seems hard to do if the congestion measure is
>  >>>    coupled with the performance measure - an empty queue then
>  >>>    automatically invites elephants to raise their rates.  If an empty
>  >>>    queue does not mean "increase your rate", then elephants must be
>  >>>    fed back (through ECN or probabilistic dropping) the other
>  >>>    information necessary to set their rates.
>  >>>
>  >>>    Steven
>  >>>
>  >>>
>  >>>    Alhussein Abouzeid wrote:
>  >>>    >
>  >>>    > Hi Steven,
>  >>>    >
>  >>>    > I generally agree with your approach below but have three, also
>  >>>    > philosophical, comments that I'd like to share with you
>  >>>    > regarding the decoupling of congestion measures from performance
>  >>>    > measures.
>  >>>    >
>  >>>    > Clearly, decoupling these two measures may greatly help in the
>  >>>    > design of congestion control algorithms, since the congestion
>  >>>    > information becomes explicitly available (through, say, ECN).
>  >>>    > However, even with the use of ECN, full decoupling is not
>  >>>    > possible. The reason is that ECN packets themselves might be
>  >>>    > lost if sudden congestion takes place.
>  >>>    >
>  >>>    > Another point is regarding your argument that controlling
>  >>>    > "elephants" will result in low queue levels and hence "mice"
>  >>>    > will be able to run quickly through the network. While this
>  >>>    > argument seems quite sensible, at least to me, it is problematic
>  >>>    > if you imagine the arrival of many such mice - say a mice
>  >>>    > invasion. Will the ECN marking rate/scheme used for signaling
>  >>>    > the elephants be enough to proactively avoid congestion from
>  >>>    > this mice invasion? I think that, at least, this is an important
>  >>>    > part of the puzzle that should not be taken lightly. In other
>  >>>    > words, the so-called *transient* effect due to the mice
>  >>>    > population has to be accommodated and handled as carefully as
>  >>>    > the elephants' *steady state* effect.
>  >>>    >
>  >>>    > Finally, I'd like to add one more point: that the same level of
>  >>>    > decoupling that you propose can still be achieved using AQM
>  >>>    > without ECN. In one context, RED can be viewed as a controller
>  >>>    > that estimates the state and acts upon the estimate. The average
>  >>>    > queue size is taken as a measure of the state and the feedback
>  >>>    > signal is a function of the state estimate. However, the average
>  >>>    > queue size need not be the only measure of congestion. Indeed,
>  >>>    > some recent work has suggested measuring the arrival rate
>  >>>    > directly (using some filter to smooth out transients) and using
>  >>>    > this as the measure of congestion. In some sense, such schemes
>  >>>    > attempt to achieve the rightful objective you mentioned: decide
>  >>>    > whether demand exceeds capacity. Both approaches (queue-based or
>  >>>    > rate-based) have some problems that are too involved to detail
>  >>>    > here. If the AQM router uses a rate-based measure of congestion
>  >>>    > and drops packets when demand exceeds capacity (according to
>  >>>    > some reasonable algorithm), then we effectively achieve the same
>  >>>    > level of decoupling, and also in this case, the (average) buffer
>  >>>    > level can be kept low.
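
The controller view sketched here fits in a few lines; below is a simplified
illustration of classic RED (an EWMA state estimate plus a feedback law that
is linear in the estimate), omitting details such as the inter-mark count
correction and idle-time handling, with purely illustrative parameter values.

    W_Q = 0.002                  # EWMA weight for the average-queue estimate
    MIN_TH, MAX_TH = 5.0, 15.0   # thresholds, in packets
    MAX_P = 0.1                  # marking probability at MAX_TH

    avg_q = 0.0                  # the state estimate

    def red_feedback(instantaneous_q):
        """Update the state estimate and return the feedback signal."""
        global avg_q
        avg_q = (1.0 - W_Q) * avg_q + W_Q * instantaneous_q   # state estimation
        if avg_q < MIN_TH:
            return 0.0                                        # no signal
        if avg_q >= MAX_TH:
            return 1.0                                        # mark/drop everything
        return MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)   # linear in the estimate

Feeding a filtered arrival-rate estimate into the same structure, instead of
the average queue, gives the rate-based variant discussed above.
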
>  >>>    >
>  >>>    > In summary, in my opinion, ECN is a much more decent way of
>  >>>    > informing the sources about congestion than the "cruel" way of
>  >>>    > packet dropping. As mentioned by others, it also saves the
>  >>>    > effort of all the routers (if any) that processed the packet
>  >>>    > before it was finally dropped somewhere along the path, and also
>  >>>    > the effort of the source in detecting the packet loss. It has
>  >>>    > other virtues along the same lines that I am not listing here.
>  >>>    > It can be used to distinguish between congestion-related and
>  >>>    > non-congestion-related losses only to some degree of
>  >>>    > reliability. Other than that (and these are by no means minor
>  >>>    > enhancements), I don't think ECN is *essential* for the
>  >>>    > decoupling between congestion control and performance measures
>  >>>    > (e.g. queuing delay).
>  >>>    >
>  >>>    > Just a few thoughts. Sincerely,
>  >>>    >
>  >>>    > -Alhussein.
>  >>>    >
>  >>>    > On Fri, 26 Jan 2001, Steven Low wrote:
>  >>>    >
>  >>>    > >
>  >>>    > > > From: Steven Low <slow at ee.mu.OZ.AU>
>  >>>    > > > To: hari at lcs.mit.edu, slow at caltech.edu
>  >>>    > > > CC: cwcam at caltech.edu, ecn-interest at research.att.com,
>  >>>    > > >     end2end at isi.edu, rliu at yak.ugcs.caltech.edu,
>  >>>    > > >     siew at its.caltech.edu, wch at its.caltech.edu
>  >>>    > > >
>  >>>    > > >
>  >>>    > > > Hi Hari,
>  >>>    > > >
>  >>>    > > > I completely agree that there are unresolved issues with the
>  >>>    > > > third approach (drastically reduce buffer overflows so that
>  >>>    > > > losses become primarily due to wireless effects), and you
>  >>>    > > > nicely touch upon several of them.   But I'd like to make
>  >>>    > > > two philosophical points about ECN & congestion control
>  >>>    > > > first (which I hope belong to this list at least
>  >>>    > > > peripherally).
>  >>>    > > >
>  >>>    > > > I think the approach of congesting the network in order to
>  >>>    > > > obtain congestion information, as the current TCP does,
>  >>>    > > > which is necessary without ECN, becomes unnecessary with
>  >>>    > > > ECN.  With AQM, we can decouple congestion measure &
>  >>>    > > > feedback from performance measures such as loss, delay or
>  >>>    > > > queue length.   Then, 'congestion' means 'demand exceeds
>  >>>    > > > supply' and the congestion signal curbs demand, but the
>  >>>    > > > queue can be controlled in a way that maintains good
>  >>>    > > > performance such as low loss & delay.   Whether REM can
>  >>>    > > > succeed in doing this is a separate matter, but I think this
>  >>>    > > > is the approach we ought to take in designing our congestion
>  >>>    > > > control.  Alternatively, when we couple the congestion
>  >>>    > > > measure with the performance measure, 'congestion'
>  >>>    > > > necessarily means 'bad performance' such as high loss or
>  >>>    > > > delay, and we do not have the *option* (even if we have the
>  >>>    > > > means) of maintaining low loss & delay in times of
>  >>>    > > > congestion (i.e. when new sources join or capacity drops).
>  >>>    > > > In other words, when there are more sources, loss or delay
>  >>>    > > > must be increased in order to produce high enough signal
>  >>>    > > > intensity for sources to further cut back their rates;
>  >>>    > > > moreover the signal intensity must stay high not only during
>  >>>    > > > the transient when new sources first start but also
>  >>>    > > > afterwards.
>  >>>    > > >
>  >>>    > > > REM tries to fix this, not through the exponential form of
>  >>>    > > > its marking probability function, but by using a different
>  >>>    > > > congestion measure and update rule that maintains high
>  >>>    > > > utilization and low queue in equilibrium.   Again, there can
>  >>>    > > > be alternative ways of achieving this, but I think this is
>  >>>    > > > what we should aim for.  And to achieve this it is critical
>  >>>    > > > that we decouple the congestion measure from the performance
>  >>>    > > > measure.
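
For reference, the exponential form being referred to looks roughly like the
sketch below (in the spirit of the REM paper cited later in the thread; the
constant is an illustrative choice).  Its convenience is that marks compose
along a path, so the end-to-end marking probability reflects the sum of the
link prices.

    PHI = 1.1   # any constant > 1

    def mark_probability(price):
        return 1.0 - PHI ** (-price)

    def end_to_end_mark_probability(link_prices):
        # Probability of being marked at least once along the path.
        unmarked = 1.0
        for p in link_prices:
            unmarked *= PHI ** (-p)      # survives this link unmarked
        return 1.0 - unmarked            # = 1 - PHI**(-sum(link_prices))

    print(end_to_end_mark_probability([2.0, 5.0, 1.5]))   # equals ...
    print(mark_probability(8.5))                          # ... this
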
>  >>>    > > >
>  >>>    > > > The second philosophical point is an interesting implication
>  >>>    > > > of the recent extensive work on heavy-tailed traffic and its
>  >>>    > > > origin.  It implies that the mice-elephant mix (i.e.
>  >>>    > > > most files are small but most packets belong to long files)
>  >>>    > > > that characterizes current traffic may be a permanent and
>  >>>    > > > essential feature, not an artifice of current applications
>  >>>    > > > or user behavior.   End-to-end congestion control, properly
>  >>>    > > > designed, can be an ideal mechanism in such an environment,
>  >>>    > > > where elephants (that care about bandwidth) are controlled
>  >>>    > > > to maximally utilize the network in such a way that leaves
>  >>>    > > > the queues close to empty, so that mice (that are delay
>  >>>    > > > sensitive) can fly through the network with little delay.
>  >>>    > > > Again, this requires a new TCP/AQM strategy that achieves
>  >>>    > > > high utilization + low queue, and ECN (or explicit
>  >>>    > > > notification) helps.
>  >>>    > > >
>  >>>    > > > A common objection to end-to-end congestion control is that
>  >>>    > > > most TCP connections are short and hence end-to-end
>  >>>    > > > congestion control is ineffective.  I believe the
>  >>>    > > > observation is correct but not the conclusion.  Since HTTP
>  >>>    > > > uses TCP and web files have a mice-elephant mix, most TCP
>  >>>    > > > connections are therefore mice, which indeed should not be
>  >>>    > > > the primary object of end-to-end control.  End-to-end
>  >>>    > > > control should target elephants, not mice.  Mice suffer
>  >>>    > > > large delay currently, not because they are end-to-end
>  >>>    > > > controlled, but because the current congestion control (even
>  >>>    > > > just of elephants) can produce large queues in the path of
>  >>>    > > > mice.
>  >>>    > > >
>  >>>    > > > So, with a TCP/AQM strategy that maintains high utilization
>  >>>    > > > and low queue in equilibrium (regardless of the number of
>  >>>    > > > sources), the buffer is used *only* to absorb *transient*
>  >>>    > > > bursts.  This can be very different from a scheme that uses,
>  >>>    > > > say, queue length to measure congestion; with such a scheme,
>  >>>    > > > we do not have control over the equilibrium value of the
>  >>>    > > > queue - it is determined solely by the network topology and
>  >>>    > > > #sources and hence can be high depending on the scenario.
>  >>>    > > > When queues are always high, they do not have reserve to
>  >>>    > > > handle bursts.   But when queues are always low, I think
>  >>>    > > > bursts can be much better handled.
>  >>>    > > >
>  >>>    > > > So much for such vague philosophical discussions.  Since
>  >>>    > > > this is getting too long, I'd defer discussion on the unresolved
>  >>>    > > > issues with the third approach to some other time (except to
>  >>>    > > > point out that one big challenge is the heterogeneity of
>  >>>    > > > routers during transition when some routers mark packets
>  >>>    > > > and some drop packets to indicate congestion).  Btw, I don't
>  >>>    > > > think the three approaches are mutually exclusive and can't
>  >>>    > > > complement each other.
>  >>>    > > >
>  >>>    > > > Steven
>  >>>    > > >
>  >>>    > > > ____________________________________________________________
>  >>>    > > > Steven Low,    Assoc Prof of CS and EE
>  >>>    > > > Caltech, Pasadena, CA91125
>  >>>    > > > Tel: +1 626 395 6767 Fax: +1 626 792 4257
>  >>>    > > > slow at caltech.edu
>  >>>    > > > netlab.caltech.edu
>  >>>    > > >
>  >>>    > > > >From owner-ecn-interest at research.att.com  Sat Jan 27 08:02:12 2001
>  >>>    > > > >Delivered-To: ecn-interest at research.att.com
>  >>>    > > > >X-url: http://nms.lcs.mit.edu/~hari/
>  >>>    > > > >To: slow at caltech.edu
>  >>>    > > > >Cc: ecn-interest at research.att.com, cwcam at caltech.edu,
>  >>>    > > > >   wch at its.caltech.edu, siew at its.caltech.edu,
>  >>>    > > > >   rliu at yak.ugcs.caltech.edu
>  >>>    > > > >Subject: Re: RED-->ECN
>  >>>    > > > >Mime-Version: 1.0
>  >>>    > > > >Date: Fri, 26 Jan 2001 15:56:20 -0500
>  >>>    > > > >From: Hari Balakrishnan <hari at breeze.lcs.mit.edu>
>  >>>    > > > >
>  >>>    > > > >
>  >>>    > > > >Steven,
>  >>>    > > > >
>  >>>    > > > >REM is an interesting idea for using ECN, and I rather like
>  >>>    > > > >it from a research standpoint because it doesn't have
>  >>>    > > > >discontinuities (cf. RED) that make analysis harder.
>  >>>    > > > >However, I'm generally skeptical that any scheme can be
>  >>>    > > > >shown to eliminate essentially all buffer overflow losses
>  >>>    > > > >under _all_ conditions (offered load), and yet accommodate
>  >>>    > > > >bursts and provide reasonably low delays... especially when
>  >>>    > > > >not all offered traffic is reacting, or reacting in
>  >>>    > > > >different ways from multiplicative-decrease.  Even a small
>  >>>    > > > >fraction of unresponsive traffic may make life problematic.
>  >>>    > > > >
>  >>>    > > > >Some years ago, I found it pretty hard to tune RED for
>  >>>    > > > >this, to enhance my ELN scheme.  REM may be more promising,
>  >>>    > > > >but my gut feeling (as a network engineer) tells me that it
>  >>>    > > > >wouldn't be prudent to make such implicit deductions about
>  >>>    > > > >loss causes in practice...
>  >>>    > > > >
>  >>>    > > > >Hari
>  >>>    > > > >
>  >>>    > > > >On Fri, 26 Jan 2001 12:34:52 PST, you wrote:
>  >>>    > > > >
>  >>>    > > > >> [Sorry for the previous broken msg...]
>  >>>    > > > >>
>  >>>    > > > >> Hi Saverio,
>  >>>    > > > >>
>  >>>    > > > >> Another point I'd like to add is that the addition of ECN
>  >>>    > > > >> may open up new opportunities for network control, some
>  >>>    > > > >> of which we may not even envision now.  Without ECN we are
>  >>>    > > > >> stuck with using loss (or delay) as the only means to
>  >>>    > > > >> communicate between network and TCP sources.
>  >>>    > > > >> Let me give an example.
>  >>>    > > > >>
>  >>>    > > > >> There are two types of losses in a wireless environment:
>  >>>    > > > >> 1. due to congestion (e.g. buffer overflow), and
>  >>>    > > > >> 2. due to wireless effects (handoffs, fast fading,
>  >>>    > > > >>    interference etc).
>  >>>    > > > >> One problem with TCP over wireless links is that TCP
>  >>>    > > > >> cannot differentiate between the two; it essentially
>  >>>    > > > >> assumes all losses are due to congestion and reduces its
>  >>>    > > > >> rate.  Most of the currently proposed solutions are based
>  >>>    > > > >> on two ideas.
>  >>>    > > > >>
>  >>>    > > > >> The first idea is to hide type 2 (wireless) losses from
>  >>>    > > > >> the TCP source, so it sees only congestion losses and
>  >>>    > > > >> reacts properly.  Examples are various local recovery
>  >>>    > > > >> schemes, snoop, split TCP etc.
>  >>>    > > > >> The second idea is to inform the TCP source which of the
>  >>>    > > > >> two types a loss belongs to, so that TCP can react
>  >>>    > > > >> properly; e.g. ELN schemes.
>  >>>    > > > >>
>  >>>    > > > >> Availability of ECN allows a third approach: to eliminate
>  >>>    > > > >> type 1 (congestion) losses, so that the TCP source only
>  >>>    > > > >> sees wireless losses and therefore knows how to react.
>  >>>    > > > >> But we still need to measure and feed back 'congestion'
>  >>>    > > > >> so that sources know to reduce their rates when new
>  >>>    > > > >> sources join or capacity drops.   By 'congestion', I
>  >>>    > > > >> don't mean 'high loss', but simply 'demand exceeds
>  >>>    > > > >> supply', so everyone should reduce their demand.   Since
>  >>>    > > > >> buffer overflow is now eliminated (assume we can do this,
>  >>>    > > > >> see below), we need a different mechanism to measure and
>  >>>    > > > >> to feed back this 'congestion' information.   Here is
>  >>>    > > > >> where ECN comes in.   When we decouple the
>  >>>    > > > >> measure+feedback of congestion from loss, we must have
>  >>>    > > > >> ECN to provide the feedback.
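
A toy sketch of a sender under this third approach (not real TCP; the class
and its window arithmetic are simplified illustrations): an ECN echo triggers
a multiplicative back-off, while a loss, presumed to be wireless corruption
once AQM keeps buffers from overflowing, triggers only retransmission.

    class ToySender:
        """Not real TCP: a toy window adjuster for the 'third approach'."""

        def __init__(self):
            self.cwnd = 10.0                 # congestion window, in packets

        def on_ack(self, ecn_echo):
            if ecn_echo:                     # router marked: genuine congestion
                self.cwnd = max(1.0, self.cwnd / 2.0)
            else:                            # plain ACK: congestion avoidance
                self.cwnd += 1.0 / self.cwnd

        def on_loss(self):
            # With buffer overflow (nearly) eliminated by AQM+ECN, a loss is
            # presumed to be wireless corruption: retransmit, keep the window.
            pass
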
>  >>>    > > > >>
>  >>>    > > > >> Now, can we possibly keep the queue small (greatly reduce
>  >>>    > > > >> buffer overflow) *yet the link highly utilized*?   Recent
>  >>>    > > > >> research seems to provide reasons to be optimistic.
>  >>>    > > > >> One example is in our paper:
>  >>>    > > > >>      REM: Active Queue Management
>  >>>    > > > >> on our website:
>  >>>    > > > >>      netlab.caltech.edu
>  >>>    > > > >>
>  >>>    > > > >> Steven
>  >>>    > > > >>
>  >>>    > > > >>
>  >>>    > > > >>
>  >>>    > > > >>
>  >>>    > > > >> >Date: Fri, 26 Jan 2001 12:35:44 -0500
>  >>>    > > > >> >From: "K. K. Ramakrishnan" <kk at teraoptic.com>
>  >>>    > > > >> >To: Saverio Mascolo <mascolo at poliba.it>
>  >>>    > > > >> >Cc: ecn-interest at research.att.com
>  >>>    > > > >> >
>  >>>    > > > >> >Saverio,
>  >>>    > > > >> >I am glad you see ECN as an evolution from RED.
>  >>>    > > > >> >This is our motivation also:
>  >>>    > > > >> >to have ECN incorporated and deployed with an
>  >>>    > > > >> >Active Queue Management scheme.
>  >>>    > > > >> >
>  >>>    > > > >> >However, it is difficult to agree with the other
>  >>>    > > > >> >observations you make:
>  >>>    > > > >> >if congestion were only caused by "one packet", as you
>  >>>    > > > >> >say, then one might wonder why we need to actually do a
>  >>>    > > > >> >whole lot to either react to or avoid congestion.
>  >>>    > > > >> >Unfortunately, that isn't the case.
>  >>>    > > > >> >
>  >>>    > > > >> >If you look at a congestion epoch, there are likely to
>  >>>    > > > >> >be multiple packets, potentially from multiple flows,
>  >>>    > > > >> >that are impacted: either being marked or dropped.
>  >>>    > > > >> >ECN helps substantially in not dropping packet*s* - and
>  >>>    > > > >> >especially when the window size is small for those flows
>  >>>    > > > >> >that have their packets marked, it helps by not having
>  >>>    > > > >> >them suffer a time-out.
>  >>>    > > > >> >
>  >>>    > > > >> >Further: the amount of complexity in either the router
>  >>>    > > > >> >(especially in the router) or the end-system is not
>  >>>    > > > >> >significant. I know that may be a matter of opinion.
>  >>>    > > > >> >But in a router, the change is so minimal that I haven't
>  >>>    > > > >> >heard anyone complain about the complexity of
>  >>>    > > > >> >implementation.
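
A sketch of how small the router-side change is (illustrative Python, not any
real router's code; the field names follow the ECN bits but the API is made
up): the AQM decision is computed exactly as before, and only the action
taken on an ECN-capable packet differs.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        ect: bool = False    # ECN-Capable Transport (set by the sender)
        ce: bool = False     # Congestion Experienced (set by a router)

    def enqueue(packet, queue, aqm_signal):
        """aqm_signal is the same mark/drop decision the AQM already makes;
        only the action taken on it changes."""
        if aqm_signal:
            if packet.ect:
                packet.ce = True     # mark instead of dropping
            else:
                return               # legacy (non-ECN) traffic is still dropped
        queue.append(packet)

On the end-system side the corresponding change is to echo the CE mark back
to the sender, which then reacts as it would to a single lost packet.
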
>  >>>    > > > >> >
>  >>>    > > > >> >There is no violation of layering at all. I don't see
>  >>>    > > > >> >why you say so.
>  >>>    > > > >> >
>  >>>    > > > >> >   K. K. Ramakrishnan
>  >>>    > > > >> >Saverio Mascolo wrote:
>  >>>    > > > >> >
>  >>>    > > > >> >> Hi All, I see ECN as an evolution of RED. Basically
>  >>>    > > > >> >> ECN saves one packet, which is the packet that RED
>  >>>    > > > >> >> would drop to signal congestion, by setting a bit in
>  >>>    > > > >> >> the header.  Thus, to save just ONE PACKET, ECN
>  >>>    > > > >> >> introduces complexity in routers, in the protocol,
>  >>>    > > > >> >> violation of layering, etc... For these reasons I
>  >>>    > > > >> >> don't think that ECN would give an improvement over
>  >>>    > > > >> >> RED that is commensurate with its complexity.  Is
>  >>>    > > > >> >> there any point that I am missing?  Saverio
>  >>>    > > > >> >>
>  >>>    > > > >> >> http://www-dee.poliba.it/dee-web/Personale/mascolo.html
>  >>>    > >
>  >>>    > > --
>  >>>    > >
__________________________________________________________________
>  >>>    > > Steven Low, Assoc Prof of CS & EE
>  >>>    > > slow at caltech.edu                      netlab.caltech.edu
>  >>>    > > Tel: (626) 395-6767                   Caltech MC256-80
>  >>>    > > Fax: (626) 792-4257                   Pasadena CA 91125
>  >>>    > > _______________________________________________
>  >>>    > > end2end-interest mailing list
>  >>>    > > end2end-interest at postel.org
>  >>>    > > http://www.postel.org/mailman/listinfo/end2end-interest
>  >>>    > >
>  >>>
>  >>>    --
>  >>>    __________________________________________________________________
>  >>>    Steven Low, Assoc Prof of CS & EE
>  >>>    slow at caltech.edu              netlab.caltech.edu
>  >>>    Tel: (626) 395-6767           Caltech MC256-80
>  >>>    Fax: (626) 792-4257           Pasadena CA 91125
>  >>>
>  >>>
>  >>>
>  >>>
>  >>
>
>  cheers
>
>    jon
>





