[e2e] First rule of networking: don't make end-to-end promises you

Cannara cannara at attglobal.net
Fri Apr 23 20:56:12 PDT 2004


You know, David, it's really not helpful to misrepresent what others say.  For
instance:

"Cannara's basic idea occurs to many sophomores - that packet errors    
should cause the source to send with a *larger* window (that's why he   keeps
saying it is bad to "back off" when a packet is lost)."

I never said that, as everyone else knows and you seem unable to admit.  I
said that TCP slows down when it need not.  That's all.  I didn't say a sender
should increase its rate or window when a packet is lost to transmission
errors.  Nor is anyone suggesting:

"If you have only ECN, but don't allow packets to be dropped on congestion,
the network will still go into congestion collapse."  

Certainly, if only because those of us who understand router code know that
dropping has to happen when congestion occurs -- the congestion actually
occurs inside the router, absent any policy that restricts I/O below physical
limits.  In this sense, the congested router(s) always keep the network from
getting out of control, because there's no physical way to do more.
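
To make this concrete, here's a minimal sketch of a tail-drop output port --
the 64-packet limit and the names are invented for illustration, not taken
from any real router's code:

# Illustrative only: a router output port modeled as a finite FIFO.
# When arrivals exceed what the line can drain, the queue fills and the
# port has no choice but to drop -- the congestion happens *here*.
from collections import deque

QUEUE_LIMIT = 64  # packets; an arbitrary buffer size for this sketch

class OutputPort:
    def __init__(self, limit=QUEUE_LIMIT):
        self.queue = deque()
        self.limit = limit
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.queue) >= self.limit:
            self.drops += 1      # physical limit reached: tail drop
            return False
        self.queue.append(pkt)
        return True

    def transmit_one(self):
        # Drain one packet onto the wire, if any are queued.
        return self.queue.popleft() if self.queue else None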

Next, on TCP Window sizes: those are end-to-end statements of capability --
receive capability.  The network may not be able to handle them at the rate
the sender's interface can deliver into the net via its hardware and its Send
Window, but then something like ECN can adjust that, even when there is no
loss.  That's why wrongly triggering backoff in the absence of any congestion
signal is a poor substitute -- yet it's where we typically are now.
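
A toy illustration of that difference, assuming the usual halve-on-signal
response (the window numbers are made up):

# Toy sender response: the only point is that an ECN mark lets the sender
# slow down without any packet having been dropped, while a loss-driven
# sender also halves on error losses that signal no congestion at all.
def on_ack(cwnd, ecn_echo=False, loss=False):
    """Return the next congestion window, in segments."""
    if loss or ecn_echo:
        return max(1, cwnd // 2)   # multiplicative decrease
    return cwnd + 1                # simple additive increase (sketch)

cwnd = 32
cwnd = on_ack(cwnd, ecn_echo=True)  # congestion signaled, nothing lost -> 16
cwnd = on_ack(cwnd, loss=True)      # a bit-error loss also halves it -> 8,
                                    # though no queue anywhere was full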

Further, when you say:

"the optimum steady-state
> operating point for a network would occur if the slowest link on every
> active end-to-end path has exactly one packet in its buffer ready to go
> when it completes the packet in progress.   That means that on every
> end-to-end flow, there is an end-toi-end window of at most one packet per
> hop, and less than that if there is congestion."

you seem to be overlooking the fact that the best "window" is not just what's
ready to be sent at an interface, but also includes what's in flight on any
hop (un-ACKed), which could have a long delay, say from a satellite to a
customer.
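
To put a number on "in flight", here's the usual bandwidth*delay arithmetic,
with an invented 10 Mb/s path over a geostationary satellite:

# Bandwidth-delay product: how much un-ACKed data the path itself holds.
# Figures are illustrative, not measurements of any particular link.
link_rate_bps = 10_000_000   # 10 Mb/s bottleneck
rtt_s = 0.6                  # ~600 ms round trip via geostationary satellite

bdp_bytes = link_rate_bps / 8 * rtt_s
print(f"{bdp_bytes / 1024:.0f} KB must be in flight just to fill the path")
# -> roughly 732 KB, far beyond a default 64 KB window without window scaling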

Then, your statement starting: "Since the window includes the source and
target buffering..." seems to confuse the clear distinction between the
sender's Send Window and the receiver's Receive Window.  The "source and
target buffering" don't combine in such a simple way, because most stacks have
configured maxima for the Send Window, which is typically small compared to
the buffering the transport may offer via an API to a sending application. 
The initial Receive Window is usually indeed a function of the receiver's
buffer allocation.
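
For what it's worth, the configured maxima I mean are visible through an
ordinary sockets API; a quick probe using Python's standard socket module,
purely as an illustration:

# Query the OS-configured send and receive buffer sizes for a TCP socket.
# The advertised (initial) Receive Window derives from the receive buffer,
# while the Send Window is further limited by what the peer advertises and
# by the congestion window -- two different things.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"default send buffer:    {sndbuf} bytes")
print(f"default receive buffer: {rcvbuf} bytes")
s.close()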

Finally, your remark: "It's a strange idea that is based on a theory that
keeping the network buffers as full as possible is a "good thing" for goodput
- despite the fact that it pessimizes end-to-end latency due to queueing
delay" is mysterious.  Having the "network buffers" full isn't the same as
having the round-trip delay (RTD) fully utilized in carrying packets, which
is, in fact, the optimal case.  Who has suggested keeping network buffers
full without keeping paths full?  Of course, at the moment of congestion,
they're indeed full somewhere.  Off to look up "pessimizes".  :]
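
The distinction is easy to quantify with a back-of-the-envelope comparison
(all figures invented):

# Keeping the *path* full vs. keeping the *buffers* full.
# Assume a 10 Mb/s bottleneck and 1500-byte packets.
rate_Bps = 10_000_000 / 8    # bottleneck rate in bytes per second
pkt_bytes = 1500

# One packet queued behind the one in service (the pipelining ideal):
delay_ideal_ms = pkt_bytes / rate_Bps * 1000
# A 256-packet buffer kept standing full at the bottleneck:
delay_full_ms = 256 * pkt_bytes / rate_Bps * 1000

print(f"queueing delay, one packet waiting: {delay_ideal_ms:.1f} ms")
print(f"queueing delay, buffer kept full:   {delay_full_ms:.0f} ms")
# Goodput is the same once the link is saturated; only the latency
# through the control loop differs.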

Alex


"David P. Reed" wrote:
> 
> At 01:38 PM 4/23/2004, Naidu, Venkata wrote:
> >   I got exactly the opposite supportive argument when I read
> >   RFC 2884 (http://www.ietf.org/rfc/rfc2884.txt). This RFC
> >   clearly states that the explicit congestion signal is efficient
> >   in most of the situations.
> 
> That happens to be exactly what I was saying, *not* the opposite.   NOTE
> ECN (which was called EARLY congestion notification by some, because it
> suggests congestion at a lower threshold than buffer exhaustion) is not
> what Alex Cannara is referring to - which is a signal that a packet was
> "dropped because of congestion".   ECN is a signal that signals impending
> congestion before the congestion gets bad enough that packets must be
> dropped, thus shortening the end-to-end control loop when it succeeds in
> getting through.
> 
> But ECN's effective functioning as a control system depends on the fallback
> that packets *will* be dropped if congestion occurs rapidly enough that ECN
> can't slow the source, or if ECN packets are lost due to errors,  and when
> they are dropped, there are no floods of congestion-amplifying packets
> delivered to the target or the source.   If you have only ECN, but don't
> allow packets to be dropped on congestion, the network will still go into
> congestion collapse.
> 
> ECN (and RED, and head-dropping) are elegant ideas that build on the basic
> idea that packet drop = congestion ==> backoff.
> 
> Cannara's basic idea occurs to many sophomores - that packet errors should
> cause the source to send with a *larger* window (that's why he keeps saying
> it is bad to "back off" when a packet is lost).  It's a strange idea that
> is based on a theory that keeping the network buffers as full as possible
> is a "good thing" for goodput - despite the fact that it pessimizes
> end-to-end latency due to queueing delay, thus increasing the effective
> delay through the control loop.   The end-to-end performance might go up if
> you have errors but absolutely no congestion.  But if you have congestion,
> too, you don't get much value out of filling up the congested path - the
> path is already delivering all it can.
> 
> As I mentioned before, it's simple to see (based on understanding how
> pipelining works in pipelined processes) that the optimum steady-state
> operating point for a network would occur if the slowest link on every
> active end-to-end path has exactly one packet in its buffer ready to go
> when it completes the packet in progress.   That means that on every
> end-to-end flow, there is an end-to-end window of at most one packet per
> hop, and less than that if there is congestion.   The problem is that the
> TCP window also includes buffering for the target application, which may be
> on an overloaded system, which wants to "batch" lots of bits when it is
> ready to take them, and sometimes buffering for the source application (if
> the API is designed so that it offers to suck up lots of bits from the
> source so that they are ready to go while the source process is put to
> sleep).   Since the window includes the source and target buffering, it's
> tempting to let that stuff fill up the network switch and router buffers -
> where it causes congestion and screws up the control loop length so that
> the network gets out of control.


