[e2e] Fwd: Re: Question the other way round:

Joe Touch touch at isi.edu
Wed Nov 20 10:15:41 PST 2013


Forwarded for David Reed.

Joe (as list admin)


-------- Original Message --------
Subject: 	Re: [e2e] Question the other way round:
Date: 	Wed, 20 Nov 2013 09:57:28 -0500 (EST)
From: 	dpreed at reed.com
To: 	Ted Faber <faber at isi.edu>
CC: 	end2end-interest at postel.org, "Joe Touch" <touch at isi.edu>



(@Joe - admit if you like)

In information theory, one characterizes multi-channel systems (such as
networks) using a framing called the "achievable rate region" (ARR).   This
is a convex polytope in a space of P dimensions, where P is the number of
distinct pairs who might communicate.  (The assumption that all
dimensions are "independent" channels is a strong one - in real
networks, traffic is either causally or probabilistically correlated by
protocols at higher layers and by events outside the system.)

Thus P grows as N**2, where N is the number of terminals of the network.
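
To make the framing concrete, here is a minimal sketch in Python of one
simple case: a hypothetical 4-terminal network with fixed single-path
routes and made-up link capacities, where the ARR is the polytope
{r >= 0 : per-link load <= capacity} and membership can be tested directly.
Everything below (topology, routes, capacities) is an assumption for
illustration, not a description of any real network.

    # Sketch only: a fixed-routing ARR as {r >= 0 : per-link load <= capacity}.
    # Topology, routes, and capacities are all hypothetical.
    N = 4
    pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
    P = len(pairs)                             # P = N*(N-1), i.e. P grows as N**2

    capacity = {"A": 10e9, "B": 10e9}          # link capacities, bits/s
    routes = {p: ["A", "B"] for p in pairs}    # every pair crosses both links

    def achievable(rates):
        """True if the rate vector lies inside the polytope."""
        load = {link: 0.0 for link in capacity}
        for pair, r in rates.items():
            if r < 0:
                return False
            for link in routes[pair]:
                load[link] += r
        return all(load[link] <= capacity[link] for link in capacity)

    print(achievable({p: 0.5e9 for p in pairs}))  # 12 * 0.5 Gb/s = 6 Gb/s per link: True
    print(achievable({p: 1.0e9 for p in pairs}))  # 12 Gb/s per link > 10 Gb/s: False

Convexity is what drives the observation below: any weighted average of
achievable rate vectors is itself achievable.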

The architecture of the network affects the ARR, whether it is a
wireless network or a wired one.

What's useful about this framing is that it points out (because the ARR
is a convex polytope) that there is no single optimum, but there are
many, many reasonable operating points inside the polytope.

What is achievable may not be achieved.  Since it is a "rate" region, it
doesn't give you any information about how the optimal operating point
evolves over time - how fast the system can migrate from one
point to another in the achievable space.  (Control latencies are
crucial here.)

The other useful thing about this framing is that, given a set of desired
messages with latency requirements, one could in principle develop an
algorithm that uses the polytope (which can be described by its
vertices) to calculate a set of "operating points" that merely need to be
selected based on the traffic presented.  As in the standard "linear
programming" algorithm, the solution is always to choose a vertex.  The
list of vertices is small compared to the entire ARR, and no other
operating points are better, so one merely looks up a vertex
that satisfies all requirements and configures the network to use that
routing setup.
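
As a toy sketch of the vertex idea, here is a two-flow example using
scipy.optimize.linprog.  The topology, weights, and capacities are
invented for illustration; the point is only that maximizing a linear
objective over the rate polytope lands on a vertex.

    # Sketch: weighted-throughput LP over a made-up two-flow rate polytope.
    from scipy.optimize import linprog

    # flow 1 crosses links A and B; flow 2 crosses link B only
    c = [-2.0, -1.0]            # maximize 2*r1 + r2 (arbitrary weights), so minimize the negative
    A_ub = [[1, 0],             # link A carries only flow 1
            [1, 1]]             # link B carries both flows
    b_ub = [10.0, 15.0]         # link capacities, Gb/s
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x)                # [10., 5.] - a vertex of the polytope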

But there is a catch.  You have to have some algorithm that knows the
acceptable rates to deliver to each pair.

Unless you look at the forest, rather than the little tiny trees (the
switches), you can't do very much.

In particular, once you have calculated routing tables, your ARR shrinks
from the full polytope to a much smaller one.  If you have buffering
(especially large buffering), you can't move from one vertex of the ARR
to another vertex until the buffered packets have made it out of the
system (which means that there is an achievable "slew rate" between
operating configurations of the system).
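
A back-of-envelope version of that slew-rate bound, with made-up numbers:

    # Sketch: in-network buffering bounds how fast the operating point can move.
    # Both numbers below are assumptions for illustration.
    buffered_bytes = 12e6          # 12 MB queued across the fabric
    drain_rate_bps = 10e9          # 10 Gb/s available to drain those queues
    slew_time_s = buffered_bytes * 8 / drain_rate_bps
    print(f"~{slew_time_s * 1e3:.1f} ms before a new vertex is really in effect")  # ~9.6 ms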

In practice, what this all means is that while we can fully describe
congestion control, anyone who claims they have a practical, optimal
algorithm has either tailored the assumptions so that their algorithm
has an optimum in that special case, or is claiming that "all uses" of
the network share a common characteristic.

The latter claim is a complete hand-wave without data.  The idea that a
datacenter (like AWS) has "predictable" traffic patterns is truly
weird.   At the end of the day, AWS serves all comers, and deals with
incredible (and fractal) demands driven by end-user interactions.  And of
course, it has computers, storage, and application programs that
behave unpredictably.

*Even in datacenters* - which is what I've spent the last few years
working to improve, so I have a lot of experience - there is little or
no predictability, and huge potential for congestion.  We see it every
day in typical data centers on the fabrics that are shared and multiplexed.

There are a few principles that seem to work:

1) *Don't allow buffering in the network*.  Buffer at the edges.
Otherwise the ability to shift to a new operating point becomes so
slow that the system is always operating in a pessimal state.

2) Keep the number of hops small.

3) Try to keep the rates "balanced", which in practice means that all
links of the network should be capable of operating within a few dB
of the same rate.   (E.g., don't mix 1 Mb/s links with 40 Gb/s links
on a switch; a quick check is sketched after this list.)

4) Don't expect to operate at anything like the performance at a corner
of the ARR, because you can't change the operating point to another
corner quickly, and the intermediate operating points build up buffering
in the network, forcing it to operate far from the edges of the ARR.
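
As a rough check for principle 3, a minimal sketch (the example rates are
invented) of the few-dB balance test:

    # Sketch: spread between the fastest and slowest link, in dB.
    import math

    def rate_spread_db(rates_bps):
        return 10 * math.log10(max(rates_bps) / min(rates_bps))

    print(rate_spread_db([40e9, 10e9]))   # ~6 dB: within a few dB, workable
    print(rate_spread_db([40e9, 1e6]))    # ~46 dB: the 1 Mb / 40 Gb mismatch to avoid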



On Tuesday, November 19, 2013 10:14pm, "Ted Faber" <faber at ISI.EDU> said:

  > On 11/19/2013 10:15, Joe Touch wrote:
  > > On 11/19/2013 10:09 AM, Dave Crocker wrote:
  > >> Given the complete generality of the question that was asked, is there
  > >> something fundamentally deficient in the answer in:
  > >>
  > >> http://en.wikipedia.org/wiki/Congestion_control#Congestion_control
  > >>
  > >> ?
  > >>
  > >> In particular, I think its opening sentence is quite reasonable.
  > >
  > > I agree, but it jumps in assuming packets. Given packets, it's easy to
  > > assume that oversubscription is the natural consequence of avoiding
  > > congestion.
  >
  > Unless someone's edited it, you should read the first sentence again. I
  > see:
  >
  > > Congestion control concerns controlling traffic entry into a
  > > telecommunications network, so as to avoid congestive collapse by
  > > attempting to avoid oversubscription of any of the processing or link
  > > capabilities of the intermediate nodes and networks and taking
  > > resource reducing steps, such as reducing the rate of sending packets.
  >
  > I read the reference to packets as an example. And I would end the
  > sentence with "if necessary" to indicate that reducing resource
  > utilization is done only when needed (which it wouldn't be in a
  > non-oversubscribed system). Overall I think it reasonably encompasses
  > proactive congestion control.
  >
  > >
  > > But it isn't if you have a-priori known traffic patterns - as are
  > > increasingly common inside data centers, as well as for some past
  > > circuit use cases.
  >
  > There's a lot to explore, of course. When I was in graduate school some
  > nut was doing congestion control keeping average sending rates constant
  > and modulating the burstiness of the traffic by adjusting the time over
  > which the average rate was policed.
  >
  > It's a fascinating field, in which one can find the trees attractive to
  > the point of beaching the ship. The forest is at least as interesting,
  > especially if one is interested in economics. Alas, similar to
  > economics, it's also hard to do a real investigation at any scale anymore.
  >
  > --
  > Ted Faber
  > http://www.isi.edu/~faber PGP:
  > http://www.isi.edu/~faber/pubkeys.asc
  > Unexpected attachment on this mail? See
  > http://www.isi.edu/~faber/FAQ.html#SIG
  >
  >




