[e2e] Is a control theoretic approach sound?
cannara at attglobal.net
Wed Jul 30 11:04:28 PDT 2003
Good comments. I'll just add a comment on how system "control theory" may or
may not be easily applied in packet land, particularly in TCP....
Any control system that performs well must have three good components: 1) a
difference detector (error sensor); 2) an amplifier to provide strong control
output; and 3) a filter. Taking the easy ones first, the output power &
bandwidth of the amplifier controls the "aggressiveness" mentioned and its
sensitivity (amplification) controls how close the output control can follow
the input (reduce the difference between a desired set point and actual). The
filter provides the ability to distinguish real control differences from noise
or spurious ones, such as atmospheric speckle in laser-guided weapons, side
echoes in radar systems, or just sensor-system thermal noise. The filter thus
explicitly controls the maximum rate at which corrections to the output can be
made, based on changes in the sensed difference input. The difference
detector is therefore a crucial part of any control system, because the
controller as a whole will follow its lead, no matter how wrong that may be.
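To make the three components concrete, here is a minimal sketch in Python of a filtered proportional controller: an error sensor, a gain stage standing in for the amplifier, and a first-order low-pass filter on the measurement. All names and constants are illustrative, not taken from any real stack.

```python
# Sketch of the three control components: error sensor (difference
# detector), amplifier (gain), and a first-order low-pass filter.
# Constants and names are illustrative only.

def make_controller(setpoint, gain=0.5, alpha=0.2):
    """Return a step function mapping a noisy measurement to a control output."""
    state = {"filtered": setpoint}      # filter state

    def step(measurement):
        # Filter: smooth the noisy measurement; alpha sets the time constant,
        # i.e. the maximum rate at which corrections can respond to input.
        state["filtered"] += alpha * (measurement - state["filtered"])
        # Difference detector: error between set point and filtered signal.
        error = setpoint - state["filtered"]
        # Amplifier: the gain determines how aggressively we correct.
        return gain * error

    return step

ctrl = make_controller(setpoint=10.0)
# A measurement well below the set point yields a positive correction,
# scaled down by the filter so one noisy sample cannot swing the output.
out = ctrl(4.0)
```

Note how a small alpha makes the controller sluggish but noise-immune, while a large alpha makes it fast but jumpy; that is exactly the smoothness/responsiveness trade-off discussed below.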
To allow a packet-transmission system to be well controlled, then, requires
building software analogs of the basic control components above and, hardest,
guaranteeing the best difference detection and filtering, so that control will
be aggressive and exerted with the right time constant(s). The result is then
the right transmission action, given the present (recent) info.
The problem we face, particularly with network paths and transports, is that
the ends are the control points and so must have accurate, timely information
to generate proper input difference signals to allow optimal control
decisions. Getting information to end processes to meet the requirements of
effective, perhaps optimal control, is not something that was designed into
protocols like TCP/IP very well. Thus, we don't know if a detected loss is
due to physical error or congestion drop on the path. We don't know if an ack
coming back is for the original packet we sent, or for the one we timed out
and resent, and so on.
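The retransmission ambiguity mentioned above is the one Karn's algorithm addresses: since an ack cannot be matched to the original segment or its retransmission, RTT samples from retransmitted segments are simply discarded. A toy sketch (data and names illustrative):

```python
# Sketch of Karn's rule for the retransmission ambiguity: RTT samples
# taken from segments that were retransmitted are thrown away, because
# the returning ack cannot be attributed to the original send or the
# resend. Sample values are made up for illustration.

def usable_rtt_samples(segments):
    """segments: list of (rtt_sample_seconds, was_retransmitted) tuples."""
    return [rtt for rtt, retx in segments if not retx]

samples = [(0.10, False), (0.45, True), (0.12, False)]
clean = usable_rtt_samples(samples)   # the 0.45 s ambiguous sample is dropped
```

The cost, of course, is that during sustained loss the sender gets no fresh RTT information at all, which is precisely the "poor difference detector" problem described above.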
To try to add control capability to something like TCP without having prompt,
accurate control inputs is not going to get far. This is why, as previous
emails have summarized, folks have, over the years, created improved
transports. It was good to see some of that discussion, as it was to hear
that the original intent in TCP/IP was to at least allow for, if not
encourage, alternative transports. Unfortunately, the bypassed need for
security now forces us (with TCP/IP) into poor compromises on performance, as
with VPNs, etc. Time for a "do-over", as kids are wont to have at a dead end.
Yunhong Gu wrote:
> On Wed, 30 Jul 2003, Panos Gevros wrote:
> > my assumption is that for any adaptive transport protocol the issue is to
> > discover the (subjective) "target" operating point (as quickly as possible)
> > and stay there (as close as possible) - tracking possible changes over time,
> > so one optimisation plane is
> > (smoothness, responsiveness)
> > another optimisation plane is
> > (capacity, traffic conditions)
> > and I think it is a fair assumption that there is no single scheme that
> > operates better than the rest over the entire space. (if there are claims to
> > the contrary any pointers would be greatly appreciated).
> > the question is at the boundaries of the ( Capacity, Traffic) space
> > particularly at the (hi, low) end of this space
> > *simple* (and appropriate) modifications to the existing TCP control mechanisms
> > (i.e. no RTT measurements and retransmission schemes, more aggressive slow start
> > and/or more aggressive AI in congestion avoidance)
> > could have the same effect in link utilisation and connection throughput.
> > I believe that this is possible but the problem with this approach is that it
> > is "TCP hostile".
> Well, I think deciding how "aggressive" the AI will be is not that
> *simple* a problem :) It is not true that the more aggressive, the better (even if
> the per flow throughput is the only objective), right?
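Gu's point can be illustrated with a toy AIMD loop over a fixed-capacity link: a larger additive-increase step reaches capacity faster, but it also overshoots and triggers loss events more often. All numbers here are illustrative, not a model of any real path.

```python
# Toy AIMD loop over a fixed-capacity link, illustrating why a larger
# additive-increase (AI) step is not automatically better: it probes
# faster but overshoots capacity, and hence loses, more often.
# Capacity, step sizes, and round count are illustrative.

def aimd_run(capacity, ai_step, beta=0.5, rounds=100):
    cwnd, losses = 1.0, 0
    for _ in range(rounds):
        if cwnd > capacity:          # overshoot: count a loss event
            cwnd *= beta             # multiplicative decrease
            losses += 1
        else:
            cwnd += ai_step          # additive increase per round

    return losses

gentle = aimd_run(capacity=50, ai_step=1)       # few loss events
aggressive = aimd_run(capacity=50, ai_step=10)  # many more loss events
```

In this toy model the aggressive sender spends its extra aggressiveness on repeatedly slamming into the capacity limit, which is why tuning the AI step is not the *simple* knob it first appears to be.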
> > Also my guess is that most of the complexity in "new" TCPs is because
> > implementors attempt to be "better" (by some measure) while remaining
> > "friendly" to the standard.
> Yes, I agree - this is a headache of a problem.
> > I have seen a TCP implementation which, in the case of the remote endpoint being
> > on the same network, allows a very high initial cwnd value at slow start -
> > solving all performance problems (in the absence of "social" considerations..
> > of course)
> > Wouldn't this be a much simpler answer to the problems of the "demanding
> > scientists who want to transfer huge data files across the world" (citing from
> > the article in the Economist magazine)?
> > ..in their case they know pretty much that the links they are using are in the
> > gigabit range and there are not many others using these links at the same time.
> But what if there is loss, especially continuous loss, during the bulk
> data transfer? No matter how large the cwnd is initially, it can decrease
> to 1 during the transfer, and then the problem arises again.
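Gu's closing point is easy to see with a little arithmetic: multiplicative decrease halves cwnd on each loss event, so even a very large initial window collapses after roughly log2(initial) losses. A hypothetical sketch (integer cwnd for simplicity; real stacks track it differently):

```python
# Sketch of why a big initial cwnd alone does not save a lossy transfer:
# each loss event halves the window, so cwnd shrinks geometrically and
# bottoms out regardless of where it started. Integer cwnd is a
# simplification for illustration.

def cwnd_after_losses(initial_cwnd, losses):
    cwnd = initial_cwnd
    for _ in range(losses):
        cwnd = max(1, cwnd // 2)     # multiplicative decrease, floor of 1
    return cwnd

# Even a window of 2**20 segments collapses to 1 after 20 loss events.
final = cwnd_after_losses(1 << 20, 20)
```

So the large-initial-cwnd trick only helps for short transfers on clean paths; on a long transfer with continuous loss, the control loop, not the starting point, dominates.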