[e2e] Delivery Times .... was Re: Bandwidth, was: Re: Free Internet & IPv6
detlef.bosau at web.de
Sat Dec 29 17:10:11 PST 2012
Perhaps I'll try an answer in public :-)
On 28.12.2012 18:12, dpreed at reed.com wrote:
> As far as questioning the rule by controlling congestion within the
> network, how would you propose to do that without also signalling to
> the sources that they must back off? Somewhere a queue will fill.
To my understanding, and I appreciate correction here, we must carefully
distinguish between congestion/instability issues and performance issues.
A particular problem in TCP is that VJCC (and we are talking about VJCC,
although there are numerous flavours of it: Reno, Vegas, Veno, Laminar
TCP etc. etc. etc., the whole Yosemite park turned into PhD theses no
one will ever read, only to protect the heads of PhD students from the
cold, because the world suffers from global warming) does a bold
combination of congestion control, performance control and resource sharing.
Once a TCP flow has reached equilibrium, congestion control is quite
simply achieved: we must obey the conservation principle.
(So, first of all: turn off all rate-controlled TCP flavours. The
conservation principle is the key to stability, as the congavoid paper
correctly, not to say loud and clearly, states in the footnote on page
2. There is a terrible tongue twister, this "Ljapunov" word; however,
that is, to my understanding, the crucial point in the whole thing.)
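To make the conservation principle concrete, here is my own toy sketch (not from the congavoid paper itself) of a self-clocked sender: in equilibrium a new packet enters the network only when an ACK signals that an old one has left, so the number of packets in flight never grows. All parameters and names are illustrative assumptions.

```python
from collections import deque

def run_self_clocked(cwnd, total_packets, path_delay_ticks=3):
    """Toy self-clocked sender; returns the peak number of packets in flight.

    Conservation principle: after the initial window is launched, a new
    packet is injected only when an ACK removes an old one.
    """
    in_flight = deque()          # stores the tick at which each ACK arrives
    sent = acked = clock = 0
    while sent < cwnd and sent < total_packets:   # launch initial window
        in_flight.append(clock + path_delay_ticks)
        sent += 1
    max_in_flight = len(in_flight)
    while acked < total_packets:
        clock += 1
        while in_flight and in_flight[0] <= clock:
            in_flight.popleft()                   # one packet leaves...
            acked += 1
            if sent < total_packets:              # ...so one may enter
                in_flight.append(clock + path_delay_ticks)
                sent += 1
        max_in_flight = max(max_in_flight, len(in_flight))
    return max_in_flight

print(run_self_clocked(cwnd=4, total_packets=20))  # prints 4: never exceeds cwnd
```

However the timing works out, the in-flight count stays pinned at the window size; that invariant is what the footnote's stability argument rests on.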
Perhaps we have to distinguish two stories now.
First, and I'm not going to talk about that now, but it is important: of
course we have a problem when the system fails to reach equilibrium, or
when the equilibrium state is lost somehow and we must return to it. In
wireless networks this may be, e.g., a consequence of a sudden path change.
The other story is that we have to find the optimum workload for
equilibrium. The congavoid paper is extremely dense here and I still
have to understand many details. (I have been working through the
congavoid paper step by step for more than 10 years now, and each time I
return to it, I understand something new. It is surely one of the
densest texts I ever read.)
Now, the performance of a queuing system is determined by the workload.
(Not by the buffers, although we determine the workload indirectly via
buffers, limits and discarded packets.) And so the first problem is
that ONE workload determines TWO criteria, i.e. a system's throughput
and a system's sojourn time (w.r.t. TCP: the RTT) as well. So perhaps we
must decide which is the criterion of interest. A performance
p := throughput/RTT may be a reasonable tradeoff; however, in some cases
we might not want to use it.
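The tradeoff p := throughput/RTT is essentially Kleinrock's "power" metric. A minimal sketch for an M/M/1 queue (my own illustration; the rates are made-up numbers) shows how the single knob, the utilization rho, drives both throughput and sojourn time, and where their ratio peaks:

```python
def mm1_metrics(rho, mu=1.0):
    """Return (throughput, sojourn_time, power) for an M/M/1 queue.

    rho: offered load (0 <= rho < 1); mu: service rate.
    Throughput = lambda = rho * mu; sojourn time T = 1 / (mu - lambda).
    """
    lam = rho * mu
    sojourn = 1.0 / (mu - lam)
    return lam, sojourn, lam / sojourn

# Power lam / T = lam * (mu - lam) is a parabola in lam,
# maximized at rho = 0.5 for M/M/1:
best = max((mm1_metrics(r / 100)[2], r / 100) for r in range(1, 100))
print(best)  # prints (0.25, 0.5)
```

Pushing the workload past that point still raises throughput a little, but the sojourn time grows without bound, which is exactly the "two criteria, one workload" dilemma above.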
In VJCC, we assume a stationary system capacity, both for the heuristics
that determine the workload and for sharing the workload between
competing flows (AIMD, CWND determination, Little's theorem).
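The Little's-theorem part can be sketched in a few lines (my own illustration, with made-up link numbers): the equilibrium window VJCC aims for is packets in flight = bottleneck rate x RTT, i.e. the bandwidth-delay product.

```python
def equilibrium_cwnd(bottleneck_bps, rtt_s, mss_bytes=1460):
    """Bandwidth-delay product expressed in MSS-sized segments.

    Little's theorem, L = lambda * W: the average number in the "system"
    (packets in flight) equals throughput times sojourn time (RTT).
    """
    bdp_bytes = (bottleneck_bps / 8) * rtt_s
    return bdp_bytes / mss_bytes

# Illustrative path: 10 Mbit/s bottleneck, 600 ms satellite-like RTT:
print(round(equilibrium_cwnd(10e6, 0.6)))  # prints 514 (segments)
```

Note that both inputs, the bottleneck rate and the RTT, are assumed stationary; if either one wanders, the "right" CWND wanders with it.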
So, the first problem we will certainly encounter in mobile systems is:
in mobile systems, the system's capacity is anything but stationary.
So it would be no surprise if the "determined" workload jumps from one
inadequate size to another. (And as we know from queuing systems, an
inadequate workload may cause the performance to decrease dramatically.
In particular, as David stated more than once, it may cause unacceptable
sojourn times.)
So, I'm not fully convinced that this is a matter of backing off. But my
first approach (which I consider for my research proposal) is path
segmentation, in order to
- break up the workload into portions we can handle and into segments we
can handle, and, MOST IMPORTANT,
- use adequate algorithms for workload determination. My very strong
conjecture is that the workload determination scheme in VJCC will
definitely not hold on paths which contain one or more mobile segments,
because of a non-stationary capacity.
- another problem is the combination of workload determination and
resource sharing, particularly when we combine traffic and cross traffic
with perhaps completely different round trip times, and therefore
completely different time scales for reacting to load and path changes.
- and there are lots of other aspects here: the well-known "mice and
elephants" issue, the issue of non-greedy sources, and so on.
However, I strongly think that all these issues will finally convince us
that we must break the problem up into pieces, with the particular goals to
- first find local solutions for local problems (Jerry Saltzer's paper), and
- keep local problems local (same paper).
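The time-scale point above, that flows with different RTTs probe the path at completely different speeds, can be sketched with a toy loss-free additive-increase ramp (my own illustration; all numbers are made up):

```python
def segments_after(duration_ms, rtt_ms, start_cwnd=10):
    """CWND in segments after `duration_ms` of pure additive increase.

    Standard AIMD additive increase: +1 segment per round trip.
    """
    return start_cwnd + duration_ms // rtt_ms

# Same wall-clock minute, very different probing speeds:
print(segments_after(60_000, 40))   # 40 ms LAN path:  prints 1510
print(segments_after(60_000, 600))  # 600 ms sat path: prints 110
```

When such flows share a bottleneck, the short-RTT flow reclaims freed capacity an order of magnitude faster, which is exactly the mismatch of reaction time scales described above.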
The more I think about it, the more I'm convinced that it is simply
nonsense to find one and only one CWND for a whole TCP path consisting
of perhaps 50 or 60 links, with an RTT of 600 ms because of an
overloaded geostationary satellite link. When there is a local capacity
problem because of some cross traffic on link 34, we start (depending on
personal preference) the whole AIMD or BIC or CUBIC scheme to lessen a
60 MByte CWND for about 500 bytes.
(Next time we want to catch a flea, we could even call the national guard.)
In many cases we would not even notice the "local load issue"; in other
cases there is a congestion drop and the whole (completely oversized)
machinery starts to work, with devastating consequences. And please
note: the capacity issue on link 34 is not a congestion issue. It is a
capacity issue, because it is a matter of resource sharing to adjust the
LOCAL (!!!!) part of a flow's CWND here. A back-off strategy will
perhaps not work; in particular, it will not react to a local problem in
a timely manner and with appropriate dynamics.
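The flea-versus-national-guard mismatch is just arithmetic; a sketch with the (deliberately extreme) numbers from the example above, which are illustrative, not measured:

```python
# One end-to-end multiplicative decrease versus a tiny local shortfall.
cwnd_bytes = 60 * 1024 * 1024   # 60 MByte end-to-end window (example above)
local_shortfall = 500           # bytes of excess on the one congested link

after_md = cwnd_bytes // 2      # standard multiplicative decrease: halve it
removed = cwnd_bytes - after_md
overshoot = removed / local_shortfall

print(after_md)   # prints 31457280: the window after halving
print(overshoot)  # prints 62914.56: reaction is ~63000x the local problem
```

The back-off removes tens of megabytes from the whole path to cure a 500-byte problem on one link, and it does so on the 600 ms end-to-end time scale rather than the local one.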
Breaking up the flow into segments (and particularly breaking up the
workload into segments!!!!) would offer the possibility to fix the
problem locally, with adequate actions and system dynamics which are
certainly much easier to handle.
When we have a system with dozens of screws to turn, it is perhaps a
helpful idea not to use only one (and perhaps much too big) screwdriver,
which in addition does not turn one screw at a time but all screws at once.
From the system theory perspective, it will perhaps become very complex
to keep the system parts decoupled. And we know from the past that we
can encounter huge challenges from long-term dependencies; sometimes
people talked about "chaotic behaviour".
So please, these are just my 2 cents. In no case an attempt to present a
"solution", or even a part of one.
In a sense, this reminds me of the discussion of routing versus
bridging. "We could" connect many computers in the world to one
"billions of terabytes per second" Ethernet, if such a thing existed,
perhaps without a single router. I think we all agree that there are
extremely compelling reasons for not doing so.
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
detlef.bosau at web.de http://www.detlef-bosau.de
The nonsense that passes for knowledge around wireless networking,
even taught by "professors of networking" is appalling. It's the
blind leading the blind. (D.P. Reed, 2012/12/25)