[e2e] Bandwidth, was: Re: Free Internet & IPv6

Detlef Bosau detlef.bosau at web.de
Tue Dec 25 05:40:49 PST 2012


On 25.12.2012 04:19, dpreed at reed.com wrote:
>
> Good luck getting people to use the term "bitrate" instead of the 
> entirely erroneous term "bandwidth" that has developed since the 
> 1990's or so.
>

David, I do see the "custom" side of the problem - in the sense that 
using "bandwidth" here is a matter of custom.

> Indeed, bandwidth is now meaningless as a term, just as "broadband" 
> is.   Once upon a time, both referred to bounds on the frequency 
> components of the physical layer signal, both in wires (twisted pair, 
> coax, etc) and in RF.   The RF bandwidth of 802.11b DSSS modulation 
> was about 10 MHz, whereas the bitrate achieved was about 2 Mb/sec.  
> Now we use OFDM modulation in 802.11n, with bandwidths of 40 MHz more 
> or less, but bitrates of >> 40 Mb/sec.   (yes, that is mostly because 
> of 64-QAM, which encodes 6 bits on each subcarrier within an OFDM 
> "symbol").
>


You're talking about two different things which should be kept apart. 
One is the physical limitations of technologies. And I agree: we now 
have broad frequency ranges even for wireless channels, so that we can 
- in principle - achieve huge throughputs there. "In principle". Not 
long ago I encountered a classroom scenario where the students were to 
set up MANETs - and although the "bandwidth" was quite attractive, the 
achieved "throughput" wasn't. This was not a university scenario. And 
now the teachers are trying to explain.....


> What causes the 802.11n MAC protocol to achieve whatever bitrate it 
> achieves is incredibly complex.
>

I absolutely agree.

However, I don't agree with quite a lot of simulations and emulations 
which simply ignore that complexity and replace it with plainly wrong 
models.

>   Interestingly, in many cases the problem is really bad due to 
> "bufferbloat" in the 802.11n device designs and drivers, which causes 
> extreme buildup of latency, which then causes the TCP control loops to 
> be very slow in adapting to new flows sharing the path.
>

Interestingly ;-)

Could it be that we are (at least to a certain degree) facing a 
"home-made problem" here?

Particularly when I think of nonsense like this one:

@article{meyer,
     author  = "Michael Meyer and Joachim Sachs and Markus Holzke",
     title   = "{Performance Evaluation of a TCP Proxy in WCDMA Networks}",
     journal = "IEEE Wireless Communications",
     year    = "2003",
     month   = "October"
}

Where the path capacity of a GPRS link is given by the "latency 
bandwidth product", the "bandwidth" being 384 kBit/s (which is the 
gross data rate of GPRS) - and users are then recommended to use large 
initial window sizes for TCP to fully exploit this "capacity"? Which 
simply ignores - and I have had lots of arguments here, even with EE 
and CE guys - that the path capacity is not consumed by initial 
transmissions alone: because of retransmissions, some packets are 
placed on "the channel" dozens of times.
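
Just to put numbers to the "latency bandwidth product" argument - a 
minimal Python sketch; the RTT and the mean number of transmissions 
per packet are my assumptions, not values taken from the paper:

    rate_bps = 384_000                 # the claimed "bandwidth"
    rtt_s = 0.6                        # assumed round trip time
    print("latency-bandwidth product:",
          int(rate_bps * rtt_s / 8), "bytes")

    # If each packet is placed on the channel k times on average,
    # the rate left for *new* data shrinks accordingly:
    k = 3                              # assumed mean tx per packet
    print("effective rate:", rate_bps // (k * 1000), "kbit/s")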

Could it be that using a simple stop 'n wait algorithm for mobile links 
(which is generally a good idea, because in terrestrial mobile networks 
the "radio link" quite frequently cannot hold even the amount of data 
needed for one IP packet, so there is no need for a sliding window 
here), and sending a packet to the link only when another packet has 
left the link (which has been the recommendation by Jacobson and Karels 
for more than twenty years now ;-), would alleviate the problem?
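
A toy sketch of that discipline in Python (self-contained; the 
per-transmission success probability is an arbitrary assumption):

    import random

    class Link:
        """Toy lossy link: each transmission succeeds with probability p."""
        def __init__(self, p):
            self.p = p
            self.transmissions = 0

        def transmit(self, pkt):
            self.transmissions += 1
            return random.random() < self.p   # True = delivered

    def stop_and_wait(packets, link):
        # Offer the next packet to the link only after the previous
        # one has left it: packet conservation with a window of one.
        for pkt in packets:
            while not link.transmit(pkt):
                pass                          # local retransmission

    link = Link(p=0.8)
    stop_and_wait(range(100), link)
    print("transmissions per delivered packet:",
          link.transmissions / 100)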

Actually, the GPRS standard accepts delivery times of up to five 
minutes. So, assuming a "bandwidth" of 384 kBit/s, I sometimes consider 
making my notebook talk to itself over GPRS as some kind of memory 
extension.... (However, I'm still looking for a fast random access 
algorithm; the serial access is sometimes a bit annoying.)
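
The arithmetic behind the joke, taking both numbers at face value:

    # 384 kbit/s times a five-minute worst-case delivery time:
    print(384_000 * 5 * 60 / 8 / 1e6, "MB 'in flight'")   # 14.4 MB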

(It could be a nice product brand: "WAX" - "Wireless Address eXtension".)

No kidding: when we send huge amounts of data to an interface with the 
strong expectation that this data is conveyed somewhere - although the 
interface does not get any service for some reason - this is as 
reasonable as lending money to Greece and strongly expecting to get the 
money paid back - with interest.

(I herewith apologize to Greek readers. You might tell me that it is 
not we Germans who suffer from Greece but the other way round: Greece 
suffers from Germany. However, hope is on the way: the next elections 
in Germany are in fall 2013.)

So, it's not surprising that we have buffer bloat there (like the debt 
of Greece to Germany). (To the non-European readers: Germany sometimes 
talked Greece into buying useless things from Germany, like e.g. 
submarines. In particular, Germany vouched for Greece's 
creditworthiness - and now Greece has useless submarines, no money, and 
an incredible debt, German submarine builders are unemployed, and 
German taxpayers are made to believe that Greece were the cause of the 
problem.)

The analogy is not new: in some textbooks, sliding window systems as 
they are used in TCP are called "credit based schemes". And we can find 
quite some analogies between data transport systems and networks on the 
one hand and economic systems on the other. So, the often mentioned 
"buffer bloat" problem in networking is similar to the "balance bloat" 
problem which is often talked about in economics. And the reasons are 
not that different: in both cases, we have a strong mismatch between 
expectation and reality.
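
The "credit" reading of a sliding window, as a toy sketch (window size 
and packet count are arbitrary):

    # The receiver grants credit (the window); the sender may only
    # spend what has been granted and gets credit back with each ACK.
    window = 4                   # granted credit, in packets
    in_flight = 0
    for seq in range(10):
        if in_flight == window:  # credit exhausted: wait for an ACK
            in_flight -= 1       # one ACK returns one credit
        in_flight += 1           # spend one credit on packet seq
        print("sent", seq, "- credit left:", window - in_flight)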



> This latter problem is due to the fact that neither the radio 
> engineers (who design the modulators) nor the CS people (who write the 
> code in the device drivers and application programs) actually 
> understand queueing theory or control theory very well.
>


The more important problem is, at least in Germany, that they do not 
understand each other.

And the very reason for this is, at least in Germany, that they do not 
listen to each other.

David, when I talk to colleagues and claim that throughput may differ 
from a gross bit rate, I'm blamed as a know-it-all - and frankly 
speaking, I'm more than offended by this after having experienced it 
for quite a couple of years.

And with particular respect to radio engineers: at least in Germany, 
these are mostly electrical engineers by education. And a typical 
electrical engineer is supposed to have forgotten more knowledge and 
understanding of control theory than a CS guy is ever supposed to have 
heard of.

Let me take the Meyer/Holzke/Sachs paper as an example. It took me 
weeks to understand how these guys forged their results.

It took me YEARS to gain enough understanding of mobile networks to see 
that these "results" are wrong. However, they are submitted, accepted, 
published, so everyone is convinced: "This is correct, it's published 
by scientists, at least one of them holds a PhD, so the paper must be 
correct." It is neither correct nor science, it is bullshit. However, 
the public believes in this "story", and guys other than the authors 
are blamed when systems do not work as promised by this paper.


> For example, both seem to think that adding buffering "increases 
> throughput"
>

I do not know who "both" is. I'm neither of them.

When people say that increasing buffers means increasing throughput, I 
would first of all discriminate workload from buffers; and when we talk 
about workload and buffers, it's always a good idea to have a look at 
textbooks like "Queueing Systems" by Len Kleinrock.

Throughput is achieved, as the word says, by "putting through" and not 
by "putting at". The pyramids would not be there if the slaves had only 
put the stones on stock somewhere near the building site and no one had 
moved them to their final place.
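
In Kleinrock's terms, a toy calculation (all rates assumed) makes the 
point: as long as the arrival rate stays below the bottleneck's service 
rate, extra backlog buys delay, not throughput (Little's law, 
L = lambda * W):

    # Assumed rates; the only requirement is lam < mu (stable queue).
    mu = 100.0     # bottleneck service rate, packets/s
    lam = 90.0     # arrival rate, packets/s
    assert lam < mu   # stable: throughput equals the arrival rate
    for backlog in (10, 100, 1000):     # packets parked in buffers
        delay = backlog / lam           # Little's law: W = L / lambda
        print(f"backlog {backlog:4d} pkts -> delay {delay:6.2f} s, "
              f"throughput still {lam:.0f} pkts/s")
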
>
> - whereas typically it causes catastrophic failure of the control 
> loops involved in adaptation.
>

And that is much too simple.

Some say buffers increase throughput. You say buffers cause 
catastrophic failure.

Could we agree on "It depends"?

And that it is worthwhile to look carefully at the system of interest, 
to see whether buffers are beneficial and should be added - or whether 
buffers are malicious and should be left out?

Otherwise we would end up having "the answer" - no matter what the 
question is.

When you say we should stay away from thoughtless buffering, I couldn't 
agree more.

(And in many private conversations I have pointed to the Tacoma bridge 
disaster, where a "buffer bloat" (sic! state variables in dynamic 
systems can be seen as buffers for energy!) caused structural damage.)

And it's the very core of the congavoid paper to ensure stability (and 
at its very core, stability does not mean anything other than avoiding 
buffer bloat) by fixing a system's workload to a reasonable size.

(The not-so-easy question is: what is "reasonable"?)

The congavoid paper does not waste many words on this issue. However: 
take note of the footnote on page 2, the note on the Lyapunov equation.
Stated in the terms of TCP, that means, amongst other things: by 
limiting the workload on the path, we limit the workload which can 
gather in any local buffer.
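In symbols (my shorthand, not the paper's notation): if W is the 
end-to-end window and q_i the backlog at hop i, then

    sum_i q_i <= W,   and hence   q_i <= W at every single hop i.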
There are other, more complex, implications.

And that's why I posted my question to this list. One implication is: 
it might be beneficial to have some MBytes of workload in a TCP flow 
when the path includes a geostationary satellite link from Hamburg to 
New York. The same workload is completely malicious when the path 
consists only of a GPRS link.
So, may I speak frankly? God in heaven, which devil made us use one and 
only one congestion window for the whole path end to end?
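
The mismatch in numbers (all rates and RTTs are assumptions for 
illustration only):

    # The "right" amount of in-flight data differs by orders of
    # magnitude between path types.
    paths = {
        "GEO satellite (assumed 50 Mbit/s, 600 ms RTT)": (50e6, 0.6),
        "GPRS (assumed 40 kbit/s, 1 s RTT)":             (40e3, 1.0),
    }
    for name, (rate, rtt) in paths.items():
        print(f"{name}: pipe size ~ {rate * rtt / 8 / 1e3:.0f} kB")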

And when I recently proposed to change this in a research proposal, I 
got the question: "Which problem do you want to solve?"

Saying this, I strongly emphasise: for the model used by Jacobson and 
Karels, the congavoid paper is a stroke of genius.

And to those who see shortcomings in the congavoid paper, I say: the 
shortcomings don't lie in the congavoid paper but in your reading.
If we carefully read the work by Jacobson and Karels and the work by 
Raj Jain and others from the same period, many of our problems would 
not appear new. The authors anticipated many of them - if only we spent 
the time on careful reading. "The badness lies in the brain of the 
reader."
>
> Or worse, many times the hardware and firmware on an 802.11n link will 
> be set so that it will retransmit the current frame over and over 
> (either until delivery or perhaps 255 times) before dropping the packet.
>

That's exactly what happens on all wireless networks to my knowledge 
(not only in 802.11 but in others as well - and again: retransmission 
itself isn't bad, but thoughtless retransmission is evil). And that's 
what's simply ignored in the paper I referred to yesterday (Herrscher, 
Leonhardi, Rothermel), in the paper I referred to above (Meyer, Sachs, 
Holzke), and in countless private conversations.
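
How badly a retry-until-delivered discipline inflates airtime is easy 
to estimate (the per-frame loss probabilities are assumed):

    # Expected transmissions per frame with per-frame loss probability
    # p and a retry cap R: attempt k happens iff the first k-1 attempts
    # failed, so E[N] = sum_{k=1..R} p**(k-1).
    def expected_tx(p, retry_cap):
        return sum(p ** (k - 1) for k in range(1, retry_cap + 1))

    for p in (0.1, 0.5, 0.9):
        print(f"loss p={p}: ~{expected_tx(p, 255):.1f} transmissions/frame")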


However: you run into the same pitfall yourself!

Please look at the alternative you mention: either you retransmit the 
packet over and over - or you drop it.

We know the saying: If you have only two alternatives, both of which are 
impossible, choose the third.

In some cases it may be possible to adapt your system so that further 
transmissions become successful.
In some cases it may be possible to change your route.

I don't know - and again: It depends.

However, I doubt that "retransmit the packet over and over" and 
"(silently) drop the packet" are the only possibilities in each and 
every case.
So the problem is sometimes not a lack of opportunities - it's a lack 
of willingness to use them.

Sometimes, there is no third way. As often stated: TANSTAAFL.

And again to my criticism of using only one CWND: using exactly one 
CWND for a whole path (of, say, 70 hops or more) means using ONE answer 
for MANY questions. And ONE solution to ANY problem. (My apologies go 
to the IBM guys ;-))




> Such very long delays mean that the channel is heavily blocked by 
> traffic that slows down all the other stations whose traffic would 
> actually get through without retransmission.
>

Absolutely.

We should not make the whole world suffer from our local problem.

(It was part of my research proposal.)

> Yet CS people and EE's are prone to say that the problem is due to 
> "interference", and try to arrange for "more power".
>

More power? Sometimes interference can be alleviated by less power!
>
> More power then causes other nearby networks to be slowed down (as 
> they have to listen for "silence" before transmitting).
>

I said so when I discussed the Herrscher, Leonhardi, Rothermel paper 
yesterday.

However, you don't distinguish between external noise (which may 
interfere with your wireless cell) and multipath interference, where 
your signal is split up into rays which interfere with each other. 
These are different scenarios which should be treated differently - 
e.g. the first one by power regulation, the second one by MIMO systems 
or rake receivers. Sometimes, different problems require different 
solutions.

In some cases, power regulation will not work. Why not act the other 
way round then? Make the cells use the same frequency range: couple the 
adjacent bands from, say, two "five band" ranges into one "ten band" 
range and increase the sending power. Then change your line coding and 
channel coding to a higher net data rate - and hence shorter (in a 
temporal sense) packets. So you couple your cells to increase the joint 
capacity - and in doing so you lower your network load. Could this be a 
way to go? Alleviating interference by increasing sending power sounds 
strange, but it may work sometimes.
(And it's an example of the aforementioned "third alternative".)
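
The effect in numbers (the single-cell net rate is an assumption): 
doubling the coupled band, and with it the net data rate, halves each 
packet's time on the air.

    pkt_bits = 1500 * 8      # one MTU-sized packet
    rate_one = 10e6          # assumed net rate of a single cell, bit/s
    print(f"airtime, single cell : {pkt_bits / rate_one * 1e3:.2f} ms")
    print(f"airtime, coupled cell: {pkt_bits / (2 * rate_one) * 1e3:.2f} ms")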

> Thus, Detlef, we do need an improvement in terminology, but even more 
> in understanding.
>

David, is that really a contradiction? Or isn't careful terminology 
helpful in improving understanding?


> The nonsense that passes for knowledge around wireless networking, 
> even taught by "professors of networking" is appalling.  It's the 
> blind leading the blind.
>

May I put this in my signature?

This is by far the wisest sentence I've ever read on this subject.
>
> I don't think graduate students in computer networking will ever be 
> required to learn about Poynting vectors, control theory, and wavelet 
> decompositions, and the ones in EE will not learn dynamic queueing 
> theory, distributed congestion control, and so forth.   And the 
> information theory students will probably never use a vector signal 
> analyzer.
>

And this is perhaps not even necessary. But it is highly useful to 
listen to each other and to be willing to get things explained by each 
other. We can walk around like the blind leading the blind - or we can 
try to walk around combining our views.
>
> So the terminology and understanding issues will persist.
>

But the discussion should lead to better understanding instead of more 
confusion.



-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart                            Tel.:   +49 711 5208031
                                            mobile: +49 172 6819937
                                            skype:     detlef.bosau
                                            ICQ:          566129673
detlef.bosau at web.de                     http://www.detlef-bosau.de
------------------------------------------------------------------
