[e2e] j'accuse NFV
detlef.bosau at web.de
Fri May 1 12:20:38 PDT 2015
On 01.05.2015 at 19:42, Matt Mathis wrote:
> I tried to fight the "bandwidth" vs "data rate" battle years ago but
> came to realise that the memory, cache, bus, and CPU community (both
> HW and SW) were also misusing bandwidth, and had no conflicts with the
> "proper" use, and thus no motive to make the change. We are in the
> rather unique position of needing both concepts, so I always use "data
> rate" in my writing, but I have to admit that I often slip when speaking.
> Note that all jargon is always context sensitive. Ask an Orthopedist,
> Banker, Chemical Engineer and Computer Scientist what "process"
> means.
Your general observation is correct; however, in the particular case of
CS and CE, the dual meaning of "bandwidth" makes communication
difficult, particularly as the CE material is often difficult to
understand (I'm a CS person myself).
Think of a simple WLAN cell. How often do we see the question "how does
noise affect the bandwidth?" or "does cell occupation affect the
bandwidth?" It is difficult to explain that neither noise nor cell
occupation affects "the bandwidth": the frequency band used for packet
transmission is always the same.
What is affected is the allocation of sending time (by the CSMA/CA
mechanism) and the number of transmission attempts necessary for the
packet to eventually be received.
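To make that point concrete, here is a minimal sketch (mine, not from the discussion): with the channel data rate held fixed, noise shows up as extra transmission attempts, and the assumed per-attempt success probability p is the only free parameter.

```python
# Sketch: noise does not change the frequency band ("bandwidth" proper);
# it changes the expected number of transmission attempts, and thereby
# the effective data rate. Assumes independent attempts with success
# probability p (a deliberately simplistic channel model).

def expected_attempts(p: float) -> float:
    """Expected transmissions until first success (geometric distribution)."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return 1.0 / p

def effective_rate(link_rate_bps: float, p: float) -> float:
    """Goodput: the nominal link rate divided by the expected attempts."""
    return link_rate_bps / expected_attempts(p)

# A noisy cell halves the effective rate at p = 0.5, although the
# frequency band in use is exactly the same.
print(effective_rate(54e6, 1.0))  # 54000000.0
print(effective_rate(54e6, 0.5))  # 27000000.0
```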
The situation is more complex when it comes to mobile networks and more
sophisticated channel-coding techniques, e.g., the multicode operation
in HSDPA: when only one code is used, we fully exploit the spreading
gain in noisy channels; when we use several codes, we don't exploit the
maximum spreading gain, because the channel is less noisy and the full
robustness isn't necessary.
Nevertheless, I well remember years of working with ns-2, where
"bandwidth" was used as a characteristic parameter of a link -
ultimately to calculate the serialization delay of a packet. As far as I
can see, most of our simulators work the same way.
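That use of "bandwidth" amounts to nothing more than this (a sketch of the usual simulator arithmetic, not ns-2 code):

```python
# In simulators such as ns-2, a link's "bandwidth" is simply a data
# rate used to compute how long a packet occupies the wire.

def serialization_delay(packet_bytes: int, link_rate_bps: float) -> float:
    """Time (seconds) to clock a packet onto the link at the given data rate."""
    return packet_bytes * 8 / link_rate_bps

# 1500-byte packet on a 10 Mbit/s link:
print(serialization_delay(1500, 10e6))  # 0.0012
```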
As a consequence, it is extremely difficult (although attempts have been
made) to model or to simulate the delivery time of a packet over an air
interface.
Once again to the multicode example. The situation becomes even more
difficult when you talk to people who don't use ALL 15 codes for ONE
flow (you should do so, because when only one code is used, the others
are left empty in order to fully exploit the spreading gain) but instead
use multicode operation to share a transport block between several flows.
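The trade-off above can be caricatured with the standard HSDPA figures (3.84 Mchip/s chip rate, spreading factor 16, up to 15 HS-PDSCH codes); the model itself is deliberately simplified and mine, not from the post:

```python
# Toy illustration of the multicode trade-off: the chip rate is fixed,
# so raw symbol throughput scales with the number of parallel codes,
# while robustness (spreading gain) is what you give up by filling them.

CHIP_RATE = 3.84e6  # chips/s, fixed for UMTS/HSDPA
SF = 16             # spreading factor of an HS-PDSCH code

def symbol_rate(n_codes: int) -> float:
    """Aggregate symbol rate when n_codes of the 15 codes carry data."""
    if not 1 <= n_codes <= 15:
        raise ValueError("HSDPA allows 1..15 HS-PDSCH codes")
    return n_codes * CHIP_RATE / SF

print(symbol_rate(1))   # 240000.0
print(symbol_rate(15))  # 3600000.0
```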
At least for my purposes, dealing with these things turned out to be a
never-ending story, which never gave me the answer to my two essential
questions:
1. Will the packet be successfully delivered?
2. How long does it take to deliver the packet?
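Even under the simplest imaginable channel model (independent attempts, success probability p, a retry limit, a fixed time per attempt - all assumed parameters, not anything the real systems expose), the two questions at least become well defined:

```python
# A minimal sketch of the two questions under a truncated-geometric
# model: at most max_tries attempts, each succeeding independently
# with probability p and taking t_attempt seconds.

def delivery_probability(p: float, max_tries: int) -> float:
    """Question 1: P(packet delivered within max_tries attempts)."""
    return 1.0 - (1.0 - p) ** max_tries

def expected_delivery_time(p: float, max_tries: int, t_attempt: float) -> float:
    """Question 2: expected delivery time, given eventual success."""
    # P(success exactly on attempt k), for k = 1..max_tries
    probs = [(1.0 - p) ** (k - 1) * p for k in range(1, max_tries + 1)]
    total = sum(probs)
    mean_attempts = sum(k * q for k, q in enumerate(probs, start=1)) / total
    return t_attempt * mean_attempts

print(delivery_probability(0.5, 4))  # 0.9375
```

The hard part, as the rest of this post argues, is not this arithmetic but obtaining a p that reflects the real wireless channel.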
And the tough problem is not the technologies. Modelling them is often
tedious and nasty - but it is possible.
The tough problem is to model the wireless channel. When I talked to
some people involved in the EURANE project, the problem was that the
models always marked more transport blocks as "corrupted" than were
corrupted in reality (when the calculated results were compared to
observed ones), or the other way round. They hardly ever modelled the
correct corruption ratios or the real TB discards, and this problem
propagates bottom-up through all layers.
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
detlef.bosau at web.de http://www.detlef-bosau.de