[e2e] How do we simulate mobile networks? Was: Simulator for wireless network
detlef.bosau at web.de
Thu Apr 19 07:49:23 PDT 2007
John Heidemann wrote:
> I believe this is a key observation, and one that's often not made
> explicit in a rush to "more realism" in simulations. One can only
> answer "what's the best simulator" (or more generally, what's the best
> evaluation tool) in the context of a specific research question.
Perhaps the original question turns into: "How do we simulate
particular functions in wireless networks?" Or even better: "What must
we simulate when we simulate wireless networks?" Discussions about
which tool is simply the best are typically quite aggravating.
To be concrete, let's assume my favourite scenario:
FH--------BS ..... MH
Fixed Host, Base Station, Mobile Host
where the mobile host is attached to the BS via some mobile network technology.
Which components / functions must be simulated / modeled then?
E.g., I have often bothered this mailing list with the question of whether
there are interactions between TCP and MAC scheduling. Now, let's imagine
we were interested in an answer to this question, not only for GPRS,
HSDPA, or UMTS, but technology-independent.
Perhaps, I don't know, the answer is not even the same for all of the
networks listed above.
I have sometimes come across simulator components for one or the other
technology for the NS2, but to my knowledge we do not yet have a
generic 3G-and-beyond simulator architecture in the NS2. Is this
correct? And does somebody know the situation with NS3?
Unfortunately, I have not yet taken the time to get acquainted with the NS3
architecture. In particular, I do not know whether it makes sense to
"port" simulation approaches viable with the NS2 to the NS3. However,
assuming the NS3 is still a discrete event simulator, some key elements
will be common to all other discrete event simulators such as NS2 or
OMNeT++. So, hopefully this post is not made completely obsolete by the NS3...
Back to the question.
Let's start with the basic processing chain from the base station to the
mobile terminal and vice versa.
An IP datagram (let's assume IP here for simplicity) is
a) first split up into a number of RLP blocks, which are
b) block coded, with a checksum added,
c) perhaps interleaved by an outer interleaver,
d) channel coded, typically by a convolutional coder,
e) perhaps interleaved by an inner interleaver,
f) queued for media access in the case of common channels,
g) granted media access, with an uplink state flag (USF) added in some
networks, and eventually
h) sent to the wireless channel. Let's ignore the details of physical
modulation, transfer and demodulation here.
(Question: is g) correct that way? To my understanding, a USF can be
added no earlier than when media access is granted; in particular, a
USF would then be added in symbols instead of bits?)
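Steps a) and b) above, segmentation into RLP blocks plus a per-block checksum, can be sketched in a few lines of Python. The block size and the use of CRC-32 are assumptions for illustration, not taken from any particular standard:

```python
import zlib

RLP_PAYLOAD_BYTES = 30  # assumed block size, purely illustrative

def segment_datagram(datagram: bytes, payload_len: int = RLP_PAYLOAD_BYTES):
    """Step a): split an IP datagram into RLP blocks;
    step b): append a CRC-32 checksum to each block."""
    blocks = []
    for off in range(0, len(datagram), payload_len):
        payload = datagram[off:off + payload_len]
        crc = zlib.crc32(payload).to_bytes(4, "big")
        blocks.append(payload + crc)
    return blocks

def check_block(block: bytes) -> bool:
    """Receiver-side checksum verification (used in step m))."""
    payload, crc = block[:-4], block[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

blocks = segment_datagram(b"x" * 100)
print(len(blocks), all(check_block(b) for b in blocks))  # 4 True
```

A corrupted block then fails `check_block`, which is the hook for the ARQ machinery discussed below.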
At the receiver side, the frame is
i) taken to the inner deinterleaver to reconstruct the physically coded
frame,
j) taken to the decoder, e.g. a Viterbi decoder, to decode the original
information word, i.e. the RLP frame with checksum,
k) taken to the outer deinterleaver,
l) taken to the block decoder, and
m) finally forwarded to the ARQ receiver or, in case of a corrupted
frame, discarded;
n) the ARQ receiver reassembles the original IP packet or requests
retransmissions if necessary.
A note on interleaving: "perhaps" means that interleaving does not
necessarily apply to each technology.
Now, the processing chain from b) to m) takes a constant time plus some
variable MAC delay in f).
To my understanding, this processing chain forms a pipeline with surely
more than one RLP frame of capacity.
So, I don't think that a stop'n'wait approach will always be sufficient.
At the moment, I plan to simulate the processing chain from b) to m) as
a sliding window chain. This can be done by use of queue/delay pairs, as
is typically accomplished in the NS2. The _delay_ reflects the
processing delay from b) to m), not only the physical propagation delay.
I plan to use individual links, i.e. queue/delay pairs, for each mobile
terminal in the NS2, controlled by a common MAC component
which grants "media access" by allowing an individual delay component
to serve its queue based upon the chosen MAC algorithm.
Hence, although I have individual links, I can make them behave exactly
like a common channel in reality.
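As a toy sketch of this idea: per-terminal queues driven by one shared MAC controller, with a fixed delay standing in for the whole b)-m) pipeline. All constants, names, and the round-robin discipline are assumptions for illustration, not the ns-2 implementation:

```python
import heapq
from collections import deque

PROC_DELAY = 0.02   # assumed b)-m) pipeline delay (the "delay" component)
SLOT_TIME = 0.005   # assumed MAC slot duration

def run_mac(queues, horizon):
    """Round-robin MAC: each slot grants one frame from one terminal's
    queue; the frame arrives PROC_DELAY later, modelling the queue/delay
    pair. Individual queues, but a common-channel service pattern."""
    names = list(queues)
    delivered = {n: [] for n in names}
    in_flight, t, slot = [], 0.0, 0
    while t < horizon:
        name = names[slot % len(names)]          # the grant for this slot
        if queues[name]:
            heapq.heappush(in_flight,
                           (t + PROC_DELAY, name, queues[name].popleft()))
        while in_flight and in_flight[0][0] <= t:  # pipeline output
            done, n, frame = heapq.heappop(in_flight)
            delivered[n].append((done, frame))
        t += SLOT_TIME
        slot += 1
    while in_flight:                              # drain the pipeline
        done, n, frame = heapq.heappop(in_flight)
        delivered[n].append((done, frame))
    return delivered

queues = {"A": deque(["a1", "a2"]), "B": deque(["b1"])}
out = run_mac(queues, horizon=0.1)
print([f for _, f in out["A"]], [f for _, f in out["B"]])  # ['a1', 'a2'] ['b1']
```

Swapping the round-robin grant for an opportunistic (e.g. PF) decision is then a local change in the `name = ...` line.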
Up to now, we have
- the processing chain from the ARQ sender to the ARQ receiver, which is
modeled by a queue/delay component that abstracts the whole pipeline
into a simple "processing delay" reflecting the pipeline's capacity
(Little's theorem: average workload = average arrival rate * average
processing time; we use the arrival rate to reflect the physical
bandwidth and the average processing time to model the capacity),
- a MAC controller which can obey arbitrary MAC disciplines,
particularly opportunistic ones, which in turn need
- filters for throughput estimation.
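The Little's-law sizing can be checked with a toy calculation; all numbers here are assumed for illustration, not taken from any particular technology:

```python
# Little's law: average frames in the pipeline L = lambda * W.
link_rate_bps = 200_000                    # assumed physical bandwidth
frame_bits = 456                           # assumed coded frame length
arrival_rate = link_rate_bps / frame_bits  # frames per second (lambda)
pipeline_delay = 0.040                     # assumed b)-m) delay W: 40 ms
frames_in_flight = arrival_rate * pipeline_delay
print(round(frames_in_flight, 1))  # 17.5
```

With these assumed numbers, roughly 17-18 frames are in flight at once, which illustrates why a one-frame stop'n'wait pipeline would not keep the channel busy.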
I plan to simulate RLP frames of a constant physical length, although
they may have different _payload_ lengths to simulate different
coding schemes. (Typically, the coded RLP frame always has the same
length because the convolutional code is not changed, but the payload
length may differ due to different puncturing.)
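The constant-physical-length / variable-payload idea amounts to a one-liner. The 456-bit coded frame length is an assumed example value, and the rates are arbitrary:

```python
CODED_BITS = 456  # coded frame length stays fixed (an assumed value)

def payload_bits(k: int, n: int) -> int:
    """Payload bits carried by a fixed coded frame at punctured rate k/n."""
    return CODED_BITS * k // n

for k, n in ((1, 2), (2, 3), (3, 4), (1, 1)):
    print(f"rate {k}/{n}: {payload_bits(k, n)} payload bits")
# rate 1/2: 228, rate 2/3: 304, rate 3/4: 342, rate 1/1: 456
```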
For the ARQ sender/receiver, I plan to use a simple selective
retransmission scheme as can be found, e.g., in RLP version 3 (IIRC) in
the 3GPP. I do not yet consider HARQ, and I'm still not quite sure
whether HARQ does not restrict us to a stop'n'wait protocol.
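A minimal selective-repeat sketch of the kind described, with a sender-side retransmission buffer and NAK-driven retransmissions. The loss model and all names are my assumptions; a real RLP adds sequence-number windows, timers, and the priority rules discussed below:

```python
import random

def transfer(blocks, loss_prob=0.3, seed=1):
    """Toy selective repeat: each round, all still-missing blocks are
    (re)sent over a Bernoulli-loss channel until the receiver has all."""
    rng = random.Random(seed)
    sent_buffer = dict(enumerate(blocks))  # retransmission buffer at sender
    received = {}
    pending = set(sent_buffer)             # first attempts
    rounds = 0
    while pending:
        rounds += 1
        for seq in sorted(pending):
            if rng.random() >= loss_prob:  # frame survives the channel
                received[seq] = sent_buffer[seq]
        pending = {s for s in sent_buffer if s not in received}  # NAKed seqs
    return [received[s] for s in sorted(received)], rounds

data, rounds = transfer([b"B0", b"B1", b"B2", b"B3"])
print(data == [b"B0", b"B1", b"B2", b"B3"], rounds)
```

Note how the retransmission buffer (`sent_buffer`) must persist until the last NAK is cleared; this is the buffer the NB below refers to.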
So, the basic degrees of freedom are:
- the pipeline's capacity (1 frame in the case of stop'n'wait),
- the MAC scheme, if applicable,
- the coding scheme overhead, which is fixed in the case of non-adaptive FEC,
- the throughput estimation filter, which is basically an EWMA filter in
actual PF schedulers, although other filters may be considered, e.g. TSW.
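A minimal EWMA estimator of the sort used in PF schedulers might look as follows; the smoothing constant is an assumed parameter, not taken from any particular scheduler:

```python
def make_ewma(alpha=0.125):
    """EWMA throughput filter: avg <- (1-alpha)*avg + alpha*sample."""
    avg = None
    def update(sample):
        nonlocal avg
        avg = sample if avg is None else (1 - alpha) * avg + alpha * sample
        return avg
    return update

est = make_ewma()
est(100.0)               # first sample initialises the average
avg = est(400.0)         # a short burst moves the average only slowly
print(avg)               # 137.5
pf_metric = 400.0 / avg  # PF serves the user maximising rate/avg_throughput
```

Replacing `make_ewma` with a TSW-style filter changes only this one component, which is precisely the degree of freedom Emmanuel's estimator study targets.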
As a consequence of a retransmission scheme, we have
- new (first attempt) transmissions,
- retransmissions, and
- control frames (e.g. NAK),
which compete for the wireless medium.
Following RLP v3, I serve them prioritized: control frames are given the
highest priority, followed by retransmissions, and new transmissions are
given the lowest priority. To my understanding, this is reasonable for
flow control reasons: a receiver should be able to complete partially
received packets as soon as possible in order to free buffer space. So,
requests for retransmissions and retransmissions themselves are given
higher priority than first transmissions.
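The three-level discipline just described can be sketched with a priority queue; the class constants are my naming, the ordering is the scheme from the text:

```python
import heapq

# Lower number = served first: control > retransmission > new transmission.
CONTROL, RETRANSMISSION, NEW = 0, 1, 2

queue, seq = [], 0  # seq is a tie-breaker keeping FIFO order per class
for prio, frame in [(NEW, "data1"), (CONTROL, "nak"),
                    (RETRANSMISSION, "data0'"), (NEW, "data2")]:
    heapq.heappush(queue, (prio, seq, frame))
    seq += 1

served = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(served)  # ['nak', "data0'", 'data1', 'data2']
```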
From that, we have some more degrees of freedom:
- Can packets be dropped at the queue in f)?
- Will the different queues share common memory, or will they be
assigned individual buffers?
NB: We did not yet talk about a retransmission buffer at the sender's side.
Finally, because we want to simulate wireless networks, we need an error
model. In fact, this is easily implemented as a block error process at
the delay component. However, the most interesting question is: how
should errors be modeled?
This is perhaps an easy part of the implementation, but a wrong choice
in this place can render the whole effort completely useless.
I think this error model is the right place to add "reality" to the system.
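One common candidate for such a block error process is a two-state Gilbert-Elliott model; the sketch below is my assumption of how it could be attached to the delay component, with invented transition and error probabilities:

```python
import random

def gilbert_elliott(n_blocks, p_gb=0.05, p_bg=0.3,
                    err_good=0.01, err_bad=0.5, seed=42):
    """Two-state Markov block error process: a 'good' state with rare
    errors and a 'bad' state (fade) with frequent errors, producing the
    bursty losses typical of wireless channels."""
    rng = random.Random(seed)
    state = "good"
    errors = []
    for _ in range(n_blocks):
        err_prob = err_good if state == "good" else err_bad
        errors.append(rng.random() < err_prob)
        flip = p_gb if state == "good" else p_bg
        if rng.random() < flip:
            state = "bad" if state == "good" else "good"
    return errors

errs = gilbert_elliott(10_000)
print(round(sum(errs) / len(errs), 3))  # overall block error rate
```

Whether such a two-state process is "real enough" is, of course, exactly the open question; the point is only that the model is a pluggable component at the delay element.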
O.k., this post was some kind of brainstorming, which I'm currently
doing in cooperation with Emmanuel Lochin, who particularly studies the
effect of different estimators used for PF scheduling. However, any
comment is greatly appreciated, particularly concerning the
block error model. As soon as we have some results and some useful
simulator components, we will be glad to share them with others.
And if this is all old stuff, already available in existing products but
just not documented, then I think it's at least helpful to have an open
source version (o.k., if ever someone is willing to use my source code
;-)) and perhaps some lines of documentation.
At the moment, I'm working with ns2, version 2.30.
Mail: detlef.bosau at web.de
Mobile: +49 172 681 9937