[e2e] Wireless Channel Properties
detlef.bosau at web.de
Tue Sep 6 11:06:36 PDT 2005
Dave Eckhardt wrote:
> Here are some (bit-level) traces:
> Dave Eckhardt
Thanks a lot!
I have not run them through my little simulation program yet, because my
little simulation ("program" is perhaps too big a word for what I wrote
over the last few days) cannot read bit-level traces yet.
However, from a first look at your traces, the results appear to be
quite harmless.
Of course, there are some drops. This is no problem. Packet drops occur
in networks for various reasons.
From a mobile networks perspective, these drops are typically not
perceived by a TCP flow, because the link layer (ARQ on top of FEC on
top of MAC and radio) hides them.
So, latencies perceived by L2 PDUs are mainly due to
1. radio serialization delay and propagation delay,
2. MAC/scheduling delay,
3. some "virtual propagation delay" due to FEC overhead,
4. ARQ retransmissions.
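The four components above can be sketched in a few lines. This is a rough model with numbers and parameter names of my own choosing (the FEC overhead and scheduling delay are free parameters, not values from any real system):

```python
C = 3e8               # propagation speed in m/s (free-space assumption)

def block_latency(block_bits, link_bps, distance_m,
                  fec_overhead=0.5, sched_delay=0.0, retransmissions=0):
    """One-way latency of a radio block, in seconds."""
    # 1. serialization + propagation delay
    serialization = block_bits / link_bps
    propagation = distance_m / C
    # 3. "virtual propagation delay": FEC redundancy stretches the block
    fec = (block_bits * fec_overhead) / link_bps
    # 2. and 4.: MAC/scheduling delay, then repeat the whole attempt per ARQ retry
    one_attempt = serialization + propagation + fec + sched_delay
    return one_attempt * (1 + retransmissions)

# e.g. a 170-bit block over 2 Mbit/s across 500 m, with two retransmissions
print(block_latency(170, 2e6, 500, retransmissions=2))
```

Even with a couple of retransmissions the per-block latency stays in the sub-millisecond range under these assumptions; the spikes have to come from somewhere else (queueing, scheduling, or long retransmission bursts).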
Recently, I was told about some interleaving done in UMTS.
Q.: Where is this done in the layer stack above?
In my little program, radio blocks have a constant length (i.e. it
doesn't matter) and there is no MAC/scheduling delay.
Thus, a block is sent and received, or it is sent and dropped. At the
moment, I simulate stop-and-wait ARQ. When I think of typical
radio block lengths (about 170 bit AFAIK), typical radio link bandwidth
(e.g. 2 Mbit/s physical in UMTS, or less due to CDMA spreading), and
a typical UMTS cell (several hundred meters), the link will hardly keep
several blocks "on the air", hence I don't see a reason for sliding
window here.
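A back-of-the-envelope check of the "blocks on the air" claim, using the numbers from the text (170-bit blocks, 2 Mbit/s, a cell of a few hundred meters):

```python
block_bits = 170
link_bps = 2e6
distance_m = 500
c = 3e8              # propagation speed in m/s

serialization = block_bits / link_bps   # 85 microseconds per block
propagation = distance_m / c            # roughly 1.7 microseconds
blocks_in_flight = propagation / serialization
print(blocks_in_flight)                 # far below 1
```

The propagation delay is a tiny fraction of one block's serialization time, so at most a fraction of a single block is ever "on the air" — which is exactly why stop-and-wait loses essentially nothing against a sliding window on such a link.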
O.k. Concerning delay spikes, which may cause trouble for TCP: in my
little program these result from ARQ. So, I'm about to add
an error model which allows a block to pass or which causes a block to
fail. On this layer, there are only these two alternatives.
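Such a two-outcome error model is tiny to write down. A minimal sketch (the function names and the fixed seed are my own choices, not from any standard):

```python
import random

def make_error_model(drop_prob, seed=42):
    """Return a function that decides, per block, pass or fail.

    On this layer there are only these two alternatives, so a single
    Bernoulli draw per block is the whole model.
    """
    rng = random.Random(seed)

    def block_passes():
        return rng.random() >= drop_prob

    return block_passes

model = make_error_model(0.1)   # drop roughly one block in ten
passed = sum(model() for _ in range(10000))
print(passed)                   # close to 9000
```

A uniform drop probability like this is of course memoryless; burst losses (e.g. a Gilbert-Elliott two-state model) would be the obvious next refinement if independent drops turn out too "soft".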
I played around with some error models today, dropped one block in ten
or 20 blocks in a hundred, I used different sizes of ARQ buffers and
different retransmission strategies (FIFO, random, failed packet is left
in place and repeated, failed packet is appended to the queue), and all
yielded different delay traces. My error models were far more aggressive
than what I've seen from Dave's WLAN traces.
Alas - when I generated srtt, svar and RTO with the typical TCP
algorithms and compared RTO against the "observed" rtt, everything was
perfectly fine.
Depending on the success/failure ratio on the "FEC layer", the path
caused different latencies and thus exhibited a different bandwidth to
the user. That's life. However, I did not yet achieve spurious timeouts,
at least not unexpectedly often. As I wrote in earlier posts, some
amount of spurious timeouts is _tolerated_ by the RTO.
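By "the typical TCP algorithms" I mean Jacobson's srtt/rttvar estimator with the usual constants (as in RFC 6298); a spurious timeout shows up as an RTT sample exceeding the current RTO. A sketch of that check (the 1-second minimum RTO is the RFC's floor; everything else here is my own framing):

```python
def count_spurious(rtt_samples, min_rto=1.0):
    """Feed RTT samples through the standard estimator; count RTO violations."""
    srtt, rttvar = None, None
    spurious = 0
    for r in rtt_samples:
        if srtt is not None and r > max(srtt + 4 * rttvar, min_rto):
            spurious += 1                 # sample arrived after RTO would fire
        if srtt is None:
            srtt, rttvar = r, r / 2       # first measurement (RFC 6298, 2.2)
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - r)
            srtt = 0.875 * srtt + 0.125 * r
    return spurious

# Moderate ARQ-induced jitter stays far below the RTO floor:
print(count_spurious([0.10, 0.12, 0.11, 0.30, 0.10, 0.13]))  # -> 0
```

With the 1-second floor and the 4*rttvar safety margin, an RTT sample has to grow by a large factor, abruptly, before the timer fires early — which matches what I observe: the estimator simply swallows my error models.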
So at the moment, I'm afraid that if I use Dave's traces as an error
model, TCP may be perfectly happy with it.
Either I'm a bad programmer (which is surely true; I'm one of the worst
programmers I know at all) or TCP's timers remain obstinate and refuse
to have any problems...
A possible alternative, of course, is that I use much too "soft" error
models. Perhaps the situation will change when I drop 20 % or 30 % of
the blocks.
Another alternative is that I have to consider scheduling delays.
So my question to the list is:
Q: What exactly causes delay spikes / unduly frequent spurious timeouts
in mobile wireless networks?
Mail: detlef.bosau at web.de
Mobile: +49 172 681 9937