[e2e] Delays / service times / delivery times in wireless networks.

David P. Reed dpreed at reed.com
Mon Feb 26 09:27:35 PST 2007


No one can reason, a priori, about why packet losses happen in either 
GPRS or 802.11.   In many 802.11 scenarios, packets are lost because 
of noise, hidden terminal problems, etc.   This is not congestion, per 
se.   The only parallel is that if you really slow down all 
transmissions to very low rates, hidden terminal (arbitration failure) 
problems do get mitigated (the probability of collision eventually 
drops if packets are really rare), but that is not a Little's Lemma 
queueing delay of the sort that TCP-in-the-theory-of-VJ manages 
through its rate control.
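
To put a rough number on the collision point, here is a minimal sketch 
in Python, assuming a simplified slotted-ALOHA-style model with 
independent stations (an illustrative stand-in, not a model of 
802.11's actual DCF arbitration):

    # Collision probability in a simplified slotted-ALOHA-style model.
    # Assumption: n independent stations, each transmitting in a given
    # slot with probability p_tx.  This is NOT 802.11 DCF; it only
    # shows that collisions get rare as transmissions get rare.

    def collision_prob(n_stations, p_tx):
        """P(some other station transmits in our slot | we transmit)."""
        return 1.0 - (1.0 - p_tx) ** (n_stations - 1)

    for p_tx in (0.5, 0.1, 0.01, 0.001):
        print(f"p_tx={p_tx:<6} collision prob. (10 stations) = "
              f"{collision_prob(10, p_tx):.4f}")

With 10 stations this falls from about 0.998 at p_tx=0.5 to about 
0.009 at p_tx=0.001: collisions do become rare, but only by leaving 
the medium almost entirely idle.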

But in fact, GPRS and 802.11 lose packets for all sorts of reasons, 
because the medium is not pristine and point-to-point.   It is noisy 
(very much so), variable, and so forth.

My point was that adding almost-unbounded queues in the path (which both 
technologies have tendencies to do) just makes things worse.
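
For a sense of scale, here is a back-of-envelope sketch; the numbers 
(a ~40 kbit/s GPRS-class link, 1500-byte packets) are illustrative 
assumptions, not measurements of any particular deployment:

    # Queueing delay added by a deep drop-tail buffer on a slow link.
    # Assumed numbers: 40 kbit/s link, 1500-byte packets.

    LINK_RATE_BPS = 40_000      # roughly GPRS-class (assumption)
    PACKET_BYTES = 1500

    def queue_delay_seconds(queued_packets):
        """Drain time for the last enqueued packet: backlog / rate."""
        return queued_packets * PACKET_BYTES * 8 / LINK_RATE_BPS

    for q in (10, 100, 1000):
        print(f"{q:>4} packets queued -> {queue_delay_seconds(q):6.1f} s "
              "of added delay")

Even 100 queued packets add half a minute of delay at that rate, which 
dwarfs any sensible TCP RTO estimate; the retransmissions that follow 
only refill the queue.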

Francesco Vacirca wrote:
> Once more, I don't agree with your comparison between TCP over GPRS and 
> 802.11:
>
> in the first scenario packet losses are due to causes independent of 
> TCP, whereas in the second case, packet losses are mainly due to 
> collisions, i.e. congestion... not retransmitting here (or delaying 
> retransmissions) can benefit other TCP flows.
>
> In the GPRS case (I'm thinking about a GPRS dedicated channel), 
> stopping the retransmission of a packet does not benefit TCP (I'm 
> suggesting using a large number for the maximum retry limit; that is 
> the best adaptive number you can guess).
> Adding fixed FEC adds a fixed overhead that decreases the bandwidth 
> seen by the mobile terminals (often without any good reason); adding 
> adaptive FEC can be a good solution, but if a packet is lost on the 
> channel (despite the FEC overhead), it still has to be retransmitted 
> by the link layer.
> Francesco
>
> David P. Reed wrote:
>> It's important to remember that TCP was never intended to be optimal 
>> in any scenario.   E.g. it wasn't supposed to be a protocol that met 
>> the "Shannon Limit" for the weird and wonderful "catenet channel" 
>> that is created when one tries to unify all networks on a "best 
>> efforts" basis.
>>
>> Of course, there will always be theorists who try to make silk purses 
>> out of decomposing sow's ears, and produce some wonderful algorithm 
>> that works based on some theoretical assumptions they can write down 
>> and convince a whole series of conferences (and DARPA) are the *one 
>> and only true way that networks work*.
>>
>> So I wouldn't spend a lot of time trying to "control the packet loss 
>> rate" in the GPRS channel to optimize TCP, as you seem to be 
>> suggesting here.
>>
>> A simple rule of thumb is that TCP likes a small bandwidth x 
>> end-to-end delay product, and it likes low error-caused packet loss 
>> rates.   Thus, retransmitting more than once on a link (efforts that 
>> are >100% coding overhead) is probably too much effort to deal with 
>> link errors.   It's just as reasonable to use FEC on the link, which 
>> has an overhead far less than 100%.
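
To put arithmetic behind that rule of thumb, here is a sketch assuming 
independent frame losses with probability p (a simplification, since 
real wireless losses are bursty) and a stop-and-wait-style link-layer 
ARQ:

    # Expected overhead of link-layer ARQ vs. fixed-rate FEC, assuming
    # independent frame losses with probability p (a simplification).

    def arq_overhead(p_loss, max_tries=8):
        """Expected extra transmissions per delivered frame, with the
        retry count capped at max_tries (truncated geometric)."""
        probs = [(1 - p_loss) * p_loss ** (k - 1)
                 for k in range(1, max_tries + 1)]
        p_success = sum(probs)
        e_tries = sum(k * pr for k, pr in enumerate(probs, 1)) / p_success
        return e_tries - 1.0

    def fec_overhead(code_rate):
        """Fixed redundancy of a rate-r code: (1 - r)/r extra per bit."""
        return (1.0 - code_rate) / code_rate

    for p in (0.1, 0.3, 0.5):
        print(f"frame loss p={p}: ARQ overhead ~{arq_overhead(p):.0%}, "
              f"rate-3/4 FEC overhead {fec_overhead(0.75):.0%}")

At p=0.5 the ARQ overhead is already close to 100%, while a rate-3/4 
code stays at a fixed 33%; that is the sense in which repeated link 
retransmission costs more than moderate FEC.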
>>
>> It's sad that 802.11 frequently retries up to 256 times, but that's 
>> not exactly comparable, since it isn't bit errors but arbitration 
>> errors that cause that.
>>
>> Srinivasan Seshan wrote:
>>> Actually, "how smart" is not too hard to estimate, assuming that we 
>>> are really just doing this for TCP.
>>>
>>> As far as packet loss rates are concerned, the target probably 
>>> shouldn't be something fixed like 10^-3. It really depends on the 
>>> link speed. What you probably want to do is ensure that the packet 
>>> corruption rate is an order of magnitude less than the drop rate due 
>>> to congestion. You can get the congestion drop rate by estimating 
>>> the average RTT for flows and the raw speed of the link. Apply some 
>>> TCP modeling magic and you should be able to pull out a loss rate. 
>>> Note that I assumed a single flow here, which is the worst case. 
>>> Multiple flows will raise the congestion loss rate. So, if you can 
>>> assume that you have more flows, you can accommodate higher 
>>> corruption loss rates.
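
One concrete way to do that "TCP modeling magic" is to invert the 
Mathis et al. single-flow throughput approximation, BW ~ (MSS/RTT) * 
sqrt(3/2) / sqrt(p), solving for the loss rate p at which one flow 
just fills the link.  A sketch, with illustrative (assumed) link 
speeds and RTTs:

    # Invert the Mathis et al. (1997) single-flow TCP throughput
    # approximation  BW ~ (MSS/RTT) * sqrt(3/2) / sqrt(p)  to get the
    # congestion loss rate p for a flow filling a link of speed BW.
    # Link speeds and RTTs below are assumptions, not measurements.

    def congestion_loss_rate(link_bps, rtt_s, mss_bytes=1460):
        """Loss rate at which one Mathis-model flow saturates link_bps."""
        mss_bits = mss_bytes * 8
        return 1.5 * (mss_bits / (rtt_s * link_bps)) ** 2

    for link_bps, rtt_s in ((40_000, 0.6), (2_000_000, 0.1)):
        p = congestion_loss_rate(link_bps, rtt_s)
        print(f"{link_bps / 1e3:5.0f} kbit/s, RTT {rtt_s * 1e3:3.0f} ms: "
              f"congestion p ~ {p:.1e}, corruption target ~ {p / 10:.1e}")

For the 2 Mbit/s case this lands near 5e-3, putting the corruption 
target around 5e-4, in the same ballpark as the 10^-3 figure discussed 
in this thread.  (The approximation only holds for small p; the huge 
value it returns for the slow GPRS-like link just says loss is not the 
binding constraint there.)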
>>>
>>>    Srini
>>>
>>> Detlef Bosau wrote:
>>>> David P. Reed wrote:
>>>>
>>>>> Once the folks who ran IP networks over frame relay realized that 
>>>>> you should never provision reliable delivery if you were running 
>>>>> IP, this stopped happening.
>>>>>
>>>>> So the story is that GPRS can, if it tries to provide QoS in the 
>>>>> form of never dropping a frame, screw up TCP.
>>>>>
>>>>> But this has nothing to do with mobility per se.   It has to do 
>>>>> with GPRS, just as the old problems had to do with Frame Relay, 
>>>>> not with high speed data.   The architecture of the GPRS network 
>>>>> is too smart.
>>>>
>>>>
>>>> How smart is "too smart"?
>>>> And how much smartness is necessary?
>>>>
>>>> Some authors note that the IP packet delivery time in mobile 
>>>> networks is in fact a random variable, because the information rate 
>>>> in wireless networks sometimes changes several times _within_ one 
>>>> packet. The reasons are manifold and as a computer scientist, I 
>>>> have only a rough understanding of some of the relevant issues here.
>>>>
>>>> To my understanding, the basic question is: Which packet corruption 
>>>> rate can be accepted by an IP network?
>>>> This is perhaps not a fixed number; there is some tolerance in it. 
>>>> However, I think we can agree that a packet corruption rate of less 
>>>> than or equal to 10^-3 does not really cause grief. On the other 
>>>> hand, when the rate of successful transmissions is less than or 
>>>> equal to 10^-3, the network is quite unlikely to be used.
>>>>
>>>> So the truth is perhaps not out there but somewhere in between ;-)
>>>>
>>>> Detlef


