[e2e] Some thoughts on WLAN etc., was: Re: RES: Why Buffering?
lachlan.andrew at gmail.com
Sat Jul 4 14:33:43 PDT 2009
2009/7/5 Detlef Bosau <detlef.bosau at web.de>:
> Lachlan Andrew wrote:
>> "a period of time over which all packets are lost, which extends for
>> more than a few average RTTs plus a few hundred milliseconds".
> I totally agree with you here in the area of fixed networks; actually, we use
> hello packets and the like in protocols like OSPF. But what about outliers
> in the RTT on wireless networks, like my 80 ms example?
That is why I said "plus a few hundred milliseconds". You're right
that outliers are common in wireless, which is why protocols to run
over wireless need to be able to handle such things.
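The working definition above can be put in code. This is a minimal sketch of my own, not from the thread; the RTT multiplier and the 300 ms pad are illustrative assumptions standing in for "a few":

```python
# Sketch: flag a loss episode as a "disconnection" when all packets are
# lost for more than a few average RTTs plus a few hundred milliseconds.
# rtt_multiplier and pad_s are illustrative assumptions, not from the thread.

def is_disconnection(loss_start: float, loss_end: float,
                     avg_rtt: float,
                     rtt_multiplier: float = 4.0,
                     pad_s: float = 0.3) -> bool:
    """All times in seconds. True if the all-loss period exceeds
    a few average RTTs plus a few hundred milliseconds."""
    return (loss_end - loss_start) > rtt_multiplier * avg_rtt + pad_s

# An 80 ms outlier on a 20 ms-RTT wireless path is not a disconnection:
print(is_disconnection(0.0, 0.08, avg_rtt=0.02))   # False
# A 2 s blackout is:
print(is_disconnection(0.0, 2.0, avg_rtt=0.02))    # True
```

The pad term is what absorbs wireless outliers like the 80 ms example, without the threshold scaling with them.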
> Was there a "short time disconnection" then?
> Certainly not, because the system was busy delivering the packet all the time.
From the higher layer's point of view, it doesn't matter much whether
the underlying system was working hard or not... If the outlier were
more extreme, then I'd happily call it a short term disconnection, and
say that the higher layers need to be able to handle it.
> So the problem is not a "short time disconnection", the problem is that
> timeouts don't work.
Timeouts are part of the problem. Another problem is reestablishing
the ACK clock after the disconnection.
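To make the ACK-clock point concrete: this is a rough sketch of mine (using the standard RFC 5681 defaults, which the thread does not spell out) of what recovery looks like after a retransmission timeout, when TCP must rebuild its ACK clock from a single segment:

```python
# Sketch of post-RTO recovery per RFC 5681 defaults (my assumption, not
# from the thread): after a timeout, cwnd collapses to 1 segment and TCP
# slow-starts back, because there are no returning ACKs to clock it.

def cwnd_after_rto(cwnd_before: int, rtts_elapsed: int) -> int:
    ssthresh = max(cwnd_before // 2, 2)   # RFC 5681: ssthresh = FlightSize/2
    cwnd = 1                              # restart with one segment, no ACK clock
    for _ in range(rtts_elapsed):
        if cwnd < ssthresh:
            cwnd *= 2                     # slow start: double each RTT
        else:
            cwnd += 1                     # congestion avoidance: +1 MSS per RTT
    return cwnd

# From cwnd = 64 segments, regaining the old rate takes several RTTs:
print([cwnd_after_rto(64, r) for r in range(6)])  # [1, 2, 4, 8, 16, 32]
```

So even after the timeout itself is over, the connection pays several more RTTs just to re-establish the clocking the disconnection destroyed.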
> Actually, e.g. in TCP, we don't deal with "short time disconnections"
There may not be an explicit mechanism to deal with them. I think
that the earlier comment that they are more important than random
losses is saying that we *should* perhaps deal with them (somehow), or
at least include them in our models.
> So, the basic strategy of "upper layers" to deal with short time
> disconnections, or latencies more than average, is simply not to deal with
> them - but to ignore them.
> What about a path change? Do we talk about a "short time disconnection" in
> TCP, when a link on the path fails and the flow is redirected then? We
> typically don't worry.
Those delays are typically short enough that TCP handles them OK. If
we were looking at deploying TCP in an environment with common slow
redirections, then we should certainly check that it handles those
short time disconnections.
> To me, the problem is not the existence - or non-existence - of short time
> disconnections at all, but the question of why we should _explicitly_ deal
> with a phenomenon that no one worries about?
The protocol needn't necessarily deal with them explicitly, but we
should explicitly make sure that it handles them OK.
> Isn't it sufficient to describe the corruption probability?
No, because that ignores the temporal correlation. You say that the
Gilbert-Elliott model isn't good enough, but an IID model is orders of
magnitude worse.
Lachlan Andrew
Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
Ph +61 3 9214 4837