[e2e] Satellite networks latency and data corruption

Detlef Bosau detlef.bosau at web.de
Sat Jul 16 09:52:02 PDT 2005


Christian Huitema wrote:
> 
> Well, last time I checked that was about 20 years ago, but conditions
> probably have not changed too much.
> 
> The error rate depends on the propagation conditions. In the band where
> most satellites operate (12/14 GHz), these conditions are affected
> mostly by the weather, and more precisely by the presence of
> hydrometeors. A large cumulonimbus between antenna and satellite can
> drop the link budget by 3 to 5 dB. A cumulonimbus can be a few
> kilometers wide, so a typical event can last a few minutes, depending
> on the size of the cloud and the wind.
> 
> The effect on the error rate depends on the engineering of the system.
> If the system is "simple" (no FEC), users may see a very low error rate
> when the sky is clear, and a rate 1000 times higher during a weather
> event. If the system uses FEC, the effect can be amplified, i.e. 
> practically no errors in clear sky, and a high error rate during the event.
> 
> -- Christian Huitema
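
Just to put rough numbers on the 3 to 5 dB figure for myself: the
following is only a textbook back-of-the-envelope sketch, assuming
uncoded BPSK and a clear-sky operating point of about 9.6 dB Eb/N0
(both are my assumptions, not anything from Christian's mail). A few
dB of fade then moves the bit error rate by two to three orders of
magnitude, which is the ballpark of the factor of 1000 mentioned for
the no-FEC case.

# Back-of-the-envelope: how a 3 to 5 dB rain fade scales the bit error
# rate of an uncoded BPSK link.  Purely illustrative; a real satellite
# link may use different modulation and coding.
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk(ebno_db):
    """Uncoded BPSK bit error rate for a given Eb/N0 in dB."""
    ebno_linear = 10 ** (ebno_db / 10.0)
    return q(math.sqrt(2 * ebno_linear))

clear_sky_db = 9.6      # assumed clear-sky operating point (~1e-5 BER)
for fade_db in (0, 3, 5):
    ber = ber_bpsk(clear_sky_db - fade_db)
    print(f"fade {fade_db} dB: BER ~ {ber:.1e}")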

Just another question. I'm trying to understand the paper "Dynamic 
Congestion Control to Improve Performance of TCP Split-Connections over 
Satellite Links" by Lijuan Wu, Fei Peng, Victor C.M. Leung, which 
appeared in Proc. ICCCN 2004, and I'm about to throw in the towel. I 
somehow feel it's related work for me, but after a couple of days I 
still do not see which problem is solved in this paper.

Therefore, I would like to continue this discussion a little and start 
with a blank sheet of paper.

What I would like to know:
-Which bandwidths can be achieved by satellite links?
-Which packet corruption rates? (I know you said this above, but I'm 
not an electrical engineer, so I have to translate this into my way of 
thinking; see the small sketch below. As I understand it, you say there 
are basically two states. Sane: packet corruption rates are negligible. 
Ill: packet corruption rates are...? Are there typical values? 90 %, 
9 %, 0.9 %, ...?)
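
Here is the small translation sketch I mean: a crude back-of-the-envelope
that turns a bit error rate into a packet corruption rate, assuming
independent bit errors and 1500-byte packets (both are my assumptions;
real fades are bursty, so this is only to get a feeling for the orders
of magnitude).

# Translate a bit error rate (BER) into a packet corruption rate,
# assuming independent bit errors -- a crude assumption, real fades
# are bursty.
def packet_error_rate(ber, packet_bytes=1500):
    bits = 8 * packet_bytes
    return 1 - (1 - ber) ** bits

for ber in (1e-8, 1e-6, 1e-4):
    per = packet_error_rate(ber)
    print(f"BER {ber:.0e} -> packet corruption rate {per * 100:.3f} %")

Under these assumptions a "sane" link at a BER of 1e-8 corrupts roughly
0.01 % of 1500-byte packets, while an "ill" link at 1e-4 corrupts
around 70 % of them.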


Referring to the aforementioned paper, a satellite link is "lossy".

Now, the authors propose a splitting/spoofing architecture:


SND----netw.cloud----P1---satellite link----P2-----netw.cloud----RCV

P1, P2: "Proxies".

Let's assume split-connection gateways (I-TCP, Bakre/Badrinath, 1994 or 
1995).

SND-P1 and P2-RCV run TCP; P1-P2 may use some other protocol which is 
appropriate for satellite links, e.g. STP.
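
For concreteness, the kind of splitting I have in mind at P1 looks
roughly like the toy sketch below (my own illustration, not code from
the paper; a plain TCP socket stands in for the satellite-side
protocol, and all names and addresses are made up):

# Toy split-connection relay at P1: terminate the sender-side TCP
# connection here and forward the byte stream over a separate
# connection toward P2.  The two connections have independent windows,
# ACK clocks and loss recovery; nothing ties their rates together
# except P1's buffer.
import socket

LISTEN_ADDR = ("0.0.0.0", 9000)        # where SND connects to P1 (made up)
P2_ADDR = ("p2.example.net", 9001)     # next hop toward P2 (made up)

def relay_one_connection():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen(1)
    snd_side, _ = srv.accept()                    # SND's connection ends here
    sat_side = socket.create_connection(P2_ADDR)  # independent connection to P2
    while True:
        data = snd_side.recv(4096)   # data is ACKed to SND once buffered at P1
        if not data:
            break
        sat_side.sendall(data)       # may block whenever the P1-P2 leg is slow
    sat_side.close()
    snd_side.close()

if __name__ == "__main__":
    relay_one_connection()

The only point of the sketch is that SND's segments are acknowledged at
P1, so whatever happens on P1-P2 is invisible to SND except through
P1's buffer.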

My first question is: where do I expect congestion in this scenario, 
particularly congestion which is not handled by existing congestion 
control mechanisms?

One of the authors told me that he expects congestion at P1, because 
several TCP flows may share the same uplink there.

Hm. Wouldn't it be correct to say that there is _no_ unhandled congestion?
TCP and STP both offer congestion handling, thus none of the three 
parts of the path suffers from unhandled congestion. Is this correct?

So, if we assume STP is working just fine (I don't know, I'm not 
familiar with STP): what is the problem in this scenario?

Personally, I see one, and it seems to be exactly one of the problems I 
want to overcome with PTE. The problem is that there is no 
synchronization of rates across the proxies. If the bottleneck were 
P1-P2, this would result in a _flow_ control problem between SND and P1:
P1 would experience frequent buffer shortages and would have to slow 
down SND.
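
A minimal sketch of what I mean, with made-up numbers (ingress rate,
satellite rate and buffer size are all assumptions of mine):

# Toy model of P1's buffer when the terrestrial leg delivers faster
# than the satellite leg drains.  One loop iteration = one second.
ingress_rate = 10.0   # Mbit/s arriving from SND          (assumed)
egress_rate  = 2.0    # Mbit/s the P1-P2 leg can carry    (assumed)
buffer_limit = 8.0    # Mbit of buffer space at P1        (assumed)

buffer_fill = 0.0
for t in range(10):
    accepting = buffer_fill < buffer_limit
    offered = ingress_rate if accepting else 0.0  # window closed -> SND stalls
    buffer_fill += offered - egress_rate
    buffer_fill = max(0.0, min(buffer_fill, buffer_limit))
    state = "accepting" if accepting else "window closed, SND stalled"
    print(f"t={t:2d}s  buffer={buffer_fill:4.1f} Mbit  {state}")

The stop-and-go pattern this produces is exactly the frequent buffer
shortage I mean: SND is throttled by P1's advertised window, not by any
congestion signal that reflects the P1-P2 rate.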

This problem essentially results from the fact that P1 breaks the TCP 
self-clocking semantics, at least if there is something like that on 
P1-P2. If the link P1-P2 were rate controlled, the self-clocking 
semantics would not be broken; self clocking simply would not exist on 
that leg.

Is this way of thinking correct?

Detlef Bosau




-- 
Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937


