[e2e] Satellite networks latency and data corruption

Christian Huitema huitema at windows.microsoft.com
Sat Jul 16 15:27:05 PDT 2005


> What I would like to know:
> -Which bandwidths can be achieved by satellite links?

It depends. Back in the 1980s, the satellite that we were using featured simple transponders, receiving on one frequency, shifting the signal to another, and amplifying it for retransmission. The final stage was a 20 W vacuum tube, and the bandwidth was 35 MHz. The signal was sent on a single beam that covered Western Europe. The ground stations used 3 meter antennas. Using a simple modulation, the transponder's capacity was 35 Mbps.
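
As a back-of-the-envelope check (my own illustration, not a figure from the original design), transponder capacity is roughly bandwidth times spectral efficiency, and a simple modulation such as BPSK yields on the order of 1 bit/s per Hz:

    # Rough transponder capacity: bandwidth x spectral efficiency.
    # Assumed values: 35 MHz bandwidth, ~1 bit/s/Hz for a simple
    # modulation (e.g. uncoded BPSK).
    bandwidth_hz = 35e6
    spectral_efficiency = 1.0    # bit/s per Hz, an assumption
    capacity_bps = bandwidth_hz * spectral_efficiency
    print("approx. capacity: %.0f Mbps" % (capacity_bps / 1e6))
    # -> approx. capacity: 35 Mbps, consistent with the figure above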

All that can change from system to system. Power on board satellites is still limited, but better electronics on board the satellites and in ground stations can reduce the transmitter and receiver noise -- although I am not sure whether solid state electronics are actually less noisy than vacuum tubes. Some satellites use directed beams, and so can concentrate more power towards the expected receiver. With modern DSP, you can certainly implement much more sophisticated reception algorithms. On the other hand, modern systems tend to use smaller antennas, which are much more practical than a 3 meter diameter dish, but which are also less directive and provide less gain.
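
To put the antenna-size trade-off in numbers, here is a small sketch (my own illustration; the 12 GHz frequency and the 60% aperture efficiency are assumptions, not figures from any particular system) using the standard parabolic-dish gain formula G = eta * (pi * D / lambda)^2:

    import math

    def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
        """Approximate gain of a parabolic dish, in dBi.
        efficiency=0.6 is an assumed aperture efficiency."""
        wavelength_m = 3e8 / freq_hz
        gain = efficiency * (math.pi * diameter_m / wavelength_m) ** 2
        return 10 * math.log10(gain)

    # Assumed Ku-band downlink at 12 GHz.
    print("3 m dish:   %.1f dBi" % dish_gain_dbi(3.0, 12e9))
    print("0.6 m dish: %.1f dBi" % dish_gain_dbi(0.6, 12e9))
    # The small dish gives about 14 dB less gain, which the rest of
    # the link budget (satellite power, coding) has to make up.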

We can certainly carry several hundred Mbps through a satellite, and I would not be surprised if some systems were able to carry several Gbps. But the reality is that there is no such thing as a generic satellite link. Different systems will have different characteristics.

> -Which packet corruption rates? (I know, you said this above, but I'm
> not an electrical engineer, so I have to translate this into my way of
> thinking.) As I understand it, you say there are basically two states.
> Sane: packet corruption errors are negligible. Ill: packet corruption
> rates are...? Are there typical values? 90%, 9%, 0.9%...?

Again, the values depend on the specific characteristics of the system. The designers will aim for some reasonable point of operation, typically expressed as "a bit error rate of less than X for a fraction Y of the time". The fraction Y will depend on the expected usage -- the classic two, three, five nines. This will determine the worst weather condition under which the system can operate, i.e. conditions that occur only 1%, 0.1%, 0.001% of the time. Then, the designers will pick a desired service level. If they expect to serve voice, they will aim for a bit error rate compatible with the voice compression codec -- typically 10^-5. If they aim for data, they will pick a lower rate -- typically 10^-6 or 10^-7. In some cases, they will propose two levels of circuits, using FEC to upgrade from "voice" to "data" quality.
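
To translate those bit error rates into the packet corruption rates you asked about, a quick sketch (my own, assuming independent bit errors and 1500-byte packets, both of which are simplifications -- satellite errors are often bursty):

    # Packet error rate from bit error rate, assuming independent
    # bit errors and 1500-byte (12000-bit) packets.
    packet_bits = 1500 * 8

    for ber in (1e-5, 1e-6, 1e-7):
        per = 1 - (1 - ber) ** packet_bits
        print("BER %.0e -> packet corruption %.2f%%" % (ber, per * 100))

    # BER 1e-05 -> packet corruption 11.31%
    # BER 1e-06 -> packet corruption 1.19%
    # BER 1e-07 -> packet corruption 0.12%

With the "data" quality targets above, the corruption rate is on the order of a percent or less; the 10^-5 voice target corresponds to roughly 10% of 1500-byte packets.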

Then, there is the difference between the design spec and the actual implementation. The design spec typically allocates a budget to each of the elements in the chain -- stations, antenna, transponders. The engineers in charge of each component strive to match the allocation, and in many cases do a little bit better than expected. These little gains accumulate, and in the end you may well find that the actual bit error rate is much lower than the initial spec.
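
To see how quickly those little gains add up, a toy example (entirely my own; the per-component margins and the use of uncoded BPSK are assumptions) using the textbook BPSK error-rate formula:

    import math

    def bpsk_ber(ebno_db):
        """Theoretical bit error rate of uncoded BPSK at a given Eb/N0 (dB)."""
        ebno = 10 ** (ebno_db / 10.0)
        return 0.5 * math.erfc(math.sqrt(ebno))

    spec_ebno_db = 9.6                 # roughly the Eb/N0 for BER 1e-5
    margins_db = [0.5, 0.7, 0.3, 0.5]  # assumed per-component gains

    print("spec BER:   %.1e" % bpsk_ber(spec_ebno_db))
    print("actual BER: %.1e" % bpsk_ber(spec_ebno_db + sum(margins_db)))
    # Two dB of accumulated margin improves the bit error rate by
    # more than two orders of magnitude.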

Bottom line, you need to actually measure the system for which you are designing the protocol. There is no "one size fits all" answer to your question.

> Now, the authors propose a splitting/spoofing architecture:
> 
> SND----netw.cloud----P1---satellite link----P2-----netw.cloud----RCV
> 
> P1, P2: "Proxies".

In theory, there is no particular performance benefit to the proxy architecture. If the transmission protocol uses selective retransmission and large windows, the difference in performance between SND-RCV and P1-P2 is truly minimal. In practice, there is only an advantage if the end-to-end implementations are not well tuned, e.g. do not allow for large windows.
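
The "large windows" point is just the bandwidth-delay product. A quick sketch (my own numbers; the 550 ms geostationary round-trip time and the 100 Mbps link rate are assumptions for illustration):

    # Window needed to keep a satellite path full: bandwidth x RTT.
    rtt_s = 0.550            # assumed GEO round-trip time
    bandwidth_bps = 100e6    # assumed link rate

    window_bytes = bandwidth_bps * rtt_s / 8
    print("required window: %.1f MB" % (window_bytes / 1e6))
    # -> required window: 6.9 MB. An end host limited to the classic
    #    64 KB TCP window will badly underuse the link, which is the
    #    only real problem the proxies "fix".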

-- Christian Huitema

