[e2e] TCP in outer space

Eric Travis travis at gst.com
Tue Apr 10 07:03:27 PDT 2001

> Right; and there's one thing I never understood:
> Why isn't TCP translated into a rate-based scheme and vice versa across
> a satellite (or other wireless) link?
> Like this:
>   O ----------- O XXXXXXXXX O --------- O
> Sender         GS1         GS2       Receiver
> GS1, GS2 = Ground Stations 1 and 2
> XXXXXX = noisy link
> Sender, Receiver: regular TCP
> GS1: perform buffering, behave like TCP receiver, perform
> flow control (NO congestion control) with GS2
> GS2: act like TCP sender, tell GS1 how fast to send via explicit
> rate signaling (based on acks from Receiver)
> GS1: send acks based on min( congestion window, rate from GS2)
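
The min() rule in the last quoted step could be sketched as follows (a toy
model, all names made up; rates in bytes/second):

```python
def gs1_ack_rate(cwnd_bytes, rtt_seconds, gs2_rate_bytes_per_sec):
    """Rate at which GS1 releases acks toward the sender: bounded both by
    GS1's own congestion-derived rate (cwnd/RTT) and by the explicit rate
    GS2 signals back based on acks from the real receiver."""
    cwnd_rate = cwnd_bytes / rtt_seconds
    return min(cwnd_rate, gs2_rate_bytes_per_sec)
```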

Isn't the rate-based mechanism above simply link-layer ARQ?
The space community has been doing that for some time :o)

If the above is not link ARQ, it probably should be -  IPSEC is likely
to prevent the ground stations from doing anything clever based on the
content of the IP datagram... and the complications of properly handling
the buffering of the TCP segments so that you don't hurt yourself
(or someone else downstream) are likely to give you a headache.
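For concreteness, link-layer ARQ amounts to retransmitting each frame over
the noisy hop until it is acknowledged, hiding the loss from the layers
above. A toy stop-and-wait version (simulated loss, made-up names):

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=42):
    """Toy stop-and-wait link-layer ARQ over a lossy hop: each frame is
    retransmitted until its ack arrives.  Returns the delivered frames
    and the total number of transmissions the noisy link cost us."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() >= loss_prob:  # frame survived the noisy link
                delivered.append(frame)
                break                      # ack received, move to next frame
    return delivered, transmissions
```

The layer above sees perfect in-order delivery; the cost shows up only as
extra transmissions (and hence delay) on the lossy hop itself.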

However, if your application environment is willing to support it (it
certainly does if I correctly interpret the "outer space" in the original
message as communication to/from an entity onboard an *unmanned* spacecraft -
rather than through a communications satellite), you are likely to realize
"better" performance (link utilization over the space link, timeliness of
delivery, etc.) by splitting the transport connection at the ground station:

        Receiver ------------ Ground Station XXXXXXXXX Spacecraft
                   ^                                    (Sender)
                   |
                   +-- bottleneck B
The standard disclaimers certainly apply (fate sharing, loss of true TCP
between actual sender and receiver transport entities, IPSEC complications that
force the Ground Station to be a trusted entity, etc.).

Any real end-to-end reliability MUST be performed within application space
(an application-level ack of data received), so I don't consider this to be a
general-purpose, transparent solution.

However - if the above is applicable to your communications environment:

Once you've split the transport, you've split the control feedback path and
now you can tweak your transport protocol [*] so that its behavior is better
suited to the challenged subnet without worrying about deploying similarly
tweaked implementations at the terrestrial endpoints. The added bonus is
that you've isolated the terrestrial network from the possibility of any nasty
side effects of the tweaked TCP or other deployed transport entity.
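To make "splitting the transport" concrete, here is a minimal byte-level
relay sketch (made-up names; a real split would terminate TCP on the
terrestrial side and speak the tweaked transport over the space link, not
just copy bytes):

```python
import socket
import threading

def pipe(src, dst, bufsize=4096):
    """Copy bytes from src to dst until EOF, then close dst.  The ground
    station runs two of these per spliced connection, one per direction."""
    try:
        while True:
            data = src.recv(bufsize)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass  # peer closed underneath us; just stop relaying
    finally:
        try:
            dst.close()
        except OSError:
            pass

def splice(terrestrial_conn, space_conn):
    """Terminate each connection at the ground station and relay payload
    bytes between them.  Each side now has its own control feedback loop."""
    for src, dst in ((terrestrial_conn, space_conn),
                     (space_conn, terrestrial_conn)):
        threading.Thread(target=pipe, args=(src, dst), daemon=True).start()
```

Because each side is terminated independently, loss and delay on the space
link never feed back into the terrestrial sender's congestion control.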

It's a fairly special purpose hammer for a niche communications environment
(outer space) but we've experienced some fairly spectacular performance gains
this way. There is a pointer in the signature block to more info and code.

Not that any of this should be interpreted ...

Eric Travis
travis at gst.com

[*] Could be TCP, but once you're beyond lunar distances (perhaps even L1/L2)
      you are likely to want to abandon TCP for something with a looser feedback
      loop.
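
For a sense of scale (rough numbers, assuming the mean Earth-Moon distance
and a classic 64 KiB TCP window with no window scaling):

```python
C = 299_792_458          # speed of light, m/s
MOON_DISTANCE = 3.844e8  # mean Earth-Moon distance, m

rtt = 2 * MOON_DISTANCE / C   # ~2.56 s, ignoring every other source of delay
window = 64 * 1024            # classic TCP window without window scaling
throughput = window / rtt     # what a window-limited sender can achieve

print(f"RTT ~ {rtt:.2f} s, throughput ~ {throughput/1024:.0f} KiB/s")
# prints: RTT ~ 2.56 s, throughput ~ 25 KiB/s
```

A multi-second feedback loop also means every loss-recovery exchange costs
seconds, which is why a looser (less chatty) control loop starts to win.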

