[e2e] TCP in outer space

Craig Partridge craig at aland.bbn.com
Tue Apr 10 06:36:28 PDT 2001

In message <A17BDB85B175D311804E00E07D02A21D276303 at CONAN>, Michael Welzl writes

>Right; and there's one thing I never understood:
>Why isn't TCP translated into a rate-based scheme and vice versa across
>a satellite (or other wireless) link?
>Like this:
>  O ----------- O XXXXXXXXX O --------- O
>Sender         GS1         GS2       Receiver
>GS1, GS2 = Ground Stations 1 and 2
>XXXXXX = noisy link
>Sender, Receiver: regular TCP
>GS1: perform buffering, behave like TCP receiver, perform
>flow control (NO congestion control) with GS2
>GS2: act like TCP sender, tell GS1 how fast to send via explicit
>rate signaling (based on acks from Receiver)
>GS1: send acks based on min( congestion window, rate from GS2)
>Of course, it's not going to be easy - but there should be
>some way to bridge the noisy link, right? Or am I being naive?

Well, it is a bit tricky -- what happens (as it inevitably does) when
the two control loops get out of phase, so that GS1 is acking data
faster than the GS1->GS2 link can absorb it (say the error rate
spikes)?  Buffering gets exciting.
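To see how exciting, here is a toy back-of-the-envelope simulation of the mismatch: GS1 keeps pulling data from the Sender at a fixed ACK rate while an error spike halves the GS1->GS2 link's effective rate. All the numbers are illustrative, not from the post.

```python
def buffer_growth(ack_rate: float, link_rate: float, seconds: int) -> list:
    """Per-second occupancy of GS1's buffer (bytes).

    ack_rate  -- bytes/sec GS1 accepts from the Sender (by acking it)
    link_rate -- bytes/sec the GS1->GS2 link can actually carry
    """
    buf = 0.0
    history = []
    for _ in range(seconds):
        # Data arrives at ack_rate, drains at link_rate; buffer can't go negative.
        buf = max(0.0, buf + ack_rate - link_rate)
        history.append(buf)
    return history

matched = buffer_growth(ack_rate=1_000_000, link_rate=1_000_000, seconds=10)
spike   = buffer_growth(ack_rate=1_000_000, link_rate=500_000,  seconds=10)
# matched[-1] == 0.0; spike[-1] == 5_000_000.0 -- five seconds' worth of
# link capacity queued at GS1 after only ten seconds out of phase.
```

The buffer grows linearly for as long as the two loops stay out of phase, which is why GS1's memory (and the data it has already acked but not delivered) is the crux of the problem.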

Other question: what do you do if GS1 crashes?  You've acked data
to the Sender, so it thinks the data has been received...  (And don't
suggest delaying the ACK for the FIN -- if the GS1->GS2 delay is much
longer than Sender->GS1, the FIN may time out before the round trip
can take place :-)).


More information about the end2end-interest mailing list