[e2e] A simple scenario. (Basically the reason for the sliding window thread ;-))
detlef.bosau at web.de
Thu Jan 11 03:43:04 PST 2007
Joe Touch wrote:
>> Well, it's just how I understand the semantics of a "CLOSE ACK". When a
>> receiver issues a CLOSE ACK, we know that all data has reached the
>> receiving socket.
> We should know that. But when we have intermediaries spoofing ACKs, all
> we know is that the two endpoints agree that they have closed. The data
> itself is not known.
> Case in point - if the intermediary ACKs data and continues to buffer
> it, and the window wraps, and then the intermediary goes down, the
> endpoints think the data reached the buffer correctly but it really did not.
Of course. But, assuming we can overcome the window wrap problem, to my
understanding spoofing boxes must not spoof the CLOSE ACK, so that the
sender is not notified that all data has reached the final receiver
until this actually happens.
Of course, we don't know anything about intermediate states, and of course
we run into a problem if a spoofing box fails.
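The failure mode Joe describes can be sketched in a few lines (a toy model of my own construction, not anything from the thread): an intermediary that ACKs segments it has merely buffered, then fails before forwarding them.

```python
# Toy model of a spoofing PEP: it acknowledges data it has only buffered,
# so when it fails, the sender's and receiver's views of the stream diverge.

class SpoofingPEP:
    def __init__(self, receiver):
        self.receiver = receiver
        self.buffer = []
        self.alive = True

    def send(self, segment):
        if not self.alive:
            return None
        self.buffer.append(segment)   # buffered, NOT yet delivered
        return "ACK"                  # spoofed acknowledgement

    def drain_one(self):
        if self.alive and self.buffer:
            self.receiver.append(self.buffer.pop(0))

receiver = []
pep = SpoofingPEP(receiver)

acked = sum(1 for seg in ("s1", "s2", "s3") if pep.send(seg) == "ACK")
pep.drain_one()       # only one segment actually reaches the receiver
pep.alive = False     # intermediary fails; its buffered segments are lost

print(acked, len(receiver))   # sender believes 3 delivered; receiver has 1
```

The sender's count of acknowledged data and the receiver's actual data disagree, and neither endpoint can detect it from within TCP.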
>> What we do not know is whether the data has reached
>> the application.
> TCP is a reliable transport protocol; it is not a reliable application
> protocol. Actions outside of TCP are not ensured by TCP.
Then one could even make an end-to-end argument: when a transport
protocol cannot assure that sent data has been read successfully by the
receiving application, we need an acknowledgement scheme at the
application level anyway.
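Such an application-level acknowledgement could look like the following minimal sketch (my own construction, assumed framing): the receiving application replies "OK" only after it has actually consumed the message, which is the guarantee a TCP-level ACK, spoofed or not, cannot give.

```python
import socket
import threading

# Sketch of an application-level acknowledgement over a stream socket.
# A socketpair stands in for a TCP connection for the sake of the example.

def application_receiver(sock):
    sock.recv(4096)        # the application actually reads the data...
    sock.sendall(b"OK")    # ...and only then sends its own acknowledgement
    sock.close()

a, b = socket.socketpair()
t = threading.Thread(target=application_receiver, args=(b,))
t.start()

a.sendall(b"payload")
reply = a.recv(2)          # application-level ACK, not a transport ACK
t.join()
a.close()
print(reply)
```

Only this reply tells the sender the data crossed the socket boundary into the application.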
Please don't misunderstand me; I don't want to be careless about the
end-to-end principle. All I want to say is that there may be situations,
e.g. extremely large delay bandwidth products where one perhaps really
wants an alternative to AIMD probing in order to get acceptable startup
behaviour, where proxies / splitters / spoofing boxes should be
considered very seriously.
I don't remember the paper, but I think Sally Floyd once wrote about a
satellite connection where it takes 20 minutes or so for a flow to
achieve acceptable throughput due to an extremely large delay bandwidth
product. So, when we _have_ acknowledgements at the application level,
when we can reduce fate sharing problems to an acceptable level, and
when some proxy could significantly accelerate the start-up phase, I
think we should at least consider this as one way to go.
>> To my understanding that's one reason why we use
>> acknowledgements at the application level when it is necessary to know
>> whether an application has received all data.
> Agreed, but we do know some other things. As a *receiver*, when we issue
> a CLOSE, we keep reading until there is no more data. If we do so, AND
> we receive a "no more data", then we *know* all the data has been
> received correctly.
O.k., so we can detect an error: the sender sent a CLOSE and there is
trailing data afterwards. In that case (I don't know what the RFCs say
here) we can issue an error message, e.g. a RST. So, let's take the
sender's view then: how long shall a sender wait for a possible error
message like that? Doesn't this lead to the problem that a missing NAK
is not equivalent to an ACK?
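The receiver-driven close Joe describes above can be sketched concretely (a minimal sketch of my own, using a socketpair in place of a TCP connection): the receiver half-closes its sending direction, then keeps reading until it sees EOF, and only then knows the whole stream arrived.

```python
import socket
import threading

# Receiver-driven close: half-close our write side, then read to EOF.
# Seeing EOF ("no more data") after our CLOSE is what tells the
# receiver the entire stream was received.

def sender(sock):
    sock.sendall(b"all the data")
    sock.shutdown(socket.SHUT_WR)   # the "no more data" signal
    sock.close()

a, b = socket.socketpair()
t = threading.Thread(target=sender, args=(a,))
t.start()

b.shutdown(socket.SHUT_WR)          # receiver issues its CLOSE first
chunks = []
while True:
    data = b.recv(4096)
    if not data:                    # EOF: the sender's close arrived
        break
    chunks.append(data)
t.join()
b.close()
print(b"".join(chunks))
```

Note this knowledge lives entirely on the receiver's side, which is precisely the asymmetry behind the missing-NAK question above.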
> I.e., the semantics of who knows what are receiver-driven, not sender.
However, in a reliable connection the sender wants to know when all data
has been completely delivered.
>> So, to my understanding a PEP which keeps the semantics at the
>> connection level keeps all the semantics provided by TCP itself.
>> Acknowledgements at the application level are beyond the scope of TCP.
> See above; PEPs that spoof ACKs can result in different data streams
> being 'correctly' processed without either side knowing so.