[e2e] [tcpm] RTTM + timestamps

Detlef Bosau detlef.bosau at web.de
Sat Jul 30 13:45:47 PDT 2011


First of all: Do we discuss this matter on the e2e list? Or the tcpm list?

Frankly speaking, I felt somewhat hurt this afternoon.



On 07/30/2011 10:03 PM, Scheffenegger, Richard wrote:
>>
>> The timer ambiguity was originally solved by Karn & Partridge. To my
>> understanding, this ambiguity does not even occur when timestamps are
>> used. Do you agree?
>
> Timer (RTTM?) ambiguity != retransmission ambiguity.
>
> The semantics of RFC1323 explicitly make it impossible to
> disambiguate a received retransmission from a delayed
> original transmission, whenever there is still a preceding
> hole in the received sequence space.
>
> But these semantics were designed, to always result in
> conservative RTT estimates, even when dealing with
> delayed ACKs and ACK loss, without requiring any
> other signal.(*)
>

What is the meaning of "conservative" here? Does it mean "if in doubt, 
the RTT estimate should rather be too large than too small"? Or the 
other way round? To my understanding, the former is meant. Is this 
correct, or am I wrong here?
> Furthermore, the disambiguation (which will only work for
> the first hole under RFC1323) will only work when the
> timestamp value has changed between the original
> transmission and the retransmission.

Hang on. Are we talking about timers or timestamps here?

IIRC, the timestamp RFC 1323 is talking about is simply reflected by the 
receiver. So it does not matter which packet the timestamp belongs to. 
Effectively, you measure the round-trip time of the timestamp itself.
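
To make sure we mean the same thing, here is how I read the RFC 1323 
RTTM rules, as a minimal Python sketch. The names (ts_recent, 
last_ack_sent, now_ticks) are mine, purely for illustration, and not 
taken from any particular stack.

    class TimestampEcho:
        """Receiver side of RFC 1323 RTTM: remember which TSval to echo."""

        def __init__(self):
            self.ts_recent = 0       # TS.Recent: value to echo in TSecr
            self.last_ack_sent = 0   # Last.ACK.sent: left edge we have ACKed

        def on_segment(self, seg_seq, seg_tsval):
            # Only a segment that covers the left edge of the window may
            # update TS.Recent.  With delayed ACKs and holes, this keeps the
            # sender's RTT samples on the conservative (too large) side.
            if seg_seq <= self.last_ack_sent:
                self.ts_recent = seg_tsval

        def tsecr_for_ack(self):
            return self.ts_recent

    def rtt_sample(now_ticks, tsecr):
        # Sender side: the sample is simply "my timestamp clock now, minus
        # the echoed value", regardless of which data segment carried it.
        return now_ticks - tsecr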

> Thus the timestamp
> clock tick delay must be shorter than the RTT.

Not quite. I have read a few discussions along these lines during the 
last few days. When it comes to TCP, these discussions miss the point.
In the very case of TCP, and we should make clear what we are talking 
about, we want to estimate a confidence interval for the RTT.
Hence, when the sender's clock granularity is 1 second, the RTT 
estimator yields 1 second and the variance estimator yields 0, the RTO 
would be 1 or 2 seconds (G or 2 G; refer to RFC 2988). So even when the 
actual RTT is only a vanishingly small fraction of a second, the 
resulting RTO is at least sufficiently large.
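
To spell out the arithmetic behind that claim, a small Python sketch of 
the RFC 2988 formula; the numbers are the ones from the paragraph above, 
everything else is assumed purely for illustration:

    G = 1.0        # sender's clock granularity in seconds
    SRTT = 1.0     # what a 1-second clock reports, even for a tiny actual RTT
    RTTVAR = 0.0   # variance estimator has converged to zero
    K = 4          # constant from RFC 2988

    # RFC 2988: RTO = SRTT + max(G, K * RTTVAR)
    RTO = SRTT + max(G, K * RTTVAR)
    print(RTO)     # 2.0 seconds -- comfortably above a sub-microsecond RTT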

It is an immediate consequence of the well-known robustness principle 
that we must not exclude systems from the Internet for the sole reason 
that they cannot provide a timer resolution finer than, say, 0.1 
seconds.

Anyone here will agree that a timer resolution should be "reasonably 
small". However, from my point of view, 0.1 seconds _is_ reasonably 
small. Should we discourage the use of TCP/IP on a single Ethernet 
segment for that reason?
> With common timestamp clock granularities of
> 10..100 ms, it's easy to see that this may work for
> intercontinental paths, but not for continental, regional or
> local paths (not to mention data center / LAN environments).

Please excuse my ignorance, Richard. Perhaps I'm a bit stupid and have a 
certain lack of experience. However, what I see _HERE_, at this very 
moment, is: TCP/IP works in my living room, with nodes less than 5 
meters apart and connected by an ordinary 802.11 WLAN.

It is, however, a completely different story whether we are talking 
about a "general purpose, one size fits all" network or about a special 
purpose network designed, e.g., for the mesh in a multiprocessor 
machine. Such an interconnect may not even resemble an Ethernet; it may 
look more like a synchronous data bus.

On this particular mailing list, we talk about internetworking. So we 
should agree on the assumptions that can be made for nodes in an 
internetwork.
These may differ from the assumptions that can be made for 
interprocessor communication on a blade.

Do you agree here?

> Thus there are two issues:
> o) The typical timestamp granularity is too coarse for a
> high fraction of sessions;

Is this so? And should we exclude all nodes from the Internet which 
cannot provide a "sufficiently fine timestamp granularity"?

> o) Secondly, the current timestamp semantics do not allow
> discrimination of non-contiguous received segments between
> original vs. retransmitted segment, because of a deliberate
> loss of accuracy to "solve" the RTTM issue during loss
> episodes.

Is this mainly a matter of accuracy?

Or do we have other issues as well?

Detlef

-- 

------------------------------------------------------------------
Detlef Bosau
Galileistraße 30	
70565 Stuttgart                            Tel.:   +49 711 5208031
                                            mobile: +49 172 6819937
                                            skype:     detlef.bosau
                                            ICQ:          566129673
detlef.bosau at web.de                     http://www.detlef-bosau.de
------------------------------------------------------------------



