[e2e] TCP losses

Sireen Habib Malik s.malik at tuhh.de
Mon Feb 28 01:54:09 PST 2005


Hi,

Catching up a couple of days late on the discussion... but not missing 
the snow, -7C, and bright sunshine all packed into 48 hours (Hamburg is 
one of the most beautiful towns on this planet).

1. 10% losses... enough said on that. However, a relevant query:

How long does TCP take to decide that its connection has gone to the 
dogs and that the application must do something about it? RFC 1122 
(Section 4.2.3.5) talks about "at least" 100 seconds. Is this applied 
in practice?
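
For what it's worth, on a Linux stack the knob that plays the role of 
RFC 1122's R2 is net.ipv4.tcp_retries2, and its default of 15 
retransmissions works out to roughly 15 minutes rather than 100 
seconds. A back-of-the-envelope sketch of that arithmetic (my own 
approximation using the kernel's documented default constants, not 
kernel code):

    # Approximate how long Linux keeps retransmitting unacknowledged data
    # before aborting the connection. The exact kernel computation differs
    # slightly, so treat the result as an estimate.
    RTO_INITIAL  = 0.2    # TCP_RTO_MIN in seconds (assumes a small measured RTT)
    RTO_MAX      = 120.0  # TCP_RTO_MAX in seconds
    TCP_RETRIES2 = 15     # default net.ipv4.tcp_retries2

    def abort_timeout(retries=TCP_RETRIES2, rto=RTO_INITIAL, cap=RTO_MAX):
        """Sum the exponentially backed-off RTOs until the connection is dropped."""
        return sum(min(rto * 2 ** i, cap) for i in range(retries + 1))

    print(f"~{abort_timeout():.1f} s before abort")  # ~924.6 s, about 15 minutes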

2. TCP seems to have no major problems standing on its own. Very simple 
theory captures the strength of the protocol.
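
One candidate for that simple theory is the macroscopic model of 
Mathis et al. (CCR 1997): throughput is bounded by roughly 
(MSS/RTT) * C / sqrt(p), with C ~ sqrt(3/2). A quick sketch, with 
illustrative numbers of my own choosing (1460-byte MSS, 100 ms RTT):

    from math import sqrt

    def mathis_bound(mss_bytes, rtt_s, p, c=sqrt(1.5)):
        """Mathis et al. upper bound on TCP throughput, in bits per second."""
        return (mss_bytes * 8 / rtt_s) * c / sqrt(p)

    for p in (0.0001, 0.01, 0.10):  # compare with the 10% figure in point 1
        print(f"p = {p}: ~{mathis_bound(1460, 0.100, p) / 1e6:.2f} Mbit/s")

At 10% loss the bound collapses to under half a Mbit/s regardless of 
link speed, which is one way to see why enough has indeed been said on 
point 1.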

Having said that, it is also clear that considering the protocol 
without considering the underlying layers is not correct. Treating 
"congestion" alone as the cause of losses is not sufficient: as the 
protocol zips through the layers underneath it, it sees all kinds of 
failures, and those failures cause TCP "serious" performance problems.

Obviously, one way to correct those failures is at their respective 
layers. The other is to make TCP responsible, over its 2*E2E loop, for 
both data transport and fault recovery.

At this point, I am interested in knowing what breaks TCP outside its 
own congestion-related problems. What are those failures? How 
frequently do they occur? Any idea of their duration?

I would also appreciate it if you could point out whether the problem 
is dealt with at the TCP layer or elsewhere. For example, I can think 
of link-layer retransmissions in UMTS, which relieve TCP of some of 
its 2*E2E non-congestion-related fault recovery. The trade-off is 
added delay in exchange for fewer losses.
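
To put rough numbers on that trade-off (the frame-loss rate and 
per-hop round-trip time below are assumptions for illustration, not 
UMTS measurements): link-layer ARQ with up to n transmission attempts 
turns a raw frame-loss rate q into a residual rate q**n, at the cost 
of extra latency on the retried frames.

    def residual_loss(q, n):
        """Probability a frame still fails after n total transmission attempts."""
        return q ** n

    def mean_extra_delay(q, n, link_rtt):
        """Average added latency from retries, assuming independent frame losses."""
        return sum(q ** k for k in range(1, n)) * link_rtt

    q, link_rtt = 0.10, 0.040  # assumed: 10% frame loss, 40 ms per link round trip
    for n in (1, 2, 3, 4):
        print(f"n={n}: residual loss {residual_loss(q, n):.2%}, "
              f"+{mean_extra_delay(q, n, link_rtt) * 1e3:.1f} ms average delay")

A retry or two shrinks the loss TCP sees by orders of magnitude while 
adding only a few milliseconds on average, though the latency tail 
(and what it does to TCP's RTO estimator) is another matter.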

It would be nicer if errors relevant to future large-bandwidth-delay-
product IP-over-DWDM networks were given priority.
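
For scale (the rate and RTT are illustrative assumptions on my part): 
a 10 Gbit/s DWDM path with a 100 ms RTT holds roughly 125 MB in 
flight, i.e. tens of thousands of segments:

    def bdp_bytes(rate_bps, rtt_s):
        """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
        return rate_bps * rtt_s / 8

    bdp = bdp_bytes(10e9, 0.100)  # assumed: 10 Gbit/s, 100 ms RTT
    print(f"BDP = {bdp / 1e6:.0f} MB, ~{bdp / 1460:.0f} segments of 1460 B")

so a single non-congestion loss halves a window of tens of thousands 
of segments, and standard additive increase then needs thousands of 
RTTs to grow it back.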

Any data, papers, guesses, etc. will be appreciated.

Thank you and regards,
SM