[e2e] [Fwd: RED-->ECN]

Christian Huitema huitema at exchange.microsoft.com
Thu Feb 1 09:52:54 PST 2001


Jon,

Yes, we could indeed decide that penalizing long sessions is a good
thing. But, guess what, the guys writing the download applications are
no dummies. If they observe that 
	loop until EOF
		open connection
		seek to the current file location
		get an additional 5 megabytes, or the rest of the file if less
... gets them better performance than just "open a connection and get
the file," guess what they will do? Indeed, you could call that an
intelligence test -- smart elephants morph into mice, the other ones go
the way of the dinosaurs. But then, why are we bothering writing complex
requirements for TCP?
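The loop sketched above can be made concrete. A minimal sketch (the 5 MB chunk size comes from the message; the function name is hypothetical) that computes the byte ranges such a downloader would fetch, one fresh connection per range — e.g. as HTTP Range requests — so that one long "elephant" transfer looks to the network like a series of short "mice":

```python
def chunk_ranges(file_size, chunk=5 * 2**20):
    """Yield (start, end) byte ranges covering a file in fixed-size pieces,
    mimicking the 'get 5 megabytes per connection' loop above."""
    start = 0
    while start < file_size:
        end = min(start + chunk, file_size) - 1  # inclusive, Range-header style
        yield (start, end)
        start = end + 1
```

For a 12 MB file this yields three ranges (two full 5 MB chunks and a final 2 MB remainder), each of which would be fetched over its own short-lived connection.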

-- Christian Huitema 

-----Original Message-----
From: Jon Crowcroft [mailto:J.Crowcroft at cs.ucl.ac.uk] 
Sent: Thursday, February 01, 2001 9:33 AM
To: Christian Huitema
Cc: end2end-interest
Subject: Re: [e2e] [Fwd: RED-->ECN]



In message <CC2E64D4B3BAB646A87B5A3AE97090420EFADA10 at speak.dogfood>,
Christian Huitema typed:

 >>I believe that the only way to solve this problem is to change the CA
 >>algorithm. The only way elephants can keep ahead of mice is if their
 >>window scales as O(1/L), instead of O(1/sqrt(L)).
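
 The quoted scaling claim refers to the standard steady-state TCP model,
 in which the congestion window varies as 1/sqrt(loss rate). A small
 numeric sketch (the loss rates are illustrative values, not from the
 thread) comparing the two scalings:

```python
import math

# Illustrative loss rates (assumed values, not from the message)
for p in (1e-2, 1e-4):
    std = 1 / math.sqrt(p)   # standard TCP: window ~ O(1/sqrt(L))
    prop = 1 / p             # proposed:     window ~ O(1/L)
    print(f"loss={p:g}  1/sqrt(L): {std:g}  1/L: {prop:g}")
```

 At a loss rate of 1e-4 the 1/L scaling gives a window two orders of
 magnitude larger than 1/sqrt(L), which is what would let long flows
 keep ahead of short ones.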
 
 Christian 

all jolly technically true and all that, but what about the intrinsic
value of content - tcp is used for downloads - most short downloads
are part of web sessions with a sequence of interactions, whereas most
long downloads are fetching new (microsoft?:-) releases - why shouldn't
the poor interactive user get a better deal... what about the shortest
job first scheduling argument for the overall average number of jobs
completed in a workload mix? eh?
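
The shortest-job-first point is the classic scheduling result: serving
short jobs first minimizes mean completion time. A toy sketch (the job
sizes are illustrative, not from the message):

```python
def avg_completion(jobs):
    """Mean completion time when jobs (service times) run
    back-to-back in the given order."""
    t = total = 0
    for j in jobs:
        t += j        # this job finishes at time t
        total += t
    return total / len(jobs)

mice_first = avg_completion([1, 1, 1, 50])      # shortest jobs first: 14.75
elephant_first = avg_completion([50, 1, 1, 1])  # long transfer first: 51.5
```

Letting the three short "mice" finish before the 50-unit "elephant" cuts
the average completion time from 51.5 to 14.75, even though the total
work served is identical.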

j.


