[e2e] [Fwd: RED-->ECN]

Cannara cannara at attglobal.net
Thu Feb 1 12:12:40 PST 2001


Good, Christian.  One observation of real TCPs in real corporate networks
comes to mind here -- two user-application phases:  a) the user sets up some
operations on a remote host, using system commands, scripts, etc., ending in
an FTP request; b) the FTP request results in a multi-MB transfer.  This
happens many times every day in design firms.

Suppose the router feeding the bottleneck link, say a T1, has simple FIFO
queuing.  Users in phase a) suffer huge delays, simply because even one other
user already in phase b) has a host dumping a many-packet window into the LAN
side of the router -- those packets are queued for output ahead of almost
every single-packet command a phase a) user emits.  Since the data packets are
typically full sized, they take about 8 ms each to enter a T1.  Suppose the
round-trip delay is 100 ms; then a 12-packet TCP transmit window from the FTP
host will double the delay each phase a) user command experiences.  If those
commands are part of a lengthy script, for instance, the actual user will see
his 20-second command sequence bump to 40 seconds, and so on.  If phase b) is
occurring in both directions on the link, the users in phase a) have their
normal delays at least quadrupled.
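
For anyone who wants to check the arithmetic, here is a minimal
back-of-the-envelope sketch in Python.  The 1500-byte packet size, the
1.544 Mbit/s T1 rate, the 100 ms round trip and the 12-packet window are
just the assumptions from the paragraph above, not measurements:

    T1_BPS = 1544000          # T1 line rate, bits per second
    PACKET_BYTES = 1500       # assumed full-sized data packet
    BASE_RTT_MS = 100         # assumed base round-trip delay
    WINDOW_PACKETS = 12       # assumed FTP transmit window

    serialization_ms = PACKET_BYTES * 8 * 1000.0 / T1_BPS    # ~7.8 ms/packet
    queue_delay_ms = WINDOW_PACKETS * serialization_ms       # ~93 ms of backlog

    print("per-packet serialization:       %.1f ms" % serialization_ms)
    print("queueing delay behind 1 window:  %.0f ms" % queue_delay_ms)
    print("phase a) command delay: %d ms -> ~%.0f ms"
          % (BASE_RTT_MS, BASE_RTT_MS + queue_delay_ms))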

Obviously, the networks-for-dummies solution of buying another T1 might help a
bit, but the best thing is to give the small-packet, tick-tock commands
priority -- weighted queuing, round-robin queues distinguished by packet size,
and so on.  But how many network admins actually understand this and how to
fix it?  We all know how many, or should.
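
To make the fix concrete, here is a minimal sketch, in Python, of the
size-based priority idea above: packets at or below a threshold go to a
high-priority queue that is drained ahead of the bulk-data queue.  The
256-byte threshold and the strict-priority service are illustrative
assumptions, not a recipe for any particular router:

    from collections import deque

    SMALL_PACKET_BYTES = 256    # assumed cutoff between commands and bulk data

    small_q = deque()           # single-packet commands, served first
    bulk_q = deque()            # full-sized FTP data packets

    def enqueue(packet):
        if len(packet) <= SMALL_PACKET_BYTES:
            small_q.append(packet)
        else:
            bulk_q.append(packet)

    def next_packet():
        # Strict priority for the small queue; a weighted or round-robin
        # scheduler would alternate between the two queues instead.
        if small_q:
            return small_q.popleft()
        if bulk_q:
            return bulk_q.popleft()
        return None

A real router would of course do this in its output scheduler rather than in
host code, but the classification step is the same.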

Any network (taken as a system) that can't easily and automatically figure
these simple things out is not a true system, certainly not a feedback-control
system of any sort.  We know TCP is a poor example of feedback control,
because it often exerts the wrong 'control', so something more intelligent
needs to be done in the network itself.

Take one more example, happening every second of every TCP day -- the odd-
packet timer problem.  A corporate net with connections around the world
experiences round-trip delays on the order of 1/4 second.  A set of database
lookups for common applications (Oracle, PeopleSoft, etc.) usually takes one,
two or three packets of data in either direction.  Running on defaulted TCP
incurs a penalty -- when many of the operations take an odd number of packets
to complete, the receiving TCP, defaulted these many years to Ack alternate
packets, will not Ack the 1st, 3rd, etc. packet right away.  Instead, when no
even companion packet arrives, it goes into Ack timeout, typically defaulted
to 200 ms.  So, if the round-trip delay is 200 ms, the overall application
responsiveness is cut roughly in half.  Does the receiving TCP show any
"systems-control" intelligence, using the consistent input from the sender
that packets come in odd numbers?  None that I've seen.  Unless the admin is
smart and realizes that common TCPs (Microsoft, Sun...) and their writers
don't have a clue about many real-life networking issues, he/she will leave
the defaulted Ack timer in place, perhaps not even knowing it exists, and go
off trying to buy higher-bitrate links, only to discover little if any effect.
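
Again, the arithmetic as a minimal Python sketch.  The 200 ms round trip and
the 200 ms delayed-Ack timer are the assumed defaults from the paragraph
above; the point is simply that the final odd packet of a reply can sit out
the whole timer before the transaction completes:

    RTT_MS = 200            # assumed WAN round-trip delay
    DELAYED_ACK_MS = 200    # common default delayed-Ack timeout

    ideal_ms = RTT_MS                       # request/response with prompt Acks
    stalled_ms = RTT_MS + DELAYED_ACK_MS    # last odd packet waits out the timer

    print("ideal transaction time:  %d ms" % ideal_ms)
    print("with delayed-Ack stall:  %d ms" % stalled_ms)
    print("responsiveness ratio:    %.2f" % (float(ideal_ms) / stalled_ms))

The ratio comes out near 0.5 -- the "roughly in half" above.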

The above are two everyday examples that can be addressed directly by paying
attention to how the network and aged TCP can be improved to actually enter
the millennium respectably.  Of course, we consultants love this myopic,
bureaucratic Internet stuff.  {:o]

Alex


Christian Huitema wrote:
> 
> Jon,
> 
> Yes, we could indeed decide that penalizing long sessions is a good
> thing. But, guess what, the guys writing the download applications are
> no dummies. If they observe that
>         loop until EOF
>                 open connection
>                 go to current file location
>                 get an additional 5 megabytes, or the rest of the
>                 file if less
> .... then gets better performance than just "open a connection and get
> the file," guess what they will do? Indeed, you could call that an
> intelligence test -- smart elephants morph into mice, the other ones go
> the way of the dinosaurs. But then, why are we bothering writing complex
> requirements for TCP?
> 
> -- Christian Huitema
> 
> -----Original Message-----
> From: Jon Crowcroft [mailto:J.Crowcroft at cs.ucl.ac.uk]
> Sent: Thursday, February 01, 2001 9:33 AM
> To: Christian Huitema
> Cc: end2end-interest
> Subject: Re: [e2e] [Fwd: RED-->ECN]
> 
> In message <CC2E64D4B3BAB646A87B5A3AE97090420EFADA10 at speak.dogfood>,
> Christian
> Huitema typed:
> 
>  >>I believe that the only way to solve this problem is to change the CA
>  >>algorithm. The only way elephants can keep ahead of mice is if their
>  >>window scales as O(1/L), instead of O(1/sqrt(L)).
> 
>  Christian
> 
> all jolly technically true and all that, but what about the intrinsic
> value of content - tcp is used for downloads - most short downloads
> are part of web sessions with a sequence of interactions, whereas most
> long downloads are fetching new (microsoft?:-) releases - why shouldn't
> the poor interactive user get a better deal... what about the shortest-
> job-first scheduling argument for overall average number of jobs
> completed in a workload mix? eh?
> 
> j.


