The 1/e myth, was Re: [e2e] TCP Local Area Normal behaviour? any

Cannara cannara at attglobal.net
Sat Jan 22 12:07:11 PST 2005


The 37% solution was always bogus -- part of the Token-Ring crowd's (IBM's)
attempt to market against Enet, despite far higher costs.
There's a DEC WRL research report by Boggs et al. from '88 ("Measured Capacity
of an Ethernet: Myths and Reality") demonstrating, via experiments on real
networks, that any Enet segment with 20 or so nodes on it can easily get over
90% throughput.  I may have it as a file, if someone wants it.
There's also a graphic I've used with networking students that helps a great
deal in seeing how CSMA/CD is superior to any token-passing system, which I
may also have as a file.  The gist: in a token system, the time to complete a
given pkt send starts out above zero and grows in proportion to the number of
connected nodes, while in CSMA/CD it starts at 0 on a lightly-loaded LAN and
is not affected by connected nodes at all, only by those also wanting to
transmit.  The latter leads to collisions and backoff, of course, but the
delay-to-send curves start with CSMA/CD below token, and CSMA/CD crosses the
token curve only at simultaneous-access rates well above 30% for 128B pkts
(the toy simulation below illustrates the shape).  Since most LANs run well
below 1/3 of the nodes colliding at the same time, this crossover, like 1/e,
is a silly measure of relative merit.
And the stake in token's heart is not just cost per node: the crossover moves
to higher access rates as nodes are added, meaning that CSMA/CD becomes
better, relatively, as LAN segments have more nodes.
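
For anyone who wants to play with the shape of those curves, here's a toy
simulation -- the per-hop token latency and the slotted contention model are
illustrative assumptions of mine, not numbers from the Boggs report:

    import random

    SLOT = 51.2e-6     # 10 Mb/s Ethernet contention slot (512 bit times), seconds
    TOKEN_HOP = 5e-6   # assumed per-station token-pass latency -- illustrative only

    def token_wait(attached):
        # Mean wait for the token: half a rotation on average, and the
        # rotation time grows linearly with every attached station,
        # whether or not it has anything to send.
        return attached * TOKEN_HOP / 2

    def csma_cd_wait(contending, trials=20000):
        # Mean wait to win the wire in a crude slotted contention model:
        # each contender draws a slot from its backoff window, ties are
        # collisions, and colliders double their windows (truncated BEB).
        # Only stations that actually want to transmit matter here.
        total = 0.0
        for _ in range(trials):
            windows = [1] * contending
            t = 0
            while True:
                picks = [random.randrange(w) for w in windows]
                first = min(picks)
                colliders = [i for i, p in enumerate(picks) if p == first]
                if len(colliders) == 1:     # unique winner sends
                    t += first
                    break
                t += first + 1              # collision burns a slot
                for i in colliders:
                    windows[i] = min(windows[i] * 2, 1024)
            total += t * SLOT
        return total / trials

    for n in (1, 2, 5, 10, 20):
        print("%3d nodes: token %6.1f us   csma/cd %6.1f us"
              % (n, token_wait(n) * 1e6, csma_cd_wait(n) * 1e6))

With one sender on an idle LAN the CSMA/CD wait is 0 while token still pays
half a rotation; crank up the number of simultaneous contenders and the
CSMA/CD curve rises until it crosses token -- exactly the crossover above.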

Of course, $ is always the bottom line.  When I walked into a PacBell billing
center to consult on a performance problem about 8 years ago and saw their
IBM 9745s had Ethernet interfaces, it was obvious where token was going.
Some of us have also thought of ATM as the Token Ring of the 2nd millennium,
but it died even faster, in the corporate net.  :]
Alex

Jonathan Stone wrote:
> 
> In message <20050121193447.910DA21A at aland.bbn.com>,
> Craig Partridge writes:
> 
> >In message <Pine.LNX.4.58.0501211335290.11281 at tesla.psc.edu>, Matt Mathis writes:
> >
> >>But the more pragmatic solution (adopted here at PSC and many other places) is
> >>to declare half duplex Ethernet to be broken, and eradicate it wherever
> >>possible.  Where not possible, tell people that the maximum theoretical
> >>utilization is 1/e (~37%), and they should be pleased if they get any better
> >>than that, because they are operating beyond the designed operating point for
> >>the media.
> >
> >That 1/e is not consistent with Boggs & Mogul's work from SIGCOMM 1988.
> >Van Jacobson also reported results inconsistent with 1/e.  Indeed, I'd
> >thought 1/e had generally been discredited as a mistaken result from
> >inaccurate models.
> 
> If (very) dim memory serves, 1/e is valid for slotted Aloha.
> 
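
(For completeness: the 1/e does fall straight out of the slotted-Aloha model.
With aggregate offered load G attempts per slot, Poisson-distributed, a slot
carries a success only when exactly one attempt lands in it, so

    S(G) = G\,e^{-G}, \qquad
    \frac{dS}{dG} = (1 - G)\,e^{-G} = 0 \;\Rightarrow\; G = 1, \qquad
    S_{\max} = 1/e \approx 0.368

-- which is where the ~37% comes from, and it models neither carrier sense
nor collision detect.)
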
> But Ethernet -- even half-duplex Ethernet -- is not Aloha.  Indeed, I
> believe different Ethernet chips in fast enough workstations (33 MHz
> r3000a or thereabouts) would repeatably give different saturation
> throughputs for ttcp on a two-host half-duplex Ethernet, due to small
> differences in collision-detect and binary-exponential-backoff (BEB)
> hardware implementations yielding slightly more idle time on the wire,
> or something like that.
> 
> I think Lance versus SEEQ is the pair I once noted in my lab book.  I
> can ask some more determined practitioners of the time, if you care
> for more details.
> 
> >Boggs & Mogul used multiple TCP's.  Van used a single one.
> >
> >So what did they do such that the capture effect didn't happen?
> 
> I seem to recall they used comparatively slow CPUs (DECWRL Titan) and
> an Ethernet chip that required a software intervention after each
> packet send. According to the tech report cited below, the driver
> interaction took about 100 usec, or about 2 10Mbit contention-slot
> times. (I have not checked the arithmetic.)
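
(The arithmetic does check out: a 10 Mbit contention slot is 512 bit times, so

    512 \times 0.1\,\mu\mathrm{s} = 51.2\,\mu\mathrm{s}, \qquad
    100\,\mu\mathrm{s} \,/\, 51.2\,\mu\mathrm{s} \approx 1.95 \approx 2~\text{slots.})
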
> 
> >And does capture really yield 1/e or something different?
> 
> See ``A New Binary Logarithmic Arbitration Method for Ethernet'', Mart
> L. Molle, Tech Report CSRI-298, Computer Systems Research Institute,
> University of Toronto, 1994.  Molle examined the results of Boggs and
> Mogul, and proposed a better, fairer backoff algorithm, BLAM, which (in
> Molle's words) gave a logarithmic rather than linear estimator of the
> offered workload, yielding a less non-work-preserving discipline than
> binary exponential backoff.  BLAM never went anywhere, since
> half-duplex shared Ethernet became obsolescent about that time.
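
For reference, the backoff rule Molle was critiquing is the standard 802.3
truncated binary exponential backoff -- a sketch (the function name is mine):

    import random

    def beb_backoff_slots(attempts):
        # IEEE 802.3 truncated binary exponential backoff: after the
        # nth collision on a frame, wait a uniform random number of
        # 512-bit slot times in [0, 2**min(n, 10) - 1]; the frame is
        # dropped after 16 failed attempts.
        if attempts > 16:
            raise RuntimeError("excessive collisions: frame dropped")
        k = min(attempts, 10)
        return random.randrange(2 ** k)

The capture effect Craig asked about falls right out of this rule: a station
that just transmitted successfully resets to the smallest window, while the
stations that lost keep their inflated ones, so the recent winner tends to
keep winning the wire.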


