[e2e] end2end-interest Digest, Vol 108, Issue 24

Srinivasan Keshav keshav at uwaterloo.ca
Thu Apr 18 12:27:53 PDT 2013

Some clarifications and comments on this thread:

1. I wrote the REAL simulator (by heavily modifying the NEST simulator from Columbia) [1] in 1988 as a summer intern at Xerox PARC, working for Scott Shenker. REAL's many failings served to inspire Steve McCanne to write ns around 1991 [2]. I released several versions of REAL into the public domain; the source code is probably still floating around somewhere.

2. REAL stands for 'realistic and large,' my design goals. I guess these goals still hold true for more recent simulators. I think we all recognize that 'to simulate' means 'to lie': every simulator is only a model of reality, so it necessarily excludes some aspects of it. A simulator that excludes aspects of reality that are relevant to the system being studied will generate incorrect results. REAL did not model wireless links, so any results I obtained from REAL (for example, for packet-pair) do not necessarily hold for wireless links.

For the specific case of today's wireless links, i.e., CSMA/CA and cellular links, it's trivial to observe that packet-pair won't work: you can't send two packets back-to-back over any real wireless link that uses either CSMA/CA or RRM (Radio Resource Management in cellular networks).
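To make the failure mode concrete, here is a minimal sketch of the packet-pair idea (function name and numbers are illustrative, not from the thesis): the bottleneck link serializes the second probe packet right behind the first, so the receiver-side gap between them equals one packet's transmission time on the bottleneck, which yields the bottleneck bandwidth. A CSMA/CA or RRM MAC breaks the key assumption by inserting channel-access delays between the two packets, so the measured gap no longer reflects bottleneck serialization.

```python
def packet_pair_estimate(packet_size_bytes, recv_gap_seconds):
    """Estimate bottleneck bandwidth (bits/s) from the receiver-side
    spacing of two back-to-back probe packets.

    Assumes the packets stay back-to-back through the bottleneck, so
    the inter-arrival gap equals the bottleneck's transmission time
    for one packet: bandwidth = packet_size / gap. A wireless MAC
    (CSMA/CA contention, cellular RRM scheduling) adds access delay
    between the packets, inflating the gap and corrupting the estimate.
    """
    if recv_gap_seconds <= 0:
        raise ValueError("gap must be positive")
    return packet_size_bytes * 8 / recv_gap_seconds


# Illustrative numbers: 1500-byte packets arriving 1.2 ms apart
# imply a bottleneck of roughly 10 Mbit/s (1500 * 8 / 0.0012).
```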

3. I do not believe in proof by simulation: having done many simulations myself, I am well aware of their shortcomings. I hold with Hamming, who famously said: "The purpose of computing is insight, not numbers."(*) (with s/computing/simulation/)

4. My thesis 'validated' packet pair using simulations because no networks of FQ schedulers existed at that time and I didn't feel like building one of my own. I don't think this is good validation, but couldn't think of any other way to test my ideas. 

5. There are two reasons why people think packet-pair style congestion control did not (and will not) work in the real world. One is a myth, one is a reality.

First, the myth: it advocates per-flow WFQ, which does not scale well because it requires per-flow state in routers. Later work by Stoica et al. [3] showed how to remove the need for per-flow state in the Internet core, and large-scale TCAMs now permit substantial per-flow state. So, in practice, per-flow WFQ is quite feasible today (and is supported by Cisco, for example [4]).
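For readers unfamiliar with where the per-flow state comes from, here is a toy sketch of WFQ's virtual-finish-time bookkeeping (an illustration of the textbook mechanism, not any router's implementation; real routers approximate the virtual clock). The `last_finish` dictionary is exactly the per-flow state the scalability objection refers to:

```python
import heapq

class WFQSketch:
    """Toy weighted fair queueing: each packet gets a virtual finish
    time start + length/weight; packets are served in finish-time order,
    so a flow with twice the weight gets twice the service rate."""

    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = {}   # per-flow state that WFQ must keep
        self.queue = []         # heap of (finish_time, seq, flow, length)
        self.seq = 0            # tie-breaker for equal finish times

    def enqueue(self, flow, length, weight):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + length / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, length = heapq.heappop(self.queue)
        self.virtual_time = finish  # coarse virtual-clock advance
        return flow, length
```

With two equal-length packets queued at once, the flow with weight 2 is served before the flow with weight 1, since its virtual finish time is half as large. Core-stateless fair queueing [3] gets a similar effect while pushing the per-flow bookkeeping to the edge.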

Second, the reality: it requires all routers on the path to support WFQ. This seems straightforward at first glance, but hides a big problem: how is an endpoint to pay for choosing a large(r) scheduling weight? To whom does it pay? And how does this payment get distributed amongst the entities along the path? These fundamental issues are part of the reason why, outside private networks, Internet QoS exists only on paper (see [5] for some other very good reasons).

6. In contrast, an end-system-based control that responds quickly to packet reorderings and drops, and makes no assumptions about scheduling disciplines, is legacy-compatible; hence the success of VJCC and its successors.
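The core of that end-system approach can be sketched in a few lines: additive-increase/multiplicative-decrease (AIMD), which underlies VJCC-style congestion control. This is a minimal illustration with conventional parameter names (alpha, beta), not Jacobson's actual code; note that it consults only end-to-end loss signals and assumes nothing about router scheduling.

```python
def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    """One congestion-avoidance round of AIMD.

    cwnd is the congestion window in segments. On a loss signal
    (drop or reordering-triggered duplicate ACKs), cut the window
    multiplicatively; otherwise grow it additively, roughly one
    segment per RTT. No cooperation from routers is required.
    """
    if loss_detected:
        return max(1.0, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha                # additive increase
```

Because the control loop needs nothing from the network beyond delivering (or dropping) packets, it deploys incrementally over any path, which is the legacy-compatibility point above.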


* Hamming, R. W. Preface to Numerical Methods for Scientists and Engineers. 1962.

[1] Dupuy, Alexander, et al. "NEST: A network simulation and prototyping testbed." Communications of the ACM 33.10 (1990): 63-74.
[2] Bajaj, Sandeep, Lee Breslau, Deborah Estrin, Kevin Fall, Sally Floyd, Padma Haldar, Mark Handley et al. "Improving simulation for network research." USC TR 99-702, 1999.
[3] Stoica, Ion, Scott Shenker, and Hui Zhang. "Core-stateless fair queueing: Achieving approximately fair bandwidth allocations in high speed networks." Proceedings of ACM SIGCOMM '98, ACM SIGCOMM Computer Communication Review 28.4 (1998).
[5] Crowcroft, Jon, et al. "QoS's Downfall: At the bottom, or not at all!." Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS: What have we learned, why do we care?. ACM, 2003.

> I wasn't clear (as usual)
> keshav wrote a simulator (somewhat before NS[123] exists I suppose
> since it pre-dates them and even Tcl) called
> REAL, which I suppose he used for his PhD - this is all online here:
> http://blizzard.cs.uwaterloo.ca/keshav/wiki/index.php/Paper-chron#1988
> The REAL simulator is rather nice...
> I guess it was the basis later for the startup he had called Ensim
> (maybe he'll notice this and comment here)
> I think the point i was trying to convey was that 
> packet pair (and packet train) and fair queuing (and other more active
> queue management algorithms) have seen evaluation by 
> many people using many tools (including measurement as well as
> simulation -- and also even analytical models (adversarial queuing, 
> network calculi etc)
> and (once you get away from fifo, fluid approx stuff works, so lots of 
> things get easier theoretically)
> but yes, something only ever done in ns2 is hardly convincing.
> there are other simulators, however, and some have detailed models
> of the physical layer - a nice trick to consider is using hybrid systems such as Matlab to generate a box
> of code which models CSMA/CD or what have you, including a detailed scattering/fading model
> and then plug that in to a discrete event simulator such as NS2 - or you could use opnet, which has detailed models
> of quite a lot of low level systems ....
> the world is full of wondrous things
> j.

More information about the end2end-interest mailing list