[e2e] Is sanity in NS2?

Rik Wade rik at rikwade.com
Wed Sep 14 15:25:47 PDT 2005


On Thu, September 15, 2005 02:59, Detlef Bosau wrote:

> The best, and basically the only _real_, validation is to compare
> simulation results with reality.

And we all know how accurate and RFC-compliant real-world protocol
implementations are. I've worked in a few large ISP/telcos and cannot
begin to describe the pain caused by broken (whether intentionally
misbehaving, or just buggy) protocol implementations at all layers from 2
to 7. Sometimes these bugs are the result of "performance tweaks" by the
vendor; other times they are simply the result of poor software engineering.

So the question is: should a simulation try to simulate "reality"
(whatever that is), or "standards" (at least we know what those are)?

Simulating the standards and then comparing the results with reality may
be an interesting exercise in certain contexts, but we shouldn't kid
ourselves that it's a reflection of real Internet performance. Simulating
the standards in order to assess the performance of NewProtocolX or
TCPRenoTweakY, however, is fine, and the core NS developers have put in a
lot of work to enable self-validation that supports precisely this type of
simulation.
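
For anyone who hasn't tried it, that validation suite is easy to drive
from a script. A minimal sketch, assuming an ns-2 source tree with the
bundled top-level ./validate script present (the wrapper itself is purely
illustrative):

    #!/usr/bin/env python3
    # Illustrative wrapper: run ns-2's bundled self-validation suite
    # from the top of an ns-2 source tree and report the outcome.
    import subprocess
    import sys

    result = subprocess.run(["./validate"], capture_output=True, text=True)
    sys.stdout.write(result.stdout)
    if result.returncode != 0:
        sys.stderr.write("ns-2 validation FAILED\n")
        sys.exit(1)
    print("ns-2 validation passed")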

My simulation work began with Keshav's REAL and migrated to NS, with a bit
of X-Kernel thrown in for good measure. This cross-checking provided some
confidence that the algorithms I had implemented in REAL gave very similar
results when executed in NS (I didn't get that far with X-Kernel,
unfortunately). Without venturing down the path of detailed mathematical
modelling, I felt confident in saying that the combination of simulator
validation and results from multiple environments provided "acceptably
accurate" results. Having run several hundred simulations of certain models
and protocol implementations in each environment, I also knew that the
results were repeatable.
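
To make "acceptably accurate" and "repeatable" a little more concrete, the
kind of post-processing I have in mind looks roughly like this (the file
names, the one-value-per-line format and the 5% tolerance are purely
illustrative, not what I actually used):

    # Illustrative comparison of per-run throughput from two environments.
    # Assumes one throughput value (Mb/s) per line in each file.
    from statistics import mean, stdev

    def load(path):
        with open(path) as f:
            return [float(line) for line in f if line.strip()]

    real_runs = load("real_throughput.txt")  # e.g. runs from REAL
    ns_runs = load("ns_throughput.txt")      # e.g. runs from ns

    for name, runs in (("REAL", real_runs), ("ns", ns_runs)):
        print("%s: mean %.3f Mb/s, stdev %.3f" % (name, mean(runs), stdev(runs)))

    # Cross-environment agreement: relative difference of the means.
    rel_diff = abs(mean(real_runs) - mean(ns_runs)) / mean(real_runs)
    print("relative difference: %.1f%%" % (100 * rel_diff))
    print("acceptably close" if rel_diff < 0.05 else "environments disagree")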

There is, however, a conflict between simulating reality and simulating
basic algorithms and models. Once an implementation has been made and
tested in a simulated environment, the next step is to attempt to test its
performance with "real traffic" and something approximating a "live
environment" (which is why the idea of X-Kernel is nice). Having created a
clinical, deterministic environment, we then want to add elements of chaos
and uncertainty in order to really test how our model performs when the
unexpected occurs. To my mind, this is where the value of simulation
decreases and a real kernel implementation should be considered.

Is the current approach of simulation in these environments really the
best way of engineering protocols for an Internet where standards
compliance is often not in the host's interest?

On the subject of NS support, is there a project/community Wiki? A Wiki
would enable people to find and contribute FAQs quite easily, as well as
their own code, along with links to the relevant papers, raw data, etc. If
one doesn't exist and the NS team don't have time, I'm happy to set one up
under an appropriate domain name.
--
rik


