[e2e] end2end-interest Digest, Vol 19, Issue 11

Yogesh Prem Swami yogesh.swami at nokia.com
Thu Sep 15 14:35:19 PDT 2005


I tend to agree with your and Prof. Reed's comments. My question was
not so much about usefulness as about realism (though I myself muddled
that question in the last paragraph of the e-mail :-)).

Whenever we run simulations, we tend to ignore parameters or events
that we believe will be insignificant to the end result. For example,
when running TCP performance tests, we ignore how the ARP cache works.
We *assume* that these "small perturbations" in the system will produce
only small perturbations in the final outcome. That may be true at a
small scale (whatever "small scale" means). As the scale increases,
however, it may no longer hold that small perturbations in the inputs
lead to only small changes in the outcome (I'm speculating, not
suggesting).
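
To make this concrete, here is a toy sketch (it has nothing to do with
networking; the logistic map and the 1e-12 offset are just illustrative
choices) of how a tiny perturbation in the input can grow until it
dominates the outcome:

    # Toy illustration, not a network simulation: iterate the logistic
    # map, a textbook example of sensitive dependence on initial
    # conditions, from two starting points that differ by only 1e-12.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.3, 0.3 + 1e-12      # the "small perturbation"
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print("step %2d: |a - b| = %.3e" % (step, abs(a - b)))
    # The gap roughly doubles each step: by around step 40 the two
    # trajectories differ by O(1), and the tiny input perturbation
    # has come to dominate the outcome.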

In the worst case, it might turn out that simulations require infinite
precision to simulate the system realistically. So even if we build
highly sophisticated simulators, it might not be possible, on our
finite-precision machines, to get truly meaningful results. This also
means that people who use simulators must know (or guess) which
external parameters to include in their simulations and what the scale
of their simulations should be. Unfortunately, I have found that
guessing these external parameters is often non-trivial.
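
For instance, floating-point addition is not even associative, so the
mere order in which a simulator accumulates quantities can change the
answer (a two-line Python illustration, nothing simulator-specific):

    # Same three numbers, different grouping, different result:
    print((1e20 - 1e20) + 1.0)   # 1.0
    print((1e20 + 1.0) - 1e20)   # 0.0 -- the 1.0 was absorbed and lost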

All this might seem like unnecessary jargon to people who just want to
write some papers using NS-2, so thanks for bearing with me.

Thanks
Yogesh

ext Kathleen Nichols wrote:
> I'm feeling a curmudgeonly rant coming on.
> 
> David Reed gives some excellent perspective on the use of simulation. In
> my experience, it's the "perspective" that seems to be out of order in
> much of the research use of simulation. From the start, I've cast a
> suspicious eye at the results of my own simulations, not to mention
> others'. The most egregious problem seems to be claiming internet-wide
> performance improvements based on very limited simulations. (Jon
> Crowcroft gave me the chance to rant on this at the Decides BoF at
> the Oslo IETF in 1999 in the diffserv context.) Making sweeping changes
> in widely deployed internet protocols on the basis of simple simulation,
> or even simple lab experiments, just seems like a bad idea.
> 
> On the other hand, simulation can be excellent for understanding the
> dynamics of smaller scale interactions. Years ago while working at
> the Bell Labs That Was, I was working on a visual simulator and
> modeled a colleague's distributed computer network. The visuals
> beautifully showed a deadlock. I told him not to worry, my simulator
> was still being developed. 3 weeks later he told me that they'd
> found the deadlock in the lab. Okay, sometimes simulation works.
> When working at my very first start up, I simulated TCP interactions
> with a cable data system and found some problems in the protocol.
> This seems another nice use of simulation - to explore inter-layer
> interactions before everything gets built and deployed.
> I went through all the channels to explain how this needed to be fixed.
> The engineer writing the code decided not to implement the change.
> When a problem showed up in the beta roll-out, it was obvious the
> change hadn't been implemented. Over the years, I've used simulations
> to learn various cool things about how things interact and get
> some insight. It never occurred to me to recommend new protocols
> for the internet on the basis of a simulation though.
> 
> The first time I used ns (before ns-2), I was persuaded to do so by
> Van Jacobson who told me that ns's event sorting was very efficient
> and that the TCP code was very good. I found out that events were
> stored in a linked list (I think the statute of limitations on my
> complaining about this is about up, though) and the TCP implementation
> did not have the three-way handshake, nor did it shut down connections
> properly. On the other hand, it took me less time to fix this stuff and
> get a useful set of simulations than the amount of time I'd put into
> trying to model MAC-layer/TCP interactions in OPNET. When I picked up
> ns-2 the first time, I found the "full TCP" code was full of bugs and
> the OTcl/C++ interaction made it much harder to code your own models.
> So, I think I'm supporting Detlef and those who say it's important to
> read through the models and to know what you are modeling and to know
> what the goals of your simulation are. I'm sort of looking for something
> without the baggage of ns-2 these days. Has anyone used OMNeT++?
> 
> Perspective is a good thing to have in life. Especially when it comes
> to the use of simulation. I'm not sure we can expect the average
> grad student to have such perspective, but it seems like it might
> reasonably be expected of those who advise graduate students.
> 
> 	respectfully,
> 		Kathie Nichols
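
P.S. On the linked-list event queue Kathie mentions: the practical
point is that inserting into a time-sorted linked list costs O(n) per
event, while the binary-heap or calendar-queue schedulers used in later
simulators cost roughly O(log n) or better. A minimal sketch of the
heap version (Python's heapq, shown purely for illustration; it assumes
nothing about ns internals):

    import heapq

    # A minimal discrete-event loop: the heap always yields the
    # earliest pending event, at O(log n) per push/pop.
    events = []
    heapq.heappush(events, (1.5, "retransmit timeout"))
    heapq.heappush(events, (0.2, "packet arrival"))
    heapq.heappush(events, (0.9, "ack received"))
    while events:
        t, ev = heapq.heappop(events)
        print("t=%.1f: %s" % (t, ev))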