[e2e] Is sanity in NS2?

Detlef Bosau detlef.bosau at web.de
Wed Sep 21 10:54:59 PDT 2005

Roland Bless wrote:
> Detlef Bosau wrote:
> > Let's look at what you have now: You have an OMNET++ simulator with a
> > BSD stack.
> > And hopefully, you get a paper from that, good luck!
> >
> > However, you replaced one simulation with another. Did you by chance
> > read my reply to George's post at the end of last week?
> It was rather lengthy, but I surely agree on the fact that
> you must decide up to which degree of abstraction you want to
> model your problem. If you don't care about TCP
> dynamics/behavior/performance etc, then you probably don't need
> to simulate TCP below your protocol.

That's not my point. I'm "thinking" on L2 most of my time, so TCP is not
below me, it's above me :-)

Let me give you just one example. TCP is sensitive to delay spikes.
And there are quite a few papers that claim delay spikes are frequent.
Some others say: delay spikes are rare. So: Do delay spikes occur? Do
spurious timeouts occur? Honestly: I don't know.

I can do simulations - and I'm afraid I would fool myself. I'm eager to
know: Do spurious timeouts happen? And how often do they happen?
And I want to know both about reality.
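To make that concrete, here is a small sketch (plain Python, all figures
assumed, not ns2 code) of the RFC 2988 retransmission timer: feed it a
steady RTT and then a sudden delay spike, and the spike outlives the RTO -
exactly the spurious timeout in question.

```python
# Hedged sketch: the RFC 2988 RTO estimator fed a steady RTT, then a
# delay spike. All numbers are assumed for illustration.
ALPHA, BETA = 1 / 8, 1 / 4  # RFC 2988 gains

def update(srtt, rttvar, sample):
    """Fold one RTT measurement into the smoothed estimator."""
    if srtt is None:               # first measurement initializes state
        return sample, sample / 2
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    return srtt, rttvar

srtt, rttvar = None, None
for _ in range(20):                # 20 steady samples of 200 ms
    srtt, rttvar = update(srtt, rttvar, 0.2)

rto = max(srtt + 4 * rttvar, 1.0)  # RFC 2988 lower bound of 1 s
spike = 3.0                        # assumed 3 s delay spike on the link
print(spike > rto)                 # the spike outlives the RTO
```

Whether such 3 s spikes actually occur on real wireless links is, of
course, precisely the open question.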

> The simulator is indeed only a tool to run the model, but as in
> reality: a good, high quality tool makes life easier.

Absolutely. All I wanted to say is: There are plenty of simulators
around, so basically we know how to write a discrete event simulator.
The problem is the lack of validated models. Years ago, someone asked
me: What is special about wireless networks? Why must they be treated
differently from wired ones? The longer I think about it, the clearer
it becomes to me that the first question is: _Do_ wireless networks
differ from wired ones? Must we really treat them differently from
wired ones?

In all simulations I've seen so far, the discussion was:

Question: Are wireless networks different?
Assumption: Wireless networks are different.
Result: Wireless networks are different!

Proof by repeated assertion. You can do this with any simulator you
want, even with paper and pencil.
I've _never_ seen a comparison "model vs. reality". Perhaps there is
one. However, I have not seen one yet.

Sometimes, I even think: TCP works just fine and has no problems with
wireless networks.
> > IIRC, I wrote there that it's no problem to implement TCP in a
> > simulator. And btw: I don't want to discuss whether Linux is reality or
> I doubt that, since it's really a lot of effort to get it right.

That depends upon the part of reality you talk about. Implementing or
building a construction from standards was first done, let me think,
by Imhotep, IIRC ;-)

Validating a theory is a different story. 

It's not a theory to implement TCP. It's reading, understanding and
obeying standards. And it has been done dozens of times before.

So, the question is: What is the theory we want to validate? And how is
this achieved?
It's exactly the same as when Heinrich Hertz provided an experimental
proof of electromagnetic waves, the existence of which follows
from Maxwell's equations.
It's exactly the same as with the theory of the "ether", which was
experimentally falsified by Michelson and Morley.

> > BSD (of course, BSD is considered the reference implementation), I
> > personally have great respect for _standards_. So, a TCP implementation
> > should always be done
> > RFC-conformant.
> Yes, but besides the fact that it's not even easy to determine which
> RFCs are relevant (cf.

This is a problem. And when you talk to engineers not coming from CS,
they often propose to fix that. Proper standards are an outcome of
proper engineering.

> http://www.ietf.org/internet-drafts/draft-ietf-tcpm-tcp-roadmap-04.txt),
> currently the IETF doesn't define conformance tests (which is ok for
> me). So especially, if you want to analyze TCP performance, you need a
> very detailed TCP model and you have to test nearly every particular
> externally visible TCP behavior (like response to different loss
> sequences etc.). This would mean to run and evaluate many test
> scenarios, which would be different from running interoperability and
> functional tests only.

O.k. Which scenarios do you choose? (All, of course. However, the number
is infinite.)
And what is your TCP model? A numerical one (Padhye, Mathis, ...)? A
reference implementation?

This can be compared to reality.

If you choose a simulator: How do you simulate the links? How do you
simulate the routers? Recently, Keshav posted a list of delays
experienced by a TCP packet. ns2 simulates two of them: serialization
delay and propagation delay. Everything else is neglected.
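For scale, the two delays ns2 does model are easy to write down - a
sketch with assumed link parameters:

```python
# Sketch of the two per-hop delays ns2 models on a simple link.
def serialization_delay(packet_bytes, bandwidth_bps):
    """Time to clock the packet onto the wire."""
    return packet_bytes * 8 / bandwidth_bps

def propagation_delay(length_m, speed_mps=2e8):
    """Signal travel time; roughly 2/3 of c in fibre or copper."""
    return length_m / speed_mps

ser = serialization_delay(1500, 10e6)  # 1500-byte packet, 10 Mbit/s -> 1.2 ms
prop = propagation_delay(100e3)        # 100 km link -> 0.5 ms
```

Everything that does not fit this picture - queueing in the router,
table lookups, the receiving host itself - has to be justified as
negligible, not silently dropped.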

If you, e.g., consider a network with a huge number of nodes, can you
neglect the processing time for a packet at a router with a routing
table of 20,000 lines? Perhaps you want to compare greedy routing
with MPLS and its consequences for TCP throughput.
So, the processing time and its model is the place where you can inject
the desired result.
Who can judge whether you simply fooled yourself or deliberately
produced fraudulent artifacts?

Let's make the situation more complicated. Researcher A uses OMNET++,
researcher B uses ns2. The results contradict each other.
Which simulator is closer to reality?

> > BTW: IIRC, a great deal of the ns2 implementation of TCP was done by
> > Sally herself. So, the person who wrote the standards, the person
> > who proposed TCP/Reno and the person who wrote the code were identical.
> > That's great! Why did you write a new implementation?
> > And this is the normal situation. When I worked with TCP/Snoop, I
> > compared Hari Balakrishnan's code for BSD and for ns2 more or less
> > line by line.
> > And believe me: Apart from the parts where ns2 and BSD are so
> > different that different coding is _required_ - I think the same
> > problem arises in OMNET++ - the code is identical. Great!
> I don't get that point.

You asked for proper implementations of TCP. So, the easiest way is to
use existing ones. That's all I wanted to say :-)
In your last post you told us that an actual TCP stack conforming to
BSD was added to OMNET++. That's wonderful! It's like the
German industry: "Hi World! We built a car! And it's running!" Chinese
answer: <yawn>

> > However, my point was not to discuss TCP implementations. This is
> > implementation work. I'm interested in the behaviour of wireless
> > networks.
> Yep, but if you want to look at TCP performance aspects in wireless
> networks, a quite detailed TCP model is necessary. Thus you need

Excuse me, I don't follow.

If you want to build a TCP throughput model, you must understand TCP
performance aspects in wireless networks.
Otherwise, you would not have a solid basis to build a model on.
Perhaps we should make clear what is the model and what is
the implementation?

For me, anything that happens in a TCP agent in ns2 is implementation.

> a heavily tested implementation (I don't doubt that the ns-2 TCP
> implementation is quite good and widely used).
> > details of network simulators. However, it's concerned with
> > architectural issues in TCP, and therefore the question whether a
> > wireless
> > link raises architectural issues by its nature is strongly on topic.
> I don't know exactly what you're interested in, but I think there has
> been a lot of research in this area (output like RFC 3522, 4015 etc.)

I know :-)

Question: Is Eifel necessary?

Or does TCP over 3G networks work just fine without any Eifel?
IIRC the "hiccup" tool was part of ns2. Does reality suffer from
hiccups as well?
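For reference, the core of Eifel detection (RFC 3522) fits in a few
lines - a sketch of the idea, not an implementation: the retransmission
was spurious iff the first ACK after it echoes a timestamp older than
the retransmission itself.

```python
# Hedged sketch of the Eifel detection idea (RFC 3522). Timestamp
# values are arbitrary illustration figures.
def retransmit_was_spurious(ts_retransmit, ts_echoed):
    # An echoed timestamp older than the retransmission means the ACK
    # was triggered by the *original* segment: the timeout was spurious.
    return ts_echoed < ts_retransmit

print(retransmit_was_spurious(ts_retransmit=1200, ts_echoed=1000))  # True
print(retransmit_was_spurious(ts_retransmit=1200, ts_echoed=1200))  # False
```

The mechanism is simple; whether the condition ever fires on real 3G
links is the empirical question.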

Let me put it in extremely sharp words: Are spurious timeouts an
artificial phenomenon to base a number of PhD theses upon?
Or have they ever been proven to be a real, existing and relevant
problem?

Or in terms of models: Does the hiccup tool model a wireless link
correctly? Or is it pure fantasy? I don't know! And I'm not
comfortable with that. From my own attitude, I would request
experiments. When "hiccup" is the proposed model, then we must
do _experiments_. If it is necessary to compare that model against
1,000,000 GPRS flows observed over a period of, say, one year,
we have to do so.

I spent a lot of my time working together with civil engineers. When
they proposed a model for the behaviour
of steel, a PhD thesis could simply consist of five years' work of
putting 20,000 specimens into a testing machine to
validate - or falsify - the model. If it was necessary to do
experiments with 20,000 specimens and all these specimens
had to be prepared by hand - this was done! _That's_ why you can
cross a bridge without being afraid it will collapse within the
next minute.

As a student worker, I had to write programs for measurements done at a
bridge, where a fully loaded train was put on the bridge,
taken away, put on the bridge again, then left the bridge halfway,
then drove back, and so on.

Civil engineers have very sophisticated models. And FEM calculations.
Done on large computers. But when it comes to reality,
a fully loaded train is driven onto a bridge and driven away etc., and
the behaviour of the bridge is measured in reality.
And from that, the models are validated - or falsified.

> > So, the discussion is: How do we simulate a wireless link? And the only
> > way to validate a model for that is to compare a model with reality.
> That depends on what you want to look at. So if you model not
> every detail, but the important ones that you're interested in,
> then that might be ok. However, like with mathematical models,
> you must know what simplifications are reasonable and will

Exactly that is the question. And that's why modeling should be done
bottom up. Then every step in abstraction is done on
a solid basis. And is, _of_ _course_, validated in experiments.

Eventually, you can abstract the behaviour of a whole AS or a comparably
complex structure in a model, as is done e.g. in the
group of Torsten Braun. However, these models must be valid.

E.g., the processing time at a router is often considered negligible.
However, how many times negligible is still negligible?
In ns2, I don't see a user model or application model. We don't model
(it's negligible...) how long it takes for
a PDA to acknowledge a TCP segment. Is this negligible in a network
with 10,000 PDAs? I don't know!
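A sketch with assumed figures makes the question concrete:

```python
# Back-of-the-envelope sketch, all figures assumed: when is the
# receiver's ACK turnaround negligible against the path RTT?
rtt = 0.6                    # assumed 600 ms wireless round trip
ack_cost = 0.005             # assumed 5 ms for a PDA to turn an ACK around
share = ack_cost / rtt       # well under 1 %: probably negligible
slow_share = 0.05 / rtt      # a 50 ms turnaround is already > 8 %
```

Again, the verdict "negligible" follows from the number assumed, not
from anything measured.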

In my original post, I asked for flow control in ns2. If OMNET++
provided this, it could be interesting. However, when
flow control and a dynamic AWND are left out of ns2, the reason is quite
obvious: When I want to model the amount of memory
available at the receiver stack, I need an application model which
models when data is read from the stack.
Thus, the simplification may seem unjustified. However, it's a
consequence of the lack of a proper and validated application/user
model.
The implementation of TCP flow control is one story. Providing a user
model is a different one. And once again, you can inject
the desired results here.
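A minimal sketch of what such a model would have to provide (all
figures assumed): the advertised window is nothing but buffer size
minus unread data, so AWND dynamics are driven entirely by the
application read model.

```python
# Sketch: advertised window (AWND) driven by a toy application read
# model - the piece that is missing. All figures are assumed.
BUF = 65535                   # receiver socket buffer

def awnd(queued):
    """Window the receiver would advertise."""
    return BUF - queued

queued = 0
trace = []
for arrival, app_read in [(1460, 0), (1460, 0), (1460, 2000)]:
    queued += arrival                    # segment arrives from the network
    queued -= min(queued, app_read)      # application drains the buffer
    trace.append(awnd(queued))

print(trace)                             # window shrinks until the app reads
```

Change the read pattern and you change the window evolution - which is
precisely where the desired result can be injected.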

> likely not affect the qualitative outcome etc. The same is true
> for reasoning about the traffic patterns used etc.

Absolutely. Engineering science is mostly concerned with this question.

> > Of course, they are situations where the whole overhead of a simulator
> > is not necessary and I can do the implementation work myself.
> > However, this raises another question we talked about last week: Who
> > will ever believe me? Jon argued, that first the code should
> > be made available, then we may publish the papers. From this point of
> The paper should describe all necessary details about the model and
> assumptions as requested in the paper from Pawlikowski et al. about
> credibility of simulations:
> On Credibility of Simulation Studies of Telecommunication Networks,
> IEEE Communications Magazine, January 2002, pp. 132--139

It's interesting what Pawlikowski says there (page 9 of 15):
"However, if this is a scientific activity, then one should follow the
scientific method. ... This method says that any scientific activity
should be based on controlled and independently repeatable
experiments."
At least, this does not necessarily hold for a simulation. Although a
simulation with the same parameter set is of course repeatable, the
experiments done this way are not independent. In fact, you would
repeat an error as often as you run the simulation.

> > view, the ns2 is basically a commonly accepted and commonly used
> > simulator which is pretty well known to the vast majority of researchers
> > in this area. Even more, you will find that a lot of source code
> > was contributed by the authors of the original papers and PhD theses.
> > This is one of the particular strengths of the ns2.
> Yep, agreed, and that was once a main motivation to have ns-2, right?
> Its usability is however not always acceptable.

That's correct. However, one motivation for me to use ns2 is that I
know the relevant parts of ns2 pretty well now, and therefore I know
what I'm doing. (At least I have a rough idea...)

I don't know whether a simulator will ever be _easy_ to use. I had a
glance at the OMNET page this afternoon, and OMNET comes with a number
of nice screenshots.

I personally would appreciate a toolbox like ns2 if only the mixed
language part could be dropped and everything were done solely in C++.
However, this is not my first interest. At first, I must define _what_
will be simulated and _why_. And I think that's the most important
question.
When I use a simulator, I will read (and hopefully understand) large
parts of the source anyway. It's the best documentation ever.
(If only the error list in the documentation were as up to date as
the source....)
> No, I don't use simulation for proving protocol correctness.
> Simulations are usually good to investigate the qualitative
> behavior of a system under variation of certain parameters.

Which again raises the question: Which parameters are negligible, which
are relevant?
And once again, we have lots of opportunities to inject our results
here.

Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937

More information about the end2end-interest mailing list