[e2e] Question the other way round:

Detlef Bosau detlef.bosau at web.de
Sun Nov 17 05:23:39 PST 2013


Am 17.11.2013 11:20, schrieb Sergey Gorinsky:
>   Actually, the formulae are contained within a page or so. The main idea of
> the RD network services is quite simple: instead of differentiated pricing
> or admission control (which seem difficult in the multi-provider Internet),
> the rate-delay trade-off can serve as a natural basis for performance
> differentiation. The design comes with built-in incentives for
> delay-sensitive apps to use the low-delay D service and for
> throughput-sensitive apps to communicate over the higher-throughput R
> service. 

The good ol' "Metro Pricing" :-)
(Rumour has it that the Paris Métro charged higher prices for
first-class travel, so only people who could afford that price
travelled first class - a certain "entre nous" feeling for the rich. I
never checked whether this is an urban legend.)
>> a very particular situation. My general attitude is
>   Making general conclusions based on very particular situations?

No. And I did not even have a rate-delay trade-off in mind with that attitude.

Of course, we can discuss which buffer design will cause which delay and
which loss.

Too small buffers cause loss, too large buffers cause delay. At least
in today's Internet.
>  Is there
> an experimental evidence against existence of rate-delay trade-offs, or
> rate-loss trade-offs in networks without buffers, in wireless settings?

Hardly. And beware of experimental results in wireless settings.
Wireless experiments are hardly reproducible. Remember the "spurious
timeout" discussion in the late 90s. For about five years, a couple of
PhD students "rocked the conferences" with "spurious timeout" results
and approaches - and at the end of the day, Hasenleithner et al. made a
huge effort to find those timeouts in operational UMTS networks. With a
negative result. Many doctoral hats, some of them perhaps with honours,
given away for solving a non-problem.

We should learn the lesson that loss/delay considerations are extremely
difficult in wireless networks. (In the sense of: practically impossible.)

But this is an old debate: Hasenleithner et al. published their results,
the hats were given, the issue moved out of sight and we turned to the
next story.
>
>   Available network capacity is not fully predictable, especially in mobile

What is "network capacity" all about? I often read: "Bandwidth Delay
Product."

Bandwidth (interestingly given in Bit/s) times delay.

Actually, this is taken from Little's Law:

Average number of jobs ("bytes") in the system = average arrival rate *
average sojourn time.

Think of Gigabit Ethernet: rate = 10^9 bit/s; as an example, take a
sojourn time of 100 ms. Then the number of bits in the system is
10^9 bit/s * 100 * 10^-3 s = 10^8 bit.
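
For the record, the same arithmetic as a tiny Python sketch (nothing is
measured here; the numbers are just the example values above):

rate_bps = 1e9         # "arrival rate": Gigabit Ethernet, bits per second
sojourn_s = 100e-3     # "sojourn time": 100 ms

bits_in_system = rate_bps * sojourn_s                # Little's Law: L = lambda * W
print(f"{bits_in_system:.0e} bits in the system")    # -> 1e+08 bits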

No one gives a thought to where these bits actually reside in the first
place, or to whether Little's Law applies at all.

Nevertheless, the network capacity is given in bits - and we are
comfortable. With respect to VJCC, we assume a capacity which is stable
and to which we can apply the conservation principle: because the capacity
is stable, the network must reach an equilibrium where no bit is
offered to the net until another one has been taken from the net.
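
To make that conservation principle concrete, here is a toy Python sketch
of a self-clocked, window-based sender: once the window is full, a new
segment is offered to the net only when an ACK reports that another one
has been taken from it. (The names and the fixed window size are mine and
purely illustrative - this shows the principle, not VJCC itself.)

from collections import deque

WINDOW = 4                          # segments allowed in the net at once
in_flight = deque()
next_seq = 0

def send(seq):
    print(f"send segment {seq}")

def on_ack(seq):
    global next_seq
    in_flight.remove(seq)           # one segment has been taken from the net ...
    send(next_seq)                  # ... so exactly one new one may be offered
    in_flight.append(next_seq)
    next_seq += 1

while len(in_flight) < WINDOW:      # fill the pipe once
    send(next_seq)
    in_flight.append(next_seq)
    next_seq += 1

for acked in (0, 1, 2):             # pretend ACKs arrive in order
    on_ack(acked)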

And because the network capacity is so nicely stable, we share it among
competing flows.

Although we don't have the least idea what we are sharing here at all.

Not to be misunderstood: I did so myself for years.

(Funny remark: We put buffers into the system in order to increase the
system's capacity, and afterwards we complain about buffer bloat and
increasing sojourn times. Didn't I already write that the sojourn time
is the quotient "number of bytes in the system / average arrival rate"?
So what is going to happen when I keep my rate constant and increase the
capacity by adding buffers? That is really a one-million-dollar question.)
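
Spelled out, assuming the added buffers actually fill up while the arrival
rate stays fixed (the link rate and queue sizes below are made up for
illustration):

rate_Bps = 125_000_000                       # 1 Gbit/s in bytes per second
for queued_bytes in (125_000, 1_250_000, 12_500_000):
    sojourn = queued_bytes / rate_Bps        # Little's Law solved for W: W = L / lambda
    print(f"{queued_bytes:>12,} bytes queued -> {1000 * sojourn:6.1f} ms sojourn time")

Every added byte that actually sits in a queue adds sojourn time, at a
constant rate.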

However, the basic assumption of Little's Law is that the mentioned
averages do at least exist - hence the sojourn time has a finite
expectation, or, put more drastically: there is no loss of jobs!

As soon as dropped packets come into play, Little's Law doesn't apply
and we can forget about all the nice formulae.

(Do you know the song Kodachrome by Simon and Garfunkel?
http://www.youtube.com/watch?v=QXZTBu_3ioI
> Mama don't take my Kodachrome away
> Mama don't take my Kodachrome away
> Mama don't take my Kodachrome away
)

Maybe I'm bitter here. However, sometimes I think that all this
Little's Law stuff - and hence much of the formulae derived from it -
is simply Kodachrome: it gives us those nice bright colours, it gives us
the greens of summers and makes us think all the Net's a sunny day.

And the larger the network grows, the worse becomes the situation.

Basically, formulae aren't bad. No way. But even Sir Peter - JC will
certainly agree here as one of the physicists on the list - wasn't
satisfied until the Higgs boson was actually found. That's the
difference between a PhD thesis, which claims something which is perhaps
never found ("spurious timeouts"), and the Nobel Prize.

And as far as I know, Sir Peter kept his own theory in question until the
final proof.

The problem is that we deal with "capacity" in the same way as with any
"real" resource and don't think about its physical realization.
And we don't think about the resource which is actually shared - which
may be (Ethernet) transmission time, or (UMTS) transmission power.


> networks. While the uncertainty cannot be eliminated, it can be reduced.
> Probing is a fundamental way of doing so. Without being too ambitious and
> risking losses or delays (with buffers), how can one discover the full
> transmission potential? Do you have in mind an alternative way of
> determining an appropriate transmission rate?

Not only in mind. You're sitting in front of it.

How does your computer share computing time - although it doesn't know
the time needed by each job?

It does time slicing.

More generally: there is a scheduler which decides which of the feasible
jobs is granted computation time - see the sketch below.
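
As a cartoon of that idea, here is a toy round-robin scheduler in Python:
it grants each feasible job a fixed quantum without knowing the jobs'
total demands in advance. (Job names and the quantum are invented; this
is not any particular OS scheduler.)

from collections import deque

QUANTUM = 2                                    # time units granted per turn
ready = deque([("A", 5), ("B", 3), ("C", 7)])  # (job, remaining demand)

clock = 0
while ready:
    name, remaining = ready.popleft()          # the choice: strictly round robin
    used = min(QUANTUM, remaining)
    clock += used
    remaining -= used
    print(f"t={clock:2d}: ran {name} for {used}, {remaining} left")
    if remaining:
        ready.append((name, remaining))        # not finished: back to the end of the queue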

I would like to consider this in the network as well. In IP, scheduling is
done "implicitly". Of course, there are schedulers: any switching node
has schedulers, and these cooperate (or interfere) with, e.g., TCP's
self-scheduling.

However: a process scheduler knows about available resources.

A TCP sender "probes" for "capacity" - whatever that may be. (And, put
very drastically: this capacity is basically a mathematical artifact.)

Detlef
>   Thank you,
>
>   Sergey
>
>
>


-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30   
70565 Stuttgart                            Tel.:   +49 711 5208031
                                           mobile: +49 172 6819937
                                           skype:     detlef.bosau
                                           ICQ:          566129673
detlef.bosau at web.de                     http://www.detlef-bosau.de


