[e2e] end2end-interest Digest, Vol 17, Issue 26

S. Keshav keshav at uwaterloo.ca
Thu Aug 4 07:30:35 PDT 2005


Detlef,
    In general, what you are asking for is difficult. Consider the following
scenario. Suppose a router forecasts that the queueing delays at a
particular interface are small at time t and expects this forecast to hold
until t+200ms. Now, suddenly, a burst of packets from multiple input ports
destined for that interface arrives at time t+epsilon. This builds up the
queue, increasing delays. You have two choices:

1. violate the forecast
   or
2. drop packets in order to meet the forecast.

Neither one is a good alternative. If you violate the forecast, then what
use is it? If you drop packets to meet the forecast, that's a waste, because
adequate buffers exist. I do not think that dropping packets in order to
make RTO computations sane is a good tradeoff.
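To make the numbers concrete, here is a small back-of-the-envelope sketch in
Python. Everything in it (link rate, packet size, the 5 ms forecast, the
40-packet burst) is illustrative rather than taken from any real router; it
only shows how quickly a single burst invalidates a "small delay" forecast on
one FIFO output queue, and how many drops honouring the forecast would cost:

LINK_RATE = 1_250_000                 # bytes/s (10 Mb/s), illustrative
PKT_SIZE  = 1_500                     # bytes
SERVICE   = PKT_SIZE / LINK_RATE      # per-packet transmission time, ~1.2 ms

FORECAST_MS = 5.0   # router promises queueing delay <= 5 ms until t+200 ms

def queueing_delay_ms(queue_len):
    """Delay seen by a packet arriving behind queue_len queued packets."""
    return queue_len * SERVICE * 1000.0

# At time t the queue is nearly empty, so the 5 ms forecast looks safe.
queue = 2
print(f"t     : delay ~ {queueing_delay_ms(queue):.1f} ms (forecast holds)")

# At t+epsilon a burst of 40 packets arrives from several input ports.
queue += 40
print(f"t+eps : delay ~ {queueing_delay_ms(queue):.1f} ms (forecast violated)")

# To honour the forecast instead, the router would have to cap the queue,
# i.e. drop packets even though buffer space is still available.
cap   = int(FORECAST_MS / 1000.0 / SERVICE)
drops = max(0, queue - cap)
print(f"to honour the forecast: keep <= {cap} packets queued, drop {drops}")

With these made-up numbers the forecast is off by an order of magnitude after
one burst, or else most of the queued packets must be discarded despite
adequate buffering.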

A similar situation holds if traffic is generally high, so that queue
lengths are large and you forecast a large delay. Now, if the traffic dies
down, you have to either violate the forecast or add new work to the system,
i.e. artificially keep the queue occupied so that the delay stays at its
forecast value. Adding new work delays all subsequent packets, so if you now
get a burst, you are in trouble.

As such, I believe that any sort of forecast is only possible if there is a
way to bound the total incoming traffic, both in terms of rate and
burstiness. 
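For what it is worth, bounding the input is exactly what the classical
token-bucket / network-calculus argument exploits: if the aggregate arriving
at the interface is constrained to at most sigma bytes of burst plus rho
bytes/s of sustained rate, and the link rate R is at least rho, then the
backlog never exceeds sigma and the queueing delay never exceeds sigma/R.
The sketch below just evaluates that bound with illustrative numbers:

# Worst-case queueing delay under a (sigma, rho) arrival bound,
# through a work-conserving FIFO link of rate R (numbers illustrative).
sigma = 15_000       # maximum burst, bytes
rho   = 1_000_000    # sustained arrival rate, bytes/s
R     = 1_250_000    # link rate, bytes/s (10 Mb/s)

assert R >= rho, "the bound needs the link to keep up with the sustained rate"
d_max = sigma / R    # backlog <= sigma, drained at rate R
print(f"forecastable worst-case delay: {1000 * d_max:.1f} ms")

Only with such an envelope in force can a router state a delay figure in
advance without risking either of the two bad alternatives above.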

keshav

> 
> In my other post from today (August 3rd) I tried to weaken the problem
> so that I am only asking for a limited forecast capability. It is not
> necessary to keep the queueing delay constant or to make it obey a certain
> distribution. It would be sufficient to forecast its expectation, and if
> possible its variance, for a limited period of time, e.g. 200 ms.
> 
> Do you think there's a way to do so while maintaining the typical
> "packet-switching best effort" nature of the Internet?
> 
> Perhaps this is a borderline between "best effort" traffic shaping (if
> such a thing even exists) and some kind of guaranteed service. I really
> don't know yet.
> 




