[e2e] TCP ex Machina
detlef.bosau at web.de
Mon Jul 22 06:01:18 PDT 2013
On 22.07.2013 02:14, Keith Winstein wrote:
> Detlef, I'm afraid I don't think your email quite summarized our approach
> accurately. We do not give the optimizer advance information about who
> wants to send what to whom and we don't calculate an optimized "schedule."
> Remy develops rules for a TCP sender; e.g. when to increase the window and
> by how much, when/how to decrease the window, when to enforce a minimum
> interval between outgoing packets (a pacer) and what that interval should
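As a rough illustration of the kind of sender rule table the quoted paragraph describes - all names, signals, and numbers below are hypothetical, not Remy's actual implementation - each rule maps a region of observed congestion signals to an action (window multiplier, window increment, pacing interval):

```python
# Hedged sketch of a rule-based TCP sender, in the spirit of the quoted
# description. Signals, thresholds, and actions are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    window_multiple: float   # multiply the current congestion window by this
    window_increment: int    # then add this many packets
    intersend_ms: float      # pacer: minimum interval between outgoing packets

@dataclass
class Rule:
    ack_ewma_max: float      # rule applies when observed signals fall below
    send_ewma_max: float     # these (hypothetical) bounds
    action: Action

def choose_action(rules, ack_ewma, send_ewma):
    """Pick the first rule whose signal region contains the observed state."""
    for r in rules:
        if ack_ewma <= r.ack_ewma_max and send_ewma <= r.send_ewma_max:
            return r.action
    return Action(0.5, 0, 10.0)  # fallback: back off sharply

rules = [
    Rule(5.0, 5.0, Action(1.0, 2, 0.0)),    # low delay signals: grow window
    Rule(20.0, 20.0, Action(1.0, 0, 2.0)),  # moderate signals: hold and pace
]

cwnd = 10
act = choose_action(rules, ack_ewma=3.0, send_ewma=4.0)
cwnd = int(cwnd * act.window_multiple) + act.window_increment
```

The point of the structure is that the *rules* are fixed at design time (by the offline optimizer), while the *signals* are measured online by the endpoint.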
And I'm absolutely not sure whether it makes sense to increase or
decrease windows at all. Van Jacobson himself complains that packets
clump together in the network instead of being "evenly distributed"
along the path. Stated in an extremely harsh way, this would mean that
the concept of "self scheduling", to which VJCC and the like extend the
original self clocking, simply does not hold.
So, we can improve purely endpoint-based congestion control algorithms
as much as we want - and perhaps raise the output to 100 doctoral hats
per year on congestion control - but we will meet the same problems
again and again, as we have for about 20 years now.
> It tries to find the best rules for a given set of assumptions
> specified explicitly -- e.g., what is the range of possible networks the
> protocol is intended for, and what is the goal.
> We model the arrival and departure of flows as drawn from some stochastic
> process, e.g., flows are "on" for some amount of time or some number of
> bytes, drawn from an exponential or Pareto distribution or from an
> empirical CDF of flow lengths on the Internet. The traffic model given to
> Remy at design time usually is not the same as the case we then evaluate in
> ns-2 when comparing against the other schemes.
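The traffic model described above can be caricatured in a few lines - drawing flow lengths from an exponential or Pareto distribution. The parameters below are made up for illustration and are not the actual Remy design-time setup:

```python
# Toy sketch: sample flow lengths (in bytes) from an assumed traffic model.
# Distribution choices and parameters are illustrative only.
import random

def sample_flow_bytes(dist="exponential", mean=100_000, alpha=1.5):
    """Draw one flow length from the chosen stochastic model."""
    if dist == "exponential":
        return random.expovariate(1.0 / mean)
    elif dist == "pareto":
        # choose the scale x_m so that the distribution's mean equals `mean`
        # (mean of Pareto(alpha, x_m) is x_m * alpha / (alpha - 1) for alpha > 1)
        xm = mean * (alpha - 1) / alpha
        return random.paretovariate(alpha) * xm
    raise ValueError(f"unknown distribution: {dist}")

random.seed(1)
pareto_draws = [sample_flow_bytes("pareto") for _ in range(10_000)]
```

The heavy tail of the Pareto case is the interesting part: a few very long flows coexist with many short ones, which is the mismatch an evaluation traffic model can then stress.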
> Regarding wireless links, you might be interested in some of our prior work
> (http://alfalfa.mit.edu) that shows one can achieve considerable gains by
> modeling the link speed variation of cellular networks as a simple
> stochastic process, then making conservative predictions about future link
> speeds at the endpoints in order to compromise between throughput and delay.
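The "conservative prediction" idea can be sketched very crudely - this is only loosely in the spirit of the quoted work, not its actual model: take a pessimistic (low-quantile) forecast of the link rate and only keep in flight what would drain within a delay budget.

```python
# Hedged sketch: bound packets in flight by what a pessimistic forecast of
# future link speed could drain within a delay budget. The quantile-of-recent-
# samples forecast here is an illustrative stand-in, not the real model.
def conservative_budget(observed_rates_pps, delay_budget_s=0.1, quantile=0.05):
    """Packets we may have in flight if the link runs at a low-quantile rate."""
    rates = sorted(observed_rates_pps)
    idx = max(0, int(quantile * len(rates)) - 1)
    pessimistic_rate = rates[idx]          # conservative forecast of link speed
    return int(pessimistic_rate * delay_budget_s)

# recent link-rate samples in packets/sec (invented numbers)
rates = [120, 95, 200, 80, 150, 60, 180, 90, 110, 75]
budget = conservative_budget(rates, delay_budget_s=0.1)
```

Betting on the low quantile is what trades throughput for delay: the sender forgoes capacity the link *might* have in order to bound queueing when the link turns out slow.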
> Best regards,
> On Sun, Jul 21, 2013 at 5:14 PM, Jon Crowcroft
> <jon.crowcroft at cl.cam.ac.uk>wrote:
>> it is a tiny bit cleverer than that - the work is the moral equivalent of
>> the Axelrod experiment in emergent cooperation, but neater because it is
>> quantitative rather than just qualitative selection of strategies - what is
>> important (imho) is that they use many many simulation runs to evaluate a
>> "fitness" of a given protocol...this is heavy lifting, but pays off - so it
>> will be nice to see empirical follow up work but this isn't some naive
>> "overfitting" undergrad work - it is rather different and requires a
>> considered response
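The "many many simulation runs to evaluate a fitness" step can be sketched abstractly - the simulator, objective, and candidate parameters below are toys invented for illustration:

```python
# Hedged sketch: score a candidate protocol by averaging an objective
# (here log throughput minus log delay) over many randomized simulation runs.
import math
import random

def fitness(protocol, simulate, runs=100, seed=0):
    """Mean objective over `runs` randomized simulations of one candidate."""
    rng = random.Random(seed)  # same seed for every candidate: fair comparison
    total = 0.0
    for _ in range(runs):
        tput, delay = simulate(protocol, rng)
        total += math.log(tput) - math.log(delay)
    return total / runs

def simulate(aggressiveness, rng):
    """Toy simulator: aggression buys throughput but inflates delay."""
    tput = aggressiveness * (1 + 0.1 * rng.random())
    delay = 0.05 + 0.01 * aggressiveness ** 2
    return tput, delay

candidates = [0.5, 1.0, 2.0, 4.0]
best = max(candidates, key=lambda p: fitness(p, simulate))
```

The heavy lifting in the real work is that the simulations explore the whole assumed range of networks and traffic, not one toy trade-off curve.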
>> On Sun, Jul 21, 2013 at 9:28 PM, Detlef Bosau <detlef.bosau at web.de> wrote:
>>> To my understanding, you write down the whole communication (who wants to
>>> send what to whom) and afterwards you calculate an optimized schedule.
>>> Reminds me of undergraduate homework in operating systems, Gantt diagrams
>>> and that funny stuff.
>>> You cannot predict your link's properties (e.g. in the case of wireless
>>> links), you cannot predict your user's behaviour, so you conjecture a lot
>>> from presumptions which will hardly ever hold.
>>> Frankly speaking: this route leads to nowhere.
>>> Detlef Bosau
>>> Galileistraße 30
>>> 70565 Stuttgart Tel.: +49 711 5208031
>>> mobile: +49 172 6819937
>>> skype: detlef.bosau
>>> ICQ: 566129673
>>> detlef.bosau at web.de http://www.detlef-bosau.de