[e2e] Is the end to end paradigm appropriate for congestion control?

Detlef Bosau detlef.bosau at web.de
Tue Nov 12 04:40:24 PST 2013

On 12.11.2013 08:19, Jon Crowcroft wrote:
> but its not a common case...  most of the "pipe" is not made of packets
> "in flight" but more accurately, just in buffers...

However, it depends. Years ago, I was told here on the list (IIRC by
Steinar Haug?) that particularly extremely long fibres with high data
rates may keep quite some data "in flight". However, to my knowledge you
are a physicist, so I assume that your knowledge here is far more sound
than mine.

> and of course, any LAN tech has to start receiveing bits before you
> finish sending them....  (unless you choose the cambridge ring model
> with 16 bit minipackets - a precursor to atm)

Unfortunately, I don't know the Cambridge Ring (I was always bad at
history ;-)) - however, some fundamental principles in networking have
hardly changed during the last 40 years. One of these is the use of some
kind of "minipackets", which still exist in various flavours from cell relay
(wireline) to HSDPA (wireless). Actually, the traditional packets in the
ARPAnet were quite the same. (Why not? A car has four wheels, a steering
wheel, a brake. Only some flavours of black have been added since Henry Ford.)

Together with Little's Law (which in general does not apply in computer
networks!), we often take an "abstract view" in which a path
is some kind of bit pipe with some (ideally quite constant) capacity.
(Even the term "serialization" is to be used with care here,
particularly "serialization latency", which depends on the line coding
and channel coding scheme in use and which may change quite often.)
The term "serialization latency" becomes highly complicated when it
comes to networks with a local recovery scheme.
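For what it's worth, Little's Law itself is just a steady-state accounting identity, L = lambda * W. A minimal sketch (the function name and numbers are my own illustrative assumptions, not from the discussion):

```python
# Little's Law, L = lambda * W: in steady state, the average number of
# packets held in a system equals the arrival rate times the mean time
# each packet spends there. Figures below are illustrative assumptions.

def avg_occupancy(arrival_rate_pps: float, mean_sojourn_s: float) -> float:
    """Average packets in the system per Little's Law (steady state)."""
    return arrival_rate_pps * mean_sojourn_s

# 8000 packets/s arriving at a buffer where each packet spends 5 ms on average:
print(avg_occupancy(8000, 0.005))  # 40.0 packets
```

The catch, as noted above, is that real network paths are rarely in the stationary regime the law assumes.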

E.g. in WLAN, "packets in flight" may really be "packets in progress". You
issue an ICMP echo request - and can go for a cup of tea while your notebook
and your WLAN AP have a seminal conversation conveying your packet over
the air interface....

Afterwards, everything is put into a black box where a packet is inserted
at time Ti and delivered at time Td, and from the temporal difference
between Td and Ti together with the packet length, we derive a
"throughput" and a "propagation latency", so that
 Td - Ti = propagation latency + serialization latency
         = propagation latency + packet length / throughput.

(And that's the way you find it in discrete event simulators.)
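The black-box model above can be sketched in a few lines, in the spirit of such simulators (function name and the example figures are my own assumptions):

```python
# Minimal sketch of the "black box" link model described in the text, as
# used by many discrete-event simulators: constant propagation latency,
# constant throughput, no loss, no retransmission.

def delivery_time(t_insert: float, packet_bits: int,
                  prop_latency_s: float, throughput_bps: float) -> float:
    """Td = Ti + propagation latency + packet length / throughput."""
    serialization_s = packet_bits / throughput_bps
    return t_insert + prop_latency_s + serialization_s

# 1500-byte packet on a 100 Mbit/s link with 10 ms propagation delay:
td = delivery_time(0.0, 1500 * 8, 0.010, 100e6)
print(td)  # about 0.01012 s
```

Everything the post goes on to criticise - varying channel coding, local recovery, loss - is exactly what this model hides.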

There is no packet loss, no retransmission, and no varying throughput.
(You talked about satellite networks, whose channel coding may
change depending on the line conditions.)

Everything is hidden behind the (to my understanding highly questionable)
term "latency bandwidth product" - so the whole link, or even more, the
whole path (be it a bridged Ethernet or carrier pigeons with a recovery
extension) appears as a "black box" with a certain "bandwidth" *ouch*
and a certain "LBP".

And it is exactly this LBP which is shared among competing flows by VJCC.

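To make the point concrete, here is what that sharing looks like under the black-box view (the path figures and the equal-share assumption are mine, purely for illustration):

```python
# Sketch of the "latency bandwidth product" (LBP) the text questions:
# the whole path reduced to a bandwidth and an RTT, with VJCC-style
# congestion control dividing that product among competing flows.
# All figures are illustrative assumptions, not measurements.

def lbp_bits(bandwidth_bps: float, rtt_s: float) -> float:
    """Latency-bandwidth product: bits the abstract 'pipe' holds per RTT."""
    return bandwidth_bps * rtt_s

# 1 Gbit/s path with 100 ms RTT, shared equally by 4 competing flows:
pipe_bits = lbp_bits(1e9, 0.100)       # 1e8 bits in the "pipe"
per_flow_window = pipe_bits / 4        # each flow's fair share of the LBP
print(pipe_bits, per_flow_window)
```

Of course, as argued above, most of that "pipe" may in reality be buffer occupancy rather than bits on the wire.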
> on the related topic...
> layer 2 flow control in switches is being mucked about with
> as we speak  for various special case data center net
> magics to avoid tcp incast hassles but its a nich use afaik...
> (as discussed before on this list)...

When I think about it, L2 flow control was dropped in RFC 791 to avoid
head-of-line blocking - is this correct? Packets from one conversation
must be allowed to "overtake" packets from another conversation.

I think that's one issue with L2 flow control in conjunction with TCP/IP.

Detlef Bosau
Galileistraße 30   
70565 Stuttgart                            Tel.:   +49 711 5208031
                                           mobile: +49 172 6819937
                                           skype:     detlef.bosau
                                           ICQ:          566129673
detlef.bosau at web.de                     http://www.detlef-bosau.de
