[e2e] Is the end to end paradigm appropriate for congestion control?
detlef.bosau at web.de
Mon Nov 11 15:35:01 PST 2013
Am 11.11.2013 22:59, schrieb Richard Bennett:
> It's interesting that so many people still talk about CSMA/CD as if it
> were still real.
I beg your pardon?
> IEEE 802.3i added a full duplex mode to Ethernet (known as 10BASE-T)
> in 1990, and 802.3u (100 BASE-TX) effectively eliminated the half
> duplex, CSMA/CD mode for speeds of 100 Mbps and higher on Ethernet.
> The full duplex modes of Ethernet have a flow control signal that
You certainly know that RFC 791 practically rules out L2 flow control
for IP networks, and it is well known that L2 flow control may cause
deadlock problems:
> author = "Sven-Arne Reinemo and Tor Skeie",
> title = "Ethernet as a lossless deadlock free system area network",
> booktitle= "Parallel and Distributed Processing and Applications",
> publisher ="Springer Berlin Heidelberg",
> pages = "901--914"}
> moderates congestion,
According to the accepted definitions, you are mixing up congestion
control and flow control here.
> and on-switch buffers for storage of frames (either full or partial)
> in-flight. Nowadays, you only see CSMA on unlicensed broadcast
> networks, e. g. Wi-Fi.
Which are certainly not very widespread ;-)
(I'm actually using one myself and can hear six others from my
apartment. And I know for certain that there are additional ones I
cannot reach, because I live on the 2nd floor and the other WLAN is on
the 7th floor; I set up the network myself.)
> You can't effectively manage congestion solely with the knowledge end
> points have,
This is what I guessed. And what I'm more and more convinced of.
> but any congestion management scheme needs end point cooperation
> since end points are the sources of congestion.
When a wireless link on the path suffers a throughput shortage and a
Wi-Fi router cannot drain its buffer fast enough, you get dropped
packets. This is typically taken as "congestion" - yet the problem
actually results from noise along the path.
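To make that point concrete, here is a minimal sketch (names and numbers are my own, not from this thread) of why a loss-based sender cannot tell the two cases apart: a classic VJ-style sender halves its window on *any* loss signal, so a drop caused by radio noise is treated exactly like a drop caused by a full buffer.

```python
def react_to_loss(cwnd, ssthresh):
    """Multiplicative decrease as in classic VJ congestion avoidance:
    the reaction depends only on the loss signal, never on its cause."""
    ssthresh = max(cwnd // 2, 2)
    cwnd = ssthresh
    return cwnd, ssthresh

cwnd, ssthresh = 40, 64
for cause in ["noise", "buffer overflow", "noise"]:
    cwnd, ssthresh = react_to_loss(cwnd, ssthresh)
    print(cause, "->", cwnd)  # the window is halved either way
```

The sender's state machine simply has no input that distinguishes a noise-induced drop from a queue overflow; that distinction would have to come from the path itself.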
> The Internet's congestion problem is insufficient signalling from the
> elements that can immediately detect congestion - the switches - back
> to the elements that create it.
Where I don't agree is that congestion is caused solely by the end
points. We have sometimes made ourselves think of a "TCP connection"
and its PATH as some extremely well behaved, homogeneous miracle -
whereas the PATH is actually a mess.
> There have been and are efforts to improve the signalling - ECN and
> PCN - and there was a botched attempt to deal with it in the original
> versions of IP with source quench. The longevity of the Jacobson CC
> and efforts to extend its life like AQM are quite unfortunate.
Although some people say I'm often quite arrogant in my attitude, I
will make one point very clear. VJCC is the very core of TCP
congestion control, and it has survived not only for more than twenty
years now but through an Internet growth of six orders of magnitude.
First of all, I owe respect to this achievement.
What, however, encourages me to ask even critical questions are VJ's
own words that the old solutions aren't bad - however, the problems
have changed.
So I have no lack of respect for VJCC, but I realize that VJCC is
sometimes assisted by more (CoDel) or less (TCP splitting)
sophisticated mechanisms which intentionally circumvent problems that
may cause grief to the VJCC world. (And which make the mess of the
actually heterogeneous nature of the aforementioned path even worse.)
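For readers unfamiliar with the "more sophisticated" kind of workaround: the key idea behind CoDel-style AQM is to react to *persistent* queueing delay rather than to instantaneous queue length. The following is a much-simplified sketch of that idea (the parameter values are the CoDel paper's defaults, but the loop is my own illustration, not the reference algorithm, which additionally drops at increasing frequency).

```python
TARGET = 0.005    # 5 ms: acceptable standing-queue delay
INTERVAL = 0.100  # 100 ms: how long delay must persist before reacting

def standing_queue(samples):
    """samples: list of (timestamp, sojourn_time) for dequeued packets.
    Returns True if sojourn time stayed above TARGET for a full
    INTERVAL, i.e. the delay is a standing queue, not a transient burst."""
    above_since = None
    for t, sojourn in samples:
        if sojourn < TARGET:
            above_since = None            # transient burst: forgive it
        else:
            if above_since is None:
                above_since = t
            elif t - above_since >= INTERVAL:
                return True               # persistent delay: drop/signal
    return False

# A short 10 ms spike is tolerated; 150 ms of sustained delay is not.
burst = [(0.0, 0.01), (0.02, 0.001)]
sustained = [(i * 0.01, 0.01) for i in range(16)]
print(standing_queue(burst), standing_queue(sustained))
```

Note that even this in-network mechanism ultimately signals the sender by dropping or marking - which is exactly the signalling back to the end points that Richard is arguing for.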
> The problem with end-to-end is the lack of overall system knowledge in
> any given end point.
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
detlef.bosau at web.de http://www.detlef-bosau.de