From kelsayed at gmail.com  Wed May  2 02:19:32 2007
From: kelsayed at gmail.com (Khaled Elsayed)
Date: Wed, 02 May 2007 11:19:32 +0200
Subject: [e2e] Packet dropping
Message-ID: <463857A4.1020205@gmail.com>

Given a per-connection queue that could potentially become full (or, in the case of RED, hits the dropping threshold), an incoming packet arrives and finds the queue full. What would be the best policy:

1) admit the new packet and drop one at the queue front
2) drop the newly arriving packet.

For real-time connections, it is intuitive that dropping at the queue front would tend to give better delay performance (this was already shown in an early paper by Yin and Hluchyj in IEEE Trans. Comm., June 1993). What about data/non-real-time connections? Assume an FTP or HTTP session subject to the above situation: would TCP behave better if a packet is dropped from the front or if the new packet is dropped?

I have no evidence, but I tend to feel that if the congestion persists for some reasonable time, it would make more sense to deliver whatever is in the queue right now and drop the new arrivals, at the expense of increasing the overall average packet delay. If the congestion duration is small, it would not make a lot of difference (I guess).

Any thoughts?

Khaled

From craig at aland.bbn.com  Wed May  2 03:33:08 2007
From: craig at aland.bbn.com (Craig Partridge)
Date: Wed, 02 May 2007 06:33:08 -0400
Subject: [e2e] Packet dropping
In-Reply-To: Your message of "Wed, 02 May 2007 11:19:32 +0200." <463857A4.1020205@gmail.com>
Message-ID: <20070502103308.0E521123842@aland.bbn.com>

For non-real time, the answer I believe is drop the new packet.

Dropping the earlier packet (assuming the earlier packet has a lower sequence number) is more likely to slow effective delivery of data to the recipient and require a more complex set of retransmissions to recover from.

Craig

From Jon.Crowcroft at cl.cam.ac.uk  Tue May  1 23:40:07 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Wed, 02 May 2007 07:40:07 +0100
Subject: [e2e] Collaboration on Future Internet Architectures
Message-ID:

virtualization could be a very good thing for solving one past and one future problem.

1/ some people have claimed that one can build many-to-many multihop radio systems that offer more capacity as the number of nodes join. If this is true, this can operate within quite a narrow band (e.g. ISM) and should be sufficient for a very long time. If we can show it's true in that band, other bands can follow - within regions we still need to multiplex spectrum in some hard, non-liquid (dave reed) way, just til some of the technology is better. the identifiers and management of this could be done through Virtual Private Wireless Channel Identifiers, which might use some name space we have seen before

2/ some people claim that IPv6 will never deploy in the core, and that we have to live with IPv4 core networks even though practically all significant end systems are IPv6 capable. on the other hand, other people have looked at the net and decided that one of the big problems is that receivers can be sent data from just about anywhere, even though their kinship groups are really quite limited. what we need is a virtual private IPv4 internet per kinship group, and the VPII (Virtual Private Internet Identifier - yes, it does sound like a VPI in ATM speak:) would be the IPv6 provider number (yes, this isn't new, but i thought i'd spell it out)

of course, with plutarch, all this would be easy, but it's taking us longer to code than we expected:-)

>>A vital part of this effort concerns fostering collaboration and consensus-building among researchers working on future global network architectures. To this end, NSF has created a FIND Planning Committee that works with NSF to organize a series of meetings among FIND grant recipients structured around activities to identify and refine overarching concepts for networks of the future. As part of the research we leave open the question of whether there will be one Internet or several virtualized Internets.

I made some comments on FIND in a podcast given by the guardian newspaper online people - linked from iTunes podcast stuff (it's free) or probably findable via http://blogs.guardian.co.uk/podcasts/2007/04/science_weekly_for_april_30.html

cheers
jon

p.s. is this what they mean by being poleaxed: http://news.bbc.co.uk/1/hi/world/europe/6613261.stm

From msaqibilyas74 at yahoo.co.uk  Wed May  2 04:44:24 2007
From: msaqibilyas74 at yahoo.co.uk (Saqib Ilyas)
Date: Wed, 2 May 2007 12:44:24 +0100 (BST)
Subject: [e2e] Packet dropping
In-Reply-To: <20070502103308.0E521123842@aland.bbn.com>
Message-ID: <62857.93128.qm@web25415.mail.ukl.yahoo.com>

I would expect for non-real-time protocols such as FTP that if the packet at the head of the queue is dropped, there'd be several duplicate acks, which could trigger congestion control.

Regards

Muhammad Saqib Ilyas
Assistant Professor
Department of Computer and Information Systems Engineering
NED University of Engineering and Technology, Karachi, Pakistan
http://www.saqibilyas.info
Graduate Student, LUMS
Country Leader, INETA Pakistan
Microsoft Most Valuable Professional - C++

From craig at aland.bbn.com  Wed May  2 10:36:30 2007
From: craig at aland.bbn.com (Craig Partridge)
Date: Wed, 02 May 2007 13:36:30 -0400
Subject: [e2e] Packet dropping
In-Reply-To: Your message of "Wed, 02 May 2007 12:44:24 BST." <62857.93128.qm@web25415.mail.ukl.yahoo.com>
Message-ID: <20070502173630.AAF1E123842@aland.bbn.com>

In message <62857.93128.qm at web25415.mail.ukl.yahoo.com>, Saqib Ilyas writes:

>I would expect for non-real-time protocols such as FTP that if the packet at the head of the queue is dropped, there'd be several duplicate acks which could trigger congestion control.

I understand that point. My reasoning may be flawed, but just to dig myself deeper: the question was, what's the best way for the queue to drain?

So let's consider ten packets 0-9 in flight at sender, router and receiver. In the scenario, the router is about to receive packet 9:

Sender's buffer    Router Buffer    Receiver Buffer
0123456789         0123456789

If it drops 9, then in one RTT we'll have:

Sender's buffer    Router Buffer    Receiver Buffer
9abcdefghi

If it drops 0, then in one RTT with fast retransmit we'll have:

Sender's buffer    Router Buffer    Receiver Buffer
0123456789         0                123456789

In either case the router queue looks similar -- the issue is which wins going forward. And it wasn't immediately clear to me that fast retransmit was better. If we drop 0, we're in fast retransmit and about to enter slow start on packet a. If we drop 9, we're about to fire off dupe acks on a-i, and will enter fast retransmit on packet b.

Craig
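
The mechanics of Khaled's two policies are easy to compare in a toy model. A minimal sketch (illustrative only, not from the thread), assuming a bounded FIFO whose service rate is half its arrival rate so the queue stays congested:

    from collections import deque

    def run(policy, capacity=8, arrivals=range(20)):
        """Toy bounded FIFO under overload; returns (delivered, dropped)."""
        q = deque()
        delivered, dropped = [], []
        for seq in arrivals:
            if len(q) == capacity:               # queue is full on arrival
                if policy == "drop-tail":
                    dropped.append(seq)          # refuse the newcomer
                else:                            # "drop-front"
                    dropped.append(q.popleft())  # evict the oldest instead
                    q.append(seq)
            else:
                q.append(seq)
            if seq % 2 == 1 and q:               # serve at half the arrival rate
                delivered.append(q.popleft())
        return delivered, dropped

    for policy in ("drop-tail", "drop-front"):
        d, x = run(policy)
        print(f"{policy}: delivered={d} dropped={x}")

Both policies shed the same load, but drop-tail's losses are the newest packets while drop-front's losses are the oldest, so under drop-front the receiver learns about the hole -- and can start recovering -- sooner.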
From roman at pletka.ch  Wed May  2 12:58:25 2007
From: roman at pletka.ch (Roman Pletka)
Date: Wed, 02 May 2007 21:58:25 +0200
Subject: [e2e] Packet dropping
In-Reply-To: <463857A4.1020205@gmail.com>
References: <463857A4.1020205@gmail.com>
Message-ID: <4638ED61.8010902@pletka.ch>

Hi Khaled,

I did some research on the impact of dropping packets from the front of a queue that might be of interest to you. It was related to the approximative longest queue drop algorithm and shows how packet drops from the front of a queue can help TCP maintain a certain bandwidth share in the presence of a non-responsive source. Have a look at section 3 in the report: http://ecwww.eurecom.fr/~pletka/publications/report-pletka-99.pdf

Best regards,
Roman

---------------------------------------------------------------------------
Dr. Roman Pletka    roman at pletka.ch
Meilibachweg 11     Private: +41 43 244 6654
CH-8810 Horgen      Mobile:  +41 79 293 6948
---------------------------------------------------------------------------

From arjuna at erg.abdn.ac.uk  Wed May  2 23:00:02 2007
From: arjuna at erg.abdn.ac.uk (Arjuna Sathiaseelan)
Date: Thu, 3 May 2007 07:00:02 +0100
Subject: [e2e] Packet dropping (Khaled Elsayed)
Message-ID: <200705030600.l436086w010745@erg.abdn.ac.uk>

My belief is, as Craig said, that for real-time packets dropping the oldest packet would be the best solution - so it would be better to drop from the front of the queue, as most real-time packets (VoIP, videoconferencing) would be carried on UDP or DCCP, which do not require transport-layer retransmissions. We need to note that dropping real-time packets such as VoIP packets (carried by UDP or DCCP) would be more of a concern to the application layer than to the transport layer.

But for non-real-time applications running over TCP, I would prefer to see the new packet being dropped rather than the oldest packet, as the latter would be a burden to the transport layer - since the transport layer has to buffer up all the out-of-order packets!

Arjuna

From dpreed at reed.com  Thu May  3 07:21:41 2007
From: dpreed at reed.com (David P. Reed)
Date: Thu, 03 May 2007 10:21:41 -0400
Subject: [e2e] Packet dropping
In-Reply-To: <20070502103308.0E521123842@aland.bbn.com>
References: <20070502103308.0E521123842@aland.bbn.com>
Message-ID: <4639EFF5.6090700@reed.com>

Dropping the new packet slows the current TCP congestion control algorithm's response, leading to slower and slower clogged Stevens' pipes, and a long time to recover. Making the control loop for congestion faster end-to-end is a real-time problem embedded in non-real-time data transfers, and it affects overall performance.

From dpreed at reed.com  Thu May  3 07:30:35 2007
From: dpreed at reed.com (David P. Reed)
Date: Thu, 03 May 2007 10:30:35 -0400
Subject: [e2e] Packet dropping (Khaled Elsayed)
In-Reply-To: <200705030600.l436086w010745@erg.abdn.ac.uk>
References: <200705030600.l436086w010745@erg.abdn.ac.uk>
Message-ID: <4639F20B.5030302@reed.com>

Actually, TCP typically retransmits all the "out of order" packets you refer to, Arjuna, because the sender won't know the receiver has received them unless SACK is working - an option that is not necessarily there. So the receiver can just drop all the out-of-order packets after a loss due to congestion without affecting throughput.

If you had a channel that could carry acknowledgements faster than the speed of light (a so-called ESP channel), of course you could invent a more interesting protocol than TCP. But then you'd have to require that of every technology supported by TCP, and it's important to remember that interop among heterogeneous technologies is TCP's number-one goal. Super-hyper-optimized multihop subnets of uniform technology are best deployed as an underlay under TCP, and viewed as a link.
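
Anil Agarwal's follow-up below makes the cost of discarding out-of-order segments concrete. A toy accounting sketch (illustrative assumptions: a 10-segment window, exactly one loss, no SACK) that counts segment transmissions with and without receiver-side buffering:

    def transmissions(lost, window, receiver_buffers):
        """Segments sent to deliver `window` segments when one is dropped once."""
        sent = window                    # the first flight, including the loss
        if receiver_buffers:
            sent += 1                    # retransmit only the hole; the cumulative
                                         # ACK then covers the buffered segments
        else:
            sent += window - lost        # discarded out-of-order arrivals must all
                                         # be resent along with the hole
        return sent

    for buffers in (True, False):
        print("receiver buffers out-of-order:", buffers,
              "-> transmissions:",
              transmissions(lost=3, window=10, receiver_buffers=buffers))

With buffering, 11 transmissions suffice; without it, 17 -- which is the point Anil Agarwal makes below.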
From perfgeek at mac.com  Thu May  3 08:16:49 2007
From: perfgeek at mac.com (rick jones)
Date: Thu, 3 May 2007 08:16:49 -0700
Subject: [e2e] Packet dropping
In-Reply-To: <20070502173630.AAF1E123842@aland.bbn.com>
References: <20070502173630.AAF1E123842@aland.bbn.com>
Message-ID: <342865cd2bb214aa635405fc9d80082f@mac.com>

On May 2, 2007, at 10:36 AM, Craig Partridge wrote:

> In either case the router queue looks similar -- the issue is which wins going forward. And it wasn't immediately clear to me that fast retransmit was better. If we drop 0, we're in fast retransmit and about to enter slow start on packet a. If we drop 9, we're about to fire off dupe acks on a-i, and will enter fast retransmit on packet b.

At the risk of ignoring previously stated context, I think it is prudent to _not_ assume there will be segments abcdefghi, in which case dropping 0 gives us the best chance at having fast retransmit in the first place rather than a retransmission timeout.

rick jones
there is no rest for the wicked, yet the virtuous have no pillows
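
rick's caveat can be checked mechanically. A small sketch (illustrative, not from the thread) that replays Craig's queue to a cumulative-ACK receiver and reports when the third duplicate ACK -- the fast-retransmit trigger -- would fire:

    def recovery(queue, dropped, more_data=()):
        """Deliver `queue` minus `dropped`, then `more_data`, to a
        cumulative-ACK receiver; report how the loss gets detected."""
        expected, dupacks = 0, 0
        for seg in [s for s in queue if s != dropped] + list(more_data):
            if seg == expected:
                expected += 1            # ACK advances; dupack count resets
                dupacks = 0
            else:
                dupacks += 1             # out-of-order arrival: duplicate ACK
                if dupacks == 3:
                    return f"fast retransmit of {expected} after segment {seg}"
        return f"no third dupack; segment {expected} waits for a timeout (RTO)"

    queue = list(range(10))              # the router holds segments 0..9
    print("drop 0:", recovery(queue, dropped=0))
    print("drop 9:", recovery(queue, dropped=9))
    print("drop 9, more data follows:",  # 10,11,12 stand in for a,b,c
          recovery(queue, dropped=9, more_data=[10, 11, 12]))

Dropping 0 triggers fast retransmit from the segments already queued; dropping 9 recovers quickly only if segments beyond 9 actually exist, and is otherwise left to the retransmission timer, exactly as rick notes.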
From Anil.Agarwal at viasat.com  Thu May  3 08:43:10 2007
From: Anil.Agarwal at viasat.com (Agarwal, Anil)
Date: Thu, 3 May 2007 11:43:10 -0400
Subject: [e2e] Packet dropping (Khaled Elsayed)
In-Reply-To: <4639F20B.5030302@reed.com>
Message-ID: <0B0A20D0B3ECD742AA2514C8DDA3B0654F0C40@VGAEXCH01.hq.corp.viasat.com>

David Reed wrote:
> Actually, TCP typically retransmits all the "out of order" packets you refer to, Arjuna, because the sender won't know the receiver has received them unless SACK is working - an option that is not necessarily there. So the receiver can just drop all the out-of-order packets after a loss due to congestion without affecting throughput.

This is not quite true (perhaps it is a simplified explanation), since the sender, after a timeout, will proceed to do slow start, starting with cwnd = 1 segment. If the receiver has buffered the segments received after the lost packet in question, it will ack all of them once the lost segment's retransmission is received; the other segments will not be retransmitted.

Similarly, a fast retransmit (without SACK) will cause the lost segment to be retransmitted. If the retransmission succeeds, then no other segments will be retransmitted, provided the receiver has buffered the other segments. Also, slow start will not be triggered.

Hence, it helps to buffer out-of-sequence received segments.

Anil

From dpreed at reed.com  Thu May  3 07:18:47 2007
From: dpreed at reed.com (David P. Reed)
Date: Thu, 03 May 2007 10:18:47 -0400
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To:
References:
Message-ID: <4639EF47.1080909@reed.com>

Jon Crowcroft wrote:
> 1/ some people have claimed that one can build many-to-many multihop radio systems that offer more capacity as the number of nodes join. If this is true, this can operate within quite a narrow band (e.g. ISM) and should be sufficient for a very long time. If we can show it's true in that band, other bands can follow - within regions we still need to multiplex spectrum in some hard, non-liquid (dave reed) way, just til some of the technology is better. the identifiers and management of this could be done through Virtual Private Wireless Channel Identifiers, which might use some name space we have seen before

A linear growth in capacity as nodes join probably does not provide more capacity per node. The simple model one might imagine achieving is:

Cap[System] = o(M*W*log(S/N))

where M is the number of nodes, W is the bandwidth, and S is the signal power per station. It is actually unknown whether this is an upper bound, if only because the standard analysis presumes that the noise process is independent at each receiver - an overly non-physical and way-too-conservative assumption, off by a factor likely o(M^k) where k >= 1.

The per-node capacity in this hypothetical conservative model is thus

Cap[node] = o(W*log(S/N))

and as you can see, the English-language translation would be "narrow band radio sucks!" or "narrow band radio is good for cooking!" W is a limit, unless you want to fry any biological organisms in the field, which respond not as o(log(S)) but as o(S).

In other words, the reason for multiplexing a wideband system is that everyone can potentially achieve much higher burst rates without turning the world into a microwave oven. This is the same reason that packet systems, rather than rate-limited systems, are better for many applications other than 3 kHz telephony - around which one might believe the entire communications incumbency rallies every time its government- or god-granted monopoly is threatened by technology change. But there are still people out there who reason that "no one will ever need more bits per second than a human can type or a human can read", and US Senators who babble about Internets made of clogged pipes.

From marbukh at antd.nist.gov  Wed May  2 10:39:41 2007
From: marbukh at antd.nist.gov (marbukh@antd.nist.gov)
Date: Wed, 02 May 2007 13:39:41 -0400
Subject: [e2e] Collaboration on Future Internet Architectures
Message-ID: <7.0.1.0.2.20070502133411.025ba8a0@antd.nist.gov>

Just a note on 1/: as the number of nodes grows, the upper limit on capacity per node drops. This may have negative implications for large-scale, purely wireless networks.

Regards,
Vladimir
From dxu at cs.purdue.edu  Wed May  2 12:25:06 2007
From: dxu at cs.purdue.edu (Dongyan Xu)
Date: Wed, 2 May 2007 15:25:06 -0400 (EDT)
Subject: [e2e] ACM NOSSDAV 2007 (June 4-5, U. of Illinois at Urbana-Champaign)
Message-ID:

Dear Colleague:

I'd like to bring to your attention the upcoming NOSSDAV 2007 Workshop at the University of Illinois at Urbana-Champaign. My sincere apologies if you receive multiple copies of this information.

Regards,
Dongyan Xu
Publicity Co-Chair, ACM NOSSDAV 2007

8-<---------------------------------------------------------------------

CALL FOR PARTICIPATION

The 17th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV 2007)
Urbana-Champaign, IL, USA, June 4-5, 2007
Sponsored by ACM SIGMM
http://www.nossdav.org/2007

Early registration deadline: May 14, 2007

The 17th NOSSDAV will be held in June 2007 at the University of Illinois at Urbana-Champaign, IL, USA. We invite you to attend this workshop, which is known as an event where new and controversial ideas are presented and receive strong feedback. At NOSSDAV, seasoned researchers meet students for lively discussions. The workshop provides a time and venue to inspire discussion among its participants.

NOSSDAV fosters cutting-edge, state-of-the-art research in multimedia and newly emerging areas. From its original focus on support for audio and video, the scope of NOSSDAV has broadened to include networked games, sensor networks, multimedia interfaces, and peer-to-peer networking.

At this year's NOSSDAV, Professor Ralf Steinmetz presents us with a new challenge in his keynote speech "QoS in Wireless Mesh Networks: A Challenging Endeavor", and industry meets academia in a panel discussion on "Large Scale Peer-to-Peer Streaming & IPTV Technologies". 18 presenters from 9 countries will present their research on new forms of streaming, gaming, coding, IPTV, measurement, mobility, and middleware, and they expect your feedback and challenges.

You can find the detailed program at http://www.nossdav.org/2007/program.html

If you have any questions, please get in touch with the co-chairs:
Reza Rejaie (University of Oregon)
Klara Nahrstedt (UIUC)

From tvpoh at essex.ac.uk  Fri May  4 08:44:06 2007
From: tvpoh at essex.ac.uk (Tze-Ven Poh)
Date: Fri, 4 May 2007 16:44:06 +0100
Subject: [e2e] Premium Service
Message-ID: <03ad01c78e63$0ca22d50$cf3df59b@essex.ac.uk>

Why does the new Expedited Forwarding PHB memo (RFC 3246) regard the Expedited Forwarding PHB as a service that offers a QoS guarantee with bounded delay and bounded jitter (see section 2.4)? The original memo, RFC 2598, describes the service as offering low loss, low latency, low jitter, and assured bandwidth through DS domains; it says nothing about bounding the delay or the jitter. This is also stated on the first page of RFC 3246.

RFC 3246 states that eq. 3 on page 5 must be satisfied, which makes it sound more like delivering a hard type of QoS rather than a soft one. In my imagination, low latency means offering latency at its best (subject to instantaneous conditions or other constraints), not guaranteeing it. Was this what the original definition of Expedited Forwarding was supposed to be?

From huitema at windows.microsoft.com  Thu May  3 09:22:00 2007
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Thu, 3 May 2007 09:22:00 -0700
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To:
References:
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D064046A8DD5@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> 1/ some people have claimed that one can build many-to-many multihop radio systems that offer more capacity as the number of nodes join.

Some have claimed it, but it is far from being proved. The number of hops tends to increase with the number of nodes -- typically scaling as the square root of that number if the nodes are arranged in a plane. The available bandwidth per node thus tends to decrease with the number of nodes.

-- Christian Huitema

From Jon.Crowcroft at cl.cam.ac.uk  Fri May  4 00:49:59 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Fri, 04 May 2007 08:49:59 +0100
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To: Message from Christian Huitema of "Thu, 03 May 2007 09:22:00 PDT." <70C6EFCDFC8AAD418EF7063CD132D064046A8DD5@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
Message-ID:

aside from using mobility (as per grossglauser/tse), see:

A. Ozgur, O. Leveque and D. Tse, "Hierarchical Cooperation Achieves Optimal Capacity Scaling in Ad Hoc Networks", submitted to the IEEE Transactions on Information Theory, Sept. 2006 (revised Feb. 2007). http://www.eecs.berkeley.edu/~dtse/pub.html

for an example of a scheme which might work. the main problem with cooperation techniques is that achieving them in practice may not be do-able (it reminds me of quantum computing in this regard: full of promises, but possibly impossible:)

In missive <70C6EFCDFC8AAD418EF7063CD132D064046A8DD5 at WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>, Christian Huitema typed:

>> Some have claimed it, but it is far from being proved. The number of hops tends to increase with the number of nodes -- typically scaling as the square root of that number if the nodes are arranged in a plane. The available bandwidth per node thus tends to decrease with the number of nodes.

cheers
jon
The number of >>hops tend to increase with the number of nodes -- typically scaling as >>the square root of that number if the nodes are arranged in a plane. The >>available bandwidth per node tends thus to decrease with the number of >>nodes. >> >>-- Christian Huitema >> >> >> cheers jon From dpreed at reed.com Fri May 4 17:18:20 2007 From: dpreed at reed.com (David P. Reed) Date: Fri, 04 May 2007 20:18:20 -0400 Subject: [e2e] moderation criteria awareness? Message-ID: <463BCD4C.1040709@reed.com> Can anyone decode "the message headers matched a filter rule" that was applied in this case? > -------- Original Message -------- > Subject: Your message to end2end-interest awaits moderator approval > From: end2end-interest-bounces at postel.org > To: dpreed at reed.com > Date: Fri, 04 May 2007 16:48:51 -0700 > > > > Your mail to 'end2end-interest' with the subject > > Re: [e2e] Collaboration on Future Internet Architectures > > Is being held until the list moderator can review it for approval. > > The reason it is being held: > > The message headers matched a filter rule > > Either the message will get posted to the list, or you will receive > notification of the moderator's decision. If you would like to cancel > this posting, please visit the following URL: Surely computers can be capable of (say) telling us *what* filter rule, and perhaps even telling us the human meaningful version of the reason for that rule, e.g. "sender is a known flamer" or "some blackhole vigilante site blackballed your MTA because Paul Vixie doesn't like them". It would be nice to know why one's posts are rejected, perhaps lessening one's paranoia... It's an "end to end" protocol issue when mail is not delivered to the endpoints named in the message, and instead stopped by an incomprehensible middlebox. From cottrell at slac.stanford.edu Fri May 4 18:31:02 2007 From: cottrell at slac.stanford.edu (Cottrell, Les) Date: Fri, 4 May 2007 18:31:02 -0700 Subject: [e2e] moderation criteria awareness? In-Reply-To: <463BCD4C.1040709@reed.com> References: <463BCD4C.1040709@reed.com> Message-ID: <35C208A168A04B4EB99D1E13F2A4DB0101DCC29C@exch-mail1.win.slac.stanford.edu> It may be an issue of not telling the spammers how to avoid the filter, i.e. it is a conflict of helpfulness vs. security. It is also possible the filters are hardly easy to interpret simply. -----Original Message----- From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of David P. Reed Sent: Friday, May 04, 2007 5:18 PM To: end2end-interest list Subject: [e2e] moderation criteria awareness? Can anyone decode "the message headers matched a filter rule" that was applied in this case? > -------- Original Message -------- > Subject: Your message to end2end-interest awaits moderator approval > From: end2end-interest-bounces at postel.org > To: dpreed at reed.com > Date: Fri, 04 May 2007 16:48:51 -0700 > > > > Your mail to 'end2end-interest' with the subject > > Re: [e2e] Collaboration on Future Internet Architectures > > Is being held until the list moderator can review it for approval. > > The reason it is being held: > > The message headers matched a filter rule > > Either the message will get posted to the list, or you will receive > notification of the moderator's decision. If you would like to cancel > this posting, please visit the following URL: Surely computers can be capable of (say) telling us *what* filter rule, and perhaps even telling us the human meaningful version of the reason for that rule, e.g. 
"sender is a known flamer" or "some blackhole vigilante site blackballed your MTA because Paul Vixie doesn't like them". It would be nice to know why one's posts are rejected, perhaps lessening one's paranoia... It's an "end to end" protocol issue when mail is not delivered to the endpoints named in the message, and instead stopped by an incomprehensible middlebox. From szander at swin.edu.au Sat May 5 18:52:19 2007 From: szander at swin.edu.au (Sebastian Zander) Date: Sun, 06 May 2007 11:52:19 +1000 Subject: [e2e] Netgames 2007 deadline is approaching (May 13th) Message-ID: <463D34D3.2060700@swin.edu.au> We apologize if you receive multiple copies of this announcment. +++++++++++++++++++++ Netgames 2007 Call for Papers +++++++++++++++++++++++ 6th Annual Workshop on Network and Systems Support for Games: Netgames 2007 September 19th and 20th 2007, Melbourne, Australia http://caia.swin.edu.au/netgames2007/ In co-operation with ACM SIGCOMM OVERVIEW ======== The NetGames workshop brings together researchers and developers from academia and industry to present new research in understanding networked games and in enabling the next generation of them. Submissions are sought in any area related to networked games. In particular, topics of interest include (but are not limited to) game-related work in: * Network measurement, usage studies and traffic modeling * System benchmarking, performance evaluation, and provisioning * Latency issues and lag compensation techniques * Cheat detection and prevention * Service platforms, scalable system architectures, and middleware * Network protocol design * Multiplayer mobile and resource-constrained gaming systems * Augmented physical gaming systems * User and usability studies * Quality of service and content adaptation * Artificial intelligence * Security, authentication, accounting and digital rights management * Networks of sensors and actuators for games * Impact of online game growth on network infrastructure * Text and voice messaging in games SUBMISSIONS =========== We solicit submisisons of full papers with a limit of 6 pages (inclusive of all figures, references and appendices). Authors must submit their papers in PDF and use single-spaced, double column ACM conference format. Reviews will be single-blind, authors must include their names and affiliations on the first page. Papers will be judged on their relevance, technical content and correctness, and the clarity of presentation of the research. Accepted papers will be archived in the ACM Digital Library and published in the workshop proceedings pending the participation of the authors in the workshop. Paper submissions will be via online upload to EDAS (http://edas.info/5431). Submission of a paper for review will be considered your agreement that at least one author will register and attend if your paper is accepted. Detailed paper submission guidelines are available at http://caia.swin.edu.au/netgames2007/submissions.html COMMITTEE ========= WORKSHOP CHAIR: Grenville Armitage (Swinburne University of Technology, Australia) PROGRAM COMMITTEE: Philip Branch (Swinburne University of Technology, Australia) Kuan-Ta Chen (Academia Sinica, Taiwan) Adrian Cheok (National University of Singapore) Mark Claypool (Worcester Polytechnic Institute, USA) Jon Crowcroft (University of Cambridge, UK) Wu-chang Feng (Portland State University, USA) Carsten Griwodz (University of Oslo, Norway) Tristan Henderson (Dartmouth College, USA) Yutaka Ishibashi (Nagoya Institute of Technology, Japan) Michael J. 
Katchabaw (University of Western Ontario, Canada) Yoshihiro Kawahara (The University of Tokyo, Japan) Martin Mauve (Heinrich-Heine-Universitat, Germany) John Miller (Microsoft Research, Cambridge, UK) Brian Levine (University of Massachusetts Amherst, USA) Wei Tsang Ooi (National University of Singapore) Lars Wolf (TU Braunschweig, Germany) Hartmut Ritter (Freie Universitat Berlin, Germany) Farzad Safaei (University of Wollongong, Australia) Jochen Schiller (Freie Universitat Berlin, Germany) Sebastian Zander (Swinburne University of Technology, Australia) WEBSITE / PUBLICITY: Lucas Parry (Swinburne University of Technology, Australia) LOCAL ARRANGEMENTS: Warren Harrop (Swinburne University of Technology, Australia) Lawrence Stewart (Swinburne University of Technology, Australia) Netgames 2007 will be held on September 19th and 20th 2007 in Melbourne, Australia. KEY DATES ========= Paper registration opens: March 25th, 2007 Paper registration closes: May 13th, 2007 (11:59pm New York) Full-paper submission: May 20th, 2007 (11:59pm New York) Notification to authors: July 6th, 2007 Early-bird and presenter Registration opens: July 13th, 2007 Camera ready manuscript: August 10th, 2007 (11:59pm New York) Early-bird and presenter registration closes: August 10th, 2007 Workshop: September 19-20th, 2007 +++++++++++++++++++++ Netgames 2007 Call for Papers +++++++++++++++++++++++ From lachlan.andrew at gmail.com Fri May 4 09:49:45 2007 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Fri, 4 May 2007 09:49:45 -0700 Subject: [e2e] Collaboration on Future Internet Architectures In-Reply-To: <4639EF47.1080909@reed.com> References: <4639EF47.1080909@reed.com> Message-ID: On 03/05/07, David P. Reed wrote: > the standard analysis > presumes that the noise process is independent at each receiver, an > overly non-physical and way-too-conservative-assumption by a factor > likely o(M^k) where k is >= 1. Thermal noise in the radio fround-end is almost certain to be independent at each receiver. If all other noise can be averaged out, this noise will still exist, and give the standard assumption. > The per-node capacity in this hypothetical conservative model is thus > Cap[node] = o(W*log(S/N))... ... if we have a single receiver. That is why David Tse's work is all MIMO. $0.02 Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From dpreed at reed.com Fri May 4 11:40:06 2007 From: dpreed at reed.com (David P. Reed) Date: Fri, 04 May 2007 14:40:06 -0400 Subject: [e2e] Collaboration on Future Internet Architectures In-Reply-To: References: <4639EF47.1080909@reed.com> Message-ID: <463B7E06.2090607@reed.com> Nothing magic about MIMO in this regard. Lachlan Andrew wrote: > On 03/05/07, David P. Reed wrote: >> the standard analysis >> presumes that the noise process is independent at each receiver, an >> overly non-physical and way-too-conservative-assumption by a factor >> likely o(M^k) where k is >= 1. > > Thermal noise in the radio fround-end is almost certain to be > independent at each receiver. If all other noise can be averaged out, > this noise will still exist, and give the standard assumption. I agree, but thermal noise in the radio front-end is a controllable of the radio design of the front-end - it's not a limit on capacity that depends either on the number of radios or on the environment in which the radios operate. 
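
Reed's Cap[node] = o(W*log(S/N)) and Huitema's square-root growth in hop count combine into a quick numeric illustration. A sketch, assuming an illustrative 20 MHz band and 10 dB SNR (both figures invented for the example, not taken from the thread):

    import math

    W = 20e6                  # bandwidth, Hz (assumed figure)
    snr = 10 ** (10.0 / 10)   # 10 dB signal-to-noise ratio (assumed figure)

    per_node = W * math.log2(1 + snr)   # Shannon-style per-node rate, bit/s

    for n in (10, 100, 1000, 10000):
        hops = math.sqrt(n)             # hop count grows ~sqrt(n) on a plane
        end_to_end = per_node / hops    # relaying re-spends capacity per hop
        print(f"n={n:6d}  hops~{hops:6.1f}  end-to-end ~{end_to_end/1e6:6.2f} Mbit/s")

The per-node link rate stays fixed while end-to-end throughput decays as 1/sqrt(n) -- the same arithmetic Vadim Antonov spells out below.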
From lachlan.andrew at gmail.com  Fri May  4 12:59:51 2007
From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Fri, 4 May 2007 12:59:51 -0700
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To: <463B7E06.2090607@reed.com>
References: <4639EF47.1080909@reed.com> <463B7E06.2090607@reed.com>
Message-ID:

On 04/05/07, David P. Reed wrote:
> Nothing magic about MIMO in this regard.
>
> I agree, but thermal noise in the radio front-end is controllable by the design of the front-end - it's not a limit on capacity that depends either on the number of radios or on the environment in which the radios operate.

True, but if we ignore noise that is independent between receivers, I doubt that N (and hence S) would be big enough to do much cooking-with-narrowband. However, I don't have figures, and would be happy to be proved wrong :)

As an aside, it would be interesting to see if there are further physical limits imposed by the need to have super-cooled receivers near our hot transmitters; could the shielding intrinsically interfere with the radiation patterns or something? Maxwell's daemon comes to mind...

> > ... if we have a single receiver. That is why David Tse's work is all MIMO.
> Simple division has nothing to do with MIMO.

*grin* Oops. I wasn't thinking...

Lachlan

--
Lachlan Andrew
Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820    Fax: +1 (626) 568-3603

From dpreed at reed.com  Fri May  4 16:48:40 2007
From: dpreed at reed.com (David P. Reed)
Date: Fri, 04 May 2007 19:48:40 -0400
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To:
References: <4639EF47.1080909@reed.com> <463B7E06.2090607@reed.com>
Message-ID: <463BC658.6090802@reed.com>

Lachlan Andrew wrote:
> True, but if we ignore noise that is independent between receivers, I doubt that N (and hence S) would be big enough to do much cooking-with-narrowband. However, I don't have figures, and would be happy to be proved wrong :)

Depends on the system architecture. Today's radio designs certainly have no ability to scale their power down as the density goes up.

> As an aside, it would be interesting to see if there are further physical limits imposed by the need to have super-cooled receivers near our hot transmitters; could the shielding intrinsically interfere with the radiation patterns or something? Maxwell's daemon comes to mind...

I completely agree that this is an interesting direction. As a hobby, I've been wondering whether the "reversible computing" or "conservative logic" concepts can be applied sensibly to communications. The minimum energy needed to communicate one bit of information from one point in a thermodynamic system to another with perfect reliability would seem to be a simple thermodynamic problem to solve. There is no obvious reason why spatial separation should be a bound...

Bennett/Landauer tell us what the minimum energy to destroy a bit of information during a computation must be, and suggest that that is the only essential energy that must be expended in a computation. And there is a standard conjecture/result from quantum gravity that suggests that the total amount of information within a bounded region of space is proportional to the surface area of that space (thus a fractal surface can enclose more information than a tight convex one).

From avg at kotovnik.com  Fri May  4 15:20:00 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Fri, 4 May 2007 15:20:00 -0700 (PDT)
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To: <70C6EFCDFC8AAD418EF7063CD132D064046A8DD5@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
Message-ID:

People who claim that increasing the density of radio nodes will increase the per-node bandwidth (or at least leave it unchanged) are simply not good with arithmetic.

Let's look at a plane with some distribution of radio nodes on it, with per-node characteristic bandwidth B, achieved at signal/noise ratio SN0. Let's now scale down that plane by reducing all distances by a factor F. The node-to-node signal changes as the square of distance, i.e. we can get the same power at the receiver by transmitting at 1/F^2 of the original power. The total noise from interference from other transmitters is proportional to the density of transmitters times their power - density increases as F^2, and the power of each transmitter is adjusted by 1/F^2, as above.

Thus by scaling distances down and reducing the power of transmitters accordingly, we end up with the same S/N, and the same bandwidth per node. Which means that we get the same effective bandwidth _for the same average number of hops_ (Beff = B/Nhops). But each hop is now shorter, so a packet has to make F times more hops. This makes the effective bandwidth between two fixed points (the same points in the original and the scaled network) decrease: Beff_scaled = Beff_orig/F.

Note that this result does not depend on the directionality of antennae, the method of sharing the medium capacity, the quality of receivers, etc. Just increasing the number of nodes in a multihop system HAS to decrease the effective bandwidth as the square root of the number of nodes.

The way to achieve the claimed scaling is to have an overlay short-cut non-interfering network (i.e. fiber-optic, etc) with a number of interconnection points proportional to the number of radio nodes. (Another option is to exploit the smaller scale in order to make use of higher-frequency (i.e. free-space optical) interconnects that are unfeasible at longer distances.)

--vadim

From touch at ISI.EDU  Sun May  6 20:52:01 2007
From: touch at ISI.EDU (Joe Touch)
Date: Sun, 06 May 2007 20:52:01 -0700
Subject: [e2e] moderation criteria awareness?
In-Reply-To: <463BCD4C.1040709@reed.com>
References: <463BCD4C.1040709@reed.com>
Message-ID: <463EA261.7080708@isi.edu>

David (et al.),

As one whose posts have matched filter rules before, I am surprised to see this incident raise the issue.

David P. Reed wrote:
> Can anyone decode "the message headers matched a filter rule" that was applied in this case?

There are a variety of filters used on this list to support the list policies, noted on the www.postel.org/e2e.htm site that describes this list. Further comments below...

> Surely computers can be capable of (say) telling us *what* filter rule, and perhaps even telling us the human-meaningful version of the reason for that rule, e.g. "sender is a known flamer" or "some blackhole vigilante site blackballed your MTA because Paul Vixie doesn't like them".

Computers can; Mailman, as installed, does not. There are a variety of reasons this could be the case, but you should contact the Mailman developers to answer that one.

> It would be nice to know why one's posts are rejected, perhaps lessening one's paranoia...

Held != rejected; this message was held for approval, and you'll note it was approved. If you want to know why a message is held (or rejected) in general, please view our website with our list posting policy; that's good advice for any list, prior to posting anyway.
As to why an particular item matched our filter rules, they are not perfect and are designed to err on the side of false positives which are merely held for review. > It's an "end to end" protocol issue when mail is not delivered to the > endpoints named in the message, and instead stopped by an > incomprehensible middlebox. If you want to know more about this list, please read the information that is available on our website; any search engine would suffice, or use the link above. If you have issue with the software, Mailman is open source and accepts contributions. Joe (as list admin) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070506/02dcf57a/signature.bin From touch at ISI.EDU Mon May 7 09:11:31 2007 From: touch at ISI.EDU (Joe Touch) Date: Mon, 07 May 2007 09:11:31 -0700 Subject: [e2e] moderation criteria awareness? In-Reply-To: <35C208A168A04B4EB99D1E13F2A4DB0101DCC29C@exch-mail1.win.slac.stanford.edu> References: <463BCD4C.1040709@reed.com> <35C208A168A04B4EB99D1E13F2A4DB0101DCC29C@exch-mail1.win.slac.stanford.edu> Message-ID: <463F4FB3.4010803@isi.edu> PS - one strong hint: Since I try to moderate all CFPs posted to this list, posting a follow-up to a CFP is a nearly sure way to end up having your post also held. Hint: if you have a new topic of discussion (i.e., you're not really following up to the CFP, but perhaps inspired by it), start a new thread with a new header Joe (as list admin) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 250 bytes Desc: OpenPGP digital signature Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070507/d22eb465/signature.bin From L.Wood at surrey.ac.uk Mon May 7 08:27:22 2007 From: L.Wood at surrey.ac.uk (L.Wood@surrey.ac.uk) Date: Mon, 7 May 2007 16:27:22 +0100 Subject: [e2e] Collaboration on Future Internet Architectures References: <4639EF47.1080909@reed.com> <463B7E06.2090607@reed.com> <463BC658.6090802@reed.com> Message-ID: <603BF90EB2E7EB46BF8C226539DFC20701316AB2@EVS-EC1-NODE1.surrey.ac.uk> David P. Reed wrote: > And there is a standard cionjecture/result from quantum gravity that > suggests that the total amount of information within a bounded region > of space is proportional to the surface area of that space (thus a > fractal surface can enclose more information than a tight convex one). This is why David's beard makes him look more knowledgeable. L. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20070507/a406458d/attachment-0001.html From Jon.Crowcroft at cl.cam.ac.uk Mon May 7 09:51:40 2007 From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Mon, 07 May 2007 17:51:40 +0100 Subject: [e2e] moderation criteria ignorance? In-Reply-To: Message from Joe Touch of "Mon, 07 May 2007 09:11:31 PDT." 
<463F4FB3.4010803@isi.edu>
Message-ID:

of course, MCI is no excuse :-)

In missive <463F4FB3.4010803 at isi.edu>, Joe Touch typed:

 >>PS - one strong hint:
 >>
 >>Since I try to moderate all CFPs posted to this list, posting a
 >>follow-up to a CFP is a nearly sure way to end up having your post also
 >>held.
 >>
 >>Hint: if you have a new topic of discussion (i.e., you're not really
 >>following up to the CFP, but perhaps inspired by it), start a new thread
 >>with a new header
 >>
 >>Joe (as list admin)

cheers

jon

From Jon.Crowcroft at cl.cam.ac.uk Thu May 10 02:19:13 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Thu, 10 May 2007 10:19:13 +0100
Subject: [e2e] on detecting social nets and using them for optimising dtn forwarding algorithms
Message-ID:

the following technical report

http://www.cl.cam.ac.uk/TechReports/UCAM-CL-TR-684.html

is available for your perusal. Due to circumstances beyond our control, we won't be talking about it in Kyoto unless you want to chat in the baths or temples...

Bubble Rap: Forwarding in small world DTNs in ever decreasing circles

In this paper we seek to improve understanding of the structure of human mobility, and to use this in the design of forwarding algorithms for Delay Tolerant Networks for the dissemination of data amongst mobile users.

Cooperation binds but also divides human society into communities. Members of the same community interact with each other preferentially. There is structure in human society. Within society and its communities, individuals have varying popularity. Some people are more popular and interact with more people than others; we may call them hubs. Popularity ranking is one facet of the population. In many physical networks, some nodes are more highly connected to each other than to the rest of the network. The set of such nodes are usually called clusters, communities, cohesive groups or modules. There is structure to social networking. Different metrics can be used such as information flow, Freeman betweenness, closeness and inference power, but for all of them, each node in the network can be assigned a global centrality value.

What can be inferred about individual popularity, and the structure of human society from measurements within a network? How can the local and global characteristics of the network be used practically for information dissemination? We present and evaluate a sequence of designs for forwarding algorithms for Pocket Switched Networks, culminating in Bubble, which exploit increasing levels of information about mobility and interaction.

j&b
with apologies to magritte,
ceci n'est pas un sigcomm papier

From dpreed at reed.com Thu May 10 07:25:33 2007
From: dpreed at reed.com (David P.
Reed)
Date: Thu, 10 May 2007 10:25:33 -0400
Subject: [e2e] on detecting social nets and using them for optimising dtn forwarding algorithms
In-Reply-To:
References:
Message-ID: <46432B5D.5060809@reed.com>

I am told that this problem was solved by Nokia already. Perhaps they patented it.

Jon Crowcroft wrote:
> the following technical report
> http://www.cl.cam.ac.uk/TechReports/UCAM-CL-TR-684.html
> is available for your perusal. Due to circumstances beyond
> our control, we won't be talking about it in Kyoto
> unless you want to chat in the baths or temples...
>
> Bubble Rap: Forwarding in small world DTNs in ever decreasing circles
>
> [...]
>
> j&b
> with apologies to magritte,
> ceci n'est pas un sigcomm papier

From dpreed at reed.com Fri May 11 20:07:44 2007
From: dpreed at reed.com (David P. Reed)
Date: Fri, 11 May 2007 23:07:44 -0400
Subject: [e2e] It's all my fault
Message-ID: <46452F80.9040801@reed.com>

As the person who argued for source routing in the original design, I guess I must confess that I am the devil incarnate. Steve Crocker and others will probably want to join me at the guillotine.

Apparently Vixie and other paranoiacs and vigilantes have gotten it into their heads that RH0 in IPv6 is a "denial of service on steroids" and should be deleted immediately. Since source routing has many benefits, as those of us who have advocated it (such as Dave Farber) can attest, one might hope that there would be rational folks who would consider this as a balanced issue.

But since the "security" experts (who never saw a deep packet inspection they didn't love) have decided that firewalls can intuit intent 100% correctly merely by inspecting bits, they have decided to declare that single path routing is by definition optimal and good enough for all users.

By all means, recreate the fixed-route bellhead net of 1969.
And please, let's have the middleboxes decide what messages we ought to be allowed to send - after all, the brilliant routerheads know that we "lusers" at the edge of the net are stupid fools who should never be allowed to actually run any applications that they haven't anticipated.

All Hail Paul Vixie! Heil Paul!

From randy at psg.com Sat May 12 07:57:53 2007
From: randy at psg.com (Randy Bush)
Date: Sat, 12 May 2007 17:57:53 +0300
Subject: [e2e] It's all my fault
In-Reply-To: <46452F80.9040801@reed.com>
References: <46452F80.9040801@reed.com>
Message-ID: <4645D5F1.5030801@psg.com>

it would be considerably more helpful if, instead of ad homina and vituperation, you actually spoke to the rh0 security issues and possible approaches to mitigation as a technical and engineering problem.

randy

From jeroen at unfix.org Sat May 12 09:02:27 2007
From: jeroen at unfix.org (Jeroen Massar)
Date: Sat, 12 May 2007 17:02:27 +0100
Subject: [e2e] It's all my fault
In-Reply-To: <4645D5F1.5030801@psg.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com>
Message-ID: <4645E513.2090209@spaghetti.zurich.ibm.com>

Randy Bush wrote:
> it would be considerably more helpful if, instead of ad homina and
> vituperation, you actually spoke to the rh0 security issues and possible
> approaches to mitigation as a technical and engineering problem.

Well, one of the main mitigation methods is very easy, and actually already ensures that RH0 is useless: uRPF. If all networks properly implemented BCP38 and thus did RPF checks, packet bouncing would not be possible. Unfortunately there are a lot of networks connected to the Internet that are not following the Best Common Practices.

IMHO there should be an organization that keeps a close eye on Internet providers, that is, organizations that carry packets from A to B. Their job would be to say "this organization is taking good care of their network, they apply BCP38, they resolve problems in an adequate manner, etc". Then, as an operator following this organization, one can sandbox the organizations (read: prefixes/ASNs) that do not belong to it and let them play on the toy Internet.

Then, when there is a mechanism similar to RH0, you can actually trust your peers to resolve problems quickly, instead of having to wail because finding the source of the problem is impossible and contacting the right people to get it fixed is not possible either. Of course, the first thing that one does is contact the upstream etc, but as the upstream at a certain point is a transit, they will say "we do not know where it is coming from".

Nice issues: technical, with a flake of politics.

Greets,
Jeroen

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 311 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070512/0ec8a8d8/signature.bin

From Jon.Crowcroft at cl.cam.ac.uk Fri May 11 06:06:11 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Fri, 11 May 2007 14:06:11 +0100
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To:
References:
Message-ID:

In missive , Vadim Antonov typed:

 >>People who claim that increasing the density of radio nodes will increase
 >>the per-node bandwidth (or at least leave it unchanged) are simply not
 >>good with arithmetic.
conversely, at geometry

 >>Let's look at a plane with some distribution of radio nodes on it, with
 >>per-node characteristic bandwidth B, achieved at signal/noise ratio SN0.

try a volume, not a plane. the number of alternate paths in the volume goes up faster, and one can use lots of diversity tricks (path, code, etc) to make the alternates only have epsilon interference - if you then alternate dynamic power over a short haul hop to spread the signal to a neighbourhood, with dynamic coding for longer haul to get the message to the next neighbourhood, you get capacity within epsilon*Nhops of N - conjecture: a sequence of "knights moves" of 1 hop up, 2 hops forward can tile a volume in a systematic way and use the scheme above (due to Tse) in a very easy, self-organised fashion...

Imagine a field full of children all sitting in groups, and at the middle of each group is a teacher. You can ask the teacher a question by putting up your hand, and each teacher can listen and answer one child at a time. If two groups of children are too close together, then one child asking a question, or one teacher answering, may drown out the other group. We try and have enough teachers so that if all the children who want to ask a question at the same time do so, then there are enough teachers to answer. If the teacher you are asking is busy, you might go from the edge of the group you are in, nearer to the edge of another group, and try and ask their teacher.

But imagine you can't be bothered to move, but you notice that the nearest teacher is busy, and a teacher further away is not busy. What if you shout your question? What if the teacher nearby can still manage to listen to the nearby child, even while one or two of you shout to a further teacher? An interesting question to ask is: how many more questions can the teachers and groups of children answer if the teachers can deal with sorting out different people speaking? so now have kids either sit or stand to ask a question, and speak in a low or high squeaky voice, and have teachers sit or stand, and choose to listen to high or low grumbly voices...

j.

From jari.arkko at piuha.net Sat May 12 10:17:36 2007
From: jari.arkko at piuha.net (Jari Arkko)
Date: Sat, 12 May 2007 20:17:36 +0300
Subject: [e2e] It's all my fault
In-Reply-To: <4645D5F1.5030801@psg.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com>
Message-ID: <4645F6B0.1090004@piuha.net>

Randy, David,

> it would be considerably more helpful if, instead of ad homina and
> vituperation, you actually spoke to the rh0 security issues and possible
> approaches to mitigation as a technical and engineering problem.

Indeed.

Implementors have largely already done the right thing already earlier or else released patches in recent weeks. We are also dealing with the removal/disable of RH0 in the IPv6 WG list discussion. Other parts of the protocol stack that needed something like routing header have already years ago been designed to do something safe instead of RH0.

My advice: if you have something to say about the way which we should disable RH0, go to the IPv6 list. Or if you can, apply a patch in your company's products or networks. Or apply your energy in figuring out what other vulnerabilities we have in our stacks; there's plenty of work in this space...

Jari

From dpreed at reed.com Sat May 12 11:27:52 2007
From: dpreed at reed.com (David P.
Reed) Date: Sat, 12 May 2007 14:27:52 -0400 Subject: [e2e] It's all my fault In-Reply-To: <4645D5F1.5030801@psg.com> References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> Message-ID: <46460728.4050006@reed.com> Regarding what Vixie said, see the following. Apparently he is fanning a flame that somehow the invention of source routing or its inclusion in the IP standard causes or amplifies attacks on network resources. http://www.theregister.com/2007/05/11/ipv6_flaw_scramble/ I didn't start the ad hominem attacks or vituperation. Read Vixie's words. Randy Bush wrote: > it would be considerably more helpful if, instead of ad homina and > vituperation, you actually spoke to the rh0 security issues and possible > approaches to mitigation as a technical and engineering problem. > > randy > > From dpreed at reed.com Sat May 12 11:40:58 2007 From: dpreed at reed.com (David P. Reed) Date: Sat, 12 May 2007 14:40:58 -0400 Subject: [e2e] It's all my fault In-Reply-To: <4645F6B0.1090004@piuha.net> References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <4645F6B0.1090004@piuha.net> Message-ID: <46460A3A.2030004@reed.com> Jari - Implementors who remove or disable source routing are (in my opinion, of course) taking matters into their own hands on the basis of a misguided theory that source routing *causes* denial of service. Source routing is a standard, and was not included in the standard as a "mistake" (either in IPv4 or in IPv6). It was included as a useful tool. It was intended that end users would be able to use it. Blocking end users from using it is vigilante action. That would be appropriate if source routing were a bad idea. It is not. It is a tool, which can be misused. Removing screwdrivers from the workbench because they can be used to stab people through the eye is the same sort of logic. There is a rather nice analysis of the utility of source routing that Jerry Saltzer and I wrote many years ago. We did not invent the idea - Dave Farber used it prior to that. And source routing is a well understood routing technique taught in the literature. Regarding Bush's point about "amelioration" of source routing's effects. Source routing does not have effects. Denial of service attacks have effects. I am happy to talk about amelioration of denial of service attacks. Regarding Paul Vixie - I rarely speak out against people, mostly going after their ideas. But Vixie has a track record. He is one of the inventors, apologists, and promoters of aggressive spam blackhole lists: holding non-offenders by the thousands accountable for the actions of a few spammers. I and many others have been held hostage by having our email blocked by his "blackhole vigilantes". He has never apologized for it. I personally think he could be sued for millions of dollars of lost work and aggravation. Your mileage may vary. Jari Arkko wrote: > Randy, David, > > >> it would be considerably more helpful if, instead of ad homina and >> vituperation, you actually spoke to the rh0 security issues and possible >> approaches to mitigation as a technical and engineering problem. >> >> > > Indeed. > > Implementors have largely already done the right thing > already earlier or else released patches in recent weeks. > We are also dealing with the removal/disable of RH0 in the > IPv6 WG list discussion. Other parts of the protocol stack > that needed something like routing header have already > years ago been designed to do something safe instead of > RH0. 
> My advice: if you have something to say about the way
> which we should disable RH0, go to the IPv6 list. Or if
> you can, apply a patch in your company's products or
> networks. Or apply your energy in figuring out what
> other vulnerabilities we have in our stacks; there's
> plenty of work in this space...
>
> Jari

From sthaug at nethelp.no Sat May 12 12:12:00 2007
From: sthaug at nethelp.no (sthaug@nethelp.no)
Date: Sat, 12 May 2007 21:12:00 +0200 (CEST)
Subject: [e2e] It's all my fault
In-Reply-To: <46460728.4050006@reed.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com>
Message-ID: <20070512.211200.71125708.sthaug@nethelp.no>

> Regarding what Vixie said, see the following. Apparently he is fanning
> a flame that somehow the invention of source routing or its inclusion in
> the IP standard causes or amplifies attacks on network resources.
>
> http://www.theregister.com/2007/05/11/ipv6_flaw_scramble/
>
> I didn't start the ad hominem attacks or vituperation. Read Vixie's words.

As far as I can see he was speaking specifically about the IPv6 RH0 problem, not about source routing in general. Are you saying he is wrong about the IPv6 RH0 problem?

Steinar Haug, Nethelp consulting, sthaug at nethelp.no

From dubey.ism at gmail.com Sat May 12 17:47:43 2007
From: dubey.ism at gmail.com (Ashutosh Dubey)
Date: Sat, 12 May 2007 17:47:43 -0700
Subject: [e2e] RSA Algorithm for large texts
Message-ID: <827f3eeb0705121747r3e9fcd2ej673790d071ee877d@mail.gmail.com>

Hi all,

Can we use the RSA algorithm for encrypting large texts by breaking them into small blocks, where the length of the blocks may not be constant each time? Do we need to tell the length of the blocks at the decrypting end?

-- Ashutosh Kr. Dubey
Indian School of Mines, Dhanbad

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20070512/0b3ae894/attachment.html
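The usual answer to the question above is not to RSA-encrypt the text in blocks at all: encrypt a random symmetric key with RSA, and the bulk text with a symmetric cipher, so no block lengths need to be signalled to the decrypting end. A minimal sketch of that hybrid scheme (Python, assuming the third-party "cryptography" package; the key size and the Fernet cipher choice are illustrative, not something taken from the thread):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender: symmetric cipher for the (arbitrarily long) text,
    # RSA-OAEP only for the short symmetric key.
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(b"a large text ..." * 1000)
    wrapped_key = recipient.public_key().encrypt(sym_key, oaep)

    # Receiver: unwrap the key, then decrypt the bulk text.
    key = recipient.decrypt(wrapped_key, oaep)
    assert Fernet(key).decrypt(ciphertext).startswith(b"a large text")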
From detlef.bosau at web.de Mon May 14 08:17:41 2007
From: detlef.bosau at web.de (Detlef Bosau)
Date: Mon, 14 May 2007 17:17:41 +0200
Subject: [e2e] I got lost in opportunistic scheduling.
In-Reply-To: <4616E722.3070402@web.de>
References: <4616E722.3070402@web.de>
Message-ID: <46487D95.4040104@web.de>

Excuse me for returning to this somewhat annoying subject. Admittedly, I am somewhat lost here. My leading question is:

- Are there any peculiarities in mobile networks which can affect upper layers?

Then I got lost in the assumption that PF scheduling may lead to delay spikes which can cause trouble to upper layers. My next question was: how large are these? So, I read a lot of papers, put them into water, converted them into paper-mache and built a new house from it - and totally got lost.

So, my first question is - and if there is anybody who can help me here I would greatly appreciate it: why are we doing opportunistic scheduling at all? The question seems very "local" at first glance, but in fact opportunistic scheduling may turn store-and-forward components into stop-and-pray-for-restart components, so at least they are not "completely transparent" from an end-to-end perspective.

To my understanding, the key idea in opportunistic scheduling is to have a user (in a multiuser mobile network) send preferably in periods of "good network condition" (whatever this may be). To my understanding, one reason for doing so is to increase the spectral efficiency of a network. Instead of mitigating errors by averaging, as is done as a side effect of spreading by the use of long spreading sequences, the user avoids sending in periods with "bad network condition" (w.t.m.b.) and therefore errors are avoided. Side effect: spreading sequences can be shortened and throughput can be increased. At least, I think that is one lesson I have learned from some slides by David Tse.

So, we have three issues here.

1. A user shall send in periods with "good network condition" (...). However, there are two ancillary conditions:

2. There shall be fairness between users. (There exists some work on this; I have not yet read it all, otherwise our whole street would have got new houses from paper-mache. We could then rename it from "Galileistrasse" into "Potemkin Boulevard".)

3. From an end-to-end perspective, a user waiting for a time slot allocation might want to get it allocated some day. ("When I get older, losing my hair...")

In David Tse's talks about "Multiuser Diversity", the first issue corresponds to "Hitting the Peaks". The other ones correspond to the scheduling metric which is actually chosen and the time constants of the smoothing components actually in use.

Perhaps the first problem is of particular interest for people living in San Francisco: I'm told that there are small and tiny earthquakes almost every few days. So it's not a problem to escape from an average earthquake, but it's interesting to know how to "hit the peak" - and hit the road then, because it's eventually quite a good idea to leave the city ;-)

However, I totally got lost here, and even for the very simple question of finding appropriate EWMA filters, or deciding which kind of filter makes sense, I don't see where to start. Perhaps some better understanding of some communication engineering details would be helpful here - but at the moment I am simply lost. So, I would appreciate any hint here.

Thanks.

Detlef

--
Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937
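For reference, the classic proportional-fair scheduler (usually attributed to Tse and colleagues, and deployed in CDMA 1xEV-DO) ties these pieces together: each slot it serves the user whose instantaneous feasible rate is highest relative to its EWMA-smoothed throughput, and the EWMA time constant is precisely the knob that trades "hitting the peaks" against worst-case allocation delay. A toy sketch (Python; the exponential rate samples merely stand in for fading channel quality and are not a channel model):

    import random

    N_USERS, T_C = 4, 100.0        # users; EWMA time constant in slots
    avg = [1e-6] * N_USERS         # smoothed per-user throughput

    for slot in range(10000):
        # instantaneous feasible rate per user this slot
        rate = [random.expovariate(1.0) for _ in range(N_USERS)]
        # PF metric: serve the user with the best rate relative to its average
        k = max(range(N_USERS), key=lambda i: rate[i] / avg[i])
        for i in range(N_USERS):
            served = rate[i] if i == k else 0.0
            avg[i] += (served - avg[i]) / T_C   # EWMA update

    print([round(a, 3) for a in avg])  # roughly equal long-run averages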
From touch at ISI.EDU Mon May 14 09:01:48 2007
From: touch at ISI.EDU (Joe Touch)
Date: Mon, 14 May 2007 09:01:48 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <46460728.4050006@reed.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com>
Message-ID: <464887EC.1020502@isi.edu>

David P. Reed wrote:
...
> I didn't start the ad hominem attacks or vituperation. Read Vixie's words.

This list isn't for starting, continuing, or ending ad hominem attacks. Please take such messages elsewhere. Discussion of the issues is, as always, welcome.

Joe (as list admin)

From avg at kotovnik.com Sat May 12 15:47:10 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Sat, 12 May 2007 15:47:10 -0700 (PDT)
Subject: [e2e] Collaboration on Future Internet Architectures
In-Reply-To:
Message-ID:

On Fri, 11 May 2007, Jon Crowcroft wrote:

> In missive , Vadim Antonov typed:
>
> >>People who claim that increasing the density of radio nodes will increase
> >>the per-node bandwidth (or at least leave it unchanged) are simply not
> >>good with arithmetic.
>
> conversely, at geometry
>
> try a volume, not a plane. the number of alternate paths in the volume
> goes up faster, and one can use lots of diversity tricks (path, code, etc)
> to make the alternates only have epsilon interference [...]

You're talking about technology, not about scaling. The path and code dispersion tricks work just as well for nodes placed on some 2D surface.

Regarding scaling: with volume density of nodes in the network d, the received signal power is proportional to d^(2/3), while noise power is proportional to d. If transmitters emit any RF energy only during actual data transmission, the transmitter duty cycle (given some constant S/N required by the technology) goes down with increased density as 1/(d^(1/3)). The average path length (in hops) increases with density as d^(1/3). Thus the amount of bandwidth between fixed points in space depends on the 3D density of nodes as 1/(d^(2/3)). Which is even worse than the 2D case.

Note that this result does not depend on communication scheduling, antennae directionality, etc, etc, etc. Improvements in any of those are merely constants, and do not affect the scaling properties of the system.

> Imagine...

I don't need to imagine when I can calculate.

The saving grace of the smart dust is not in the density of motes, but rather in the ability to shift to higher frequencies (optical and above), because reduced distances reduce high-frequency signal scattering in the atmosphere. (Actually, the same is true for macroscopic radio systems... with "atmosphere" being replaced with buildings, trees, fences, and such.)

--vadim

From dpreed at reed.com Mon May 14 12:03:54 2007
From: dpreed at reed.com (David P. Reed)
Date: Mon, 14 May 2007 15:03:54 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <464887EC.1020502@isi.edu>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu>
Message-ID: <4648B29A.30202@reed.com>

Joe Touch wrote:
> Discussion of the issues is, as always, welcome.

No problem. I am eager to discuss the basic rationale for removing an end-to-end capability from users to enhance the value of the network for applications. It should not be based on claims of harm that are unsubstantiated and unsubstantiatable.

A factor of 80 in harm due to RH0 is baldly asserted. I'd like to see data, or a proof, or even a technical argument. If someone could point us to such a measurement of harm - in particular actual demonstrated harm, rather than fear of harm - that would be great.

One known benefit of source routing is support for edge-based multipath routing, incorporating knowledge about application needs into the decision to have resilient path selection (either concurrently or as a hot spare that is kept alive and measured for congestion). That is one of the technical arguments made in Saltzer, Reed, and Clark - Source Routing for Campus-wide Internet Transport (finally published in 1980). Others can be found there as well, and were well-discussed beginning with the beginnings of the Internet design, and continuing up to and through the standards track evolution of IPv6.
Active use of source routing in research contexts continues today - despite attempts by "firewall mavens" to declare source routing a "security hole" without any evidence.

If a feature that has many uses is to be effectively removed from IPv6, that removal should be justified by more than a vituperative attack at CanSecWest on IETF and its processes, coupled with a false assertion that "source routing" was invented solely for mobile users. It wasn't even invented for mobile users primarily, much less solely.

From an end-to-end protocol point of view it has always been unclear that routing should be centrally controlled. AS's are in charge of their own routing - though many choose to adopt common solutions, the IP standard *deliberately* does not specify how routing is to be done, and explicitly includes the option of source routing as a choice.

From detlef.bosau at web.de Mon May 14 13:28:22 2007
From: detlef.bosau at web.de (Detlef Bosau)
Date: Mon, 14 May 2007 22:28:22 +0200
Subject: [e2e] I got lost in opportunistic scheduling.
In-Reply-To: <7.0.1.0.2.20070514143809.02813318@antd.nist.gov>
References: <4616E722.3070402@web.de> <46487D95.4040104@web.de> <7.0.1.0.2.20070514143809.02813318@antd.nist.gov>
Message-ID: <4648C666.9000607@web.de>

marbukh at antd.nist.gov wrote:
> It appears that problems you touched are pretty much open.

Hopefully :-) At the moment, I see two possible problems: either I'm too stupid to find the solutions in the literature, or I'm too stupid to find the solutions myself ;-) And the more I try to understand these issues, the more I see that this seems to be really difficult.

> Opportunistic scheduling allows one to achieve
> the maximum theoretical end-to-end throughput region
> without knowing the pattern of link capacity variability,
> e.g., due to fading, mobility, etc.

O.k., so my guess is correct: the reason for op. sch. is to achieve maximum spectral efficiency ( = throughput / bandwidth), correct? If so, I have finally understood why we're doing this :-)

> The problem is that it can be done at the expense of
> the end-to-end delays, while throughput/delay trade-offs
> in variable connectivity networks with opportunistic scheduling
> is to a large degree an open problem.

Fine :-)

> Some initial ideas include delay-limited bandwidth,
> but as far as I know these ideas have been developed
> only for cellular networks.

To my understanding, opportunistic scheduling mainly exploits multiuser diversity; sometimes it's mentioned in conjunction with multipath diversity and thus with the MIMO and beamforming stuff. To my understanding, both approaches primarily attempt to mitigate the effects of fast fading (typically modeled as Rayleigh fading) in wireless networks, hence things become interesting for _fast_ moving users (whatever the meaning of "fast (tm)" may be; a rough definition of fast is "neither standing still nor pedestrian").

What makes things extremely difficult is that I have absolutely no idea how to model the wireless channel. (I'm a computer scientist, not a communication engineer, so I'm in need of advice here.) Ideally, I would appreciate a model which yields a BLER with respect to time. However, I don't know whether we have those models.
And I dare not think about HARQ there, because a simple BLER model perhaps will not be sufficient when it comes to adaptive puncturing, which is also a technique for adapting to fast fading and which, to my understanding, may also change the delay, because it changes the "payload length" of a radio block. O.k., I learned that CE people do not understand me here, so: it changes the code rate with respect to time. %-)

I got in touch with a colleague here in Germany about this topic, but at the moment we have both gone mad over the differences in terminology.

Detlef

--
Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937

From L.Wood at surrey.ac.uk Mon May 14 13:47:24 2007
From: L.Wood at surrey.ac.uk (Lloyd Wood)
Date: Mon, 14 May 2007 21:47:24 +0100
Subject: [e2e] It's all Farber's fault
In-Reply-To: <4648B29A.30202@reed.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com>
Message-ID: <200705142047.VAA04296@cisco.com>

If only Farber had patented source routing, we wouldn't be using it, and wouldn't be having this tedious discussion.

From faber at ISI.EDU Mon May 14 14:04:16 2007
From: faber at ISI.EDU (Ted Faber)
Date: Mon, 14 May 2007 14:04:16 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <46460A3A.2030004@reed.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <4645F6B0.1090004@piuha.net> <46460A3A.2030004@reed.com>
Message-ID: <20070514210416.GA2264@hut.isi.edu>

On Sat, May 12, 2007 at 02:40:58PM -0400, David P. Reed wrote:
> There is a rather nice analysis of the utility of source routing that
> Jerry Saltzer and I wrote many years ago.

Must be this one (first Google hit) or you'd've been more specific.

http://web.mit.edu/Saltzer/www/publications/sourcerouting/SourceRouting.html

I haven't looked it over yet, but I can't help but notice the word "Campus" in the title.

--
Ted Faber
http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc
Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 187 bytes
Desc: not available
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20070514/d8a9ec93/attachment.bin

From randy at psg.com Mon May 14 14:45:36 2007
From: randy at psg.com (Randy Bush)
Date: Mon, 14 May 2007 23:45:36 +0200
Subject: [e2e] It's all my fault
In-Reply-To: <20070514210416.GA2264@hut.isi.edu>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <4645F6B0.1090004@piuha.net> <46460A3A.2030004@reed.com> <20070514210416.GA2264@hut.isi.edu>
Message-ID: <4648D880.2050805@psg.com>

on the production internet, source routing has occasionally been used to enforce strange aups; and that usually did not last for long.

lsr traceroute is occasionally used for inter-provider debugging, at inter-provider borders only. it allows provider A to know how provider B is forwarding (and hence implies routing info) for a prefix, without having to have complex inter-noc email/voice.

randy
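The "lsr traceroute" above is the IPv4 loose source route option (RFC 791, option type 131) aimed through the other provider's border router - classic traceroute exposes it as "traceroute -g <router> <host>". A sketch of the option bytes themselves (Python; the addresses are made up, how the kernel merges the option with the destination address is platform-dependent, and most networks filter LSRR these days):

    import socket
    import struct

    def lsrr_option(hops):
        """IPv4 LSRR option: type 131, total length, pointer=4, then the
        remaining hops, the final one being the real destination."""
        addrs = b"".join(socket.inet_aton(h) for h in hops)
        return struct.pack("!BBB", 131, 3 + len(addrs), 4) + addrs

    # address the probe to a (hypothetical) peer border router; the option
    # carries the remaining hop, i.e. the real target of the probe
    opt = lsrr_option(["198.51.100.7"])
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_OPTIONS, opt)  # where supported
    s.sendto(b"probe", ("192.0.2.1", 33434))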
From huitema at windows.microsoft.com Mon May 14 15:12:48 2007
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Mon, 14 May 2007 15:12:48 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <4648B29A.30202@reed.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com>
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> One known benefit of source routing is support for edge-based multipath
> routing, incorporating knowledge about application needs into the
> decision to have resilient path selection (either concurrently or as a
> hot spare that is kept alive and measured for congestion). [...]

There is an obvious tension between source routing and traffic engineering. The security concern about denial of service via spiral routes is only an extreme example of that tension. Fundamentally, source routing allows users to direct traffic along user-chosen routes across the network. Users see that as a great way to go around network limitations. Network owners see it as bypassing the policy and engineering decisions of the network providers.

The old UUCP path rewriting logic was a solution to that tension. Usenet relied on source routing, so they had to support it but they went to extreme lengths to tame it. Usenet was relatively low bandwidth, so it was OK to check the path before forwarding a file. Modern IP networks are supposed to forward packets in a fraction of a microsecond; they cannot really rewrite paths as they go along, so they end up simply dropping the packets.

-- Christian Huitema

From randy at psg.com Mon May 14 15:15:16 2007
From: randy at psg.com (Randy Bush)
Date: Tue, 15 May 2007 00:15:16 +0200
Subject: [e2e] It's all my fault
In-Reply-To: <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
Message-ID: <4648DF74.3040002@psg.com>

> The old UUCP path rewriting logic was a solution to that tension. Usenet
> relied on source routing, so they had to support it but they went to
> extreme lengths to tame it. Usenet was relatively low bandwidth, so it
> was OK to check the path before forwarding a file.
before we really rathole, uucp != the usenet From huitema at windows.microsoft.com Mon May 14 15:20:56 2007 From: huitema at windows.microsoft.com (Christian Huitema) Date: Mon, 14 May 2007 15:20:56 -0700 Subject: [e2e] It's all my fault In-Reply-To: <4648DF74.3040002@psg.com> References: <46452F80.9040801@reed.com><4645D5F1.5030801@psg.com> <46460728.4050006@reed.com><464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648DF74.3040002@psg.com> Message-ID: <70C6EFCDFC8AAD418EF7063CD132D0640485DDCE@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> > > The old UUCP path rewriting logic was a solution to that tension. > Usenet > > relied on source routing, so they had to support it but they went to > > extreme lengths to tame it. Usenet was relatively low bandwidth, so > it > > was OK to check the path before forwarding a file. > > before we really rathole, uucp != the usenet Yes. Sorry about that. Should have written UUCP all the way. From our neck of the wood in the early 80's, the two pretty much appeared as one, but that certainly is not the case anymore. -- Christian Huitema From randy at psg.com Mon May 14 15:35:44 2007 From: randy at psg.com (Randy Bush) Date: Tue, 15 May 2007 00:35:44 +0200 Subject: [e2e] It's all my fault In-Reply-To: <4648B29A.30202@reed.com> References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> Message-ID: <4648E440.5080602@psg.com> > One known benefit of source routing is support for edge-based multipath > routing on the real internet, s/known/theoretical/ btw, i am not against source routing. but i am strongly for reality based discussion. on that line, do folk have more minimal proposals for plugging the rthdr0 hole? randy From day at std.com Mon May 14 16:26:31 2007 From: day at std.com (John Day) Date: Mon, 14 May 2007 19:26:31 -0400 Subject: [e2e] It's all Farber's fault In-Reply-To: <200705142047.VAA04296@cisco.com> References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <200705142047.VAA04296@cisco.com> Message-ID: At 21:47 +0100 2007/05/14, Lloyd Wood wrote: >If only Farber had patented source routing, we wouldn't be using it, >and wouldn't be having this tedious discussion. I tell my students that source routing is a male thing: The same as refusing to stop and ask directions. ;-) From gds at best.com Mon May 14 16:37:39 2007 From: gds at best.com (Greg Skinner) Date: Mon, 14 May 2007 23:37:39 +0000 Subject: [e2e] It's all my fault In-Reply-To: <4648DF74.3040002@psg.com>; from randy@psg.com on Tue, May 15, 2007 at 12:15:16AM +0200 References: <46452F80.9040801@reed.com><4645D5F1.5030801@psg.com> <46460728.4050006@reed.com><464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648DF74.3040002@psg.com> Message-ID: <20070514233739.A61045@gds.best.vwh.net> On Tue, May 15, 2007 at 12:15:16AM +0200, Randy Bush wrote: > Christian Huitema wrote: > > The old UUCP path rewriting logic was a solution to that tension. Usenet > > relied on source routing, so they had to support it but they went to > > extreme lengths to tame it. Usenet was relatively low bandwidth, so it > > was OK to check the path before forwarding a file. 
> > before we really rathole, uucp != the usenet

Actually, usenet posts were not distributed via source routing. They were exchanged with neighboring sites. The "bang paths" that were generated as a result of a post making its way through usenet were used by uucp mail as a reverse path to the original poster. Path rewriting (using programs such as pathalias) came into play as better routes could be computed using the connectivity data from the uucp maps.

--gregbo

From L.Wood at surrey.ac.uk Mon May 14 16:38:58 2007
From: L.Wood at surrey.ac.uk (Lloyd Wood)
Date: Tue, 15 May 2007 00:38:58 +0100
Subject: [e2e] It's all Farber's fault
In-Reply-To:
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <200705142047.VAA04296@cisco.com>
Message-ID: <200705142339.AAA15419@cisco.com>

At Monday 14/05/2007 19:26 -0400, John Day wrote:
>At 21:47 +0100 2007/05/14, Lloyd Wood wrote:
>>If only Farber had patented source routing, we wouldn't be using it, and wouldn't be having this tedious discussion.
>
>I tell my students that source routing is a male thing: The same as refusing to stop and ask directions. ;-)

Actually, source routing comes out of the confluence of identity and location in address. If we had clear separation there, we wouldn't have source routing...

From detlef.bosau at web.de Mon May 14 17:33:53 2007
From: detlef.bosau at web.de (Detlef Bosau)
Date: Tue, 15 May 2007 02:33:53 +0200
Subject: [e2e] It's all my fault
In-Reply-To: <70C6EFCDFC8AAD418EF7063CD132D0640485DDCE@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648DF74.3040002@psg.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDCE@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
Message-ID: <4648FFF1.5050006@web.de>

Christian Huitema wrote:
>> before we really rathole, uucp != the usenet
>
> Yes. Sorry about that. Should have written UUCP all the way. From our
> neck of the wood in the early 80's, the two pretty much appeared as one,
> but that certainly is not the case anymore.

But didn't this mixup, if by accident, match the point? Basically, the good ol' bang path offered a neat way to add application-based routing / overlay networks to a heterogeneous world. And this kind of routing was active until recently / is still active, e.g. when you are offered usenet participation via uucp.

So basically, when we talk about source routing, we talk about heterogeneous overlay networks. These are well known, well understood, and work fine. There are many well known applications for that: usenet news, Internet mail, SAP and saprouter, EDI in all flavours, to name only a few. (And even Geocast is to come :-))

In my humble opinion, we have seen a more elegant way of interconnecting heterogeneous systems and of offering things like
- path transparency,
- path redundancy,
- reliability and avoiding single points of failure.
It's called the Internet and IP :-)

Basically, I tend to agree with Lloyd. I think, if we had never known about source routing, we surely wouldn't miss it. And even for overlay networking, I wonder whether we really need _application_ _based_ routing.
If we need a mesh of nodes for a certain application - SAP, IRC, usenet, mail, skype, whatever - isn't it sufficient to assign an IP address to these and then rely upon well known and well understood internetworking techniques? And of course _with_ end-to-end path transparency, end-to-end path redundancy etc.?

Perhaps it's advisable to provide some mapping from application-specific names (organisational, geographical, functional, whatever) to IP addresses. Perhaps I have a somewhat restricted way of thinking here, but I always think of the DNS as a model for services like that, and that one should provide a similar service to applications if necessary - and that's it. Basically there are two compelling reasons for doing so:

1. It's always a good idea to rely upon something that is known to work.
2. K.I.S.S. (Keep It Small and Simple.)

And it's needless to say that the idea of end-to-end path transparency / redundancy / reliability cannot be overemphasized here. It's, to my understanding, _the_ very basic rationale behind the whole Internet.

Detlef

(who just thinks of saprouters - and that's really an evil hack to undermine just _all_ security policies and routing / traffic engineering considerations one can think of. I always prefer the front door over the back door.)

From day at std.com Mon May 14 17:24:35 2007
From: day at std.com (John Day)
Date: Mon, 14 May 2007 20:24:35 -0400
Subject: [e2e] It's all Farber's fault
In-Reply-To: <200705142339.AAA15419@cisco.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <200705142047.VAA04296@cisco.com> <200705142339.AAA15419@cisco.com>
Message-ID:

At 0:38 +0100 2007/05/15, Lloyd Wood wrote:
>At Monday 14/05/2007 19:26 -0400, John Day wrote:
>>At 21:47 +0100 2007/05/14, Lloyd Wood wrote:
>>>If only Farber had patented source routing, we wouldn't be using
>>>it, and wouldn't be having this tedious discussion.
>>
>>I tell my students that source routing is a male thing: The same
>>as refusing to stop and ask directions. ;-)
>
>Actually, source routing comes out of the confluence of identity and
>location in address. If we had clear separation there, we wouldn't
>have source routing...

You are right. That is an even better joke than mine. ;-) Good one!

From detlef.bosau at web.de Mon May 14 17:41:29 2007
From: detlef.bosau at web.de (Detlef Bosau)
Date: Tue, 15 May 2007 02:41:29 +0200
Subject: [e2e] It's all my fault
In-Reply-To: <20070514233739.A61045@gds.best.vwh.net>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648DF74.3040002@psg.com> <20070514233739.A61045@gds.best.vwh.net>
Message-ID: <464901B9.9000303@web.de>

Greg Skinner wrote:
> Actually,

As you say: _Actually_. And only in the case of all servers being connected to the Internet. It's not that long ago that I had only usenet access via uucp, and therefore anything worked with good ol' bang paths - for both purposes: routing to the correct "rnews" node you want to send your news to, and avoiding cycles / "backward flooding" as well. To my knowledge, usenet news started with uucp.
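Since bang paths and pathalias keep coming up in this thread: pathalias-style rewriting is essentially a cheapest-path computation over the published uucp connectivity map, replacing whatever route a sender typed with the best known one. A toy sketch (Python; the site names and link costs are invented):

    import heapq

    LINKS = {  # cost roughly inverse to link quality / polling frequency
        "oursite": {"hub1": 100, "slowneighbor": 4000},
        "hub1": {"hub2": 100, "slowneighbor": 200},
        "hub2": {"target": 100},
        "slowneighbor": {"target": 300},
    }

    def cheapest_path(src, dst):
        """Dijkstra over the uucp map; returns the bang path src!...!dst."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, site, path = heapq.heappop(queue)
            if site == dst:
                return "!".join(path)
            if site in seen:
                continue
            seen.add(site)
            for nxt, c in LINKS.get(site, {}).items():
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))

    # a typed route "oursite!slowneighbor!target!user" would be rewritten to:
    print(cheapest_path("oursite", "target") + "!user")
    # -> oursite!hub1!hub2!target!user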
From ggm at apnic.net Mon May 14 18:45:52 2007
From: ggm at apnic.net (George Michaelson)
Date: Tue, 15 May 2007 11:45:52 +1000
Subject: [e2e] It's all my fault
In-Reply-To: <46452F80.9040801@reed.com>
References: <46452F80.9040801@reed.com>
Message-ID: <20070515114552.7f8d0f4c@garlique.algebras.org>

Did phone systems have toll-select and explicit long-haul select before IP networks? manually, I would argue yes. morally, I think asking the operator to connect your call via a specific path is loose-source..

(for quite a while, it paid to know the fax-attuned IDD prefix to hop off australia. the QDU's on the line were lower, and it took less traffic. It seemed to be easier to get round the IDD logjam at christmas if you knew the prefix)

back in the JANET coloured book days.. applications like mail had to have a loose-source equivalent because they were lower-stack agile. the transforms on mail-represented paths into the transports along the way were (to say the least) amusing, and the potential for mish-mash (ARPA to uk/janet to ARPA to uucp to DECNET to ..) fantastic. Yes, Peter Honeyman made life easier, in the sense that an offline pre-compute of the flatness to user at host worked. Sometimes I wish BGP was offline, and we had defined statics in the DFZ. -Geographics could work in that model, but I don't think you much want that raised here..

SERC mail was pretty flat, but I believe you could specify explicit paths. I don't think they mapped cleanly to X.25 hops, but my memory fades. (mostly, I (and I think everyone else) used an X.25 PAD to hop onto the end system, and logged in to mail local users. I certainly did this to EMAS at Edinburgh from YKXA on SERCnet. But I do recall an explicit path notation in a Dec-10 inter-site mail system, and in the Jacob Palme news system on Tops-10, which pre-dated UUCP/USENET as far as I know.) And I think several people independently (re)implemented mail-level explicit path hacks into local mail transports, over and over again.

These kinds of things meant that the clue-density of the userbase was higher with respect to what the likely behaviours were for having explicit control of the lower-level packet routing: mentally, you already had to know this kind of connectivity anyway.

I only used the telnet @path options once myself. But I know of somebody who used it to force a path into a recalcitrant network when his interior route died on him. This was in the mid 1990s, so it's within living memory that people understood and expected to use this kind of thing, in extremis.

I was very disappointed when I discovered how small ATM network addressing was. Amazing to see a 19th Century circuit-switched model with a tiny numberspace for endpoint addressing, and lowly engineers in boilersuits polishing the shiny brass knobs on crossbar switching between elements in the path. You really DO know that it's a specific lightpath sometimes.

Should a network prevent people from directing packet-paths? Why? Should users expect to be able to direct packet paths? Why?

cui bono?

Beerworthy. But pragmatically, we've taken ourselves to a world where the dialogue, and the outcome, are divorced from this discussion.

-G
From djm at mindrot.org Mon May 14 19:38:20 2007
From: djm at mindrot.org (Damien Miller)
Date: Tue, 15 May 2007 12:38:20 +1000 (EST)
Subject: [e2e] It's all my fault
In-Reply-To: <4648B29A.30202@reed.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com>
Message-ID:

On Mon, 14 May 2007, David P. Reed wrote:

> Joe Touch wrote:
> > Discussion of the issues is, as always, welcome.
>
> No problem. I am eager to discuss the basic rationale for removing an
> end-to-end capability from users to enhance the value of the network for
> applications. It should not be based on claims of harm that are
> unsubstantiated and unsubstantiatable.
>
> A factor of 80 in harm due to RH0 is baldly asserted. I'd like to see data,
> or a proof, or even a technical argument. If someone could point us to such a
> measurement of harm - in particular actual demonstrated harm, rather than fear
> of harm - that would be great.

http://www.secdev.org/conf/IPv6_RH_security-csw07.pdf

It is a simple consequence of the fact that you can stuff over 40 address pairs into an RH0, and each pair causes a round trip.

-d
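The arithmetic behind that is easy to state: list two victim addresses alternately as RH0 waypoints, and every waypoint after the first drives the packet across the same A<->B path once more, versus once for an ordinary packet. A sketch (Python; the header-capacity figures below are illustrative round numbers, not derived from MTU limits):

    def ab_traversals(n_waypoints):
        """Crossings of the A<->B segment caused by one packet whose RH0
        waypoint list alternates between victims A and B; an ordinary
        packet crosses it once."""
        return max(n_waypoints - 1, 1)

    for n in (2, 40, 88):
        print(n, ab_traversals(n))
    # a header with ~88 alternating waypoints yields ~87x amplification of
    # traffic on the A<->B path - the origin of "factor of 80"-style claims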
:-) on DDoS prevention/mitigation and ip (loose) source routing: since we have had ddos attacxks for some time without LSR, what is the actual evidence that LSR makes it i) harder to mitgate ii) easier to find the source? given the source is a traffic generator (and often distributed over a botnet) the ways to find it and mitgate it are based on detecting an unusual set of destiantions and inter-transmission intervals to a wider than usual set of IP destinations, NEAR THE SOURCE, not at the sink where it is already too late whether the traffic goes via a "3rd party" (e.g. gosh, a router:), or taxi, plane, satellite, mars lander, or via your pet cat's radio in her golf ball is irrelevant - if you're looking there, you've already missed the bus. In missive <20070515114552.7f8d0f4c at garlique.algebras.org>, George Michaelson typed: >> >>Did phone systems have toll-select and explicit long-haul select before >>IP networks? manually, I would argue yes. morally, I think asking the >>operator to connect your call via a specific path is loose-source.. >> >>(for quite a while, it paid to know the fax-attuned IDD prefix to hop >>off australia. the QDU's on the line were lower, and it took less >>traffic. It seemed to be easier to get round the IDD jogjam at >>christmas if you knew the prefix) >> >>back in the JANET coloured book day.. applications like mail had to >>have loose-source equivalent because they were lower-stack agile. the >>transforms on mail represented paths into the transports along the way >>were (to say the least) amusing, and the potential for mish-mash (ARPA >>to uk/janet to ARPA to uucp to DECNET to ..) fantastic. Yes, Peter >>Honeyman made life easier, in the sense that an offline pre-compute of >>the flatness to user at host worked. Sometimes I wish BGP was offline, and >>we had defined statics in the DFZ. -Geographics could work in that >>model, but I don't think you much want that raised here.. >> >>SERC mail was pretty flat, but I believe you could specific explicit >>paths. I don't think they mapped cleanly to X.25 hops, but my memory >>fades. (mostly, I (and I think everyone else) used X.25 PAD to hop onto >>the end system, and logged in to mail local users. I certainly did this >>to EMAS at Edinburgh from YKXA on SERCnet. But I do recall an explicit >>path notation in a Dec-10 inter-site mailsystem, and in the Jacob Palme >>news system which pre-dated UUCP/USENET as far as I know, on Tops-10) >>And I think several people independently (re)implemented mail level >>explicit path hacks into local mail transports. over and over again. >> >>These kinds of things meant that the clue-density for the userbase was >>higher, with respect to what the likely behaviours were for having >>explicit control of the lower level packet routing: mentally, you >>already had to know this kind of connectivity anyway. >> >>I only used the telnet @path options once myself. But I know of >>somebody who used it to force a path into a recalcitrant network when >>his interior route died on him. This was in the mod 1990s so its within >>living memory that people understood and expected to use this kind of >>thing, in extremis. >> >>I was very dissapointed when I discovered how small ATM network >>addressing was. Amazing to see a 19th Century circuit switched model >>with tiny numberspace for endpoint addressing, and lowly engineers in >>boilersuits polishing the shiny brass knobs on crossbar switching >>between elements in the path. You really DO know that its a specific >>lightpath sometimes. 
 >>Should a network prevent people from directing packet-paths? Why?
 >>Should users expect to be able to direct packet paths? Why?
 >>
 >>cui bono?
 >>
 >>Beerworthy. But pragmatically, we've taken ourselves to a world where
 >>the dialogue, and the outcome, are divorced from this discussion.
 >>
 >>-G

cheers

jon

From gds at best.com Mon May 14 23:57:57 2007
From: gds at best.com (Greg Skinner)
Date: Tue, 15 May 2007 06:57:57 +0000
Subject: [e2e] It's all my fault
In-Reply-To: <464901B9.9000303@web.de>; from detlef.bosau@web.de on Tue, May 15, 2007 at 02:41:29AM +0200
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648DF74.3040002@psg.com> <20070514233739.A61045@gds.best.vwh.net> <464901B9.9000303@web.de>
Message-ID: <20070515065757.A61621@gds.best.vwh.net>

On Tue, May 15, 2007 at 02:41:29AM +0200, Detlef Bosau wrote:
> As you say: _Actually_. And in case of all servers being connected to
> the Internet. It's not that long ago that I had only usenet access via
> uucp and therefore, anything worked with good ol' bang paths. For both
> purposes: routing to the correct "rnews" node you want to send your news
> to, and avoiding cycles / "backward flooding" as well.

Hmmm ... I'm not aware of people commonly using source routing to get
their news posted at a news server of their choice. While I suppose it was
theoretically possible (if one had permission to execute uux directly), it
would defeat the purpose of having the articles distributed to all of the
intermediate news servers along the path (assuming they ran usenet news).
And the articles from the receiving news site would eventually make their
way back along the source-routed path, thus wasting the initial uux.

OTOH, anyone with access to a mail UA could specify their own path for
their mail, even paths that were not the result of the distribution of
usenet articles. Those individuals who had knowledge of uucp topology
could often find better paths than those the usenet articles took. (In
some cases, uucp paths existed that did not also involve usenet news
transfer.)

The path rewriting of pathalias actually served as a traffic engineering
aid, rather than as a means of circumventing policy. Usenet neighbors
tended to be chosen for reasons of convenience, rather than as a result of
careful engineering. Pathalias actually reduced the overall number of hops
email took between uucp nodes, and made reasonable attempts to use the
best performing paths. Some of those best performing paths went via the
old ARPAnet, which was a controversial subject at the time, especially
among site managers who were concerned that such "transit" was in
violation of their ARPAnet authorization. Paths that performed less well
might also have transited the old ARPAnet, as there were ARPAnet sites who
remotely executed rnews over TCP connections. (And eventually uucp was
rewritten to use TCP connections.)

--gregbo
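For readers who never ran it: pathalias computed cheapest bang paths over a
link-cost map of uucp neighbours, which is why it tended to shorten routes.
A toy reconstruction of the idea - the topology and numeric costs here are
invented for illustration (real pathalias used symbolic costs such as DEMAND
and DAILY):

    import heapq

    def cheapest_bang_path(links, src, dst):
        """Dijkstra over a uucp link-cost map; returns a bang path."""
        dist, prev, seen = {src: 0}, {}, set()
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node in seen:
                continue
            seen.add(node)
            if node == dst:
                break
            for nbr, cost in links.get(node, {}).items():
                if d + cost < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = d + cost, node
                    heapq.heappush(heap, (d + cost, nbr))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])        # KeyError if unreachable
        return "!".join(reversed(path))

    # invented topology; lower cost = better link
    links = {"decvax": {"ihnp4": 300, "ucbvax": 5000},
             "ihnp4":  {"seismo": 300},
             "ucbvax": {"seismo": 300}}
    print(cheapest_bang_path(links, "decvax", "seismo") + "!user")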
From jeroen at unfix.org Tue May 15 02:05:12 2007
From: jeroen at unfix.org (Jeroen Massar)
Date: Tue, 15 May 2007 10:05:12 +0100
Subject: [e2e] It's all my fault
In-Reply-To: <4648E440.5080602@psg.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <4648E440.5080602@psg.com>
Message-ID: <464977C8.5050403@spaghetti.zurich.ibm.com>

Randy Bush wrote:
>> One known benefit of source routing is support for edge-based multipath
>> routing
>
> on the real internet, s/known/theoretical/
>
> btw, i am not against source routing. but i am strongly for reality based
> discussion. on that line, do folk have more minimal proposals for plugging
> the rthdr0 hole?

Use uRPF as per BCP38; that will at least keep it in your internal network.
Then filter the header at the borders and some other key locations and
presto. See also my flamishbait at:
http://lists.cluenet.de/pipermail/ipv6-ops/2007-May/001344.html

Greets,
 Jeroen
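"Filter the header at the borders" means walking the IPv6 extension-header
chain rather than peeking at a fixed offset. A minimal sketch of such a
check, under the simplifying assumption that only the common extension
headers (hop-by-hop 0, routing 43, fragment 44, destination options 60)
precede the routing header:

    def has_rh0(packet: bytes) -> bool:
        """True if an IPv6 packet carries a type-0 routing header."""
        if len(packet) < 40 or packet[0] >> 4 != 6:
            return False
        nh, off = packet[6], 40          # Next Header field, end of fixed header
        while nh in (0, 43, 44, 60) and off + 8 <= len(packet):
            if nh == 43 and packet[off + 2] == 0:   # Routing Type == 0
                return True
            # each extension header: next-header byte, then length in
            # 8-octet units not counting the first 8 octets (RFC 2460)
            nh, off = packet[off], off + (packet[off + 1] + 1) * 8
        return False

A border filter would drop (or strip) anything for which has_rh0() is true;
a router that terminates the check at the first non-extension header never
has to parse the payload.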
From kelsayed at gmail.com Tue May 15 02:40:18 2007
From: kelsayed at gmail.com (Khaled Elsayed)
Date: Tue, 15 May 2007 12:40:18 +0300
Subject: [e2e] I got lost in opportunistic scheduling.
In-Reply-To: <4648C666.9000607@web.de>
References: <4616E722.3070402@web.de> <46487D95.4040104@web.de> <7.0.1.0.2.20070514143809.02813318@antd.nist.gov> <4648C666.9000607@web.de>
Message-ID: <46498002.9000903@gmail.com>

Detlef,

There are some papers that discuss the relation between OS at MAC/PHY and
TCP. For example check
http://www.isr.umd.edu/~baras/publications/reports/2002/SrinivasanB_TR_2002-48.htm
but there is also more recent material. I think that implementing pure OS
without some compensation for users with consistently bad channels does not
make sense. I have some results on that for RT services, published in MSWiM
2004. E-mail me if interested.

Khaled

From avg at kotovnik.com Tue May 15 03:44:43 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Tue, 15 May 2007 03:44:43 -0700 (PDT)
Subject: [e2e] It's all my fault

"I need source routing" is a euphemism for "my TE sucks".

The fundamental problem with SR is that endpoints do not have the
information about network topology necessary for making intelligent path
choices. It is as simple as that.

If you really want end hosts (and not gateways, which actually know which
trunks are up and which are down and what the preferences are, because
they do the little pesky things like ISIS, OSPF and BGP) to control which
path is taken, use the gawd-given TOS field to mark the packets. And add
some rules to the gateway route maps. This way you can have both path
selection *and* the ability to route around failures.

I never ever in my long career as a backbone engineer had any legitimate
need to use SR options. As a network hardware and software designer I
spent quite a few of my grey cells trying to figure out how to handle the
damn options fast enough in silicon to prevent script kiddies from DDoSing
the boxes to death.

So, here we are - having a lot of crud (which, by the way, most vendors
get wrong, which never seems to bother anyone because nobody actually uses
it) in the fast path because somebody somewhere thought that source
routing is a neat trick.

Oh. And SR is not really a security problem, simply because the first
thing most real firewalls do is drop all packets with these IP options.

Simply put, SR is a Bad Idea. Just like not preserving port numbers in
fragments, or doing ARP instead of simply programming NICs to map IP
addresses to low-order bytes of MAC addresses. For a host stack designer
these are mere annoyances (though if I had a buck for every time I saw a
buggy ARP implementation... heh), but working around these little cute
design "features" at 10G makes life truly miserable.

Keep It Simple.

--vadim

From detlef.bosau at web.de Tue May 15 04:19:06 2007
From: detlef.bosau at web.de (Detlef Bosau)
Date: Tue, 15 May 2007 13:19:06 +0200
Subject: [e2e] It's all my fault
In-Reply-To: <20070515065757.A61621@gds.best.vwh.net>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648DF74.3040002@psg.com> <20070514233739.A61045@gds.best.vwh.net> <464901B9.9000303@web.de> <20070515065757.A61621@gds.best.vwh.net>
Message-ID: <4649972A.7070203@web.de>

Greg Skinner wrote:
> On Tue, May 15, 2007 at 02:41:29AM +0200, Detlef Bosau wrote:
>> As you say: _Actually_. And in case of all servers being connected to
>> the Internet. It's not that long ago that I had only usenet access via
>> uucp and therefore, anything worked with good ol' bang paths.
>
> Hmmm ... I'm not aware of people commonly using source routing to get
> their news posted at a news server of their choice.

Me neither :-)

In my early usenet days, I used neither IP nor source routing. I made a
phone / modem call with my PC to the neighbour node and sent/received my
news batches via UUCP. And my neighbour node made phone calls (o.k., with
an X.25 adaptor) to its neighbour nodes - and exchanged batches via UUCP.

That's where the term "store and forward" stems from: when your disk runs
out of space,
- store any incoming uucp batches on a quarter inch tape, and, if there's
  no chance to deliver them via uucp,
- forward the material to the receiver via your preferred parcel service.

You may ridicule this, but that's the way I sent and received news up to
2000. Mail as well. And it's highly efficient!

My current ADSL provider still provides news access via UUCP, however this
is UUCP over TCP and not the good ol' modem/POTS system :-)

--
Detlef Bosau
Galileistrasse 30
70565 Stuttgart
Mail: detlef.bosau at web.de
Web: http://www.detlef-bosau.de
Mobile: +49 172 681 9937

From chk at pobox.com Tue May 15 05:33:55 2007
From: chk at pobox.com (Harald Koch)
Date: Tue, 15 May 2007 08:33:55 -0400
Subject: [e2e] It's all my fault
Message-ID: <4649A8B3.7080000@pobox.com>

Vadim Antonov wrote:
> I never ever in my long career as a backbone engineer had any legitimate
> need to use SR options.
The LSRR option to traceroute was incredibly useful for debugging routing
problems during CA*net's transition from the NSFnet to MCI back in 1994 (we
were one of the first NSFnet "regionals" to leave, if I recall correctly).
Of course, I only had a short career as a backbone engineer before leaping
into network security, where I saw first-hand the "fun" that one could have
with those same LSRR options.

I straddle the fence on this one. Like so many other things in life, source
routing is a tool that can be used for Good or Evil...

--
Harald Koch     chk at pobox.com

From Jon.Crowcroft at cl.cam.ac.uk Tue May 15 06:22:29 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Tue, 15 May 2007 14:22:29 +0100
Subject: [e2e] It's all my fault
In-Reply-To: <4649A8B3.7080000@pobox.com>
References: <4649A8B3.7080000@pobox.com>

if we took ALL routing out of routers, life _would_ be simpler - you'd only
have yourself to blame as an end user for shooting yourself in the head
when you can't reach somewhere

running path computation on boxes _designed_ to do computation, and
forwarding on boxes designed to switch packets fast, just sounds like a
perfectly reasonable idea to me

of course it would disempower a bunch of hardnosed "gurus" in ISPs and
router vendors, which is why they complain so much every time it is
discussed...

given we haven't actually tried this approach (at the level where end users
have access to it) since the old token/proteon ring and IBM stuff, I think
claims about its legitimacy or otherwise are moot

as you say, for debugging, it has been quite useful in the past to some
people within the service...

In missive <4649A8B3.7080000 at pobox.com>, Harald Koch typed:

 >>Vadim Antonov wrote:
 >>> I never ever in my long career as a backbone engineer had any legitimate
 >>> need to use SR options.
 >>
 >>The LSRR option to traceroute was incredibly useful for debugging
 >>routing problems during CA*net's transition from the NSFnet to MCI back
 >>in 1994 (we were one of the first NSFnet "regionals" to leave, if I
 >>recall correctly). Of course, I only had a short career as a backbone
 >>engineer before leaping into network security, where I saw first-hand
 >>the "fun" that one could have with those same LSRR options.
 >>
 >>I straddle the fence on this one. Like so many other things in life,
 >>source routing is a tool that can be used for Good or Evil...

you know, without those pesky users asking for IP addresses and routes and
the ability to send data to each other, the internet would be a whole lot
Gooder, and google would clearly do no Evil, provably. just one big control
plane, and no messin'

cheers

jon
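For anyone who never used the LSRR trick being debated here: the option is
just a byte string handed to the IP layer. A sketch of what building it can
look like, assuming a Unix socket stack that still honours IP_OPTIONS; the
hop addresses are invented, the exact placement of the final destination
varies by stack (see a traceroute -g implementation), and most networks drop
such packets today - which is rather the point of this thread:

    import socket
    import struct

    def lsrr_option(hops: list) -> bytes:
        """IPv4 loose-source-route option (RFC 791): type 131, total
        option length, pointer (starts at 4), then the hop addresses."""
        addrs = b"".join(socket.inet_aton(h) for h in hops)
        return struct.pack("BBB", 131, 3 + len(addrs), 4) + addrs

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_OPTIONS,
                 lsrr_option(["192.0.2.1", "198.51.100.7"]))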
From dpreed at reed.com Tue May 15 06:48:58 2007
From: dpreed at reed.com (David P. Reed)
Date: Tue, 15 May 2007 09:48:58 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <4648E440.5080602@psg.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <4648E440.5080602@psg.com>
Message-ID: <4649BA4A.8030400@reed.com>

Randy Bush wrote:
>
> btw, i am not against source routing. but i am strongly for reality based
> discussion. on that line, do folk have more minimal proposals for plugging
> the rthdr0 hole?

Can we characterize the "hole" and the range of its impact (i.e., like any
bug report, let's have an honest attempt to decide how important it is in
the scheme of things)?

I doubt that the language is proper: I use "hole" in quotes because it's
not a security hole any more than the ability to send a packet to an
arbitrary destination (the *core* function of IP) is a hole - if that
packet triggers a vulnerability in that destination, it's not the
addressability that is the hole...

IP as a layer provides no guarantees that packets may not appear at the
wrong place sometimes, or be delayed or duplicated (meaning that packets
must be accepted with care, wherever they arrive!). So if RTHDR0 is a
"hole", it is a hole in the so-called "firewall security model". But that
security model has been thoroughly discredited as a mechanism for
providing security against system vulnerabilities by years of experience
(IMO). The firewall security model is described by the first book about it
(Bellovin and Cheswick) as something that helps one deal with systems that
were not properly secured in the first place (in those days it was Unix
boxes with wide open unsecured services like NFS). A "hole" in Swiss
cheese is redundant.

We have known for years how to do reliable authentication with various
protocols based on cryptographic signatures and limited-life keys. In
fact, some of those ideas are well articulated in IPv6 by some darned
thoughtful people. So if there are vulnerabilities exposed by the ability
to route in a particular way, the long-term sensible solution is not to
limit routing, but to use a standardized solution: proper authentication
of those commands and requests that are not properly authenticated today.

The biggest network layer hole today is the *dependency* on undebuggable
address rewriting rules implemented by aggressive middleboxes that have
been extended beyond their usefulness to actually become a crucial part of
a topological security model - the idea that a hotel can prevent botnets
from operating by blocking magical port numbers, or by acting as a
man-in-the-middle, pretending (as most do) that their port 25 server is at
the IP address of my email server (this should be illegal wire fraud; if I
had John Gilmore's cash, I would bring a case...).

Note that I said *dependency* (as in addiction) above. The rewriting can
be detected and prevented by end-to-end authentication. But what is
problematic is how much of the Internet (and how much of the security
community) has entered into the false belief that firewalls are the core
of Internet security.

In fact, the Internet was designed at a time when it was already clear
that an *Inter*-net would be of such a scope that one *could not* expect
the network to provide security for the endpoints. Steve Kent and others
worked hard (though NSA barred them from participating in the Internet
project per se) to develop end-to-end security approaches that recognized
the point that the catenet transport layer of the Internet was not the
place to embed security - for the basic reason outlined in the "end to end
argument": security is inherently a concern of the endpoints among the
endpoints, not something that a transport layer can even fully comprehend.

Thus, in answer to your question - for any particular class of attacks
that might be amplified by routing capabilities, one first should look to
fix the actual vulnerabilities at the application or network management
layer where those attacks manifest themselves. Some of those
vulnerabilities remain despite known fixes. My bete noire is the
arpspoofing and DHCP attacks that are based on protocols that should NEVER
have been designed the way they are, without security.
And in both cases, security mechanisms are known and available, but not
deployed - instead the discredited "firewall" idea continues to patch
around them, and then people get burnt by them in new places, i.e. airport
WiFi hotspots...

From dpreed at reed.com Tue May 15 07:04:37 2007
From: dpreed at reed.com (David P. Reed)
Date: Tue, 15 May 2007 10:04:37 -0400
Subject: [e2e] It's all Farber's fault
In-Reply-To: <200705142339.AAA15419@cisco.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <200705142047.VAA04296@cisco.com> <200705142339.AAA15419@cisco.com>
Message-ID: <4649BDF5.5030201@reed.com>

Lloyd Wood wrote:
>
> Actually, source routing comes out of the confluence of identity and
> location in address. If we had clear separation there, we wouldn't have
> source routing...

That's ridiculous. All of the arguments for source routing (or almost all)
relate to the idea that it's a good idea to move function out of the core
of the net to enable flexibility. That is especially true of source
routing. ARPA had requirements for catenets that involved *not routing
certain traffic through certain subnetworks*, for example. To achieve that
by creating hundreds of millions of "classes of service" with different
routing tables in the network for each class of service would have been
impractical. Source routing is there as an option because (coupled with a
"control plane" or "knowledge plane" layer routing service) one can move
the specialized routing needed by some traffic out of the network.

At one point in the history of the Internet (when I joined the original
team) there were strong arguments for making source routing the *primary*
form of network routing. The arguments for it were captured in Jerry's and
my note. The word "campus" was included in the title largely for a sort of
"political" reason - the campus internetwork was a greenfield interop
space concept (LANs were really new, and the idea of computer-computer
messaging, rather than remote login, was biggest in campuses like MIT and
Xerox PARC - you had to be there to understand the cultural distance
between the LAN guys with 1500-byte packets, like me, John Shoch, and Ken
Pogran, and the character-at-a-time remote login guys like Larry Roberts
and the BBN-TIP guys - that's why it was so hard to split out functions
that should have been part of the TELNET protocol from TCP, and functions
that should have been part of a reliable stream from the basic datagram
transport).

From dpreed at reed.com Tue May 15 07:10:40 2007
From: dpreed at reed.com (David P. Reed)
Date: Tue, 15 May 2007 10:10:40 -0400
Subject: [e2e] It's all my fault
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com>
Message-ID: <4649BF60.7080909@reed.com>

Damien Miller wrote:
> On Mon, 14 May 2007, David P. Reed wrote:
>
> http://www.secdev.org/conf/IPv6_RH_security-csw07.pdf
>
> It is a simple consequence of the fact that you can stuff over 40 address
> pairs into a RH0, and each pair causes a round trip.

A round trip is a security hole? Is every packet I send 1/80th of an
attack? If so, if I send 80 packets without RH0, then that is equally bad!

The issue here is that the network making a judgement about what packets
should and should not be delivered as requested requires that the network
be omniscient.
If it were, it might as well figure out which packets I will send, send
them, and then I need not bother to write the code to send them in the
first place! Do time-sharing systems refuse to run code that implements
sorting using a bubble sort?

From dpreed at reed.com Tue May 15 07:20:48 2007
From: dpreed at reed.com (David P. Reed)
Date: Tue, 15 May 2007 10:20:48 -0400
Subject: [e2e] It's all my fault
Message-ID: <4649C1C0.1020607@reed.com>

If your view is that the Internet is designed to make 10G adapters run
fast, or that IP headers should be used within ASes as native fiber-switch
routing data instead of wrapping them with layer 2 headers as any sensible
designer would, you would be right.

God did not say that IP was the layer 2 routing protocol. In fact, the
Internet was designed to join *heterogeneous* networks at layer 3. Lunatic
micro-optimizers have decided that they ought to try to eliminate
diversity and heterogeneity in the Internet. They somehow think that God
anointed the backbone to be in charge of the proper behavior of their
users. So ATT, so 1920s.

Vadim Antonov wrote:
> "I need source routing" is a euphemism for "my TE sucks".
>
> The fundamental problem with SR is that endpoints do not have the
> information about network topology necessary for making intelligent path
> choices. It is as simple as that.
>
> If you really want end hosts (and not gateways, which actually know which
> trunks are up and which are down and what the preferences are, because
> they do the little pesky things like ISIS, OSPF and BGP) to control which
> path is taken, use the gawd-given TOS field to mark the packets. And add
> some rules to the gateway route maps. This way you can have both path
> selection *and* the ability to route around failures.
>
> I never ever in my long career as a backbone engineer had any legitimate
> need to use SR options. As a network hardware and software designer I
> spent quite a few of my grey cells trying to figure out how to handle the
> damn options fast enough in silicon to prevent script kiddies from
> DDoSing the boxes to death.
>
> So, here we are - having a lot of crud (which, by the way, most vendors
> get wrong, which never seems to bother anyone because nobody actually
> uses it) in the fast path because somebody somewhere thought that source
> routing is a neat trick.
>
> Oh. And SR is not really a security problem, simply because the first
> thing most real firewalls do is drop all packets with these IP options.
>
> Simply put, SR is a Bad Idea. Just like not preserving port numbers in
> fragments, or doing ARP instead of simply programming NICs to map IP
> addresses to low-order bytes of MAC addresses. For a host stack designer
> these are mere annoyances (though if I had a buck for every time I saw a
> buggy ARP implementation... heh), but working around these little cute
> design "features" at 10G makes life truly miserable.
>
> Keep It Simple.
>
> --vadim

From dpreed at reed.com Tue May 15 07:57:24 2007
From: dpreed at reed.com (David P. Reed)
Date: Tue, 15 May 2007 10:57:24 -0400
Subject: [e2e] Time for a new Internet Protocol
Message-ID: <4649CA54.1050000@reed.com>

A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, etc. was that
network vendors had too much control over what could happen inside their
networks.
Thus, IP was the first "overlay network" designed from scratch to bring
heterogeneous networks into a common, world-wide "network of networks"
(term invented by Licklider and Taylor in their prescient paper, The
Computer as a Communications Device). By creating universal connectivity,
with such properties as allowing multitudinous connections simultaneously
between a node and its peers, an extensible user-layer naming system
called DNS, and an ability to invent new end-to-end protocols, gradually a
new ecology of computer mediated communications evolved, including the WWW
(dependent on the ability to make 100 "calls" within a few milliseconds to
a variety of hosts), and email (dependent on the ability to deploy
end-system server applications without having to ask the "operator" for
permission for a special 800 number that facilitates public
addressability).

Through a series of tragic events (including the dominance of routerheads*
in the network community) the Internet is gradually being taken back into
the control of providers who view their goal as limiting what end users
can do, based on the theory that any application not invented by the pipe
and switch owners is a waste of resources. They argue that "optimality" of
the network is required, and that any new application implemented at the
edges threatens the security and performance they pretend to provide to
users.

Therefore, it is time to do what is possible: construct a new overlay
network that exploits the IP network just as the IP network exploited its
predecessors the ARPANET and ATT's longhaul dedicated links and new
technologies such as LANs.

I call for others to join me in constructing the next Internet, not as an
extension of the current Internet, because that Internet is corrupted by
people who do not value innovation, connectivity, and the ability to
absorb new ideas from the user community. The current IP layer Internet
can then be left to be "optimized" by those who think that 100G
connections should drive the end user functionality. We can exploit the
Internet of today as an "autonomous system" just as we built a layer on
top of Ethernet and a layer on top of the ARPANET to interconnect those.

To save argument, I am not arguing that the IP layer could not evolve. I
am arguing that the current research community and industry community that
support the IP layer *will not* allow it to evolve. But that need not
matter. If necessary, we can do this inefficiently, creating a new class
of routers that sit at the edge of the IP network and sit in end user
sites. We can encrypt the traffic, so that the IP monopoly (analogous to
the ATT monopoly) cannot tell what our layer is doing, and we can use
protocols that are more aggressively defensive, since the IP layer has
indeed gotten very aggressive in blocking traffic and attempting to
prevent user-to-user connectivity. Aggressive defense is costly - you need
to send more packets when the layer below you is trying to block your
packets. But DARPA would be a useful funder, because the technology we
develop will support DARPA's efforts to develop networking technologies
that work in a net-centric world, where US forces partner with temporary
partners who may provide connectivity today, but should not be trusted too
much.

One model is TOR, another is Joost. Both of these services overlay rich
functions on top of the Internet, while integrating servers and clients
into a full Internet on top of today's Internets.

* routerheads are the modern equivalent of the old "bellheads".
The problem with bellheads was that they believed that the right way to
build a communications system was to put all functions into the network
layer, and have that layer controlled by a single monopoly, in order to
"optimize" the system. Such an approach reminds one of the argument for
the corporate state a la Mussolini: the trains run on time. Today's
routerheads believe that the Internet is created by the fibers and pipes,
rather than being an end-to-end set of agreements that can layer on top of
any underlying mechanism. Typically they work for backbone ISPs or router
manufacturers as engineers, or in academic circles they focus on running
hotrod competitions for the fastest file transfer between two points on
the earth (carefully lining up fiber and switches between specially tuned
endpoints), or worse, running NS2 simulations that demonstrate that it is
possible to stand on one's head while singing the National Anthem to get
another publication in some Springer-Verlag journal.
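On the wire, the "new class of routers" the post calls for needs little
more than an encapsulation that the IP layer below cannot interpret. A toy
sketch of such an edge node, assuming overlay node IDs, the port number,
and a routes table that are all invented for illustration; deliver_locally
is a hypothetical upcall, and a real design would of course encrypt the
inner bytes (TLS, IPsec or similar) rather than ship them in the clear:

    import socket
    import struct

    OVERLAY_PORT = 40000          # arbitrary choice for this sketch
    HDR = struct.Struct("!QQ")    # shim header: overlay src ID, overlay dst ID

    def encapsulate(src_id, dst_id, inner):
        """Wrap an (ideally already encrypted) overlay packet in the shim."""
        return HDR.pack(src_id, dst_id) + inner

    def forward(frame, next_hop_ip):
        """Carry an overlay frame to the next overlay router over plain
        UDP; the underlying IP network sees only opaque datagrams."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(frame, (next_hop_ip, OVERLAY_PORT))

    def handle(frame, my_id, routes):
        """Deliver locally or forward toward the overlay destination;
        routes maps overlay IDs to underlay IP addresses."""
        src_id, dst_id = HDR.unpack_from(frame)
        if dst_id == my_id:
            deliver_locally(frame[HDR.size:])   # hypothetical upcall
        else:
            forward(frame, routes[dst_id])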
From Jon.Crowcroft at cl.cam.ac.uk Tue May 15 01:51:00 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Tue, 15 May 2007 09:51:00 +0100
Subject: [e2e] Collaboration on Future Internet Architectures

In missive , Vadim Antonov typed:

 >>On Fri, 11 May 2007, Jon Crowcroft wrote:
 >>
 >>> In missive , Vadim Antonov typed:
 >>>
 >>> >>People who claim that increasing the density of radio nodes will
 >>> >>increase the per-node bandwidth (or at least leave it unchanged)
 >>> >>are simply not good with arithmetic.
 >>>
 >>> conversely, at geometry
 >>>
 >>> try a volume, not a plane. the number of alternate paths in the volume
 >>> goes up faster, and one can use lots of diversity tricks (path, code
 >>> etc) to make the alternates have only epsilon interference - if you
 >>> then alternate dynamic power over a short haul hop to spread the signal
 >>> to a neighbourhood, with dynamic coding for longer haul to get the
 >>> message to the next neighbourhood, you get capacity within
 >>> epsilon*Nhops of N - conjecture: a sequence of "knight's moves" of 1
 >>> hop up, 2 hops forward can tile a volume in a systematic way and use
 >>> the scheme above (due to Tse) in a very easy, self-organised fashion...
 >>
 >>You're talking about technology, not about scaling. The path and
 >>code dispersion tricks work just as well for nodes placed on some 2D
 >>surface.

it ain't always quite as simple - some technology changes the path loss
exponent, and some deployments and technologies change the fading models
that apply (wideband fading models, for example, might not quite carry
over from Ricean or Nakagami), which alters your equations below (i.e.
changes the overall scaling) - but i agree that the dominant result in
most cases is as you say here (certainly noise scaling with d is pretty
hard to beat:)

 >>Regarding scaling, with volume density of nodes in the network d the
 >>received signal power is proportional to d^(2/3) while noise power is
 >>proportional to d.
 >>
 >>If transmitters emit any RF energy only during actual data transmission,
 >>the transmitter duty cycle (given some constant S/N required by the
 >>technology) goes down with increased density as 1/(d^(1/3)).
 >>
 >>The average path length (in hops) increases with density as d^(1/3).
 >>
 >>Thus the amount of bandwidth between fixed points in space depends on
 >>the 3D density of nodes as 1/(d^(2/3)). Which is even worse than the 2D
 >>case.
 >>
 >>Note that this result does not depend on communication scheduling,
 >>antennae directionality, etc, etc, etc. Improvements in any of those
 >>are merely constants, and do not affect the scaling properties of the
 >>system.
 >>
 >>> Imagine...
 >>
 >>I don't need to imagine when I can calculate.
 >>
 >>The saving grace of the smart dust is not in the density of motes, but
 >>rather in the ability to shift to higher frequencies (optical and above)
 >>because reduced distances reduce high-frequency signal scattering in the
 >>atmosphere. (Actually, the same is true for macroscopic radio systems...
 >>with "atmosphere" being replaced with buildings, trees, fences, and
 >>such.)
 >>
 >>--vadim

cheers

jon
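Vadim's exponents are easy to double-check mechanically: keeping S/N
constant forces duty cycle proportional to signal/noise = d^(2/3)/d =
d^(-1/3), and dividing by the hop count d^(1/3) gives end-to-end bandwidth
proportional to d^(-2/3). A one-screen check of the exponent arithmetic
(the free-space path-loss exponent of 2 is the assumption baked into the
d^(2/3) signal figure):

    from fractions import Fraction as F

    signal = F(2, 3)            # received power ~ d^(2/3)
    noise = F(1)                # aggregate noise ~ d
    duty = signal - noise       # constant S/N => duty cycle ~ d^(-1/3)
    hops = F(1, 3)              # path length (hops) ~ d^(1/3)
    bandwidth = duty - hops     # per-pair bandwidth exponent

    assert duty == F(-1, 3) and bandwidth == F(-2, 3)
    print(f"duty cycle ~ d^({duty}), end-to-end bandwidth ~ d^({bandwidth})")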
From jeroen at unfix.org Tue May 15 09:07:56 2007
From: jeroen at unfix.org (Jeroen Massar)
Date: Tue, 15 May 2007 17:07:56 +0100
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4649CA54.1050000@reed.com>
References: <4649CA54.1050000@reed.com>
Message-ID: <4649DADC.2020401@spaghetti.zurich.ibm.com>

David P. Reed wrote:
> A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, etc. was that
> network vendors had too much control over what could happen inside their
> networks.
[..]

All nicely put, but can you define exactly what you currently can't do on
your network, and what you want to be able to do with it?

ISPs I use don't filter my traffic, except on the origin address, which is
quite logical. Some locations do have restrictions, e.g. tcd.ie filters
almost everything, but hey, that is their network and their policy, so not
something I can complain about; the network is intended to be used for the
purposes set out by their policy, not for all kinds of fancy things that I
want to be doing. Although, when nicely asked, I am pretty sure that some
holes can be created under the disguise of it being research, of course.
Also, circumventing it is quite easy if I really wanted to: as long as you
can send a packet out and get one back, you are done.

> Therefore, it is time to do what is possible: construct a new overlay
> network [...]

Like http://www.isi.edu/xbone/ ?

And of course don't forget about IPv6, which for a lot of end users today
is still a tunneled mesh network which is mostly unfiltered. At the moment
RH0 is getting filtered in a lot of places merely because of the DoS
property and the argument 'be good to your neighbors'.

> One model is TOR, another is Joost. Both of these services overlay
> rich functions on top of the Internet, while integrating servers and
> clients into a full Internet on top of today's Internets.

Joost is very nice, especially when the p2p was not enabled yet. Now, with
the p2p network enabled, it sucks up your connectivity. When you have only
a small amount of upstream bandwidth (remember that home users have silly
1mbit/128kbit asymmetric links and not cool symmetric 1GE links to their
campus offices), you will have video for a couple of moments, but the
second that you start providing back to the swarm of viewers - which you
do, as it is P2P based (Joost works akin to the so-called VidTorrent
project by some MIT folks) - your upstream will be filled up and your
downstream gets stuck, as those Joost ack packets won't come through.
Working congestion control would be very helpful in these cases. The app
could of course limit the amount of data it is sending up, but that takes
the whole P2P idea out; folks would then simply limit the amount of
traffic going up and the whole P2P net would fail...

Greets,
 Jeroen

From avg at kotovnik.com Tue May 15 14:01:03 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Tue, 15 May 2007 14:01:03 -0700 (PDT)
Subject: [e2e] It's all my fault

On Tue, 15 May 2007, Jon Crowcroft wrote:

> if we took ALL routing out of routers, life _would_ be simpler -

That's cute, but it does not address the reason why SR is nearly useless
in real life - namely, that endpoints lack up-to-date network topology
information. To specify a path for a packet one has to know what paths
are available and feasible.

> running path computation on boxes _designed_ to do computation
> and forwarding on boxes designed to switch packets fast
> just sounds like a perfectly reasonable idea to me

Yep. It sounds good until you actually try to put together a working
network based on that idea. Which immediately uncovers the aforementioned
problem.

> ISPs and router vendors, which is why they complain so much every time
> it is discussed...

It could be because they do have extensive real-life experience with
backbone networking and large-scale routing, couldn't it?

> given we haven't actually tried this approach (at the level where end
> users have access to it)

You're mistaken. It was tried, and was rightfully rejected because it
sucked. (BTW, one of the most popular little toys I made was a thingie
which did domain-name-based e-mail routing over UUCP - not by tracking
global topology maps a la pathalias, but by inserting a next-hop lookup
step at every transit point. No more stuck e-mail trying to get along a
precomputed path which has one hop down. That thingie was a smash hit in
a place where phone lines used to be so notoriously flaky that every rain
caused a significant portion of them to get so bad that modems couldn't
connect.)

> since the old token/proteon ring and IBM stuff, I think claims about its
> legitimacy or otherwise are moot

See many token rings around nowadays?

> as you say, for debugging, it has been quite useful in the past to some
> people within the service...

Ah, debugging. It makes zero engineering sense to put a feature used
primarily in debugging into the expensive and highly optimized fast
packet path. So what real routers typically do (with rare exceptions) is
bounce all packets with IP options to the slow path (i.e. software). And,
of course, the fast path and slow path components use different
forwarding tables, physically residing in different memories. Which
sometimes get desynchronized. Or simply broken. When you have a couple of
thousand routers in your network this kind of thing tends to happen to
you now and then.

When you use diagnostic tools relying on the same kind of packets as the
payload traffic, you have much higher confidence that they show you what
happens to the real payload. The very first encounter with a fried
silicon switching engine tends to teach network engineers to use
straightforward packet probes like ping and traceroute and to avoid fancy
stuff in their day-to-day work.

> you know without those pesky users asking for IP addresses
> and routes and the ability to send data to each other
> the internet would be a whole lot Gooder
> and google would clearly do no Evil provably.

It is useless to portray network operator guys as control freaks.
ISPs own their backbones, so it is *their* business decision to select
routing policies which make economic sense to them. They have to make a
profit, or they are dead meat. Capitalism 101.

The pesky customers pay them to get packets delivered, and the ISPs are
keenly aware of that fact. If there were any significant number of
customers absolutely positively wanting to control the paths their
packets take, and willing to pay for that, ISPs would build networks
supporting this functionality.

The reality is, of course, that customers do not care about paths. They
care about loss, end-to-end bandwidth and latency. So they actually pay
money to ISPs to make routing decisions for them. This is called
"division of labour".

--vadim

From avg at kotovnik.com Tue May 15 14:21:35 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Tue, 15 May 2007 14:21:35 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: <4649C1C0.1020607@reed.com>

On Tue, 15 May 2007, David P. Reed wrote:

> If your view is that the Internet is designed to make 10G adapters run
> fast, or that IP headers should be used within ASes as native
> fiber-switch routing data instead of wrapping them with layer 2 headers
> as any sensible designer would, you would be right.

In my view the Internet should be designed to be a) fast, b) reliable and
c) cheap. The extra crud in the protocols makes achieving these goals
harder than it could be.

> God did not say that IP was the layer 2 routing protocol. In fact, the
> Internet was designed to join *heterogeneous* networks at layer 3.
> Lunatic micro-optimizers have decided that they ought to try to
> eliminate diversity and heterogeneity in the Internet.

See much diversity and heterogeneity in the IP protocol? See much
diversity and heterogeneity in Ethernet frames? 0800-45, heh. The lesson
seems to be that simple technologies survive and prosper, while
over-engineered everything-and-a-kitchen-sink designs like OSI fail.

BTW, I didn't say anything about L2 framing. What I said is that ARP is
an unnecessary complication, one more thing which can fail, can be
attacked, and which introduces complexity into equipment and software
(and *that* complexity costs real money). There's nothing God-given about
it. You're skewering a straw man.

--vadim

From avg at kotovnik.com Tue May 15 14:39:28 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Tue, 15 May 2007 14:39:28 -0700 (PDT)
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4649CA54.1050000@reed.com>

On Tue, 15 May 2007, David P. Reed wrote:

> A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, etc. was that
> network vendors had too much control over what could happen inside
> their networks.

Sorry to pop your cartoonish socialist view of the world, but network
vendors have every right to exercise whatever control they think is
appropriate over what happens inside THEIR networks.

If you don't like what they're doing, you have a choice of switching to
another (more reasonable, from your point of view) vendor, or building
your own network and competing with them. If you're right, you'll be able
to deliver the services people want at lower cost, and drive the control
freaks out of the market.
--vadim

From avg at kotovnik.com Tue May 15 14:49:55 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Tue, 15 May 2007 14:49:55 -0700 (PDT)
Subject: [e2e] It's all my fault

On Tue, 15 May 2007, Lachlan Andrew wrote:

> At last week's INFOCOM, Don Towsley and Peter Key pointed out that
> selecting a small number of parallel paths, and regularly reselecting
> based on performance (a la BitTorrent), is as good as selecting the
> best path. As has been pointed out in this thread, that is much
> better than BGP does.

Mmmm... and that solves the problem of an end host not having an idea of
what the backbone topology is like - how? To select a path you have to
know the addresses of points along the path. This is all neat when you
enter those points manually, but try to sell that to actual users (who
rightfully think that the network should just work, like AC power - you
plug it in, you get your webs).

--vadim

From L.Wood at surrey.ac.uk Tue May 15 14:52:03 2007
From: L.Wood at surrey.ac.uk (Lloyd Wood)
Date: Tue, 15 May 2007 22:52:03 +0100
Subject: [e2e] It's all my fault
References: <4649C1C0.1020607@reed.com>
Message-ID: <200705152152.WAA08907@cisco.com>

At Tuesday 15/05/2007 14:21 -0700, Vadim Antonov wrote:
> In my view the Internet should be designed to be a) fast, b) reliable
> and c) cheap.

You can only have two.

L.

From avg at kotovnik.com Tue May 15 15:12:30 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Tue, 15 May 2007 15:12:30 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: <200705152152.WAA08907@cisco.com>

On Tue, 15 May 2007, Lloyd Wood wrote:

> > In my view the Internet should be designed to be a) fast, b) reliable
> > and c) cheap.
>
> You can only have two.

I know. Looking back over the last decade or so, it seems that we got a
good approximation:)

--vadim

From huitema at windows.microsoft.com Tue May 15 16:22:50 2007
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Tue, 15 May 2007 16:22:50 -0700
Subject: [e2e] It's all my fault
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D0640485E48D@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> To select a path you have to know the addresses of points along the path.

Not really, or at least not always, if the hosts are multi-homed, or if
the information is present on several different hosts (as in bit-torrent).
Each pair of source-destination addresses determines a path in the
network: the source address determines the entry point, the destination
address determines the exit, and the network determines the path in
between. Hosts don't need to understand the network topology to assess
which pairs provide the better service.

-- Christian Huitema
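Huitema's observation, combined with the Towsley/Key result Lachlan cited,
suggests a concrete host strategy that needs no topology at all: treat each
(source, destination) address pair as an opaque path, keep the few that
measure best, and periodically give a random other pair a chance,
BitTorrent-style. A sketch, where the probe function and the toy RTT table
are stand-ins for real measurements:

    import random

    def reselect(pairs, probe, keep=2):
        """Rank (src, dst) address pairs by a latency probe, keep the
        best few, and optimistically retry one random other pair - a la
        BitTorrent unchoking - so a recovering path gets rediscovered."""
        ranked = sorted(pairs, key=probe)
        chosen, rest = ranked[:keep], ranked[keep:]
        if rest:
            chosen.append(random.choice(rest))
        return chosen

    # toy probe: each pair has a noisy base RTT in seconds
    base = {("a1", "b1"): 0.020, ("a1", "b2"): 0.035, ("a2", "b1"): 0.150}
    probe = lambda pair: base[pair] * random.uniform(0.9, 1.1)
    print(reselect(list(base), probe))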
From end2end at rsuc.gweep.net Tue May 15 17:18:57 2007
From: end2end at rsuc.gweep.net (Joe Provo)
Date: Tue, 15 May 2007 20:18:57 -0400
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4649CA54.1050000@reed.com>
References: <4649CA54.1050000@reed.com>
Message-ID: <20070516001857.GA84305@gweep.net>

On Tue, May 15, 2007 at 10:57:24AM -0400, David P. Reed wrote:
[snip]
> Therefore, it is time to do what is possible: construct a new overlay
> network that exploits the IP network just as the IP network exploited
> its predecessors the ARPANET and ATT's longhaul dedicated links and new
> technologies such as LANs.
>
> I call for others to join me in constructing the next Internet, not as
> an extension of the current Internet, because that Internet is corrupted
> by people who do not value innovation, connectivity, and the ability to
> absorb new ideas from the user community.

You are confusing commoditization with corruption. While disheartening,
the dollars to make the Internet go have to sustainably come from
somewhere. Overlay networks exist; many modern Internet applications,
some of which you cite, create networks at that layer. Many 'normal'
applications are also driven over all sorts of encapsulations and tunnels
as needed to achieve the goals of the end users. Assume the commoditized
Internet is your foundation and move up the stack. Painting the efforts
as combative is a waste of energy. While I still sting over the
'september that never ended', I have no faith that we would have come as
far as we have without the commoditization drive.

Cheers,

Joe

--
RSUC / GweepNet / Spunk / FnB / Usenix / SAGE

From helbakou at nortel.com Tue May 15 18:28:14 2007
From: helbakou at nortel.com (Hesham Elbakoury)
Date: Tue, 15 May 2007 20:28:14 -0500
Subject: [e2e] Bandwidth Measurement Algorithms

I am looking for algorithms that are used to measure the capacity and
available bandwidth of every link in the path from a given source to a
given destination.

Can you please provide me with pointers to such algorithms?

Thanks

Hesham

From djm at mindrot.org Tue May 15 18:33:29 2007
From: djm at mindrot.org (Damien Miller)
Date: Wed, 16 May 2007 11:33:29 +1000 (EST)
Subject: [e2e] It's all my fault
In-Reply-To: <4649BF60.7080909@reed.com>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <4649BF60.7080909@reed.com>

On Tue, 15 May 2007, David P. Reed wrote:

> Damien Miller wrote:
> > On Mon, 14 May 2007, David P. Reed wrote:
> >
> > http://www.secdev.org/conf/IPv6_RH_security-csw07.pdf
> >
> > It is a simple consequence of the fact that you can stuff over 40
> > address pairs into a RH0, and each pair causes a round trip.
>
> A round trip is a security hole? Is every packet I send 1/80th of an
> attack?

Are you being facetious, or did you not read the presentation? An 80x
traffic amplification on each packet yields a wonderful denial of
service. The paper demonstrates an attacker keeping hosts/gateways
occupied for *30 seconds* with a single packet. Do you regard this as
desirable behaviour?

> If so, if I send 80 packets without RH0, then that is equally bad!

To the victim there is no difference. However, the cost to the attacker
is 80x higher.

> The issue here is that the network making a judgement about what
> packets should and should not be delivered as requested requires that
> the network be omniscient. If it were, it might as well figure out
> which packets I will send, send them, and then I need not bother to
> write the code to send them in the first place!

I'm not sure that I parse this, or what relevance it has. The networks
that I use daily are neither omniscient nor support source routing, and
they seem to function just fine.

> Do time-sharing systems refuse to run code that implements sorting
> using a bubble sort?
Bad analogy: an inefficient algorithm doesn't (shouldn't) waste others'
timeslots.

-d

From avg at kotovnik.com Tue May 15 18:39:02 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Tue, 15 May 2007 18:39:02 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: <70C6EFCDFC8AAD418EF7063CD132D0640485E48D@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

On Tue, 15 May 2007, Christian Huitema wrote:

> > To select a path you have to know the addresses of points along the
> > path.
>
> Not really, or at least not always, if the hosts are multi-homed, or if
> the information is present on several different hosts (as in
> bit-torrent). Each pair of source-destination addresses determines a
> path in the network: the source address determines the entry point, the
> destination address determines the exit, and the network determines the
> path in between. Hosts don't need to understand the network topology to
> assess which pairs provide the better service.

Yep. But that does not have anything to do with Source Routing. The
network gateways determine which paths are taken. The hosts determine
which end-points they're talking with.

When hosts are multihomed you can sometimes control the exit point for
originated connections... but a cursory look at existing application
software shows pretty clearly that only a tiny fraction of it actually
supports egress interface selection. And that STILL does not let you
control the path of the packets - only choose one of the few paths
selected for you by the ISPs.

--vadim

From tim at ivisit.com Tue May 15 19:17:59 2007
From: tim at ivisit.com (Tim Dorcey)
Date: Tue, 15 May 2007 19:17:59 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <70C6EFCDFC8AAD418EF7063CD132D0640485E48D@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
Message-ID: <00ae01c79760$6d9b7e30$0300a8c0@int.eyematic.com>

> the path in between. Hosts don't need to understand the network
> topology to assess which pairs provide the better service.

So in the long run I guess all paths provide identical service, as any
high-performing route immediately attracts additional traffic. That could
be a surprise for an operator who thought their trans-Pacific cable
upgrade would serve them for a while.

Tim

From jsj at ieee.org Tue May 15 19:52:52 2007
From: jsj at ieee.org (Scott Johnson)
Date: Tue, 15 May 2007 22:52:52 -0400
Subject: [e2e] Bandwidth Measurement Algorithms
Message-ID: <464A7204.8010105@ieee.org>

Hello Hesham,

May I suggest pchar: http://www.kitchenlab.org/www/bmah/Software/pchar/

Hesham Elbakoury wrote:
> I am looking for algorithms that are used to measure the capacity and
> available bandwidth of every link in the path from a given source to a
> given destination.
>
> Can you please provide me with pointers to such algorithms?

--
Regards,
Scott Johnson
jsj at ieee.org
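pchar (like Jacobson's pathchar before it) answers the per-link capacity
half of Hesham's question by sending probes of varying sizes toward each
hop and regressing minimum delay against packet size: the slope of that
line is seconds-per-byte across the added link. A sketch of the core
estimator; the synthetic samples stand in for real TTL-limited probes:

    def capacity_from_samples(samples):
        """Least-squares slope of min delay (s) vs. probe size (bytes);
        slope is 1/capacity in seconds per byte, so capacity = 8/slope
        bits per second."""
        n = len(samples)
        sx = sum(s for s, _ in samples)
        sy = sum(d for _, d in samples)
        sxx = sum(s * s for s, _ in samples)
        sxy = sum(s * d for s, d in samples)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return 8.0 / slope

    # synthetic probes over a 10 Mbit/s link with 1 ms of fixed latency
    probes = [(size, 0.001 + size * 8 / 10e6) for size in range(64, 1500, 128)]
    print(f"estimated capacity: {capacity_from_samples(probes) / 1e6:.1f} Mbit/s")

Available bandwidth (the unused share of that capacity) needs different
probing - self-loading packet trains as in pathload and friends - since the
size/delay slope only reveals the raw link rate.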
From calvert at netlab.uky.edu Tue May 15 20:55:21 2007
From: calvert at netlab.uky.edu (Ken Calvert)
Date: Tue, 15 May 2007 23:55:21 -0400 (EDT)
Subject: [e2e] It's all my fault

> The reality is, of course, that customers do not care about paths. They
> care about loss, end-to-end bandwidth and latency. So they actually pay
> money to ISPs to make routing decisions for them. This is called
> "division of labour".

The conflation of routing and forwarding in IP constrains the
customer-provider relationship to the first hop, so the customer is stuck
with whatever choice the ISP makes for all paths, no matter what. And the
fact that identity is entangled with location keeps the cost of "voting
with one's wallet" artificially high. Allowing source routing at the
level of transit providers shifts the balance of power back toward the
user. (See Xiaowei Yang's thesis.) And it's not that millions of users
want to specify the path their packets follow. It's really about the
interesting possibilities that cannot even be contemplated because of the
lack of such a mechanism (and others needed to make it feasible).

KC

--
Ken Calvert, Associate Professor    Lab for Advanced Networking
calvert at netlab.uky.edu              University of Kentucky
Tel: +1.859.257.6745                Hardymon Building, 2nd Floor
Fax: +1.859.323.1971                301 Rose Street
http://www.cs.uky.edu/~calvert/     Lexington, KY 40506-0495

From fergdawg at netzero.net Tue May 15 21:28:26 2007
From: fergdawg at netzero.net (Fergie)
Date: Wed, 16 May 2007 04:28:26 GMT
Subject: [e2e] It's all my fault
Message-ID: <20070515.212832.784.350315@webmail24.lax.untd.com>

[The body of this message was scrubbed by the archive.]

From Jon.Crowcroft at cl.cam.ac.uk Tue May 15 23:52:12 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Wed, 16 May 2007 07:52:12 +0100
Subject: [e2e] It's all my fault

so your point about TE in a previous message was on the money in one
sense - customers (oops, sorry, end users) bypassing the routes chosen by
ISPs who are doing TE may undermine its effectiveness

but TE typically operates to optimise for one objective function (e.g.
loss or throughput), and much measurement shows that even within an ISP,
and certainly interdomain, the customer can (and some want to) optimise
for a different goal which IP and BGP don't give ISPs the tools to meet -
e.g. delay or reliability

I might agree (loose/lose) source routing might not be the tool for this.
how about loose destination routing? I am not just making a joke here -
two runaway successes in the internet, mobile IP and multicast (home
agents and PIM RPs), both allow the user to control how traffic _reaches_
them - i think this is a more useful feature than controlling how I can
send traffic to someone via someone, neither of whom might want my
traffic anywhere near them - one could imagine a number of DoS-mitigating
tricks if some technique for loose destination routing were devised - and
one could use it for the debugging mentioned by someone (just as one uses
traceroute and route view servers to gain multiple viewpoints on the net
over time... only more directly part of the IP routing "service")

LSR: loneliness of the long distance runner, or goalkeeper's fear of the
penalty?
-----
aside: on routing on end systems - XORP and Quagga work really quite
well, and at least in our experience XORP might end up being quite
commercially viable in terms of robust code and interworking...
-----

In missive , Vadim Antonov typed:

 >>On Tue, 15 May 2007, Jon Crowcroft wrote:
 >>
 >>> if we took ALL routing out of routers, life _would_ be simpler -
 >>
 >>That's cute, but it does not address the reason why SR is nearly
 >>useless in real life - namely, that endpoints lack up-to-date network
 >>topology information. To specify a path for a packet one has to know
 >>what paths are available and feasible.
 >>
 >>> running path computation on boxes _designed_ to do computation
 >>> and forwarding on boxes designed to switch packets fast
 >>> just sounds like a perfectly reasonable idea to me
 >>
 >>Yep. It sounds good until you actually try to put together a working
 >>network based on that idea. Which immediately uncovers the
 >>aforementioned problem.
 >>
 >>> ISPs and router vendors, which is why they complain so much every
 >>> time it is discussed...
 >>
 >>It could be because they do have extensive real-life experience with
 >>backbone networking and large-scale routing, couldn't it?
 >>
 >>> given we haven't actually tried this approach (at the level where end
 >>> users have access to it)

 >>You're mistaken. It was tried, and was rightfully rejected because it
 >>sucked. (BTW, one of the most popular little toys I made was a thingie
 >>which did domain-name-based e-mail routing over UUCP - not by tracking
 >>global topology maps a la pathalias, but by inserting a next-hop lookup
 >>step at every transit point. No more stuck e-mail trying to get along a
 >>precomputed path which has one hop down. That thingie was a smash hit
 >>in a place where phone lines used to be so notoriously flaky that every
 >>rain caused a significant portion of them to get so bad that modems
 >>couldn't connect.)

um, yes, well, it sucked because of the fast v. slow path router hardware
(it was an early deployment option for IP multicast, but reversion to
other tunneling techniques quickly happened when the impact on unicast of
having the main processor deal with all the audio/video streaming traffic
that Van et al were doing back in 1988 was felt...) - but that's not a
network architectural objection, it's a router hardware design limitation
(and not common to all router hardware designs, and not therefore
inevitable)

 >>> since the old token/proteon ring and IBM stuff, I think claims about
 >>> its legitimacy or otherwise are moot

 >>See many token rings around nowadays?

ah, well, no - but nor do we see slotted rings, which were as cheap as
ethernet and gave resource guarantees - the best doesn't always win:)

 >>> as you say, for debugging, it has been quite useful in the past to
 >>> some people within the service...

 >>Ah, debugging. It makes zero engineering sense to put a feature used
 >>primarily in debugging into the expensive and highly optimized fast
 >>packet path. So what real routers typically do (with rare exceptions)
 >>is bounce all packets with IP options to the slow path (i.e. software).

v. good point -

 >>And, of course, the fast path and slow path components use different
 >>forwarding tables, physically residing in different memories. Which
 >>sometimes get desynchronized. Or simply broken. When you have a couple
 >>of thousand routers in your network this kind of thing tends to happen
 >>to you now and then.
>>And, of course, the fast path and slow path components use different forwarding tables, physically residing in different memories. Which sometimes get desynchronized. Or simply broken. When you have a couple of thousand routers in your network, this kind of thing tends to happen now and then.
>>
>>When you use diagnostic tools relying on the same kind of packets as the payload traffic, you have much higher confidence that they show you what happens to the real payload. The very first encounter with a fried silicon switching engine tends to teach network engineers to use straightforward packet probes like ping and traceroute and avoid using fancy stuff in their day-to-day work.
>>
>>> you know, without those pesky users asking for IP addresses
>>> and routes and the ability to send data to each other
>>> the internet would be a whole lot Gooder
>>> and google would clearly do no Evil provably.
>>
>>It is useless to portray network operator guys as control freaks. ISPs own their backbones, so it is *their* business decision to select routing policies which make economic sense to them. They have to make a profit, or they are dead meat. Capitalism 101.

sure - but if we do build an internet that offers choice (like the phone net does) e.g. because of regulation (oops - sorry, government: bad word in the US:)...

>>The pesky customers pay them to get packets delivered, and the ISPs are keenly aware of that fact. If there were any significant number of customers absolutely positively wanting to control the paths their packets take, and willing to pay for that, ISPs would build networks supporting this functionality.

see above on TE objectives versus customer SLA needs

>>The reality is, of course, that customers do not care about paths. They care about loss, end-to-end bandwidth and latency. So they actually pay money to ISPs to make routing decisions for them. This is called "division of labour".

yes - i am not trying to disturb that completely

cheers

jon

From dirk.trossen at bt.com Wed May 16 00:04:53 2007
From: dirk.trossen at bt.com (dirk.trossen@bt.com)
Date: Wed, 16 May 2007 08:04:53 +0100
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4649CA54.1050000@reed.com>
Message-ID: <54EF0EE260D79F4085EC94FB6C9AA56202F9EFC9@E03MVZ1-UKDY.domain1.systemhost.net>

David,

I wonder if such an almost revolutionary tone is both helpful and effective in reaching the goal (or anything, for that matter) you're promoting. Not only do I believe (more hope) that the intention back then, when constructing IP, TCP, UDP, ..., was not 'to beat Mother Bell's control ambitions' but to truly enable end user innovation (driven by the true belief that this would benefit everybody), I would also argue that times have changed since then.

Change of fundamentals in the Internet is today more of an educational process than ever. It may be driven by technology, though certainly not by technology alone, and more than ever it includes proper education beyond the pure technology community, and consideration for the concerns of everybody involved. It isn't a technology exercise anymore within a governmentally funded research community that, over the course of some twenty years, will then turn into a fundamental piece of societal life. It IS part of societal life. So advocating changes needs to take into account the different concerns, also the ones of the 'routerheads' and the 'control freaks', if you will, in order to be successful.

So it is not the goal that I'm questioning (you know how much I subscribe to end user driven innovation), it is your, to me, ineffective and confrontational method that I fear will turn out to be wasteful rather than fruitful.
What the technology community CAN provide is the ammunition for this educational process, the proof that end user innovation is indeed enabled, for the good of everybody involved (and point out alternatives for the ones that seemingly will need to change).

BTW, as you know, I have recently joined a company you might characterize as being on the 'controlling end' of the spectrum, coming from an end user type of company. But believe me that I would not have joined if I didn't believe such education is possible. It isn't all black and white (us - whoever that is - against them).

Dirk

Dirk Trossen
Chief Researcher
BT Group Chief Technology Office
pp 69, Sirius House
Adastral Park, Martlesham
Ipswich, Suffolk IP5 3RE UK
e-mail: dirk.trossen at bt.com
phone: +44(0) 7918711695

> -----Original Message-----
> From: end2end-interest-bounces at postel.org
> [mailto:end2end-interest-bounces at postel.org] On Behalf Of David P. Reed
> Sent: Tuesday, May 15, 2007 3:57 PM
> To: end2end-interest list
> Subject: [e2e] Time for a new Internet Protocol
>
> A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, etc. was that network vendors had too much control over what could happen inside their networks.
>
> Thus, IP was the first "overlay network" designed from scratch to bring heterogeneous networks into a common, world-wide "network of networks" (term invented by Licklider and Taylor in their prescient paper, The Computer as a Communications Device). By creating universal connectivity, with such properties as allowing multitudinous connections simultaneously between a node and its peers, an extensible user-layer naming system called DNS, and an ability to invent new end-to-end protocols, gradually a new ecology of computer mediated communications evolved, including the WWW (dependent on the ability to make 100 "calls" within a few milliseconds to a variety of hosts), email (dependent on the ability to deploy end-system server applications without having to ask the "operator" for permission for a special 800 number that facilitates public addressability).
>
> Through a series of tragic events (including the dominance of routerheads* in the network community) the Internet is gradually being taken back into the control of providers who view their goal as limiting what end users can do, based on the theory that any application not invented by the pipe and switch owners is a waste of resources.
> They argue that "optimality" of the network is required, and that any new application implemented at the edges threatens the security and performance they pretend to provide to users.
>
> Therefore, it is time to do what is possible: construct a new overlay network that exploits the IP network just as the IP network exploited its predecessors the ARPANET and ATT's longhaul dedicated links and new technologies such as LANs.
>
> I call for others to join me in constructing the next Internet, not as an extension of the current Internet, because that Internet is corrupted by people who do not value innovation, connectivity, and the ability to absorb new ideas from the user community.
>
> The current IP layer Internet can then be left to be "optimized" by those who think that 100G connections should drive the end user functionality. We can exploit the Internet of today as an "autonomous system" just as we built a layer on top of Ethernet and a layer on top of the ARPANET to interconnect those.
>
> To save argument, I am not arguing that the IP layer could not evolve. I am arguing that the current research community and industry community that support the IP layer *will not* allow it to evolve.
>
> But that need not matter. If necessary, we can do this inefficiently, creating a new class of routers that sit at the edge of the IP network and sit in end user sites. We can encrypt the traffic, so that the IP monopoly (analogous to the ATT monopoly) cannot tell what our layer is doing, and we can use protocols that are more aggressively defensive, since the IP layer has indeed gotten very aggressive in blocking traffic and attempting to prevent user-to-user connectivity.
>
> Aggressive defense is costly - you need to send more packets when the layer below you is trying to block your packets. But DARPA would be a useful funder, because the technology we develop will support DARPA's efforts to develop networking technologies that work in a net-centric world, where US forces partner with temporary partners who may provide connectivity today, but should not be trusted too much.
>
> One model is TOR, another is Joost. Both of these services overlay rich functions on top of the Internet, while integrating servers and clients into a full Internet on top of today's Internets.
>
> * routerheads are the modern equivalent of the old "bellheads". The problem with bellheads was that they believed that the right way to build a communications system was to put all functions into the network layer, and have that layer controlled by a single monopoly, in order to "optimize" the system. Such an approach reminds one of the argument for the corporate state a la Mussolini: the trains run on time. Today's routerheads believe that the Internet is created by the fibers and pipes, rather than being an end-to-end set of agreements that can layer on top of any underlying mechanism. Typically they work for backbone ISPs or Router manufacturers as engineers, or in academic circles they focus on running hotrod competitions for the fastest file transfer between two points on the earth (carefully lining up fiber and switches between specially tuned endpoints), or worse, running NS2 simulations that demonstrate that it is possible to stand on one's head while singing the National Anthem to get another publication in some Springer-Verlag journal.
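The overlay proposed above needs surprisingly little machinery: user-level relays that tunnel traffic over ordinary UDP/IP and encrypt it so the underlay cannot inspect or selectively block it. A minimal sketch - ports and node names invented, and the third-party Python `cryptography` package standing in for whatever cipher such a network would actually choose:

    import json
    import socket
    from cryptography.fernet import Fernet

    KEY = Fernet.generate_key()          # pre-shared among overlay nodes here
    CIPHER = Fernet(KEY)

    def encapsulate(overlay_dst: str, payload: bytes) -> bytes:
        """Wrap the payload in an overlay header, then encrypt the whole thing."""
        inner = json.dumps({"dst": overlay_dst,
                            "data": payload.decode()}).encode()
        return CIPHER.encrypt(inner)

    def decapsulate(datagram: bytes):
        inner = json.loads(CIPHER.decrypt(datagram))
        return inner["dst"], inner["data"].encode()

    # In-process demo: a relay that forwards by overlay name, not IP address.
    OVERLAY_ROUTES = {"node-b": ("127.0.0.1", 40002)}   # invented addresses

    a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    relay.bind(("127.0.0.1", 40001))
    b.bind(("127.0.0.1", 40002))

    a.sendto(encapsulate("node-b", b"hello through the overlay"),
             ("127.0.0.1", 40001))                       # edge -> relay
    packet, _ = relay.recvfrom(65535)
    dst, data = decapsulate(packet)
    relay.sendto(CIPHER.encrypt(data), OVERLAY_ROUTES[dst])  # relay -> exit
    print(CIPHER.decrypt(b.recvfrom(65535)[0]))          # the original payload

To the IP layer underneath, every datagram is opaque ciphertext between relay addresses; all naming, routing, and policy live in the overlay.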
From avg at kotovnik.com Wed May 16 00:11:17 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Wed, 16 May 2007 00:11:17 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: 
Message-ID: 

On Tue, 15 May 2007, Ken Calvert wrote:

> Allowing source routing at the level of transit providers
> shifts the balance of power back toward the user. (See
> Xiaowei Yang's thesis.)

Are we living on the same planet? Do you seriously think that any ISP would be interested in purchasing equipment or software which would let users get the best of the "balance of power" (or, to put it more bluntly, the ability to screw the ISP's traffic engineering and business arrangements with peers)?

The reality is that ISPs are here to make money. Anything which doesn't look like it's good for their bottom lines is not going to be deployed.

> It's really about the interesting possibilities that cannot even be
> contemplated because of the lack of such a mechanism (and others needed
> to make it feasible).

Oh, surely one does not need actual deployment to contemplate all the interesting possibilities. There are a lot of wonderful ideas floating around. A tiny percentage of them may even make business sense.

One reason why I left networking is because packet delivery is ultimately boring. There's a good enough technology which more or less works. Years ago I figured out how to build backbones with arbitrarily large capacity; there are no more technological challenges in that. This market is all about price/performance now, a commodity market.

--vadim

From Jon.Crowcroft at cl.cam.ac.uk Wed May 16 00:50:09 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Wed, 16 May 2007 08:50:09 +0100
Subject: [e2e] It's all my fault
In-Reply-To: <20070516.001750.784.350796@webmail24.lax.untd.com>
References: <20070516.001750.784.350796@webmail24.lax.untd.com>
Message-ID: 

which reminds me, BGP is totally backwards if you want to control qos, and yet MPLS, which has the right model for path engineering, has the wrong model for interdomain

how do we reconcile the needs (complexity) of interdomain relationships in terms of controlling transit traffic etc., and the needs for engineering paths for sources - one's a sink and the other a source based activity - i know there's lots of BGP workaround hacks (esp since various VOIP things make demands) but are there any architecturally pleasant ideas around?

In missive <20070516.001750.784.350796 at webmail24.lax.untd.com>, "Fergie" typed:

>>Wait, wait.
>>Let's make a bit better effort to define what we're talking about here.
>>
>>Traffic Engineering these days takes one of two forms; one being the ability to multi-home and propagate a particular preference in the [forwarding|reverse] path (given certain parameters), and secondly the ability to establish paths via some MPLS VPN magic (primarily).
>>
>>I think we need to be a little more specific when we talk about TE, given the architectural implications.
>>
>>- ferg
>>
>>--
>>"Fergie", a.k.a. Paul Ferguson
>> Engineering Architecture for the Internet
>> fergdawg(at)netzero.net
>> ferg's tech blog: http://fergdawg.blogspot.com/

cheers

jon
From detlef.bosau at web.de Wed May 16 03:27:30 2007
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 16 May 2007 12:27:30 +0200
Subject: [e2e] I got lost in opportunistic scheduling.
In-Reply-To: <46498002.9000903@gmail.com>
References: <4616E722.3070402@web.de> <46487D95.4040104@web.de> <7.0.1.0.2.20070514143809.02813318@antd.nist.gov> <4648C666.9000607@web.de> <46498002.9000903@gmail.com>
Message-ID: <464ADC92.3010302@web.de>

Khaled Elsayed wrote:
> Detlef,
>
> There are some papers that discuss the relation between OS at MAC/PHY
> and TCP. For example check
> http://www.isr.umd.edu/~baras/publications/reports/2002/SrinivasanB_TR_2002-48.htm
>
> But there is also more recent stuff.

As I see, I got this paper before but did not pay enough attention to it. Some questions.

1. As far as I see, OS/multiuser diversity is already employed in the Qualcomm 1xEV-DO wireless system. (Is there a name for it which one can say within a lifetime? *got scared*) Are there other systems which employ OS?

2. Qualc..., say C3P0, it's shorter :-), seems to have a _common_ downlink and _dedicated_ uplinks. Is this correct? In the paper, the uplinks are said to be "asynchronous circuit-switched". What does that mean? (I already got criticism this year because I talked about packet switching in wireless networks, because packet switching is not restricted to wireless networks, or something like that.... I didn't understand it.) Does it mean we have dedicated TDM channels with something like HDLC on them to enable packet transport?

3. Somewhere in the paper, a reliable link control / reliable link protocol is mentioned. What kind of protocol is used here? Something like RLPv3, which fragments L3 packets into pieces of e.g. about 20 bytes? Or do we deal with IP packets directly? Particularly, does C3P0 employ any ARQ? Or does it only use FEC?

4. If C3P0 employs ARQ and some RLP, does it use sliding window? Or does it use stop-and-wait?

5. If C3P0 only uses FEC, one could be somewhat extreme and not really use even that - at least not use extreme spreading - and rely upon OS only. One goal of OS is to avoid errors in advance rather than correct them afterwards. So one could work with only a little error correction capability intentionally. Or one could work with extensive adaptation of channel coding / puncturing. How is this done in C3P0?

> I think that implementation of pure OS without some compensation for
> users with consistent bad channels does not make sense. I have some
> results on that for RT services that was published in MSWIM 2004.
> E-mail me if interested.

Of course I'm interested!

Question, somewhat provoking: Does it make sense to combine OS and RT services? Somewhere in David Tse's "talk" (he has so many slidesets of this one talk on his homepage that I wonder if he ever gave another one =8-)) we learn something about voice vs. data. (Unfortunately it's not mentioned on the slides _what_ we learn =8-)) But I think it hardly makes sense to mix up voice and data, because data requires data integrity and voice requires time integrity _AND_ can tolerate errors. In data services, you either have corrupted packets or you have error-free packets. Is there a way to allow for "half correct packets"? So, at least at this moment, I think data services and _error_ _tolerant_ RT services should be done separately.

Or what do you think?

Detlef
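The scheduler under discussion here is compact enough to show whole. A minimal sketch of the proportional-fair rule behind 1xEV-DO-style opportunistic scheduling - all rates and constants invented for illustration: each slot goes to the user whose instantaneous feasible rate is highest relative to its own running average, which is exactly the built-in compensation for consistently bad channels that Khaled mentions:

    import random

    random.seed(1)
    USERS = {"good": 2000.0, "bad": 400.0}   # mean feasible rate, kbit/s
    TC = 100.0                               # averaging window, in slots

    avg = {u: 1.0 for u in USERS}            # EWMA of served throughput
    served = {u: 0 for u in USERS}

    for slot in range(10000):
        # Per-slot feasible rates: fast fading around each user's mean.
        rate = {u: random.expovariate(1.0 / mean) for u, mean in USERS.items()}
        # Serve the user with the best rate-to-own-average ratio.
        pick = max(USERS, key=lambda u: rate[u] / avg[u])
        for u in USERS:
            got = rate[u] if u == pick else 0.0
            avg[u] += (got - avg[u]) / TC    # EWMA update
        served[pick] += 1

    print(served)   # both users get a similar share of slots, while the
                    # 'good' user still sees proportionally higher throughput

With this kind of fading, the rate-to-average ratio is scale-invariant, so the user with the consistently bad channel is not starved of slots - opportunism on the timing of service, fairness in its share.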
From Mikael.Latvala at nokia.com Wed May 16 04:33:03 2007
From: Mikael.Latvala at nokia.com (Mikael.Latvala@nokia.com)
Date: Wed, 16 May 2007 14:33:03 +0300
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: <326A98184EC61140A2569345DB3614C10511D7C3@esebe105.NOE.Nokia.com>

> This market is all about price/performance now, a commodity market.
>
> --vadim

And about how to keep competitors at a distance (= use of restrictive NATs and overly paranoid firewalls).

/Mikael

From pekka.nikander at nomadiclab.com Wed May 16 04:45:15 2007
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Wed, 16 May 2007 14:45:15 +0300
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4649CA54.1050000@reed.com>
References: <4649CA54.1050000@reed.com>
Message-ID: 

David,

A couple of comments on two of your messages, also related to Vadim's "observations" about your economics-related undertones.

On 15 May 2007, at 16:48, David P. Reed wrote:
> [Source routing] is not a security hole anymore than the ability to
> send a packet to an arbitrary destination (the *core* function of
> IP) is a hole - ...

I agree. But I simultaneously claim that given the current economic reality, the core function of IP, the ability to send a packet to an arbitrary destination, *is* indeed a "security hole". If we had a different economic model, it might not be such a "hole", and certainly it was the "right" communication model in the early days of the Internet.

Given the prevalence of botnets, and that almost everyone has a flat rate and therefore doesn't really care about outgoing traffic, the very communication model has become a "security hole". Or, rather, it is the primary technological contributor to making it easy for the naturally-selfish fraction of human society to behave in a socially undesirable manner. The Internet is no longer a village, and therefore we are suffering from our own specific form of the tragedy of the commons.
> ... if that packet triggers a vulnerability in that destination,
> it's not the addressability that is the hole...

The "hole" is that the mere appearance of the packet at the destination is a "vulnerability", given that it is economically feasible to send vastly more packets than the destination can easily handle.

The real question is about balancing the desires of the prospective senders and receivers. The current network design gives all power to the sender: any sender can send any sh*t to any receiver, independent of whether the receiver wants to have it or not, at *almost* no cost. In practical terms, the marginal cost of receiving more packets (including filtering undesired packets out) is hugely larger than the marginal cost of sending some more (unwanted) packets.

On 15 May 2007, at 17:57, David P. Reed wrote:
> A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, etc. was
> that network vendors had too much control over what could happen
> inside their networks.

And it then gave the control to the users, assuming that all the users have aligned interests. That works in a village, but not in the current urban-sprawl Internet with its ghettos and poverty.

If we attempt to do the power shift again (and I agree we should!), we have to aim for a socially more sustainable form of networking. A form of networking that balances the power both between the end-user and the connectivity provider, and between the sender and the receiver. A network that is traffic-neutral but still gives the ISPs enough technical knobs to be able to compete through operational excellence. A network that requires consent both from the sender and the receiver before any (larger amount of) traffic gets through, in a manner that leaves the control to the end-users.
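One concrete reading of the sender/receiver consent requirement above is a network-visible capability: the receiver hands a willing sender a short-lived token, and the receiver's edge drops traffic that does not carry a valid one. A stdlib-only sketch in which the token format, names, and shared secret are all invented for the example:

    import hashlib
    import hmac
    import time

    EDGE_SECRET = b"shared between receiver and its edge"   # invented

    def grant_capability(sender: str, receiver: str, ttl: int = 60) -> str:
        """Receiver-side: issue a token allowing sender -> receiver traffic."""
        expiry = str(int(time.time()) + ttl)
        msg = f"{sender}|{receiver}|{expiry}".encode()
        tag = hmac.new(EDGE_SECRET, msg, hashlib.sha256).hexdigest()[:16]
        return f"{expiry}:{tag}"

    def edge_accepts(sender: str, receiver: str, token: str) -> bool:
        """Edge-side: verify the tag and the expiry before forwarding."""
        expiry, tag = token.split(":")
        msg = f"{sender}|{receiver}|{expiry}".encode()
        good = hmac.new(EDGE_SECRET, msg, hashlib.sha256).hexdigest()[:16]
        return hmac.compare_digest(tag, good) and time.time() < int(expiry)

    tok = grant_capability("client-17", "server-a")
    print(edge_accepts("client-17", "server-a", tok))    # True: consented
    print(edge_accepts("botnet-99", "server-a", tok))    # False: dropped

The economics follow the mechanism: unsolicited bulk traffic now costs the sender a round trip and a grant before the first large burst, shifting the marginal costs back toward the sending side.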
> Through a series of tragic events (including the dominance of
> routerheads* in the network community) the Internet is gradually
> being taken back into the control of providers who view their goal
> as limiting what end users can do, based on the theory that any
> application not invented by the pipe and switch owners is a waste
> of resources. They argue that "optimality" of the network is
> required, and that any new application implemented at the edges
> threatens the security and performance they pretend to provide to
> users.

That's not a series of tragic events but economics 101. It is natural that the service providers will always try to achieve market positions where they can set prices based on the end-user utility rather than the production cost.

From the social utility point of view, price differentiation is a thorny question. On the one hand, as long as it leads to a situation where the same service is *also* offered at a cost which is lower than the average production cost - e.g. the way airlines sell the cheapest seats at just slightly above the marginal cost - we can see the overall social utility increasing. On the other hand, if it allows the service providers to set the average prices above the average production cost, the only net effect is that the service providers will become richer and the general public will suffer.

More complications are caused by the extremely minimal marginal costs that we are discussing here. Indeed, from a social utility point of view one might argue that the optimal network load is one where all links are fully utilised but there is no congestion or queueing anywhere. If you/we are really going to create a new Internet Protocol, I challenge you/us to create one that makes such a load goal more achievable than the current situation. :-)

> I call for others to join me in constructing the next Internet, not
> as an extension of the current Internet, because that Internet is
> corrupted by people who do not value innovation, connectivity, and
> the ability to absorb new ideas from the user community.

I would not call the currently prevailing tendency of many (but not all) people to maximise their monetary income the opposite of valuing innovation, connectivity, and new user-originated ideas. As Vadim wrote, the ISPs must live under pretty harsh competition conditions. That doesn't mean that many people working for the ISPs don't still privately value innovation, connectivity, and new ideas very much.

Another aspect here are those individuals that behave in an antisocial way (spammers etc). That is a major cause driving the ISPs towards closing the Internet. So it is not only about greed, but also about the network design and the misaligned balance of power created by the very paradigm.

> But that need not matter. If necessary, we can do this
> inefficiently, creating a new class of routers that sit at the edge
> of the IP network and sit in end user sites. We can encrypt the
> traffic, so that the IP monopoly (analogous to the ATT monopoly)
> cannot tell what our layer is doing, and we can use protocols that
> are more aggressively defensive since the IP layer has indeed
> gotten very aggressive in blocking traffic and attempting to
> prevent user-to-user connectivity.

Already a couple of years ago I came to the conclusion that in the longer run the only economically sensible QoS policy is to charge less for the traffic whose QoS requirements the users willingly tell the operator. If you make "best effort" the most expensive traffic class (and charge it a market price), you create a natural incentive for your users to tell their real QoS requirements to you, allowing you to actually serve your customers better than you otherwise could. Turn the information asymmetries to your benefit instead of trying to fight against them.

--Pekka Nikander
a wannabe-economist routerhead

From jnc at mercury.lcs.mit.edu Wed May 16 05:05:28 2007
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Wed, 16 May 2007 08:05:28 -0400 (EDT)
Subject: [e2e] It's all my fault
Message-ID: <20070516120528.1CB2C86ADD@mercury.lcs.mit.edu>

> From: "Fergie"

First, let's all remember to differentiate between "user control of paths" as an architectural concept, and "source-routing", the current specific engineering mechanism. Just because there may be specific problems with the latter, that doesn't necessarily say anything about the former. And "user control of paths" doesn't even imply any form of classic source routes, i.e. large numbers of addresses in the headers.

> So, if source-routing is a "desired" option, how can I ensure that
> the "source" is valid?
> In other words, if it is used for malicious purposes, how can I trace
> it back to its "real" source?
> This is a major issue for me, from a security perspective.

Well, but this is a problem with ordinary datagram traffic, too, right? I don't recall offhand the specific details of how that particular source-routing mechanism works, but if it has the original source in some easily-findable location in the packet header, whatever mechanism works for normal datagrams should also work with this, no?

If the point is that it's more expensive to find this address in the header, then we get to a different problem. Any time the protocol includes a less-used mechanism X, and people building/buying boxes decide to buy boxes that implement X in a way such that if a large %-age of traffic suddenly starts using it, the box keels over, then that's just an attack vector for some pond-scum.

The problem is that any protocol design is going to include less-common, but more expensive to process, forms of traffic; i.e. control-plane stuff. Any of them, over-used, becomes an attack vector. You can't simply delete things from the protocol (or implementation) any time one of them is used as an attack vector. Doing that leads to all sorts of operational/diagnostic tools (e.g. ICMP, and Path-MTU-Discovery) suddenly no longer working. I don't know if anyone has attacked TTL-Expired yet (causing the target to have to send out bazillions of Time-Expired ICMP messages), but no doubt someone will at some point, at which point "traceroute" will stop working. What's next, attacking via trying to open a BGP connection?

The answer is not to delete less-used stuff, it's to i) make sure we can track down the pond-scum and lock them in a room and throw the room away (which is, alas, not under our control), and ii) build boxes that respond reasonably to these sorts of attacks. That means that they have to a) rate-limit things so that they can't sink the box, and b) include mechanisms so that when one of those things starts happening, we can track down the source(s) easily, and turn them off.

	Noel
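Point (a) is a one-screen mechanism. A minimal sketch of a token bucket guarding the slow path, so that a flood of option packets or TTL expiries drains a fixed budget instead of sinking the box; the rates are invented, and a real router would keep one such limiter per interface or per cause:

    import time

    class TokenBucket:
        def __init__(self, rate: float, burst: float):
            self.rate, self.burst = rate, burst   # tokens/sec, bucket depth
            self.tokens, self.stamp = burst, time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at the burst size.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.stamp) * self.rate)
            self.stamp = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False          # over budget: drop instead of punting

    # Allow roughly 100 ICMP time-exceeded replies/sec, bursts of 20.
    icmp_limiter = TokenBucket(rate=100, burst=20)
    sent = sum(icmp_limiter.allow() for _ in range(10_000))
    print(sent)   # ~20: the burst drains, then the flood is mostly dropped

The drops cost the attacker its amplification while legitimate low-rate diagnostics still get through, which is the "respond reasonably" behaviour being asked for.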
From calvert at netlab.uky.edu Wed May 16 05:40:51 2007
From: calvert at netlab.uky.edu (Ken Calvert)
Date: Wed, 16 May 2007 08:40:51 -0400 (EDT)
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: 

>> Allowing source routing at the level of transit providers
>> shifts the balance of power back toward the user. (See
>> Xiaowei Yang's thesis.)
>
> Are we living on the same planet? Do you seriously think that any ISP
> would be interested in purchasing equipment or software which would let
> users get the best of the "balance of power" (or, to put it more
> bluntly, the ability to screw the ISP's traffic engineering and business
> arrangements with peers)?

No, I don't expect this would be deployed by current providers. It requires a fairly complete re-thinking of the architecture, because other mechanisms are needed to make it work. It will only be deployed if, as was suggested, it (i.e. a new architecture) grows in parallel and eventually succeeds because it is more attractive in some way. It would take a long time, if it happens.

What's needed are mechanisms that separate concerns to allow the different parties in the "tussle" to implement their own policies. It is not obvious (to me -- but I'm neither an ISP nor an economist) that allowing competition among transit providers precludes traffic engineering a priori.

>> It's really about the interesting possibilities that cannot even be
>> contemplated because of the lack of such a mechanism (and others needed
>> to make it feasible).
>
> Oh, surely one does not need actual deployment to contemplate all the
> interesting possibilities.

You are right, one can contemplate. But as soon as one starts talking about it, lots of folks with a firm grasp of the status quo start saying it'll never work, there's no market for it, etc. The only way to overcome that is to build something and use it (what I think Reed was talking about).

> boring. There's a good enough technology which more or less works. Years
> ago I figured out how to build backbones with arbitrarily large
> capacity, there are no more technological challenges in that. This
> market is all about price/performance now, a commodity market.

It's a commodity market because for the vast majority of customers, the denominator of price/performance is essentially fixed.

KC
--
Ken Calvert, Associate Professor    Lab for Advanced Networking
calvert at netlab.uky.edu           University of Kentucky
Tel: +1.859.257.6745                Hardymon Building, 2nd Floor
Fax: +1.859.323.1971                301 Rose Street
http://www.cs.uky.edu/~calvert/     Lexington, KY 40506-0495
From avg at kotovnik.com Wed May 16 15:38:12 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Wed, 16 May 2007 15:38:12 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: 
Message-ID: 

On Wed, 16 May 2007, Ken Calvert wrote:

> You are right, one can contemplate. But as soon as one
> starts talking about it, lots of folks with a firm grasp of
> the status quo start saying it'll never work, there's no
> market for it, etc.

Here you see the division between engineers, who design and build things which make economic sense, and academics, who think of pure Platonic technology existing in an economic vacuum.

And, yes, network engineers do have to deal with the spillover of bad ideas from academia - things which would never get into protocols and designs if somebody took the trouble to evaluate their economic feasibility before sneaking them into standards.

> The only way to overcome that is to build something and use it (what I
> think Reed was talking about).

Yep. *Build* something. To do that you need way more than a neat idea - you need capital, you need a business plan, you need customers who actually wish to buy the product. You need to spend years of your life working like hell. And if you were wrong, you don't get anything for your troubles.

Or you may convince some bureaucrats in DC to give you lots of money they have taken from us under the threat of jailtime and violence, so you can play with your pet idea. A lot of people resent that, you know?

As long as you want to go the first route I can only wish you the best of luck, and offer some advice - do not talk much about the ideas you intend to implement; there are a lot of sharks in this water.

If your plan is to organize another federally funded playpen, I (and other engineering people) will do everything to shoot the proposed neat idea down before it becomes another excuse for looting more from us.

I'm all for discussing various neat tricks and gimmicks as a pure mental exercise, contemplating possibilities, and such. But I draw the line when someone starts to talk about implementing his ideas at my expense. I have neat ideas of my own - and wish to spend my resources on playing with them.

Did I make my position clear?

--vadim

PS. Sorry for the off-topic.

From jnc at mercury.lcs.mit.edu Wed May 16 16:49:31 2007
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Wed, 16 May 2007 19:49:31 -0400 (EDT)
Subject: [e2e] It's all my fault
Message-ID: <20070516234931.EA264872D5@mercury.lcs.mit.edu>

> From: Vadim Antonov

> the division between engineers who design and build things which make
> economic sense and academics who think of pure Platonic technology
> existing in economic vacuum. ... network engineers do have to deal with
> the spillover of bad ideas from academia
> ...
> *Build* something.
> To do that you need way more than a neat idea - you need capital, you
> need a business plan, you need customers who actually wish to buy the
> product.
> ...
> Or you may convince some bureaucrats in DC to give you lots of money
> they have taken from us under the threat of jailtime and violence so
> you can play with your pet idea. A lot of people resent that, you know?

Vadim, you ought to consider that all this neat packet networking stuff only exists now because for many years (during Baran's first RAND work ca. 1960-64, then during the ARPANet development in the late 60s-early 70s, and then the early internetwork work in the 1975-1982 time-frame) this stuff was all funded by "bureaucrats in DC". There was *no* commercial market for any of this stuff back then, so there was no other way to make it happen. (A fact of which I am well aware, because I was one of the first people - maybe the first, actually - to make money selling IP routers commercially - and that was in 1984 or so, almost 10 years after the bureaucrats started putting money into TCP/IP.)

In fact, to add a nice topping of irony, many commercial communications people of the day (circa 1980) said much the same things about TCP/IP (which you seem to like) that you are now saying about other efforts: I distinctly recall the TCP/IP people being told to "roll up our toy academic network" (and yes, they explicitly and definitely used the word "academic") and go home.

So you might want to remember that when you dump on these new "academic" ideas.

	Noel

From r.gold at cs.ucl.ac.uk Wed May 16 16:52:38 2007
From: r.gold at cs.ucl.ac.uk (Richard Gold)
Date: Thu, 17 May 2007 00:52:38 +0100
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: <464B9946.4080504@cs.ucl.ac.uk>

> I'm all for discussing various neat tricks and gimmicks as a pure mental
> exercise, contemplating possibilities, and such. But I draw the line
> when someone starts to talk about implementing his ideas at my expense.
> I have neat ideas of my own - and wish to spend my resources on playing
> with them.

Would you have said the same thing about the ARPANET in 1968?

Cheers,
Richard

From jtw at ISI.EDU Wed May 16 17:10:13 2007
From: jtw at ISI.EDU (John Wroclawski)
Date: Wed, 16 May 2007 17:10:13 -0700
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: 

At 3:38 PM -0700 5/16/07, Vadim Antonov wrote:
>On Tue, 16 May 2007, Ken Calvert wrote:
>
>> You are right, one can contemplate. But as soon as one
>> starts talking about it, lots of folks with a firm grasp of
>> the status quo start saying it'll never work, there's no
>> market for it, etc.
>
>Here you see the division between engineers who design and build things
>which make economic sense and academics who think of pure Platonic
>technology existing in economic vacuum.

Vadim,

This and your previous notes miss the point. The people mentioned in this conversation who are proposing user-driven path selection models are, as a general rule, also proposing that the providers benefit when they are selected - i.e., some economic or payment mechanism.

The reason a provider might appreciate this is precisely to get *away* from the commodity business that packet delivery (to use your words) is today. If there is no technical mechanism for attracting users with enhanced service and receiving a benefit for doing so, there is no hope of being anything other than a commodity. If there is, there is.
The fact that the current Internet design largely decouples many providers from their ultimate customers, economically and technically, is both a strength and a weakness. What it is not is the only possible, or economically astute, answer.

Cheers,
John

From dpreed at reed.com Wed May 16 18:18:06 2007
From: dpreed at reed.com (David P. Reed)
Date: Wed, 16 May 2007 21:18:06 -0400
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: <464BAD4E.9020403@reed.com>

The "ivory tower academic card" is like the "race card" or the "government is inefficient, private companies are always better card" or any of the other obnoxious prejudices that stand in the way of actually having to think about something that is subtle or different.

In any case, I'm an academic now, but I have been the chief architect and responsible hands-on manager of about $10 billion worth of products that were totally new to the market earlier in my career. And in the academic world I hacked a few things that are still alive today, along with people who were still in school. I put on my trousers the same as anyone else, however, and some of the best ideas I've ever encountered have come from children asking "why" or "why not".

Posing as someone of consequence adds nothing to one's contributions to the world. Contribute, and fight for your good ideas. I recommend listening to others rather than sneering at them because they aren't as puissant as you imagine yourself to be. Make your own judgment *after* you understand their point, not based on feeling superior.

From dpsmiles at MIT.EDU Wed May 16 18:39:01 2007
From: dpsmiles at MIT.EDU (Durga Prasad Pandey)
Date: Wed, 16 May 2007 21:39:01 -0400
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: <5CCF06B9-B634-49A3-985E-43AB8D9D3C48@MIT.EDU>

Vadim,

If your intention was to be purely provocative, from what you have written below, you sure did succeed. Being a network engineer, you cannot be unaware of the tremendous impact academic research has had, and will continue to have, in every field, and most strikingly in networking. Would you consider the Internet to be one of the greatest inventions of the last century? (In my opinion it's number 1, and I'd be surprised if it wasn't one of the top inventions on your list too.) Now if you are really unaware of academia's contributions to the creation and design of the Internet, and of the "DC folks" support and participation that made it possible, may I present a recommended reading for you:
http://www.amazon.com/Dream-Machine-Licklider-Revolution-Computing/dp/0670899763

Further, let me be bold enough to propose an analogy. The role of genetic mutations in evolution is widely recognized. Similar is the role of new ideas in science and technology, which you so happily seem to ridicule. You might argue that a good percentage of what gets published doesn't get implemented, especially immediately. Yes, there are papers whose ideas are just not very good, or practical. But what does end up working out and making a difference could look very "Platonic technology"ish in the short term. Lots of wiser and more experienced people on this forum and elsewhere will tell you how their ideas were ridiculed at first, and then turned out to be staggering successes.
Regarding source routing, may I recommend Clark et al's Tussle paper if you haven't read it yet:
http://www.sigcomm.org/sigcomm2002/papers/tussle.pdf
where they make a really interesting analysis of how the Internet represents a constant tussle among different stakeholders. One of the examples they quote is that of source routing. Source routing represents choice for a customer of routing their packets through providers they like. It doesn't seem to make economic sense for ISPs under current business models, especially when the consumer is a residential user - that situation and model might change in the future. Perhaps source routing, as currently defined, needs to change. But if you're arguing for ISPs constraining any choice the user has with respect to how their data flows, depending on their short-term economic thinking, I think you aren't making a good argument. The basis of economics is the concept of customer and utility, and technologists who ignore this might well get swept away.

Yes, your position is quite explicit. :)

Durga

On May 16, 2007, at 6:38 PM, Vadim Antonov wrote:

> Here you see the division between engineers who design and build things
> which make economic sense and academics who think of pure Platonic
> technology existing in economic vacuum.
> [...]

From randy at psg.com Wed May 16 18:42:04 2007
From: randy at psg.com (Randy Bush)
Date: Wed, 16 May 2007 15:42:04 -1000
Subject: [e2e] It's all my fault
In-Reply-To: <464BAD4E.9020403@reed.com>
References: <464BAD4E.9020403@reed.com>
Message-ID: <464BB2EC.7010502@psg.com>
Reed wrote: > The "ivory tower academic card" is like the "race card" quite unlike the "evil greedy providers and vendors" card. end to end hypocrisy, eh? From avg at kotovnik.com Wed May 16 19:33:03 2007 From: avg at kotovnik.com (Vadim Antonov) Date: Wed, 16 May 2007 19:33:03 -0700 (PDT) Subject: [e2e] It's all my fault In-Reply-To: <20070516234931.EA264872D5@mercury.lcs.mit.edu> Message-ID: On Wed, 16 May 2007, Noel Chiappa wrote: > Vadim, you ought to consider that all this neat packet networking stuff only > exists now because for many years (during Baran's first RAND work ca. 1960-64, > then during the ARPANet development in the late 60's-early-70's, and then the > early internetwork work in the 1975-1982 time-frame) this stuff was all funded > by "bureaucrats in DC". You're making the common logical error known to economists as "What is seen and what is not seen" fallacy, first explained by Frederic Bastiat. Look it up. The fact that government hired a bright person to do some work does not mean that the very same person (or another person just as bright) wouldn't do the same if hired by a private company for the same wage (if govnernment didn't expropriate it earlier). In fact, the principle of storing and forwarding chunks of discrete data was in commercial use long before Baran's work - it was common since Victorian times in telegraph networks. The routers were people. (Well, so were the computers.) In fact, the Internet was impossible without transistors and minicomputers; and their availability is what make Internet possible. Somebody would've (re)discovered the basic ideas of store-and-forward networks anyway. The real role of the government in the history of the Internet was stalling, obfuscating, and supporting monopolism. If not for the government-propped monopoly of AT&T, we'd see widely available commercial data networks in early 80s, not in 90s - the technology was there. If you check the track record of DARPA's performance on other projects you'll see that nearly all of them were total disasters. > There was *no* commercial market for any of this stuff back then, Surely there was just as much demand for communications between people as there is now. I do not think human nature radically changed in the last half century. In fact, the market for "the Internet" was created not by availability of TCP/IP networks, but by the lowly BBSes, e-mail, and USENET. Neither of which depended on anything developed by DARPA-funded research. > (A fact of which I am well aware, because > I was one of the first people - maybe the first, actually - to make money > selling IP routers commercially - and that was in 1984 or so, almost 10 years > after the bureacrats starting putting money into TCP/IP.) There was no market for IP routers... why? May it be because there was no Intel with 8086 and no IBM with PCs, and no Bill Gates with Windows, and no Apple? Or because Ma Bell was sitting pretty enjoying monopoly it grabbed by legal maneuvers starting in 1879 and culminating in Kingsbury Commitment of 1913? Surely, it is hard to sell a network boxes when there's nothing to connect, right? And when you're prohibited by law from laying your own cables and putting up microwave towers? Claiming the impossibility of the Internet without the contribution of a particular research based on the ideas which were known for over a hundred years and only waited availaibility of technology to get implemented in automated form is, well, stretching the truth very thinly. 
> In fact, to add a nice topping of irony, many commercial communications people > of the day (circa 1980) said much the same things about TCP/IP (which you seem > to like) that you are now saying about other efforts: I distinctly recall the > TCP/IP people being told to "roll up our toy academic network" (and yes, they > explicitly and definitely used the word "academic") and go home. For all I know, that could result in some better-designed networks. Or it may not. Or it could result in something totally different. What *is* a well-established fact is that monopoly in every area of human endeavour leads to stagnation. In any case, the telcos now own the network. Not academic people. And it is a vastly different network, too. With tons of band-aids and workarounds and ugly hacks needed to keep it running despite short-sighted decisions made by the original designers. > So you might want to remember that when you dump on these new "academic" > ideas. I'm not dumping on ideas - ideas can be good, or bad, or whatever. They're harmless, mostly. I'm against people calling for what amounts to armed robbery for the sake of grandiose communal projects. And against the attendant mythology of the academic priesthood in the Temple of Ideas at the service of the Benevolent Shepherds of The People. --vadim

From avg at kotovnik.com Wed May 16 19:47:25 2007 From: avg at kotovnik.com (Vadim Antonov) Date: Wed, 16 May 2007 19:47:25 -0700 (PDT) Subject: [e2e] It's all my fault In-Reply-To: Message-ID: On Wed, 16 May 2007, John Wroclawski wrote: > The people mentioned in this conversation who are proposing > user-driven path selection models are, as a general rule, also > proposing that the providers benefit when they are selected - ie, > some economic or payment mechanism. I'm not aware of anyone proposing anything remotely feasible economically in this space. All I hear is "it is a cool technology and we really really really want it". There's a simple criterion - if you can make a valid business plan out of it, there is a fair chance of the model making economic sense. Otherwise, well, think of something better. > The reason a provider might appreciate this is precisely to get > *away* from the commodity business that packet delivery (to use your > words) is today. Delivering bits from place A to place B is not something which allows for a wide diversity of service offerings. It either happens within a given (price, reliability, bandwidth, latency) envelope, or it does not. So how exactly is source routing going to push on any side of this envelope in any significant portion of cases? This field is littered with corpses of "premium service" pipe dreams. > The fact that the current Internet design largely decouples many > providers from their ultimate customers, economically and > technically, is both a strength and a weakness. What it is not is the > only possible, or economically astute, answer. I haven't seen any serious explanation of how exactly SR is going to benefit the people who write the checks for the equipment. I'm not saying that it couldn't happen in principle - only that it does not seem to have any benefits within any non-handwaving scenario. --vadim

From avg at kotovnik.com Wed May 16 19:55:49 2007 From: avg at kotovnik.com (Vadim Antonov) Date: Wed, 16 May 2007 19:55:49 -0700 (PDT) Subject: [e2e] It's all my fault In-Reply-To: <464BB2EC.7010502@psg.com> Message-ID: On Wed, 16 May 2007, Randy Bush wrote: > David P.
Reed wrote: > > The "ivory tower academic card" is like the "race card" > > quite unlike the "evil greedy providers and vendors" card. > > end to end hypocrisy, eh? > No, that was just plain statist demagoguery. I invoke the Goodwin law, at least in regard to esteemed Mr. Reed. I suppose it covers race card, too. At least I try to argue my position, and not by waving the length of my resume. --vadim From gds at best.com Wed May 16 20:55:33 2007 From: gds at best.com (Greg Skinner) Date: Thu, 17 May 2007 03:55:33 +0000 Subject: [e2e] It's all my fault In-Reply-To: ; from avg@kotovnik.com on Wed, May 16, 2007 at 07:33:03PM -0700 References: <20070516234931.EA264872D5@mercury.lcs.mit.edu> Message-ID: <20070517035533.A89046@gds.best.vwh.net> On Wed, May 16, 2007 at 07:33:03PM -0700, Vadim Antonov wrote: > On Wed, 16 May 2007, Noel Chiappa wrote: > > There was *no* commercial market for any of this stuff back then, > > Surely there was just as much demand for communications between people as > there is now. I do not think human nature radically changed in the last > half century. In fact, the market for "the Internet" was created not by > availability of TCP/IP networks, but by the lowly BBSes, e-mail, and > USENET. Neither of which depended on anything developed by DARPA-funded > research. DARPA-funded research provided computing resources upon which email, USENET, Unix, etc. were extended and popularized. --gregbo From kelsayed at gmail.com Thu May 17 00:06:13 2007 From: kelsayed at gmail.com (Khaled Elsayed) Date: Thu, 17 May 2007 10:06:13 +0300 Subject: [e2e] I got lost in opportunistic scheduling. In-Reply-To: <464ADC92.3010302@web.de> References: <4616E722.3070402@web.de> <46487D95.4040104@web.de> <7.0.1.0.2.20070514143809.02813318@antd.nist.gov> <4648C666.9000607@web.de> <46498002.9000903@gmail.com> <464ADC92.3010302@web.de> Message-ID: <464BFEE5.2030309@gmail.com> Detlef, I will address a subset of the questions, since I don't have answer for all the stuff :-) Detlef Bosau wrote: > > Some questions. > > 1. As far as I see, OS/multiuser diversity is yet employed in the > Qualcomm 1xEV-DO wireless system. (Is there a name for it, which one > can say within a lifetime? *got scared*) > > Are there other systems which employ OS? I think that HSDPA also have some form of MUD/OS. Mobile Wimax/802.16e OFDMA mode also has the ability to implement it but it is left for vendor implementation. > > 2. Qualc..., say C3P0, it?s shorter :-), seems to have a _common_ > downlink and _dedicated_ uplinks. Is this correct? In the paper, the > uplinks are said to be "asynchronous circuit-switched". What does that > mean? (I already got criticism this year because I talked about packet > swichting in wireless networks, because packet switching were not > restricted to wireless networks or something like that.... I didn?t > understand it.) Does it mean, we have dedicated TDM channels with > something like HDLC on it to enable packet transport? > > 3. Somwehere in the paper, a reliable link control / reliable link > protocol is mentioned. What kind of protocol is used here? Something > like RLPv3, which fragments L3 packets into pieces of e.g. about 20 > bytes? Or do we deal with IP packets directly? Particularly, does C3P0 > emply any ARQ? Or does it only use FEC? > > 4. If C3P0 emplys ARQ and some RLP, does it use sliding window? Or > does it use stop?n wait? > > 5. 
If C3P0 only uses FEC, one could be somewhat extreme and not > really use even that, at least not use extreme spreading, but one > could rely upon OS only. One goal of OS is to avoid errors in advance > rather than correct them afterwards. So one could work with only > little error correction capability intentionally. Or one could work > with extensive adaptation of channel coding / puncturing. How is this > done in C3P0? > I am not very familiar with all HDR/EV-DO specifics. But I have seen some nice tutorials on the subject in IEEE Comm. magazine. >> >> I think that implementation of pure OS without some compensation for >> users with consistently bad channels does not make sense. I have some >> results on that for RT services that were published in MSWIM 2004. >> E-mail me if interested. > > > Of course, I'm interested! > > Question, somewhat provoking: Does it make sense to combine OS and RT > services? Somewhere in David Tse's "talk" (he has so many slidesets of > this one talk on his homepage that I wonder if he ever gave another > one =8-)) we learn something about voice vs. data. (Unfortunately it's > not mentioned on the slides _what_ we learn =8-)) > > But I think it hardly makes sense to mix up voice and data, because > data requires data integrity and voice requires time integrity _AND_ > can tolerate errors. In data services, you either have corrupted > packets or you have error-free packets. Is there a way to allow for > "half correct packets"? > > So, at least at this moment, I think, data services and _error_ > _tolerant_ RT services should be done separately. Or what do you think? > > Detlef I have seen some significant work combining OS and RT services. Whether it makes sense or not to use OS for both RT and NRT will be left to field deployments to tell. Research sometimes gives the impression that everything is possible and makes sense, but deployments tell otherwise. Half-correct bursts (not necessarily packets) can still be useful for schemes like HARQ :-) Regards, Khaled PS: I will send the paper in a separate mail.

From jnc at mercury.lcs.mit.edu Thu May 17 05:22:20 2007 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 17 May 2007 08:22:20 -0400 (EDT) Subject: [e2e] It's all my fault Message-ID: <20070517122220.E09E686AEE@mercury.lcs.mit.edu> > From: Vadim Antonov > I'm not aware of anyone proposing anything remotely feasible > economically in this space. All I hear is "it is a cool technology and > we really really really want it". > There's a simple criterion - if you can make a valid business plan out > of it, there is a fair chance of the model making economic sense. > Otherwise, well, think of something better. One thing the IPv6 debacle taught us (or should have taught us) is that any major new protocol designs for use in the Internet need to have a viable deployment plan, and a key part of that deployment plan has to be an economic rationale, i.e. a very good idea of who is going to benefit and how, and how that will drive the deployment, because without economic incentives to deploy something, it won't get deployed. If that hasn't been said explicitly here, in part it's because I regard that observation as being close to the same level as saying "2+2=4" - i.e.
something so obvious that, like the air we breathe, it should not be necessary to bother to make explicit note of it. So IMO any actual protocol design effort these days needs to include people who are actually signed up to deploy it, so that they can be part of the design process to make sure that it has an economic benefit, etc, etc. (And maybe we ought to have a mandatory "Economic Considerations" section in standards, just like we have a "Security Considerations" section, but I digress.) But we're a long way from that in this discussion. The other reason it hasn't been talked about here is that before you can decide whether something is economically viable, you need to have a proposal to evaluate. Business plans for novel things, be they for an overnight delivery service (FedEx), or for near-instantaneous long-distance communication service (telegraph), have to start with a novel idea. And before you bother trying to see whether there's a viable business plan, you have to make sure your idea makes technical sense. I bet I could whip up a devastating business plan for an anti-gravity machine, or a zero-fuel engine - were such things technically feasible. This discussion is mostly about trying to evaluate the technical desirability and feasibility of a new idea. Trying to turn it into deployed stuff is a long way down the road from here. > Delivering bits from place A to place B is not something which allows > for a wide diversity of service offerings. It either happens within a > given (price, reliability, bandwidth, latency) envelope, or it does not. > So how exactly is source routing going to push on any side of this > envelope in any significant portion of cases? Well, once you decide what it can do, and about how much it would cost to provide it, then if you can find someone who wants that service at that cost, you'll probably be able to find someone who is interested in selling it to them. Alternatively, if you can show people who are selling bandwidth how it can reduce their costs (e.g. through reducing operational overhead), you might be able to make an economic case for it that way. (Sometimes the product comes first - e.g. iPods; sometimes the demand - e.g. local area networks.) But all of these market investigations come after you have some idea of what the product/technology looks like. Noel

From fkastenholz at comcast.net Thu May 17 06:40:23 2007 From: fkastenholz at comcast.net (Frank Kastenholz) Date: Thu, 17 May 2007 13:40:23 +0000 Subject: [e2e] It's all my fault Message-ID: <051720071340.4622.464C5B4700091BCB0000120E221348437396040108020A9B9C0E0500@comcast.net> -------------- Original message ---------------------- From: jnc at mercury.lcs.mit.edu (Noel Chiappa) > One thing the IPv6 debacle taught us (or should have taught us) is that any > major new protocol designs for use in the Internet need to have a viable > deployment plan, and a key part of that deployment plan has to be an economic > rationale, i.e. a very good idea of who is going to benefit and how, and how > that will drive the deployment, because without economic incentives to deploy > something, it won't get deployed. I'll disagree slightly. This is true if and only if what is being proposed will supplant something that already exists. Without getting into its merits, what Dave proposed is not something that requires replacing IPv4 with IPvDave -- it is something built on top of (or beside?) IPv4.
The existing IPv4 base need not throw out everything and start over, so there is no need, a priori, to sell IPvDave to the IPv4 world, and there is no need for a deployment plan and economic rationale and/or incentives to convince the IPv4 users to switch to IPvDave. Then, in the fullness of time, if it turns out that IPvDave is "better" than IPv4, and better "enough", then either
- the world will evolve towards IPvDave, putting together the needed transition/interworking widgets using hillbilly-CS, or
- the IPv4 world will take the good parts of IPvDave and incorporate them.
Either way, Goodness Occurs. > If that hasn't been said explicitly here, in part it's because I regard that > observation as being close to the same level as saying "2+2=4" - i.e. > something so obvious that, like the air we breathe, it should not be > necessary to bother to make explicit note of it. See RFC 1669 -- "Market Viability as a IPng Criteria" -- which John Curran wrote in response to the IPng call for white papers back in 1994... > The other reason it hasn't been talked about here is that before you can > decide whether something is economically viable, you need to have a proposal > to evaluate. Business plans for novel things, be they for an overnight > delivery service (FedEx), or for near-instantaneous long-distance > communication service (telegraph) have to start with a novel idea. Again, I'll disagree -- the sort of truly novel ideas (FedEx, the Internet, etc.) that you seem to be talking about here have pretty bad business plans and look like they'll be gigantic flops. This is precisely because they do not fit the existing world-view of things, so they do not fit the existing criteria for what makes a business proposition "look good". Usually these things end up tapping into a latent demand that was hidden not only from the business-plan-evaluation-gurus but also from the very people who have that demand. I mean, in 1980, how many people would have said that they absolutely need to have a portable little widget that was a combined telephone/email-thing/picture-taker/music-player? f

From faber at ISI.EDU Thu May 17 09:45:17 2007 From: faber at ISI.EDU (Ted Faber) Date: Thu, 17 May 2007 09:45:17 -0700 Subject: [e2e] It's all my fault In-Reply-To: References: <20070516234931.EA264872D5@mercury.lcs.mit.edu> Message-ID: <20070517164517.GA45601@hut.isi.edu> On Wed, May 16, 2007 at 07:33:03PM -0700, Vadim Antonov wrote: > The real role of the government in the history of the Internet was > stalling, obfuscating, and supporting monopolism. If not for the > government-propped monopoly of AT&T, we'd see widely available commercial > data networks in early 80s, not in 90s - the technology was there. Governments do not create natural monopolies like the telecom network; they have no necessary role in those monopolies. Significant economies of scale and high capital barriers to entry will shut other providers of similar services out of the market completely. Even without aggressive action by the providers, this leads directly to monopoly. That's a property of the market, not a government-imposed attribute. Furthermore, all recorded cases of natural monopoly have evidenced aggressive action; providers in natural monopoly situations crush competitors and resist changes to their market. Why would they do differently? If US telecom were really deregulated tomorrow - no requirements to share infrastructure, no limits on size, no service requirements - there'd be one phone company in a decade at the most.
It's hard to see how you can characterize this as monopoly protection. You couldn't lease a T1 before the government made AT&T lease you one - an action I'm surprised you don't characterize as the government stealing AT&T's capital. It's a lot more difficult to build a nationwide (to say nothing of worldwide) data network if you have to spend the capital to run the lines. Without the government(s) acting in direct conflict with monopoly interests by forcing access to the infrastructure and financing the development of the technology, there would be no commercial Internet today. There might be one in decades, but it would cost more and be more constrained, IMHO. Now, I don't think that the government had a coordinated plan to create a new market, but without the (accidental) confluence of those actions, the Internet would have been unlikely to emerge. -- Ted Faber http://www.isi.edu/~faber PGP: http://www.isi.edu/~faber/pubkeys.asc Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG

From randy at psg.com Thu May 17 10:55:21 2007 From: randy at psg.com (Randy Bush) Date: Thu, 17 May 2007 07:55:21 -1000 Subject: [e2e] It's all my fault In-Reply-To: <20070517122220.E09E686AEE@mercury.lcs.mit.edu> References: <20070517122220.E09E686AEE@mercury.lcs.mit.edu> Message-ID: <464C9709.4090807@psg.com> > One thing the IPv6 debacle taught us (or should have taught us) is that any > major new protocol designs for use in the Internet need to have a viable > deployment plan, and a key part of that deployment plan has to be an economic > rationale, i.e. a very good idea of who is going to benefit and how it's also very convenient if the benefit is gained close to where/by-whom the costs are incurred, a point often overlooked. randy

From lachlan.andrew at gmail.com Tue May 15 09:35:24 2007 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 15 May 2007 09:35:24 -0700 Subject: [e2e] It's all my fault In-Reply-To: References: Message-ID: On 15/05/07, Vadim Antonov wrote: > > "I need source routing" is a euphemism for "my TE sucks". > > The fundamental problem with SR is that endpoints do not have the information > about network topology necessary for making intelligent path choices. It > is as simple as that. At last week's INFOCOM, Don Towsley and Peter Key pointed out that selecting a small number of parallel paths, and regularly reselecting based on performance (a la BitTorrent), is as good as selecting the best path. As has been pointed out in this thread, that is much better than what BGP does. Of course, this only applies to long flows, but they account for an increasing majority of traffic.
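To make that concrete, here is a toy numeric sketch of the reselect-the-worst idea (my own illustration, not the model from their paper; the path count, rate distributions, and round structure are all invented):

    import random

    N_PATHS = 20   # paths nominally available end to end
    K = 2          # paths used in parallel at any one time
    ROUNDS = 10000

    # Fixed mean rate per path; the per-round rate fluctuates around it.
    means = [random.uniform(1.0, 10.0) for _ in range(N_PATHS)]

    def rate(p):
        return random.uniform(0.5, 1.5) * means[p]

    current = random.sample(range(N_PATHS), K)
    achieved = 0.0
    for _ in range(ROUNDS):
        sample = {p: rate(p) for p in current}
        achieved += sum(sample.values())
        # Reselect: drop the path that measured worst this round and
        # replace it with a randomly chosen alternative.
        current.remove(min(sample, key=sample.get))
        current.append(random.choice([p for p in range(N_PATHS)
                                      if p not in current]))

    print("achieved per round:  %.2f" % (achieved / ROUNDS))
    print("best K static paths: %.2f" % sum(sorted(means)[-K:]))

No sender ever learns the topology or the full set of path rates, yet the retained slots drift towards the best paths, and the achieved rate ends up close to what a topology-aware oracle choosing the best K paths would get.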
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603

From lachlan.andrew at gmail.com Tue May 15 09:54:25 2007 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 15 May 2007 09:54:25 -0700 Subject: [e2e] It's all my fault In-Reply-To: References: <4649A8B3.7080000@pobox.com> Message-ID: On 15/05/07, Jon Crowcroft wrote: > if we took ALL routing out of routers, life _would_ be simpler - > > you'd only have yourself to blame as an end user > for shooting yourself in the head when you can't reach somewhere > > running path computation on boxes _designed_ to do computation > and forwarding on boxes designed to switch packets fast > just sounds like a perfectly reasonable idea to me I'm in favour of loose source routing, but don't agree that routers shouldn't route. The simplicity and distributed computation of having a "default next hop" would be lost, as would the ability to route around failures. Loose source routing gives the best of both worlds: users get to explore multiple paths and choose the best (as done explicitly by Akamai and implicitly by BitTorrent), but don't have to specify the paths in more detail than they know. Regarding "denial of service", if we regard sending packets to where users want them to go as a "service", disabling source routing is itself a massive denial of service :) Fortunately, if source routing is in the standards and implemented by enough systems, then users can route around those denials of service, but if it is not even in the standards, then all seems lost. Perhaps a compromise would be to reduce the number of intermediate hops that can be specified from 40 to, say, 2. That reduces the "traffic multiplier" available for DoS, but allows users to select between a handful of paths. Two or three paths is enough diversity to get a "pretty good" route if the default BGP route is temporarily congested. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603

From lachlan.andrew at gmail.com Tue May 15 16:27:19 2007 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 15 May 2007 16:27:19 -0700 Subject: [e2e] It's all my fault In-Reply-To: References: Message-ID: On 15/05/07, Vadim Antonov wrote: On Tue, 15 May 2007, Lachlan Andrew wrote: > > > selecting a small number of parallel paths, and regularly reselecting > > based on performance (a la BitTorrent), is as good as selecting the > > best path. > > Mmmm... and that solves the problem of an end host not having an idea of > what backbone topology is like - how? > > This is all neat when you enter those points manually, but try to sell > that to the actual users. I agree that the route selection should be automatic, but it doesn't need to be done by the network. I'm advocating very loose source routing here. The key to this is diversity. That diversity can come from knowing one host on each of several nearby ISPs, to "force" your traffic onto that AS. My understanding is that different ISPs will usually have quite different paths to the destination, primarily through their own networks (but I'll gladly be corrected).
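For concreteness, here is roughly what exercising the existing IPv4 mechanism for this, the LSRR option of RFC 791, looks like from an application. This is only a sketch: it assumes a Linux-style sockets API, the addresses are made-up documentation-range ones, and many routers and firewalls today simply drop source-routed packets.

    import socket
    import struct

    def lsrr_option(hops):
        # LSRR per RFC 791: type 131, length 3 + 4n, pointer 4 (the
        # first route entry), followed by the route addresses.
        route = b"".join(socket.inet_aton(h) for h in hops)
        return struct.pack("BBB", 131, 3 + len(route), 4) + route

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Per RFC 791 the packet's destination field carries the first hop
    # and the option lists the remaining hops, ending with the final
    # destination.  (Exactly how the sockets API splits these between
    # the option and the sendto() address is OS-specific; see ip(7)
    # on Linux.)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_OPTIONS,
                 lsrr_option(["198.51.100.7"]))   # final destination
    s.sendto(b"probe", ("192.0.2.1", 7))          # stepping stone as first hop

Note that the whole route has to fit in the 40-byte IPv4 option budget in any case (at most nine addresses), so capping the number of intermediate hops at two or three, as suggested above, costs nothing in terms of the option format.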
Given three stepping stones, which depend on your own location rather than that of your peer, you get three almost independent routes, which gives you a p^3 chance of hitting temporary congestion, where p is the chance of any one link being congested. The "local stepping stones" could be signalled to individuals by DHCP the way DNS servers are, or be provided by a database if ISPs don't want to advertise the competition. Also remember that most traffic is going from some infrastructure to a user (even if that infrastructure is a P2P session or some such). The efficiency gain comes if that infrastructure (not the user) has some control over its route, if it chooses to. My understanding was that this thread was about whether or not the standards *allow* source routing. Most home users will not want to bother, and should not be forced to. However, if Don Towsley is right that TE can be greatly simplified by a handful of big traffic generators doing their own load balancing, why should we take that existing functionality out of the standards? There is clearly a case for not *forcing* equipment vendors to put it in hardware, or carriers to enable it, the way existing IP options are often treated, but it seems useful for IPv6 to say "if you want to implement source routing, this is how it is done". If it is there, it can be tamed (say by limiting the number of hops that can be specified), but if it is not in the standard at all, content/service providers who have legitimate uses for it can't use it even if they choose to. Currently, I believe Akamai uses its servers as routers to implement its home-brew source routing; wouldn't it be better if companies could use routers as routers? $0.02, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603

From avg at kotovnik.com Thu May 17 13:58:23 2007 From: avg at kotovnik.com (Vadim Antonov) Date: Thu, 17 May 2007 13:58:23 -0700 (PDT) Subject: [e2e] It's all my fault In-Reply-To: <20070517035533.A89046@gds.best.vwh.net> Message-ID: On Thu, 17 May 2007, Greg Skinner wrote: > DARPA-funded research provided computing resources upon which email, > USENET, Unix, etc. were extended and popularized. DARPA didn't create those resources out of thin air. They took it from somebody first. Your statement is an example of the "What is seen and what is not seen" fallacy. --vadim

From dpreed at reed.com Thu May 17 15:07:10 2007 From: dpreed at reed.com (David P. Reed) Date: Thu, 17 May 2007 18:07:10 -0400 Subject: [e2e] It's all my fault In-Reply-To: References: Message-ID: <464CD20E.8010701@reed.com> Fair point. Do we want to get into a political debate about where those resources came from? Are taxes collected and then spent qualitatively different from profits taken and then invested? Both are involuntary redirections of resources based on the power of one set of humans over another. Is a democratic government morally better than a corporate entity created and protected by that government's police powers? Vadim Antonov wrote: > On Thu, 17 May 2007, Greg Skinner wrote: > > >> DARPA-funded research provided computing resources upon which email, >> USENET, Unix, etc. were extended and popularized. >> > > DARPA didn't create those resources out of thin air. They took it from > somebody first. Your statement is an example of the "What is seen and what is > not seen" fallacy.
> --vadim

From day at std.com Thu May 17 15:14:37 2007 From: day at std.com (John Day) Date: Thu, 17 May 2007 18:14:37 -0400 Subject: [e2e] Time for a new Internet Protocol In-Reply-To: <54EF0EE260D79F4085EC94FB6C9AA56202F9F16C@E03MVZ1-UKDY.domain1.systemhost.net> References: <54EF0EE260D79F4085EC94FB6C9AA56202F9F16C@E03MVZ1-UKDY.domain1.systemhost.net> Message-ID: If I were to characterize how we arrived at what we have, I would have to say it was not done by looking for a solution that benefited particular vested interests, and we certainly weren't trying "to truly enable end user innovation (driven by the true belief that this would benefit everybody)." It was the furthest thing from our minds. (That view is revisionist history written much later.) In fact, if the truth be told, the only time there was any inclination in that direction, ARPA nipped it in the bud very quickly. (If they hadn't, it might be a very different Internet today. I believe that that event halted innovation in the Internet.) It always seemed that we were doing what we understood the problem was telling us was the right solution, letting the chips fall where they may. I believe this is what is called "science." This is what we must return to. I have always thought that the weakness of telephone company strategies over the past 35 years was that they first looked out for their vested interest and tried to contort the answer to fit it. Some would call this engineering. I don't, but some do. When this approach is taken, the problem generally has a way of asserting itself at great cost unless external regulation (appeal to governments for protection) is used to keep it at bay. In this situation, I have always recommended forgoing the vested interest and backing the problem. It is far cheaper in the long run and leaves you in a better position. I would disagree with the statement here that the fundamentals of the Internet have changed. They have not. The technology definitely has. But the fundamentals that governed networking in 1970 are still the same. Hopefully our understanding of them has improved. Our problem, as I alluded to above, is that the fundamentals we built on were basically what we understood at the time of the ICCC demo in 1972. We have been band-aiding ever since and relying on Moore's Law to make us look good. We have to go back to fundamentals and be prepared to question everything we know. We have to be willing to do it and throw out whatever is in the way. We may be pleasantly surprised by how much we got right, but we have to do it. We can't do, as I have heard so often recently, "call for a revolution as long as it doesn't touch what was already done." That isn't much of a revolution. At least not in science. We have to be willing to destroy the Internet we know to save the Internet we need. Noel has remarked that when a theory or architecture solves problems you hadn't designed into it, it is a good indication that it is right or close to it. The more often it does it, the more right it is. The converse is also true. Every time you want to do something different or accommodate a new requirement, a new work-around is necessary, a new kludge; then you know what you have isn't right. This discussion about using source routing for mobility has been a wonderful case in point, as is the current discussion on the RAM list. Noel also remarked that we need to learn the lesson of v6 to ensure that there is an economic reason to change.
I am afraid there is an even harder lesson from history that we have to learn: the lesson of OSI, which was: don't invite the legacy architecture to participate in the revolution. They will destroy it. Don't be accommodating. Go it alone. They have too much vested interest in maintaining the status quo. We are going to have to learn to let go or be supplanted by people who look upon us with as much disdain as we looked on the phone company guys in 1975. ;-) It was clear that they just didn't get it. ;-) (Some of you will think the lesson of OSI is something else. Believe me, those lessons are all a consequence of this one. Remember OSI was started by the computer industry to create network standards that weren't done by the ITU.) It really saddens me, but the behavior of the Internet community today has more in common with the phone companies of 1975 than the ARPANet/Internet/NPLnet/CYCLADES of 1975. They were out to foment revolution; we seem to be more out to preserve someone else's revolution. We seem to be making more rules about what you can't do than what you can do. We seem to want to protect desires that serve our interests, whether capitalist or utopian, rather than do science and let the chips fall where they may. I apologize for the screed, it wasn't supposed to be this long. ;-) Take care, John

At 10:25 +0100 2007/05/16, wrote: >[RESENT - this time without signature - my apologies, still had to find >the off-by-default setting] > >David, > >I wonder if such an almost revolutionary tone is both helpful and effective >in reaching the goal (or anything for that matter) you're promoting. > >Not only do I believe (more hope) that the intention back then, when >constructing IP, TCP, UDP, ..., was not 'to beat Mother Bell's control >ambitions' but to truly enable end user innovation (driven by the true >belief that this would benefit everybody), I would also argue that times >have indeed changed since then. Change of fundamentals in the Internet is >today more of an educational process than ever. It might be driven by >technology, certainly not only though, but it certainly includes more >than ever proper education beyond the pure technology community and the >consideration for the concerns of everybody involved. It isn't a >technology exercise anymore within a governmentally funded research >community that, over the course of some twenty years, will then turn >into a fundamental piece of societal life. It IS part of societal >life. So advocating changes needs to take into account the different >concerns, also the ones of the 'routerheads' and the 'control freaks', >if you will, in order to be successful. > >So it is not the goal that I'm questioning (you know how much I >subscribe to end user driven innovation), it is your, to me, ineffective >and confrontational method that I fear will turn out to be wasteful >rather than fruitful. What the technology community CAN provide is the >ammunition for this educational process, the proof that end user >innovation is indeed enabled, for the good of everybody involved (and >point out alternatives for the ones that seemingly will need to change). > > >BTW, as you know I have recently joined a company you might characterize >as being on the 'controlling end' of the spectrum, coming from an end >user type of company. But believe me that I would not have joined if I >didn't believe such education is possible. It isn't all black and white >(us - whoever that is - against them).
> >Dirk > >> -----Original Message----- >> From: end2end-interest-bounces at postel.org >> [mailto:end2end-interest-bounces at postel.org] On Behalf Of >> David P. Reed >> Sent: Tuesday, May 15, 2007 3:57 PM >> To: end2end-interest list >> Subject: [e2e] Time for a new Internet Protocol >> >> A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, >> etc. was that network vendors had too much control over what >> could happen inside their networks. >> >> Thus, IP was the first "overlay network" designed from >> scratch to bring heterogeneous networks into a common, >> world-wide "network of networks" > > (term invented by Licklider and Taylor in their prescient > > paper, The Computer as a Communications Device). By creating >> universal connectivity, with such properties as allowing >> multitudinous connections simultaneously between a node and >> its peers, an extensible user-layer naming system called DNS, >> and an ability to invent new end-to-end protocols, gradually >> a new ecology of computer mediated communications evolved, >> including the WWW (dependent on the ability to make 100 "calls" >> within a few milliseconds to a variety of hosts), email >> (dependent on the ability to deploy end-system server >> applications without having to ask the "operator" for >> permission for a special 800 number that facilitates public >> addressability). >> >> Through a series of tragic events (including the dominance of >> routerheads* in the network community) the Internet is >> gradually being taken back into the control of providers who >> view their goal as limiting what end users can do, based on >> the theory that any application not invented by the pipe and >> switch owners is a waste of resources. They argue that >> "optimality" of the network is required, and that any new >> application implemented at the edges threatens the security >> and performance they pretend to provide to users. > > >> Therefore, it is time to do what is possible: construct a new >> overlay network that exploits the IP network just as the IP >> network exploited its predecessors the ARPANET and ATT's >> longhaul dedicated links and new technologies such as LANs. >> >> I call for others to join me in constructing the next >> Internet, not as an extension of the current Internet, >> because that Internet is corrupted by people who do not value >> innovation, connectivity, and the ability to absorb new ideas >> from the user community. >> >> The current IP layer Internet can then be left to be >> "optimized" by those who think that 100G connections should >> drive the end user functionality. We can exploit the >> Internet of today as an "autonomous system" just as we built >> a layer on top of Ethernet and a layer on top of the ARPANET >> to interconnect those. >> >> To save argument, I am not arguing that the IP layer could >> not evolve. >> I am arguing that the current research community and industry >> community that support the IP layer *will not* allow it to evolve. >> >> But that need not matter. If necessary, we can do this >> inefficiently, >> creating a new class of routers that sit at the edge of the >> IP network >> and sit in end user sites. We can encrypt the traffic, so >> that the IP >> monopoly (analogous to the ATT monopoly) cannot tell what our >> layer is doing, and we can use protocols that are more >> aggressively defensive since the IP layer has indeed gotten >> very aggressive in blocking traffic and attempting to prevent >> user-to-user connectivity. 
>> >> Aggressive defense is costly - you need to send more packets when the >> layer below you is trying to block your packets. But DARPA >> would be a >> useful funder, because the technology we develop will support >> DARPA's efforts to develop networking technologies that work >> in a net-centric world, where US forces partner with >> temporary partners who may provide connectivity today, but >> should not be trusted too much. >> >> One model is TOR, another is Joost. Both of these services overlay >> rich functions on top of the Internet, while integrating >> servers and clients into a full Internet on top of today's Internets. >> >> * routerheads are the modern equivalent of the old "bellheads". The >> problem with bellheads was that they believed that the right >> way to build a communications system was to put all functions >> into the network layer, and have that layer controlled by a >> single monopoly, in order to "optimize" the system. Such an >> approach reminds one of the argument for >> the corporate state a la Mussolini: the trains run on time. Today's >> routerheads believe that the Internet is created by the >> fibers and pipes, rather than being an end-to-end set of >> agreements that can layer >> on top of any underlying mechanism. Typically they work for > > backbone >> ISPs or Router manufacturers as engineers, or in academic > > circles they focus on running hotrod competitions for the >> fastest file transfer between two points on the earth >> (carefully lining up fiber and switches between specially >> tuned endpoints), or worse, running NS2 simulations that >> demonstrate that it is possible to stand on one's head while >> singing the National Anthem to get another publication in >> some Springer-Verlag journal. >> >> >> >> From tvest at pch.net Thu May 17 15:33:57 2007 From: tvest at pch.net (Tom Vest) Date: Thu, 17 May 2007 18:33:57 -0400 Subject: [e2e] It's all my fault In-Reply-To: References: Message-ID: On May 17, 2007, at 4:58 PM, Vadim Antonov wrote: > > > On Thu, 17 May 2007, Greg Skinner wrote: > >> DARPA-funded research provided computing resources upon which email, >> USENET, Unix, etc. were extended and popularized. > > DARPA didn't create those resources from the thin air. They took it > from > somebody first. Your statement is an example of "What is seen and > what is > not seen" fallacy. > > --vadim Okay, what is seen: The universe of telecom facilities was owned and operated through the vehicle of adjacent, non-overlapping territorial monopolies for at least 4-5 decades leading up to the 1970s -- either as the result of a market outcome (e.g., in the US), or of subsequent movers observing how things played out in the earliest telecom markets (e.g., in the US). That said, the raw inputs required for the Internet to emerge (telecom facilities, technology, clever people, etc.) were widely distributed throughout the world in the 1970s and 1980s. The same laws of physics that permitted T-carrier technology and 4ESS switches to work in the United States from 1975 on also applied everywhere else on Earth. 
In the US alone, the advent of the latter technologies was accompanied by regulatory changes (60 FCC 2D / 1976, which compelled the incumbent territorial facilities monopoly owner to sell T-1 circuits to 3rd parties even when those parties intended to use them for commercial purposes) which made it possible for someone other than an incumbent facilities owner to provision telecom "infrastructure", manage it independently, and use it for any purpose that they saw fit. In the US alone, the incumbent facilities owner's efforts to squelch the "invidious bypass" that this new technology made possible (i.e., the Consumer Communications Reform Act of 1978, aka "the Bell Bill") were quashed. Eventually -- sometimes many years later -- other regulatory jurisdictions followed a similar path, and the Internet started growing in those places as well. Eventually -- generally decades later -- in places where such changes never occur(red), some green field bypass telecom facilities (e.g., wireless) began to provide (at the moment, grossly inferior) options similar to those created by the aforementioned regulatory interventions. In the meantime, much of the Internet service that was available in the "unreconstructed" parts of the world arrived in the form of "service imports", i.e., services provided by offshore operators based in one of the infrastructure-friendly jurisdictions. Something else that is seen: Empirical evidence supporting the story above is visible in the global distribution of autonomous systems (using the country code and/or org fields to localize each to a particular country). Since ASes are tools for managing multihoming, and multihoming is only technically possible where telecom facilities are overlapping, or fungible (i.e., available in fractional bits and pieces as "infrastructure" that can be managed independently from the facilities provider), and available on commercially reasonable terms, this distribution makes perfect technical sense. Places with more ASes generally have more Internet users, devices, etc., all things (population, GDP, geography, number of years providing Internet service, etc.) remaining equal. What is unseen? Tom

From jnc at mercury.lcs.mit.edu Thu May 17 17:19:11 2007 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 17 May 2007 20:19:11 -0400 (EDT) Subject: [e2e] It's all my fault Message-ID: <20070518001911.4245D872E5@mercury.lcs.mit.edu> > From: Vadim Antonov This is getting a bit far afield, and so I apologize in advance to anyone who's irritated... > The fact that government hired a bright person to do some work does not > mean that the very same person (or another person just as bright) > wouldn't do the same if hired by a private company for the same wage > (if government didn't expropriate it earlier). > ... > If you check the track record of DARPA's performance on other projects > you'll see that nearly all of them were total disasters. There is a favourite saying of mine that goes something like "There are two kinds of confused people: one says 'This is old, and therefore good', and the other says 'This is new, and therefore better'." I trust the point is obvious: it's not the newness or oldness which makes something good, but simply whether or not the thing is good. I think there's probably a relative of this saying which says something like "'This is done by government, and therefore good', and the other says 'This is done by private industry, and therefore better'".
If you look at history, there are plenty of examples of extremely influential things which a government did after private industry didn't take up the challenge, e.g. Harrison's invention of the marine chronometer for finding longitude, which made seaborne commerce much more viable; that was done in response to a UK government initiative. There are even examples of things where private industry tried and failed, and a government project to do it succeeded: e.g. the Panama Canal. Of course, there are also plenty of examples where private industry did something better than the government too: the Wright brothers versus Langley; the R-100 airship versus the R-101 (a major subject of Nevil Shute's fabulous autobiography "Slide Rule", in which he rails, mostly correctly, against government attempts to do things). But as the saying goes, "the plural of anecdote is not data". And of course most of the scientific research in most of the world for the last century has been funded by taxes. Sure, it's laundered through universities, etc, but it's still tax money being spent. Should we dispense with all that, too? Pure research pays off big-time in the long run, but most of it is too long-term, and the eventual results impossible to foresee, for private industry to get involved. E.g. semiconductors wouldn't exist without the physicists (none sponsored by private industry, as far as I can recall) who developed quantum mechanics in the 20's, decades before. And if you think private industry will take over if we do, think again. Look what happened when the US Government cut the funding for the Superconducting Super Collider; private business sure as heck hasn't taken that baton up. I think government funding, *intelligently managed*, has a role. Of course, that's a high qualifier, and a lot of what's spent these days simply doesn't pass muster. (Speaking of which, the amount of pork barrel spending on useless earmarked research in the US is pathetic. I am reminded of that famous acerbic de Tocqueville observation: "The American republic will endure until the politicians find they can bribe the people with their own money.") But to simply dismiss all government projects/research as unworthy is just not supported by history. I'd rather look at each one and say "was this money well spent", and treat each case on its own merits. Noel

From jnc at mercury.lcs.mit.edu Thu May 17 17:55:35 2007 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 17 May 2007 20:55:35 -0400 (EDT) Subject: [e2e] It's all my fault Message-ID: <20070518005535.AEC78872E5@mercury.lcs.mit.edu> > From: Vadim Antonov > There was no market for IP routers... why? Because in the beginning (late 70's) there was no Internet to connect to. There's no economic incentive for someone to go out and buy/install/use a wide-area networking protocol when there's no wide-area network to connect to. Until the government jump-started it by setting up the initial Internet, there was no movement at all to an inter-organization network. Local-area networking protocols did get designed and sold (e.g. Banyan, Novell) for intra-organization use, but they weren't designed for inter-organization use (as TCP/IP was, from the start - albeit not as well as it ought to have been). In fact, what private industry was doing at that point in time was 180 degrees out of phase with the goal of ubiquitous inter-organizational networking, actually.
Every computer company (IBM, DEC, etc) had their own protocol family (SNA, DECNet, etc), and their idea was to lock their customers into their (incompatible) protocol family. Letting everyone talk to everyone was absolutely not in their minds. Not only that, they didn't even want everything speaking over an industry standard protocol (so that you could easily add in boxes from vendor B, in a company which heretofore had all boxes from company A). Which is not to say we might not have gotten there eventually. But the government's intervention definitely sped things up greatly. > the principle of storing and forwarding chunks of discrete data was in > commercial use long before Baran's work - it was common since Victorian > times in telegraph networks. What about the postal system? That stores and forwards discrete chunks too. But I reject those predecessors as being that meaningful because what was unique about Baran's work was that he came up with the idea of breaking up users' messages into smaller pieces, and forwarding the pieces independently - something nobody before him had thought of. And if you think it's that obvious, try reading Kleinrock's contemporaneous work on queueing theory - it's all in terms of complete messages. Lots of great ideas are "obvious" in hindsight. > Might it be because there was no Intel with 8086 and no IBM with PCs, and > no Bill Gates with Windows, and no Apple? Actually, the initial breakout of Internet growth occurred before all those events. I started selling IP routers when basically everyone I sold to was using time-sharing systems running on DEC hardware. The PC and uSoft drove the later rapid growth, yes, but that was later. > In fact, the Internet was impossible without transistors and > minicomputers; their availability is what made the Internet possible. Yeah, but they are so basic you could just as well make the same statement, but replace "transistors and minicomputers" with "electricity and quantum mechanics". > .. the market for "the Internet" was created not by the availability of > TCP/IP networks, but by the lowly BBSes, e-mail, and USENET. Neither of > which depended on anything developed by DARPA-funded research. An interesting list. Let's look at a few in more detail. Intra-machine e-mail (the first form to appear) first appeared (as far as we can tell) on SDC's Q32 and MIT's CTSS - both created with government funding. Inter-machine e-mail appears to have originated with the ARPAnet. USENET ran on Unix systems. Unix was heavily influenced by the time Thompson and Ritchie spent working on the Multics project - another DARPA project. (Although it turns out the strongest technical influence on Unix was actually the earlier CTSS system, but that was also government funded.) Not that it's all government: e.g. the personal computer, the heart of most BBS'es, only took off after the creation of the microprocessor, the Intel 4004 being the first. (Although some BBS's existed before that, running on mini-computers.) But to say government had no role is not correct. > Claiming the impossibility of the Internet without the contribution of > a particular research program based on ideas which were known for over a > hundred years and only awaited the availability of technology to get > implemented in automated form is, well, stretching the truth very > thinly. I never said it was "impossibl[e]". I merely said that government funding had a role in making it happen, and further (see above) probably made it happen a lot faster than it would have otherwise.
Speaking of "a lot faster", it's interesting to look at history and see how technological development/deployment speeds up during wartime - which is, of course, all government funding. The US Civil War (which greatly accelerated the deployment of telegraphs and railroads), World War I (the aeroplane), World War II (modern electronics). Now there's an interesting conjunction: war and technological development... > What *is* a well-established fact is that monopoly in every area of > human endeavour leads to stagnation. I think most of us generally agree with you on that. > it is a vastly different network, too. With tons of band-aids and > workarounds and ugly hacks needed to keep it running despite > short-sighted decisions made by the original designers. Again, I think most of us would agree that the original design had some significant areas where is was not developed enough. (Why that happened is a whole separate discussion.) However, the important observation is that some of us think that innovations like moving the path-selection out of the network core, and other similar things, is a good way to get rid of some of those "band-aids and workarounds and ugly hacks" - provided that we can make them economically viable, of course. Noel From davide+e2e at cs.cmu.edu Thu May 17 18:05:56 2007 From: davide+e2e at cs.cmu.edu (Dave Eckhardt) Date: Thu, 17 May 2007 21:05:56 -0400 Subject: [e2e] It's all my fault In-Reply-To: Message-ID: <19342.1179450356@lunacy.ugrad.cs.cmu.edu> > The universe of telecom facilities was owned and operated through the > vehicle of adjacent, non-overlapping territorial monopolies for at > least 4-5 decades leading up to the 1970s -- either as the result of > a market outcome (e.g., in the US), or of subsequent movers observing > how things played out in the earliest telecom markets (e.g., in the > US). Actually, as a result of anti-competitive government interventions: Unnatural Monopoly: critical moments in the development of the Bell System Monopoly Adam D. Thierer Cato Journal Volume 14 Number 2, Fall 1994 http://www.cato.org/pubs/journal/cjv14n2-6.html Dave Eckhardt From perfgeek at mac.com Thu May 17 19:28:14 2007 From: perfgeek at mac.com (rick jones) Date: Thu, 17 May 2007 19:28:14 -0700 Subject: [e2e] It's all my fault In-Reply-To: References: <4649A8B3.7080000@pobox.com> Message-ID: <5aa7c1c39c11621281fa959621a1f11c@mac.com> > Perhaps a compromise would be to reduce the number of intermediate > hops that can be specified from 40 to say 2. That reduces the > "traffic multiplier" available for DoS, but allows users to select > between a handful of paths. Two or three paths is enough diversity to > get a "pretty good" route if the default BGP route is temporarily > congested. I'll ask a naive question from the peanut gallery - I take that checking the source routes for duplicate IP's is insufficient to deal with the proposed problem? rick jones there is no rest for the wicked, yet the virtuous have no pillows From perfgeek at mac.com Thu May 17 19:42:32 2007 From: perfgeek at mac.com (rick jones) Date: Thu, 17 May 2007 19:42:32 -0700 Subject: [e2e] It's all my fault In-Reply-To: References: Message-ID: <6764e858e02aebbd46e396bffaf6c86f@mac.com> > I'm advocating very loose source routing here. The key to this is > diversity. That diversity can come from knowing one host on each of > several nearby ISPs, to "force" your traffic onto that AS. 
Another naive question from the peanut gallery - can one actually do loose source routing with _hosts_ as the IPs in the list rather than routers? Or perhaps I should say with "customer hosts"? I ask because elsewhere it is suggested (as a BCP, I think) that ISPs should filter out IP datagrams coming from (customer) hosts attached to their networks which have as their source IP an IP that isn't part of the ISP's network. This to deal with source IP address spoofing...

rick jones
Wisdom teeth are impacted, people are affected by the effects of events

From day at std.com  Thu May 17 18:56:50 2007
From: day at std.com (John Day)
Date: Thu, 17 May 2007 21:56:50 -0400
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: 

At 18:33 -0400 2007/05/17, Tom Vest wrote:
>On May 17, 2007, at 4:58 PM, Vadim Antonov wrote:
>>
>>On Thu, 17 May 2007, Greg Skinner wrote:
>>
>>>DARPA-funded research provided computing resources upon which email,
>>>USENET, Unix, etc. were extended and popularized.
>>
>>DARPA didn't create those resources from the thin air. They took it from
>>somebody first. Your statement is an example of "What is seen and what is
>>not seen" fallacy.
>>
>>--vadim
>
>Okay, what is seen:
>
>The universe of telecom facilities was owned and operated through
>the vehicle of adjacent, non-overlapping territorial monopolies for
>at least 4-5 decades leading up to the 1970s -- either as the result
>of a market outcome (e.g., in the US), or of subsequent movers
>observing how things played out in the earliest telecom markets
>(e.g., in the US). That said, the raw inputs required for the
>Internet to emerge (telecom facilities, technology, clever people,
>etc.) were widely distributed throughout the world in the 1970s and
>1980s. The same laws of physics that permitted T-carrier technology
>and 4ESS switches to work in the United States from 1975 on also
>applied everywhere else on Earth.
>
>In the US alone, the advent of the latter technologies was
>accompanied by regulatory changes (60 FCC 2D / 1976, which compelled
>the incumbent territorial facilities monopoly owner to sell T-1

I would dispute this. The technologies were not developed in the US alone. The US may have been the one place where quite by accident there was an organization willing to provide a place for it and grossly overprovision the technology so that its flaws were not immediately apparent. The US may have also had sufficient domestic facilities for the technology to reach a critical mass. But all the smart people were not in the US.

>circuits to 3rd parties even when those parties intended to use them
>for commercial purposes) which made it possible for someone other
>than an incumbent facilities owner to provision telecom
>"infrastructure", manage it independently, and use it for any
>purpose that they saw fit. In the US alone, the incumbent facilities
>owner's efforts to squelch the "invidious bypass" that this new
>technology made possible (i.e., the Consumer Communications Reform
>Act of 1978, aka "the Bell Bill") were quashed.

Be careful here. This is a pretty sugar-coated version of what happened. None of this would have happened if ARPA, NPL, and IRIA had not proven that the technology worked by 1972 and that there was another way to do networks. This led every major phone company to show that they could do it too. So that by 1977, there were several commercial packet networks in the world.

Had they been left to their own devices the phone companies would never have built a packet network.
When they did, they still would not have built one friendly to computers. That was forced on them by the computer companies.

The computer companies saw this as a way to go after a market that the phone companies were ill-suited to pursue, and the phone companies very quickly saw that the organization that was being adopted relegated them to a commodity business.

>
>Eventually -- sometimes many years later -- other regulatory
>jurisdictions followed a similar path, and the Internet started
>growing in those places as well. Eventually -- generally decades
>later -- in places where such changes never occur(red), some green
>field bypass telecom facilities (e.g., wireless) began to provide
>(at the moment, grossly inferior) options similar to those created
>by the aforementioned regulatory interventions. In the mean time,
>much of the Internet service that was available in the
>"unreconstructed" parts of the world arrived in the form of "service
>imports", i.e., services provided by offshore operators based in one
>of the infrastructure-friendly jurisdictions.
>
>Something else that is seen:
>
>Empirical evidence supporting the story above is visible in the
>global distribution of autonomous systems (using the country code
>and/or org fields to localize each to a particular country). Since
>ASes are tools for managing multihoming, and multihoming is only
>technically possible where telecom facilities are overlapping, or
>fungible (i.e., available in fractional bits and pieces as
>"infrastructure" that can be managed independently from the
>facilities provider), and available on commercially reasonable
>terms, this distribution makes perfect technical sense. Places with
>more ASes generally have more Internet users, devices etc., all
>things (population, GDP, geography, number of years providing
>Internet service, etc.) remaining equal.
>
>What is unseen?

I am afraid that you have fallen into a trap that historians are quite familiar with: looking at the events and seeing the sequence as almost foreordained, as if it was a very deterministic sequence that could not have turned out any other way. I am afraid that a closer reading of the events will reveal that much of it is quite accidental. And the inflection points sometimes hinge on very small, seemingly unimportant events. What you see is more the result of a random walk than the deus ex machina of history.

Take care,
John

From braden at ISI.EDU  Thu May 17 20:53:17 2007
From: braden at ISI.EDU (Bob Braden)
Date: Thu, 17 May 2007 20:53:17 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <20070517164517.GA45601@hut.isi.edu>
References: <20070516234931.EA264872D5@mercury.lcs.mit.edu>
Message-ID: <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu>

This thread has contained an awful lot of silliness, ignorance, mythology, and polemic. Let's move on to something more constructive.

Bob Braden

From day at std.com  Thu May 17 21:07:23 2007
From: day at std.com (John Day)
Date: Fri, 18 May 2007 00:07:23 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <20070518005535.AEC78872E5@mercury.lcs.mit.edu>
References: <20070518005535.AEC78872E5@mercury.lcs.mit.edu>
Message-ID: 

At 20:55 -0400 2007/05/17, Noel Chiappa wrote:
> > From: Vadim Antonov
>
> > There was no market for IP routers... why?
>
>Because in the beginning (late 70's) there was no Internet to connect to.
>There's no economic incentive for someone to go out and buy/install/use a
>wide-area networking protocol when there's no wide-area network to connect
>to.
>Until the government jump-started it by setting up the initial Internet,
>there was no movement at all to an inter-organization network.
>
>Local-area networking protocols did get designed and sold (e.g. Banyan,
>Novell) for intra-organization use, but they weren't designed for
>inter-organization use (as TCP/IP was, from the start - albeit not as well as
>it ought to have been).
>
>In fact, what private industry was doing at that point in time was 180
>degrees out of phase with the goal of ubiquitous inter-organizational
>networking, actually. Every computer company (IBM, DEC, etc.) had their own
>protocol family (SNA, DECnet, etc.), and their idea was to lock their
>customers into their (incompatible) protocol family. Letting everyone talk
>to everyone was absolutely not in their minds. Not only that, they didn't
>even want everything speaking over an industry standard protocol (so that you
>could easily add in boxes from vendor B, in a company which heretofore had
>all boxes from company A).
>
>Which is not to say we might not have gotten there eventually. But the
>government's intervention definitely sped things up greatly.

There were several companies I could name that wanted to and built prototype routers in the early 80s, but their marketing depts "knew" there was no market for them and wouldn't even try to sell them. Cisco got it going because there was enough critical mass of Internet nodes in the Bay Area that weren't just universities that they had 100 customers before they needed to go for funding. Several groups saw the market coming and knew it was there, but as usual marketing has only 20-20 hindsight and 1-1 foresight.

> > the principle of storing and forwarding chunks of discrete data was in
> > commercial use long before Baran's work - it was common since Victorian
> > times in telegraph networks.
>
>What about the postal system? That stores and forwards discrete chunks too.
>
>But I reject those predecessors as being that meaningful because what was
>unique about Baran's work was that he came up with the idea of breaking up
>users' messages into smaller pieces, and forwarding the pieces independently
>- something nobody before him had thought of.

You know, I am still not sure of this. I have read Baran's reports and I can't tell if he is describing packet switching as in the ARPANet or packet switching as in CYCLADES. Given that everything Baran was involved in after the reports is more the former than the latter, I am tending to the conclusion that Baran and Davies independently invented packet switching. (NPLnet was definitely more like ARPANet.) But the kind of connectionless networking and clean separation between Network and Transport seems to have come from Pouzin. CYCLADES had a very clean distinction between CIGALE and TS which the ARPANet did not have. The ARPANet was more like X.25 than IP.

>
>And if you think it's that obvious, try reading Kleinrock's contemporaneous
>work on queueing theory - it's all in terms of complete messages. Lots of
>great ideas are "obvious" in hindsight.

I agree. Kleinrock's thesis is definitely in the "beads-on-a-string" mindset.

Picking up on what I said above, it is also interesting that everything Larry Roberts did after the ARPANet was more in the connection-oriented packet switching mode than the connectionless. But this kind of transition in thought is common when a paradigm shift occurs.
Some never leave the old paradigm; the very early ones usually have a foot in both camps because they are still sorting it out; and those who come just a little later may be fully in the new model.

> > May it be because there was no Intel with 8086 and no IBM with PCs, and
> > no Bill Gates with Windows, and no Apple?
>
>Actually, the initial breakout of Internet growth occurred before all those
>events. I started selling IP routers when basically everyone I sold to was
>using time-sharing systems running on DEC hardware.

Noel is right. It was the rise of workstations and LANs, not the PC, that drove it.

>
>The PC and uSoft drove the later rapid growth, yes, but that was later.

Agreed.

> > In fact, the Internet was impossible without transistors and
> > minicomputers; and their availability is what made the Internet possible.
>
>Yeah, but they are so basic you could just as well make the same statement,
>but replace "transistors and minicomputers" with "electricity and quantum
>mechanics".

Agreed. It is interesting if one looks at "data comm" and how SNA was defined such that it didn't tread on the phone company and the phone company didn't tread on it. Networking happened when minicomputers came along and were cheap enough that they could be dedicated to "non-productive" work, i.e. not running user applications but just supporting them. This put computer and phone companies in direct competition. The trouble was that a peer layered Internet architecture with an end-to-end Transport layer was a major threat to both "monopolies": the peer architecture prevented a transition path for IBM, and the Transport layer relegated the phone companies' core business to a commodity.

> > .. the market for "the Internet" was created not by availability of
> > TCP/IP networks, but by the lowly BBSes, e-mail, and USENET. Neither of
> > which depended on anything developed by DARPA-funded research.
>
>An interesting list. Let's look at a few in more detail.
>
>Intra-machine e-mail (the first form to appear) originated (as far as we
>can tell) on SDC's Q32 and MIT's CTSS - both created with government funding.
>Inter-machine e-mail appears to have originated with the ARPAnet.

Agreed, and I was telecommuting using the Net and email by 1976.

>
>USENET ran on Unix systems. Unix was heavily influenced by the time Thompson
>and Ritchie spent working on the Multics project - another DARPA project.
>(Although it turns out the strongest technical influence on Unix was actually
>the earlier CTSS system, but that was also government funded.)

I have always described UNIX as the amount of Multics one could get on a PDP-11/45. I always thought we should have redone the exercise for 68000s! ;-)

>
>Not that it's all government: e.g. the personal computer, the heart of most
>BBSes, only took off after the creation of the microprocessor, the Intel
>4004 being the first. (Although some BBSes existed before that, running on
>mini-computers.) But to say government had no role is not correct.

If government had not been grossly overprovisioning it and turning a blind eye to the "abusive" use of its resources, there would never have been an Internet. I hate to think what would have happened if at any time from 1970 to 1990 some crusading journalist had figured out all of the non-DoD activities going on on the ARPANet/Internet and done an expose! The things we were doing! The waste of taxpayer dollars! The nice thing about high tech then was they didn't really understand it.
> > Claiming the impossibility of the Internet without the contribution of
> > a particular research based on the ideas which were known for over a
> > hundred years and only waited availability of technology to get
> > implemented in automated form is, well, stretching the truth very
> > thinly.
>
>I never said it was "impossibl[e]". I merely said that government funding had
>a role in making it happen, and further (see above) probably made it happen a
>lot faster than it would have otherwise.

I think it nearly was impossible. First, corporations don't do research that is as far out from showing a benefit, or as risky, as the ARPANet was. (Actually, governments don't any more either, for the most part.) Second, the mindset of corporations would not have taken the risk on something that risky. Third, you should remember, as I said above, the ARPANet was more like X.25 than IP. But it was so grossly overprovisioned compared to the other experimental networks that you couldn't see the limitations. (NPL and CYCLADES had 9.6K lines whereas the ARPANet was 56K.)

>
>Speaking of "a lot faster", it's interesting to look at history and see how
>technological development/deployment speeds up during wartime - which is, of
>course, all government funding. The US Civil War (which greatly accelerated
>the deployment of telegraphs and railroads), World War I (the aeroplane),
>World War II (modern electronics). Now there's an interesting conjunction:
>war and technological development...

I have another one: intellectual flowerings, the kind of periods of new thinking, happen after mass extinctions. The Renaissance comes after the Black Death, the Enlightenment after the Thirty Years' War, etc.

> > What *is* a well-established fact is that monopoly in every area of
> > human endeavour leads to stagnation.
>
>I think most of us generally agree with you on that.
>
> > it is a vastly different network, too. With tons of band-aids and
> > workarounds and ugly hacks needed to keep it running despite
> > short-sighted decisions made by the original designers.
>
>Again, I think most of us would agree that the original design had some
>significant areas where it was not developed enough. (Why that happened is a
>whole separate discussion.)

Agreed. Basically what we have is an unfinished demo.

>
>However, the important observation is that some of us think that innovations
>like moving the path-selection out of the network core, and other similar
>things, are a good way to get rid of some of those "band-aids and workarounds
>and ugly hacks" - provided that we can make them economically viable, of
>course.

It seems that there is a large groupthink headed in that direction. Is it another imitation of the Faber Marching Band? We will see.
Take care,
John

From Jon.Crowcroft at cl.cam.ac.uk  Thu May 17 23:40:42 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Fri, 18 May 2007 07:40:42 +0100
Subject: [e2e] fault apportionment and mitigation
In-Reply-To: <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu>
References: <20070516234931.EA264872D5@mercury.lcs.mit.edu> <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu>
Message-ID: 

so what might be interesting would be to hear about DDoS mitigation and detecting sources of ddos (note some dos mitigation doesn't require one to detect/isolate and take down sources - some ISPs have told us (in the CRN DoS working group) that they don't care so much about dos traffic traversing their net (particularly ones with no actual servers attached:) as others do - some questions though:

botnets -
i) are they clustered on certain ISPs/ASes, and
ii) do they tend to come from mostly homogeneous sets of users/machines? (e.g. large pools of machines in big businesses like insurance companies who leave 10s of 1000s of systems up at night and don't run much in the way of security updates, or is it loads of mom&pop home windows 98 systems:)
iii) how often are attacks sourced from small numbers (even 1) of big fast machines on a GigE or 10GigE?
iv) dos target: is it mainly servers or is it as often topological attacks?
v) ditto scanning
vi) when ISPs shut things down near a source, what is the sequence of take-down actions (detect/inform/warn/blackhole etc etc) and what are the costs of false positives?
vii) how often is source spoofing an issue (e.g. would loose source routing really make it much worse?:-)

on triffic engineering (I'm sure all ISPs are triffic at engineering:):
a) how do ISPs engineer customer/provider relationships?
b) what are the economics in customer/provider bills of not meeting SLAs?
c) what would make BGP failover work fast enough to not break VOIP, IPTV, etc?

on mythology:
XVII) what are the similarities between Marvel Comic and Homeric heroes?
XVIII) are the lacking parental relationships of jor el/clark kent/superman (DC) and spider man (peter parker, marvel) archetypes for the demi-god status/origins of pre-hellenic heroes, or are they more reflective of the randomness of the gods/families of primitive societies like 1500 BC asia minor and 1940s america?

In missive <5.1.0.14.2.20070517205046.0375a900 at boreas.isi.edu>, Bob Braden typed:

 >>This thread has contained an awful lot of silliness, ignorance, mythology,
 >>and polemic. Let's move on to something more constructive.

by the way i thought there actually were several threads in there that were also interesting, useful, and thoughtful, but they got drowned out:)

cheers
jon

From tvest at pch.net  Fri May 18 02:29:50 2007
From: tvest at pch.net (Tom Vest)
Date: Fri, 18 May 2007 05:29:50 -0400
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: 

On May 17, 2007, at 9:56 PM, John Day wrote:
> At 18:33 -0400 2007/05/17, Tom Vest wrote:
>> On May 17, 2007, at 4:58 PM, Vadim Antonov wrote:
>>
>>> On Thu, 17 May 2007, Greg Skinner wrote:
>>>
>>>> DARPA-funded research provided computing resources upon which
>>>> email, USENET, Unix, etc. were extended and popularized.
>>>
>>> DARPA didn't create those resources from the thin air. They took
>>> it from somebody first. Your statement is an example of "What is
>>> seen and what is not seen" fallacy.
>>>
>>> --vadim
>>
>> Okay, what is seen:
>>
>> The universe of telecom facilities was owned and operated through
>> the vehicle of adjacent, non-overlapping territorial monopolies
>> for at least 4-5 decades leading up to the 1970s -- either as the
>> result of a market outcome (e.g., in the US), or of subsequent
>> movers observing how things played out in the earliest telecom
>> markets (e.g., in the US). That said, the raw inputs required for
>> the Internet to emerge (telecom facilities, technology, clever
>> people, etc.) were widely distributed throughout the world in the
>> 1970s and 1980s. The same laws of physics that permitted T-carrier
>> technology and 4ESS switches to work in the United States from
>> 1975 on also applied everywhere else on Earth.
>>
>> In the US alone, the advent of the latter technologies was
>> accompanied by regulatory changes (60 FCC 2D / 1976, which
>> compelled the incumbent territorial facilities monopoly owner to
>> sell T-1

> I would dispute this. The technologies were not developed in the
> US alone. The US may have been the one place where quite by
> accident there was an organization willing to provide a place for it
> and grossly overprovision the technology so that its flaws were not
> immediately apparent. The US may have also had sufficient domestic
> facilities for the technology to reach a critical mass. But all
> the smart people were not in the US.

Actually that's exactly what I said: telecom facilities, technology, and clever people were in abundance in many places. I guess I should have separated these into two lists -- things that are presumably uniform globally (distribution of smarts, laws of physics / "how things work") and things that are not uniformly distributed but were available in many places (dense telecom facilities). I didn't say anything (nor intend to make any specific point) about where T-carrier or 4ESS technologies, or any other particular technologies, were invented.

>> circuits to 3rd parties even when those parties intended to use
>> them for commercial purposes) which made it possible for someone
>> other than an incumbent facilities owner to provision telecom
>> "infrastructure", manage it independently, and use it for any
>> purpose that they saw fit. In the US alone, the incumbent
>> facilities owner's efforts to squelch the "invidious bypass" that
>> this new technology made possible (i.e., the Consumer
>> Communications Reform Act of 1978, aka "the Bell Bill") were quashed.

> Be careful here. This is a pretty sugar-coated version of what
> happened. None of this would have happened if ARPA, NPL, and IRIA
> had not proven that the technology worked by 1972 and that there was
> another way to do networks. This led every major phone company to
> show that they could do it too. So that by 1977, there were
> several commercial packet networks in the world.

I agree with you entirely about demonstrating demand -- but because I agree with you I wonder why the success of R&E / university-based network experiments in (other) places didn't lead to an explosion of demand (and hence supply) in all associated host countries/markets. In some places -- I use the US as an example only because I believe it was "first", not "only" -- the demonstration was followed by some critical changes in telecom rules, changes which permitted non-PSTNs to play a big role in Internet growth and dynamism going forward.
In many other places, when the Internet was ready to "outgrow the campus" it was largely reabsorbed into the PSTN service platform.

I agree with you entirely about the existence of a few small commercial packet network companies, esp. *after* 1976 -- but because I agree with you (repeat the above) ... in the US non-PSTNs came to lead Internet development precisely because the rule changes made a big difference. Had the kind of rule changes exemplified by the 1976 (contingently, US) law not happened, and commercial packet network operators been forced to inefficiently self-provision facilities for every point-to-point network segment necessary to reach every customer, rather than efficiently leveraging existing capital stock and rising excess capacity created by advances in multiplexing technologies -- well, the Internet would have been a very different thing. Thanks to the magic of longitudinal comparison, we can get some idea of what kind of thing it might have been by looking at those countries where such rules were never implemented. Generally speaking, if you count things like number of Internet users, PCs, domains (the last being a good indicator of demand, not supply), it's not a very flattering comparison.

> Had they been left to their own devices the phone companies would
> never have built a packet network. When they did, they still would
> not have built one friendly to computers. That was forced on them
> by the computer companies.
>
> The computer companies saw this as a way to go after a market that
> the phone companies were ill-suited to pursue and the phone
> companies very quickly saw that the organization that was being
> adopted relegated them to a commodity business.
>
>>
>> Eventually -- sometimes many years later -- other regulatory
>> jurisdictions followed a similar path, and the Internet started
>> growing in those places as well. Eventually -- generally decades
>> later -- in places where such changes never occur(red), some green
>> field bypass telecom facilities (e.g., wireless) began to provide
>> (at the moment, grossly inferior) options similar to those created
>> by the aforementioned regulatory interventions. In the mean time,
>> much of the Internet service that was available in the
>> "unreconstructed" parts of the world arrived in the form of
>> "service imports", i.e., services provided by offshore operators
>> based in one of the infrastructure-friendly jurisdictions.
>>
>> Something else that is seen:
>>
>> Empirical evidence supporting the story above is visible in the
>> global distribution of autonomous systems (using the country code
>> and/or org fields to localize each to a particular country). Since
>> ASes are tools for managing multihoming, and multihoming is only
>> technically possible where telecom facilities are overlapping, or
>> fungible (i.e., available in fractional bits and pieces as
>> "infrastructure" that can be managed independently from the
>> facilities provider), and available on commercially reasonable
>> terms, this distribution makes perfect technical sense. Places
>> with more ASes generally have more Internet users, devices etc.,
>> all things (population, GDP, geography, number of years providing
>> Internet service, etc.) remaining equal.
>>
>> What is unseen?

> I am afraid that you have fallen into a trap that historians are
> quite familiar with: looking at the events and seeing the sequence
> as almost foreordained, as if it was a very deterministic sequence
> that could not have turned out any other way.
> I am afraid that a
> closer reading of the events will reveal that much of it is quite
> accidental. And the inflection points sometimes hinge on very
> small, seemingly unimportant events. What you see is more the
> result of a random walk than the deus ex machina of history.

Again, you completely misread me; we are in complete agreement. This is not a Whig History of Internet Development. Nowhere did I say (nor do I believe) that things turned out the way they did because omniscient US regulators clearly foresaw the results of the rule changes that they (coincidentally, contingently) happened to try first in 1976, and then again in 1984.

I *am* saying that those kinds of rule changes -- the ones that make it possible for many different kinds of people, institutions, commercial ventures, etc. to play with the critical inputs necessary to assemble IP networks -- which just happened to get tried quite early in the US -- ended up making a huge difference in how things turned out in this particular run of history. I am saying that the "Tussle in Cyberspace" that has done so much to drive innovation didn't get started until, and isn't really relevant except when/where, the "Tussle in Tele-space" resulted in a particular set of outcomes. I assert that empirical patterns backing these claims up are observable in the evolving contents of the routing table, at least over the last decade for which we have archived routing tables.

I was afraid that the repetition of "US" in my first post was going to elicit a visceral response like this. I apologize for not leading originally with caveats about radical contingency, the undeniable contributions made by universities outside the US, the subsequent contributions made by some PSTNs, some computer vendors, etc.

Tom

From tvest at pch.net  Fri May 18 02:57:44 2007
From: tvest at pch.net (Tom Vest)
Date: Fri, 18 May 2007 05:57:44 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu>
References: <20070516234931.EA264872D5@mercury.lcs.mit.edu> <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu>
Message-ID: <8397EAAD-D087-46C6-81EC-C51D13E41F9B@pch.net>

On May 17, 2007, at 11:53 PM, Bob Braden wrote:
>
> This thread has contained an awful lot of silliness, ignorance,
> mythology, and polemic. Let's move on to something more constructive.
>
> Bob Braden

There seems to be growing belief, if not consensus, that economics and institutional rules matter, i.e., they determine what problems can be solved, through what means, by whom, over what time horizon, etc. Given the apparent fact that more constructive, purely technical discussions have yet to produce solutions to quite a few pressing problems, I would personally rather see this thread get purged of the silliness, ignorance, etc. through discussion rather than merely getting dropped.

Tom

From avg at kotovnik.com  Fri May 18 03:46:45 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Fri, 18 May 2007 03:46:45 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: <20070517164517.GA45601@hut.isi.edu>
Message-ID: 

On Thu, 17 May 2007, Ted Faber wrote:

> On Wed, May 16, 2007 at 07:33:03PM -0700, Vadim Antonov wrote:
> > The real role of the government in the history of the Internet was
> > stalling, obfuscating, and supporting monopolism. If not for the
> > government-propped monopoly of AT&T, we'd see widely available commercial
> > data networks in early 80s, not in 90s - the technology was there.
>
> Governments do not support natural monopolies like the telecom network,
> because they have no role in those monopolies.

Now we've gotten into Alice's Wonderland, I guess. No role? Huh? The FCC does not regulate telcos? Since when? Can I dig a cable into the ground without asking various government busybodies first? What about the land grants and easements for the cable plants? What about patents? What about "lawful access" requirements? What about Public Utility Commissions and their price-fixing? NO ROLE???

> Significant economies of
> scale and high capital barriers to entry will shut other providers of
> similar services out of the market completely. Even without aggressive
> action by the providers, this leads directly to monopoly. That's a
> property of the market, not a government-imposed attribute.

Economics 101 - natural monopoly (aka single provider) and monopoly are not the same. By far. The "natural monopoly" cannot exploit consumers by raising prices or by reducing services using its "monopoly" position, because doing so will create an opportunity for entry by a smaller competitor.

Real monopolies which depend on government enforcement of their "rights" can deter competition and thus can exploit consumers.

In fact, scale is not an issue whatsoever. No matter how large a "naturally monopolistic" company is, it is always possible to borrow enough capital to create a comparably sized company. The history of the Internet fiber glut demonstrates that pretty conclusively.

> Furthermore, all recorded cases of natural monopoly have evidenced
> aggressive action; providers in natural monopoly situations crush
> competitors and resist changes to their market.

"Aggressive" is either a) violent or b) vigorous.

"B" means innovation, price cutting, and otherwise serving customers better. This is _good_. If somebody serves customers much better than anyone else could, then it is the best possible outcome for customers.

"A" means using actual or threatened violence to suppress competition. Because governments have a monopoly on legal violence, the only way to do "a" is to collude with the government and let the lawyers sic the judicial powers of the government (which has the guns) on the competition.

Note that "a" requires the government playing a central role.

> Why would they do differently?

Yes, why would they, when they can buy enough politicians to do a joe job on potential competition instead of competing fairly.

> If US telecom were really deregulated tomorrow - no requirements to
> share infrastructure, no limits on size, no service requirements -
> there'd be one phone company in a decade at the most.

I'm sure they told exactly the same nonsense when MCI tried to break the long-distance market open.

> It's hard to see how you can characterize this as monopoly protection.

Monopoly protection requires actual or threatened violence. In the modern world only governments do that at the large scale.

> You couldn't lease a T1 before the government made AT&T lease you one -
> an action I'm surprised you don't characterize as the government
> stealing AT&T's capital.

Yep. Considering that the government made AT&T a monopoly, that sure is a relief. You may want to actually read the text of the Kingsbury Commitment. Oh, and don't forget Bell's patent on a system which other people were developing at the same time.

> It's a lot more difficult to build a nationwide (to say nothing of
> worldwide) data network if you have to spend the capital to run the
> lines.

Hard, not impossible. Many companies have done that.
> Without the government(s) acting in direct conflict with monopoly interests
> by forcing access to the infrastructure and financing the development
> of the technology there would be no commercial Internet today. There
> might be one in decades, but it would cost more and be more constrained,
> IMHO.

Yeah, yeah. First, create a problem. Then valiantly wrestle with the problem (creating more problems along the way). That's the modus operandi of any federal agency I've seen so far.

> Now, I don't think that the government had a coordinated plan to create
> a new market, but without the (accidental) confluence of those actions,
> the Internet would be unlikely to emerge.

Governments do not create markets. They cannot. Markets are created by the acts of voluntary exchange between people, and precede governments by tens of thousands of years. (Heck, even apes do some trading among themselves.) What governments can do is destroy markets. Fortunately, the US is not so far gone down that road as was good ol' USSR.

--vadim

From avg at kotovnik.com  Fri May 18 04:17:45 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Fri, 18 May 2007 04:17:45 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: <464CD20E.8010701@reed.com>
Message-ID: 

On Thu, 17 May 2007, David P. Reed wrote:

> Are taxes collected and then spent different
> qualitatively from profits taken and then invested? Both are
> involuntary redirections of resources based on the power of one set of
> humans over another.

How exactly are profits "involuntary"? They result from people voluntarily buying goods. There is no other way for non-criminal companies to get profits.

> Is a democratic government morally better than a corporate entity
> created and protected by that government's police powers?

Democracy is a tyranny of the majority. Whether this is moral depends on your definition of morality.

The "corporate entities" are NOT created by the governments. The times when one couldn't form a company without permission from the King are, thankfully, past. In modern times governments merely register businesses to collect taxes from them.

As for the protection... heh. You may be surprised to learn that businesses in the US are protected mostly by other businesses (security companies, arbitration providers, insurers, etc.) and not by the government. There are five times more security guards than policemen in this country; and every commercial contract I've seen includes arbitration provisions. All of that simply because government protection sucks. Cases take years to go through courts. Police won't sit in a lobby and watch visitors. And even if they do, they have no obligation to actually protect any individual or business.

Yes, you heard me right. In the US the police have no duty to protect citizens. Neither do courts, DHS, or any other agencies.
The courts said so (see, for example, Warren v. District of Columbia, 444 A.2d 1 -- the "fundamental principle that a government and its agents are under no general duty to provide public services, such as police protection, to any particular individual citizen"). The real gem about police protection comes from Souza v. City of Antioch, 62 California Reporter, 2d 909, 916: "police officers have no affirmative statutory duty to do anything." Meaning that if a police officer helps you, it is only because he's a good guy, not because it is his duty.

Sorry to bust another collectivist myth.

--vadim

PS. If somebody thinks that this discussion is out of place, please say so. I certainly do not want to impose this on uninterested readers.

From tvest at pch.net  Fri May 18 04:25:30 2007
From: tvest at pch.net (Tom Vest)
Date: Fri, 18 May 2007 07:25:30 -0400
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: <9787CD02-A02A-4818-9A68-D394F2CE5A0D@pch.net>

On May 18, 2007, at 6:50 AM, Vadim Antonov wrote:

> On Thu, 17 May 2007, Randy Bush wrote:
>
>> it's also very convenient if the benefit is gained close to
>> where/by-whom the costs are incurred, a point often overlooked.
>>
>> randy
>
> Heh. That's the point squarely ignored by most of the so-called
> science of economics. Every time they average utilities to derive
> some metric of "common good" or "utility for the society" or whatever
> it is called, they ignore the basic fact of life that people tend to
> act in their own interests.
>
> --vadim

Thank you for the opportunity to revisit this. Your willingness to dismiss the net effects of such considerations so casually is itself a textbook example of the "What is seen and what is not seen" fallacy. I won't hold it against you, since the whole notion is absurd -- nothing more than a rhetorical attempt to push an interlocutor to prove a negative, which is logically problematic at best. Offlist, I'll be happy to use history and empirical data to try to illustrate why some (not all) regulatory interventions are good (not bad) -- but only after you demonstrate to me why arguments that benchmark the real world against some fantastical, never-to-be-seen world of pure interests should merit any attention at all.

TV, eschewing the polemical forevermore after this

From avg at kotovnik.com  Fri May 18 05:09:39 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Fri, 18 May 2007 05:09:39 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: <9787CD02-A02A-4818-9A68-D394F2CE5A0D@pch.net>
Message-ID: 

On Fri, 18 May 2007, Tom Vest wrote:

> Your willingness to dismiss the net effects of such considerations so
> casually is itself a textbook example of the "What is seen and what is
> not seen" fallacy.

Tom - you may want to look up what this term means (hint: it is the title of an essay by Bastiat). You certainly make no sense applying it in this context.

My statement was merely a very simple observation that averaging, comparing, or otherwise aggregating non-additive quantities (such as personal utilities) is mathematically invalid. Meaning that all economic models based on this operation are pure unadulterated junk. From a purely logical point of view, without any political bias.

The non-additivity of utilities is easily demonstrated with a simple experiment: let's say I hit you on the head with a cluebat.
Does our aggregate utility (mine increasing because of moral satisfaction, yours decreasing because I have a heavy hand and extensive experience with spanking and striking implements) increase or decrease? What is the measure of "social good" of this act? How does one determine it?

--vadim

From tvest at pch.net  Fri May 18 06:11:12 2007
From: tvest at pch.net (Tom Vest)
Date: Fri, 18 May 2007 09:11:12 -0400
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: <9834DE08-A968-46D8-A742-CA82E5E48345@pch.net>

On May 18, 2007, at 6:46 AM, Vadim Antonov wrote:

> On Thu, 17 May 2007, Ted Faber wrote:
>
>> On Wed, May 16, 2007 at 07:33:03PM -0700, Vadim Antonov wrote:
>>> The real role of the government in the history of the Internet was
>>> stalling, obfuscating, and supporting monopolism. If not for the
>>> government-propped monopoly of AT&T, we'd see widely available
>>> commercial data networks in early 80s, not in 90s - the technology
>>> was there.
>>
>> Governments do not support natural monopolies like the telecom
>> network, because they have no role in those monopolies.
>
> Now we've gotten into Alice's Wonderland, I guess. No role? Huh? The
> FCC does not regulate telcos? Since when? Can I dig a cable into the
> ground without asking various government busybodies first?

In the state of nature, if the ground wasn't held by someone else, then you could do whatever you wanted with it. In the state of nature, if someone else controlled it, and didn't want you to dig, then you could try to persuade them by some means. In the state of nature, if you couldn't persuade them, but you were stronger than they were, then maybe you could do it anyway. People didn't like that risk, so they created governments, and property rights, and other kinds of laws and regulations too.

Perhaps in the fantastical, never-to-be-seen world of pure markets, where property rights are miraculously respected even though no other laws or law enforcement mechanisms exist, things would be different. I like speculative fiction as much as the next guy; I just demand a slightly more plausible plot line to achieve the necessary suspension of disbelief...

> What about the land grants and easements for the cable plants? What
> about patents? What about "lawful access" requirements? What about
> Public Utility Commissions and their price-fixing? NO ROLE???
>
>> Significant economies of
>> scale and high capital barriers to entry will shut other providers of
>> similar services out of the market completely. Even without
>> aggressive action by the providers, this leads directly to monopoly.
>> That's a property of the market, not a government-imposed attribute.
>
> Economics 101 - natural monopoly (aka single provider) and monopoly
> are not the same. By far. The "natural monopoly" cannot exploit
> consumers by raising prices or by reducing services using its
> "monopoly" position, because doing so will create an opportunity for
> entry by a smaller competitor.

Ahem, apparently a review of Economics 100 is in order here.

> Real monopolies which depend on government enforcement of their
> "rights" can deter competition and thus can exploit consumers.
>
> In fact, scale is not an issue whatsoever. No matter how large a
> "naturally monopolistic" company is, it is always possible to borrow
> enough capital to create a comparably sized company. The history of
> the Internet fiber glut demonstrates that pretty conclusively.
Ahem, the capex involved in building point-to-point facilities between 400-500 buildings is somewhat less than the cost of building the same between 4-5 million (or 40-50 million, or 400-500 million) premises.

>> Furthermore, all recorded cases of natural monopoly have evidenced
>> aggressive action; providers in natural monopoly situations crush
>> competitors and resist changes to their market.
>
> "Aggressive" is either a) violent or b) vigorous.
>
> "B" means innovation, price cutting, and otherwise serving customers
> better. This is _good_. If somebody serves customers much better than
> anyone else could, then it is the best possible outcome for customers.
>
> "A" means using actual or threatened violence to suppress competition.
> Because governments have a monopoly on legal violence, the only way to
> do "a" is to collude with the government and let the lawyers sic the
> judicial powers of the government (which has the guns) on the
> competition.
>
> Note that "a" requires the government playing a central role.

Sigh. The conveniently excluded middle here is that commercial/financial/market power is fungible -- just because it doesn't usually manifest in the form of armies doesn't mean that it can't be used to effect the exact same kind of outcomes that are commonly associated with violence and armies -- or that it can't buy armies on occasion. Banishing governments wouldn't banish power or the bad behavior that results from too much power; it would just align the distribution of such things to the distribution of wealth -- which is a cumulative fact. The strong would still do what they willed, the weak would still do what they had no choice but to do. If you can tell me what, in a pure market world, would prevent powerful market actors at time T from "fixing things" to assure their continued dominance at time (T+x), then maybe I'll join the party.

>> Why would they do differently?
>
> Yes, why would they, when they can buy enough politicians to do a joe
> job on potential competition instead of competing fairly.

Let's stipulate that all governments are somewhat corrupt all of the time. Are you arguing that no private entities are ever corrupt any of the time -- or rather are you saying that all should be fair in love, war, and competition, i.e., that the ideas of fair/unfair and honest/corrupt are simply meaningless in an all-market world? I'm not sure if I'd call that fatalism or nihilism, but I'm pretty sure that whatever adjective applies wouldn't be very complimentary.

>> If US telecom were really deregulated tomorrow - no requirements to
>> share infrastructure, no limits on size, no service requirements -
>> there'd be one phone company in a decade at the most.
>
> I'm sure they told exactly the same nonsense when MCI tried to
> break the long-distance market open.
>
>> It's hard to see how you can characterize this as monopoly
>> protection.
>
> Monopoly protection requires actual or threatened violence. In the
> modern world only governments do that at the large scale.

This is just proof by assertion, with the particular assertions in question coming from orthodox libertarian ideology.

>> You couldn't lease a T1 before the government made AT&T lease you
>> one - an action I'm surprised you don't characterize as the
>> government stealing AT&T's capital.
>
> Yep. Considering that the government made AT&T a monopoly, that
> sure is a relief. You may want to actually read the text of the
> Kingsbury Commitment.
> Oh, and don't forget Bell's patent on a system which other
> people were developing at the same time.

If AT&T became a monopoly only because of government intervention, then one wonders why the monopoly service provision model was 100% globally uniform before the mid-1970s. If the explanation is that governments were responsible everywhere, then the fantasy world / speculative fiction rule applies. Don't declare that governments are the problem; persuade me why and how exactly your preferred hypothetical world would be better than the one we actually live in.

>> It's a lot more difficult to build a nationwide (to say nothing of
>> worldwide) data network if you have to spend the capital to run the
>> lines.
>
> Hard, not impossible. Many companies have done that.

Can you provide one or more examples of a country that possesses two or more completely physically independent telecom facilities platforms that are equivalent in scope/scale and provide equivalent (substitutable) services? I've built large-scale IP networks in a lot of countries, and I haven't observed any place like that yet.

>> Without the government(s) acting in direct conflict with monopoly
>> interests by forcing access to the infrastructure and financing the
>> development of the technology there would be no commercial Internet
>> today. There might be one in decades, but it would cost more and be
>> more constrained, IMHO.
>
> Yeah, yeah. First, create a problem. Then valiantly wrestle with the
> problem (creating more problems along the way). That's the modus
> operandi of any federal agency I've seen so far.
>
>> Now, I don't think that the government had a coordinated plan to
>> create a new market, but without the (accidental) confluence of those
>> actions, the Internet would be unlikely to emerge.
>
> Governments do not create markets. They cannot. Markets are created
> by the acts of voluntary exchange between people, and precede
> governments by tens of thousands of years. (Heck, even apes do some
> trading among themselves.)

In a pure marketplace of ideas, I might try to silence someone who believed that -- or more likely I would need to arm and physically defend myself against the "vigorous" efforts of marketarians to silence me. Unless perhaps I lived in that special market-only universe described above, where opinions and property values are universally regarded as sacrosanct, and no one ever violates that maxim.

> What governments can do is destroy markets. Fortunately, the US is not
> so far gone down that road as was good ol' USSR.

I think I'll declare a Law of Conservation of Markets here: markets are never created or destroyed, but merely change form in accordance with the rules imposed by external (law) and internal (market power) agencies.

TV, really the last polemic I promise

From jnc at mercury.lcs.mit.edu  Fri May 18 06:25:23 2007
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri, 18 May 2007 09:25:23 -0400 (EDT)
Subject: [e2e] It's all my fault
Message-ID: <20070518132523.E370D86AE8@mercury.lcs.mit.edu>

> From: Vadim Antonov

Again, apologies to all for the wandering nature...

> natural monopoly (aka single provider) and monopoly are not the same.
> .. The "natural monopoly" cannot exploit consumers by raising prices or
> by reducing services using its "monopoly" position because doing so
> will create an opportunity for entry by a smaller competitor.
> ...
> No matter how large a "naturally monopolistic" company is, it is always
> possible to borrow enough capital to create a comparably sized company.

I think you're incorrect here. First, I think of three categories of monopoly, not two:

- statutory monopoly, i.e. one instantiated by the government
- 'ordinary' monopoly, i.e. one where one organization has taken over a market, and uses its economic power to maintain that monopoly
- natural monopoly, i.e. one where it makes no economic sense to have competition (short of incompetence or malfeasance), e.g. a local household electricity distribution network

(Of course, one can further subdivide each of these into two subsets, ones which are well-run and don't abuse their power to milk their customers, and those which aren't, but that's an orthogonal axis.)

Second, abusive 'ordinary' monopolies aren't that easy to break, especially if the market is large, and the monopolist has accumulated considerable economic resources thereby. The monopolist's economic power is used to make it impossible for a new competitor to get a foothold. The usual tactic is to drop prices to below what it costs to provide the goods/service, i.e. it's impossible for the competitor to make a profit. The monopolist can usually run this way for a long time, e.g. years. (Heck, even non-monopolist large companies, ones which are losing money through simple incompetence, can last a long time: look at many airlines, and US car manufacturers.)

Most financiers are not interested in getting into a situation where they have to i) invest a really large amount of money, ii) wait years to get any return at all (remember, prices <= cost, so no profits), and iii) probably cannot get the same rate of return as they could if they deployed their resources in some less-cutthroat part of the economy.

Another possible tactic, one specialized to communication networks, is to refuse interconnection. The whole point of a communication network is to communicate: if by signing up to provider B, which is much smaller than provider A, you can only talk to provider B's clients, most people will sign up with A. B soon withers away. (A similar effect is what eliminated all competing protocols to TCP/IP.) This may lose provider A the money they could charge B for interconnecting with A, but if their goal is to maintain the monopoly, they will accept that loss.

For others, just study Microsoft's business tactics over the years...

>> Furthermore, all recorded cases of natural monopoly have evidenced
>> aggressive action; providers in natural monopoly situations crush
>> competitors and resist changes to their market.

> "Aggressive" is either a) violent or b) vigorous.
> "B" means innovation, price cutting, and otherwise serving customers
> better. This is _good_.

No, again, there's a third alternative you aren't listing: "take all possible business actions it takes to drive the new competitor out of business", e.g. deliberately losing money on sales.

Yes, if someone can gather the capital from people foolish enough to try and create a competitor, as a result there may be a period, while the monopolist is vanquishing its attacker, where i) products improve faster, and ii) prices are lower. However, in the end the attacker will run out of resources, fold, and then the monopolist is back in business - and will have deterred potential competitors for a generation or more, who see what happened to this one.
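The "B soon withers away" dynamic is easy to make concrete with a toy simulation. To be clear, this is only an illustrative sketch, not a model anyone in this thread has proposed: the 1000-user market, the 90/10 starting split, the 10% per-round churn, and the rule that a churning user joins whichever network offers the greater reachability are all invented assumptions.

    # Toy model: a large provider A and a small provider B, with and
    # without interconnection. All parameters are illustrative assumptions.

    def simulate(total=1000, b0=100, interconnected=False, rounds=25):
        """Return (a, b) subscriber counts after `rounds` of churn."""
        a, b = total - b0, b0
        for _ in range(rounds):
            movers = total // 10                 # 10% of users churn per round
            a_loss = round(movers * a / total)   # churners leave pro rata
            b_loss = movers - a_loss
            a -= a_loss
            b -= b_loss
            if interconnected:
                # Both networks reach everyone, so reachability can't
                # differentiate them; churners split evenly.
                a += movers // 2
                b += movers - movers // 2
            else:
                # No interconnection: a churner can only reach the
                # subscribers of the network they join, so every churner
                # picks the currently larger network.
                if a >= b:
                    a += movers
                else:
                    b += movers
        return a, b

    print(simulate(interconnected=True))    # B survives, drifting toward an even split
    print(simulate(interconnected=False))   # B's subscriber count falls to zero

Run it and the interconnected case settles toward parity, while the non-interconnected case drives B to zero within about ten rounds - which is the whole of the argument above, in twenty-odd lines.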
On a more general note, through long experience the West has discovered some of the pitfalls of capitalism - and they do exist - and created systems to ameliorate them. Totally free markets turn into jungles, alas. (In fact, a jungle is by definition the ultimate free market, in terms of competition.)

I can recommend a wonderful book, a colourful, fascinating and easy read (I found it as hard to put down as a crime novel, no mean feat for an economics history), which makes this point very clearly:

  John Steele Gordon, "The Scarlet Woman of Wall Street: Jay Gould, Jim
  Fisk, Cornelius Vanderbilt, the Erie Railway Wars and the Birth of Wall
  Street", Weidenfeld and Nicolson, New York, 1988

It describes the shenanigans in the US stock market in the 1860's and 1870's, and the events that led to the creation of the initial regulatory bodies in those markets (although periodic crashes since then, such as 1929, have shown how those regulations need to be tweaked as weaknesses appear). It also makes an interesting point: much of that regulation is self-regulation, not government-imposed (although there has come to be more of that recently). However, it is also clear that in some circumstances individuals, even banded together, can't impose the necessary regulation, and in those cases (and aggressive, abusive monopolists are one) government regulation may indeed be needed.

	Noel

From dpreed at reed.com  Fri May 18 06:29:58 2007
From: dpreed at reed.com (David P. Reed)
Date: Fri, 18 May 2007 09:29:58 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <19342.1179450356@lunacy.ugrad.cs.cmu.edu>
References: <19342.1179450356@lunacy.ugrad.cs.cmu.edu>
Message-ID: <464DAA56.3050405@reed.com>

Adam Thierer is hardly an unbiased expert. Cato is like Cheney - it hires people to prove what they already take on faith. Thierer has one point of view among many - one that starts with the idea that private property is the core of goodness.

That said, I have been called an anarcho-libertarian among many other names. Unfortunately for Cato, my view of liberty is closer to that of Hayek and the ACLU.

Dave Eckhardt wrote:
>> The universe of telecom facilities was owned and operated through the
>> vehicle of adjacent, non-overlapping territorial monopolies for at
>> least 4-5 decades leading up to the 1970s -- either as the result of
>> a market outcome (e.g., in the US), or of subsequent movers observing
>> how things played out in the earliest telecom markets (e.g., in the
>> US).
>
> Actually, as a result of anti-competitive government interventions:
>
> Unnatural Monopoly: critical moments in the development of the Bell System Monopoly
> Adam D. Thierer
> Cato Journal, Volume 14, Number 2, Fall 1994
> http://www.cato.org/pubs/journal/cjv14n2-6.html
>
> Dave Eckhardt

From davide+e2e at cs.cmu.edu  Fri May 18 06:54:49 2007
From: davide+e2e at cs.cmu.edu (Dave Eckhardt)
Date: Fri, 18 May 2007 09:54:49 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <464DAA56.3050405@reed.com>
Message-ID: <21733.1179496489@lunacy.ugrad.cs.cmu.edu>

> Adam Thierer is hardly an unbiased expert.

The part of his article you disagree with is...?
Dave Eckhardt

From tvest at pch.net  Fri May 18 07:02:37 2007
From: tvest at pch.net (Tom Vest)
Date: Fri, 18 May 2007 10:02:37 -0400
Subject: [e2e] It's all my fault
In-Reply-To: 
References: 
Message-ID: 

On May 18, 2007, at 8:09 AM, Vadim Antonov wrote:

> On Fri, 18 May 2007, Tom Vest wrote:
>
>> Your willingness to dismiss the net effects of such considerations so
>> casually is itself a textbook example of the "What is seen and what is
>> not seen" fallacy.
>
> Tom - you may want to look up what this term means (hint: this is a title
> of an essay by Bastiat). You certainly make no sense applying it in this
> context.
>
> My statement was merely a very simple observation that averaging,
> comparing, or otherwise aggregating non-additive quantities (such as
> personal utilities) is mathematically invalid. Meaning that all economic
> models based on this operation are pure unadulterated junk. From a purely
> logical point of view, without any political bias.
>
> The non-additivity of utilities is easily demonstrated with a simple
> experiment: let's say I hit you on the head with a cluebat. Does our
> aggregate utility (my increasing because of moral satisfaction, yours
> decreasing because I have a heavy hand and extensive experience with
> spanking and striking implements) increase or decrease? What is the
> measure of "social good" of this act? How does one determine it?
>
> --vadim

Thank you for making my previous, tongue-in-cheek hypothetical example so apropos. It helps my always-vulnerable reputation for prescience immensely ;-)

I agree with you that abstract utilities are silly, and that much in mainstream academic economics (and many other orthodoxies) is silly. I'm not the kind of orthodox economist (or anything else) that feels the need to defend the discipline, right or wrong. I guess that's why blanket assertions from any particular ideological orthodoxy tend to set me off. For that I apologize.

In this particular case, however, I do have something that economists lack: an empirically measurable unidimensional metric of output or value that I can use to benchmark the aggregate results of all of the things we're discussing. That metric is unique public routed IP addresses, some of which are multiplexed at the edges with NAT, etc. It's a meaningful scale, I believe, because it actually follows (both historically and longitudinally) the distribution of "addressable network resources" (users, devices, etc.) that are counted independently by institutions like the ITU -- provided of course one knows how to apply the kind of (historically evolving) conversion rules that IP analysts in RIRs and hostmasters within commercial IP networks actually use to do that conversion in real-world practice.

Granted, it's a tricky metric, and the continued use of pre-RIR address resources makes it trickier, but the latter represent less than 20% of what's in production now, and about 70 or so countries only came online after the establishment of the RIRs (and Route Views), so there are some questions that can be addressed with relatively high confidence -- or at least with much higher confidence than can be claimed in any other meta-policy-oriented Q&A context.

Being an Internet booster, I base my judgments about better vs. worse aggregate arrangements and outcomes on what gets more Internet "out there" faster, all things remaining equal.
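(To make that bookkeeping concrete, here is a minimal sketch of the kind of tally I mean - assuming a pre-flattened "prefix,origin-AS" text file rather than any real Route Views/MRT format; the file name and input format here are invented for illustration only:)

    # Hypothetical sketch: tally publicly routed IPv4 space and distinct
    # origin ASes from a flattened "prefix,origin_as" text file. This is
    # NOT a real Route Views format; a genuine MRT RIB dump would need a
    # proper parser.
    import ipaddress
    from collections import defaultdict

    def tally(path):
        prefixes = set()
        origins = defaultdict(set)   # origin AS -> announced prefixes
        with open(path) as f:
            for line in f:
                prefix, origin = line.strip().split(",")
                net = ipaddress.ip_network(prefix, strict=False)
                if net.version == 4 and not net.is_private:
                    prefixes.add(net)
                    origins[origin].add(net)
        # Collapse overlaps so a /24 covered by a /16 isn't counted twice.
        merged = ipaddress.collapse_addresses(prefixes)
        routed = sum(net.num_addresses for net in merged)
        return routed, len(origins)

    # routed, n_ases = tally("rib-flat.csv")   # hypothetical input file
    # print(routed, "routed IPv4 addresses announced by", n_ases, "ASes")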
In this case, more ASes -- which means more "independent" utilization of network facilities -- which in the real world means greater availability/affordability of fractional network "infrastructure" (i.e., circuits over a fiber, or two fibers in a bundle, but not necessarily the whole trench, conduit, and everything else) -- tends to correlate with more/faster Internet stuff over time, all things remaining equal.

It goes without saying that "more Internet" is not a universal absolute good. Perhaps there are countries where more Internet has actually been harmful, or come at some price which is too high by some other metric. I concede that my approach says nothing about such matters. The idea that the regulations that gave rise to rapid Internet development did violence to some abstract ideology about pure markets and property rights won't find much purchase with me, but I'd be happy to hear about real-world cases that others think might exemplify the hypothetical "too much Internet" outcome...

TV

From dpreed at reed.com  Fri May 18 07:24:05 2007
From: dpreed at reed.com (David P. Reed)
Date: Fri, 18 May 2007 10:24:05 -0400
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <54EF0EE260D79F4085EC94FB6C9AA56202F9EFC9@E03MVZ1-UKDY.domain1.systemhost.net>
References: <54EF0EE260D79F4085EC94FB6C9AA56202F9EFC9@E03MVZ1-UKDY.domain1.systemhost.net>
Message-ID: <464DB705.70902@reed.com>

Dirk - I have been thinking about this.

The point of the Internet was to achieve a network of networks, which seemed pleasantly obvious to anyone who had studied digital systems at the time since bits were bits and messages were messages. You are right that it wasn't to "beat Ma Bell" - but it also was the case that Ma Bell was not interested in solving the many problems. ATT was asked, but it could not be interested in serving an unproven need, one that was seen only by a few crazy people like Licklider, Engelbart, Taylor, etc. as important - I really recommend reading The Computer as a Communications Device to put yourself in those times.

I remember my Professor Joel Moses once asking me why we were wasting time on the Internet, when all you needed to access any remote computer was a dialup modem, and once AT&T delivered faster connections to homes (I think the expectation was to use microwaves at the time) everyone would be able to call up any time sharing system they wanted. LANs of small computers were viewed as insanely wasteful ways to engineer systems - what you wanted was a big "computer utility". Grosch's Law was the law of the land - the bigger the computer, the more efficient the system.

Today, IMHO, I see that pattern playing out again. The idea that the "backbone is the Internet" is historically wrong - the IP Internet is the whole damn thing that consists mostly of LANs and a little backbone glue. It's really nice that some of the carriers get the value of offering first-class Internet transport (IP layer) services, and additional services that enhance that transport. But most carriers and service providers are indeed viewing that as the place where Internet edge-based innovation should *stop* or slow. In particular, they are slow to deploy support for anything new at the edges, for a very good reason: it might disrupt their profitability. A solipsistic or narcissistic view by the carriers that they know best what is good for the IP users has settled in.
It's kind of like the White Man's Burden - all those poor people in undeveloped user land need help figuring out what they want, so carriers should impose enlightened governors on them in the form of AUPs, NAT boxes, etc. And of course we won't mention that by keeping the colonials under our enlightened governance, they won't disrupt carrier business plans to maintain dominance by, for example, incorporating new technologies and services into their communications palette.

Beyond being slow, they have a chorus of pundits who help redefine the network service in terms of *speed* rather than interoperation and adaptability to innovation. It's Joel Moses' comment to me all over again: just wait till the providers give us *fast enough* connections. That's all we want, right? To get to servers faster?

How do we know that is all we want? I want mobility in the truest sense. Not mobile phone calls. I move through the world, and I'm the same person wherever I am - and I'm sick and tired of having the 10 radios I carry on my person have to pretend they are 300 baud acoustic coupled modems that take a minute to "log in" on whatever network I might be able to find. I hate that those radios touch other radios within meters, within kilometers, etc. but they cannot talk - merely because there is no common "network of networks" among them.

There is *no* carrier who will do this whole job - at best they can help. Yet, like Licklider, I see that there is a job to do. And I also see that there is a possibility to do the job *despite* this unwillingness, and without the carriers - even fighting the modern equivalent of the Bellheads who claimed that connecting modems to phone lines might crash the network (with erudite arguments about erlangs in phone crossbars) - those being the routerheads who argue (eruditely) that bad packets must be blocked by deep packet inspection and AUPs, lest the whole backbone crash.

I harbor no ill will towards carriers. Some people hated Ma Bell, but I recommended that fellow students go work for them at Bell Labs. This is not a call for tearing down the walls. If carriers want to join this effort, *wonderful*. But I doubt they will. And I know many carrier executives who think it is their *duty* to view this as something to fight against, just as I know many carrier executives who find it of great interest.

dirk.trossen at bt.com wrote:
> David,
>
> I wonder if such an almost revolutionary tone is both helpful and effective
> in reaching the goal (or anything for that matter) you're promoting.
>
> Not only do I believe (more hope) that the intention back then, when
> constructing IP, TCP, UDP, ..., was not 'to beat Mother Bell's control
> ambitions' but to truly enable end user innovation (driven by the true
> belief that this would benefit everybody), I would also argue that times
> have changed since then. Change of fundamentals in the Internet is
> today more of an educational process than ever. It might be driven by
> technology, though certainly not only that; more than ever, it includes
> proper education beyond the pure technology community and consideration
> for the concerns of everybody involved. It isn't a technology exercise
> anymore within a governmentally funded research community that, over the
> course of some twenty years, will then turn into a fundamental piece of
> societal life. It IS part of societal life.
> So advocating changes needs to take into account the different concerns,
> including those of the 'routerheads' and the 'control freaks', if you
> will, in order to be successful.
>
> So it is not the goal that I'm questioning (you know how much I
> subscribe to end user driven innovation), it is your, to me, ineffective
> and confrontational method that I fear will turn out to be wasteful
> rather than fruitful. What the technology community CAN provide is the
> ammunition for this educational process, the proof that end user
> innovation is indeed enabled, for the good of everybody involved (and
> point out alternatives for those that will seemingly need to change).
>
> BTW, as you know, I have recently joined a company you might characterize
> as being on the 'controlling end' of the spectrum, coming from an end
> user type of company. But believe me that I would not have joined if I
> didn't believe such education is possible. It isn't all black and white
> (us - whoever that is - against them).
>
> Dirk
>
> Dirk Trossen
> Chief Researcher
> BT Group Chief Technology Office
> pp 69, Sirius House
> Adastral Park, Martlesham
> Ipswich, Suffolk
> IP5 3RE
> UK
> e-mail: dirk.trossen at bt.com
> phone: +44(0) 7918711695
> __________________________________________
> British Telecommunications plc
> Registered office: 81 Newgate Street London EC1A 7AJ
> Registered in England no. 1800000.
> This electronic message contains information from British
> Telecommunications plc which may be privileged and confidential. The
> information is intended to be for the use of the individual(s) or entity
> named above. If you are not the intended recipient, be aware that any
> disclosure, copying, distribution or use of the contents of this
> information is prohibited. If you have received this electronic message
> in error, please notify us by telephone or email (to the number or
> address above) immediately. Activity and use of the British
> Telecommunications plc email system is monitored to secure its effective
> operation and for other lawful business purposes. Communications using
> this system will also be monitored and may be recorded to secure
> effective operation and for other lawful business purposes.
>
>> -----Original Message-----
>> From: end2end-interest-bounces at postel.org
>> [mailto:end2end-interest-bounces at postel.org] On Behalf Of
>> David P. Reed
>> Sent: Tuesday, May 15, 2007 3:57 PM
>> To: end2end-interest list
>> Subject: [e2e] Time for a new Internet Protocol
>>
>> A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, etc. was that
>> network vendors had too much control over what could happen inside
>> their networks.
>>
>> Thus, IP was the first "overlay network" designed from scratch to bring
>> heterogeneous networks into a common, world-wide "network of networks"
>> (term invented by Licklider and Taylor in their prescient paper, The
>> Computer as a Communications Device).
>> By creating universal connectivity, with such properties as allowing
>> multitudinous connections simultaneously between a node and its peers,
>> an extensible user-layer naming system called DNS, and an ability to
>> invent new end-to-end protocols, gradually a new ecology of computer
>> mediated communications evolved, including the WWW (dependent on the
>> ability to make 100 "calls" within a few milliseconds to a variety of
>> hosts), email (dependent on the ability to deploy end-system server
>> applications without having to ask the "operator" for permission for a
>> special 800 number that facilitates public addressability).
>>
>> Through a series of tragic events (including the dominance of
>> routerheads* in the network community) the Internet is gradually being
>> taken back into the control of providers who view their goal as
>> limiting what end users can do, based on the theory that any
>> application not invented by the pipe and switch owners is a waste of
>> resources. They argue that "optimality" of the network is required,
>> and that any new application implemented at the edges threatens the
>> security and performance they pretend to provide to users.
>>
>> Therefore, it is time to do what is possible: construct a new overlay
>> network that exploits the IP network just as the IP network exploited
>> its predecessors the ARPANET and ATT's longhaul dedicated links and
>> new technologies such as LANs.
>>
>> I call for others to join me in constructing the next Internet, not as
>> an extension of the current Internet, because that Internet is
>> corrupted by people who do not value innovation, connectivity, and the
>> ability to absorb new ideas from the user community.
>>
>> The current IP layer Internet can then be left to be "optimized" by
>> those who think that 100G connections should drive the end user
>> functionality. We can exploit the Internet of today as an "autonomous
>> system" just as we built a layer on top of Ethernet and a layer on top
>> of the ARPANET to interconnect those.
>>
>> To save argument, I am not arguing that the IP layer could not evolve.
>> I am arguing that the current research community and industry
>> community that support the IP layer *will not* allow it to evolve.
>>
>> But that need not matter. If necessary, we can do this inefficiently,
>> creating a new class of routers that sit at the edge of the IP network
>> and sit in end user sites. We can encrypt the traffic, so that the IP
>> monopoly (analogous to the ATT monopoly) cannot tell what our layer is
>> doing, and we can use protocols that are more aggressively defensive
>> since the IP layer has indeed gotten very aggressive in blocking
>> traffic and attempting to prevent user-to-user connectivity.
>>
>> Aggressive defense is costly - you need to send more packets when the
>> layer below you is trying to block your packets. But DARPA would be a
>> useful funder, because the technology we develop will support DARPA's
>> efforts to develop networking technologies that work in a net-centric
>> world, where US forces partner with temporary partners who may provide
>> connectivity today, but should not be trusted too much.
>>
>> One model is TOR, another is Joost. Both of these services overlay
>> rich functions on top of the Internet, while integrating servers and
>> clients into a full Internet on top of today's Internets.
>>
>> * routerheads are the modern equivalent of the old "bellheads".
>> The problem with bellheads was that they believed that the right way
>> to build a communications system was to put all functions into the
>> network layer, and have that layer controlled by a single monopoly, in
>> order to "optimize" the system. Such an approach reminds one of the
>> argument for the corporate state a la Mussolini: the trains run on
>> time. Today's routerheads believe that the Internet is created by the
>> fibers and pipes, rather than being an end-to-end set of agreements
>> that can layer on top of any underlying mechanism. Typically they work
>> for backbone ISPs or Router manufacturers as engineers, or in academic
>> circles they focus on running hotrod competitions for the fastest file
>> transfer between two points on the earth (carefully lining up fiber
>> and switches between specially tuned endpoints), or worse, running NS2
>> simulations that demonstrate that it is possible to stand on one's
>> head while singing the National Anthem to get another publication in
>> some Springer-Verlag journal.

From bauer at mit.edu  Fri May 18 08:08:52 2007
From: bauer at mit.edu (Steven Bauer)
Date: Fri, 18 May 2007 11:08:52 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <20070518132523.E370D86AE8@mercury.lcs.mit.edu>
References: <20070518132523.E370D86AE8@mercury.lcs.mit.edu>
Message-ID: <20070518150852.GB12068@mit.edu>

Noel Chiappa [jnc at mercury.lcs.mit.edu] wrote:

> Second, abusive 'ordinary' monopolies aren't that easy to break, especially
> if the market is large, and the monopolist has accumulated considerable
> economic resources thereby. The monopolist's economic power is used to make
> it impossible for a new competitor to get a foothold.
>
> The usual tactic is to drop prices to below what it costs to provide the
> goods/service, i.e. it's impossible for the competitor to make a profit. The
> monopolist can usually run this way for a long time, e.g. years. (Heck, even
> non-monopolist large companies, ones which are losing money through simple
> incompetence, can last a long time: look at many airlines, and US car
> manufacturers.)
>
> Most financiers are not interested in getting into a situation where they
> have to i) invest a really large amount of money, ii) wait years to get any
> return at all (remember, prices <= cost, so no profits), and iii) probably
> cannot get the same rate of return as they could if they deployed their
> resources in some less-cutthroat part of the economy.

Perhaps the market for packet-switched networking is simultaneously a monopoly with large profit margins and also a "cutthroat" market with better risk-adjusted returns elsewhere... :)

From jnc at mercury.lcs.mit.edu  Fri May 18 08:16:26 2007
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri, 18 May 2007 11:16:26 -0400 (EDT)
Subject: [e2e] It's all my fault
Message-ID: <20070518151626.A586287336@mercury.lcs.mit.edu>

> From: John Day

Please excuse this last post, but I wanted to get in a plug for an un-deservedly forgotten piece of our history: CYCLADES (see below).

>> what was unique about Baran's work was that he came up with the idea
>> of breaking up user's messages into smaller pieces, and forwarding the
>> pieces independently - something nobody before him had thought of.

> I have read Baran's reports

I'm not sure, from your comment, if you're disagreeing with the above observation?

> I can't tell if he is describing packet switching as in the ARPANet or
> packet switching as in the CYCLADES.
I'm not sure if he had thought it through to that level of detail at that stage, actually.

> I am tending to the conclusion that Baran and Davies independently
> invented packet switching.

This is a complicated topic.

The problem is that Baran had published a lengthy paper summarizing his work in the IEEE Transactions on Networking, a fairly significant open journal, in March 1964, and an abstract of the IEEE ToN paper had been published in IEEE Spectrum, which was of course very widely distributed (circulation about 160,000 in those days) in August '64. That was over a year before Davies' work (starting in very late 1965). The question is whether some of Baran's ideas had percolated at some semi-subconscious level - perhaps via a chance conversation with a colleague - into Davies' thinking. We'll just never know, and I suspect Davies himself couldn't know.

I liken this to the question of the influence of Babbage on the first computers (circa '46-'47). Some people (e.g. Wilkes, IIRC) say "Babbage had been forgotten by then, I certainly wasn't influenced by his work". The problem is that there were others who were active in the field who did know of Babbage's work (definitely Aiken, who explicitly recognized Babbage in contemporaneous writings), and the early people did all know of each other's work, so it's hard to know what the subconscious/indirect influences of Babbage on the field as a whole were.

> But the kind of connectionless networking and clean separation between
> Network and Transport seems to have come from Pouzin. CYCLADES had a
> very clean distinction between CIGALE and TS which the ARPANet did not
> have.

Absolutely. CYCLADES is a mostly-forgotten - and *very* undeservedly-so - piece of networking history, and it's quite clear that it's the single most important technical progenitor of the Internet.

The decision to make the hosts responsible for reliable delivery was one of the key innovations in going from the ARPANet to the Internet. For one, it made the switches so much simpler when they didn't have to take responsibility for delivery of the data - which they couldn't really, anyway, as we now understand, according to the end-end principle.

When I was active on Wikipedia, I always meant to upgrade their CYCLADES article, but I never got to it, alas...

> Networking happened when minicomputers come along and are cheap enough
> that they can be dedicated to "non-productive" work

Good point.

> I hate to think what would have happened if at any time from 1970 to
> 1990 some crusading journalist had figured out all of the non-DoD
> activities going on the ARPANet/Internet and done an expose! The things
> we were doing! The waste of tax payer dollars!

I think there was some publicity, actually, but for some reason it didn't make a big splash.

Amusing story: One thing that did make a bit of a hit was when some DoD domestic intelligence (on Viet Nam protestors) was moved over the Internet; the resulting newspaper headline (cut out and pasted on one of the MIT IMPs for many years) was: "Computer network accused of transmitting files"! :-)

	Noel

From touch at ISI.EDU  Fri May 18 08:38:31 2007
From: touch at ISI.EDU (Joe Touch)
Date: Fri, 18 May 2007 08:38:31 -0700
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4649CA54.1050000@reed.com>
References: <4649CA54.1050000@reed.com>
Message-ID: <464DC877.3050706@isi.edu>

David P. Reed wrote:
> A motivation for TCP and then IP, TCP/IP, UDP/IP, RTP/IP, etc.
> was that network vendors had too much control over what could happen
> inside their networks.
>
> Thus, IP was the first "overlay network" ...

I disagree; IMO, overlays are networks on top of an existing network. The Internet is no more an overlay in that regard than Ethernet is (e.g., over different media). That's just layered networking. The thing that made the Internet different was that it was a single global network layer that didn't dictate all the layers below.

Overlays are, IMO, networks composed of headers (provided by tunnels) and forwarding mechanisms that provide new topologies and forwarding capabilities _among the same set of nodes already connected in the (base) network on which they operate_. There's no such "base network" for IP.

Joe

-- 
----------------------------------------
Joe Touch
Sr. Network Engineer, USAF TSAT Space Segment

From touch at ISI.EDU  Fri May 18 08:38:40 2007
From: touch at ISI.EDU (Joe Touch)
Date: Fri, 18 May 2007 08:38:40 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <4648FFF1.5050006@web.de>
References: <46452F80.9040801@reed.com> <4645D5F1.5030801@psg.com> <46460728.4050006@reed.com> <464887EC.1020502@isi.edu> <4648B29A.30202@reed.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDBF@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648DF74.3040002@psg.com> <70C6EFCDFC8AAD418EF7063CD132D0640485DDCE@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com> <4648FFF1.5050006@web.de>
Message-ID: <464DC880.2090807@isi.edu>

Detlef Bosau wrote:
...
> And even for overlay networking, I wonder whether we really need
> _application_ _based_ routing. If we need a mesh of nodes for a certain
> application, SAP, IRC, usenet, mail, skype, whatever, isn't it
> sufficient to assign an IP address to these and then to rely upon well
> known and well understood internetworking techniques? And of course
> _with_ end to end path transparency, end to end path redundancy etc.?

Not if forwarding isn't based on an address. See:

"DataRouter: A Network-Layer Service for Application-Layer Forwarding," J. Touch, V. Pingali, Proc. International Workshop on Active Networks (IWAN), Osaka, Springer-Verlag, December 2003. http://www.isi.edu/touch/pubs/iwan2003

Joe

-- 
----------------------------------------
Joe Touch
Sr. Network Engineer, USAF TSAT Space Segment

From touch at ISI.EDU  Fri May 18 09:00:37 2007
From: touch at ISI.EDU (Joe Touch)
Date: Fri, 18 May 2007 09:00:37 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <6764e858e02aebbd46e396bffaf6c86f@mac.com>
References: <6764e858e02aebbd46e396bffaf6c86f@mac.com>
Message-ID: <464DCDA5.5040102@isi.edu>

rick jones wrote:
>> I'm advocating very loose source routing here. The key to this is
>> diversity. That diversity can come from knowing one host on each of
>> several nearby ISPs, to "force" your traffic onto that AS.
>
> Another naive question from the peanut gallery - can one actually do
> loose source routing with _hosts_ as the IPs in the list rather than
> routers?
> Or perhaps I should say with "customer hosts?"

Yes, we can and we do:

- peer-to-peer nets push routing into the app layer, but then require reimplementing the entire stack there as well
- overlays: hosts - or routers - can participate as nodes in a routed network, where that routing is hidden from the base (existing) network by tunnels (as others have pointed out, see www.isi.edu/xbone)

It's more efficient to move this function into the core of the network, both to share resources, and because moving things out to the edge is costly in terms of BW and delay (see any of the Active Nets papers on this issue).

Joe

-- 
----------------------------------------
Joe Touch
Sr. Network Engineer, USAF TSAT Space Segment

From braden at ISI.EDU  Fri May 18 09:30:46 2007
From: braden at ISI.EDU (Bob Braden)
Date: Fri, 18 May 2007 09:30:46 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <8397EAAD-D087-46C6-81EC-C51D13E41F9B@pch.net>
References: <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu> <20070516234931.EA264872D5@mercury.lcs.mit.edu> <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu>
Message-ID: <5.2.1.1.2.20070518092859.00ac4d60@boreas.isi.edu>

> problems, I would personally rather see this thread get purged of the
> silliness, ignorance, etc. through discussion

Uhhh ... when was the last time you saw that happen on a large open mailing list?? Good luck, and let's stop feeding the tiger.

Bob Braden

From touch at ISI.EDU  Fri May 18 10:02:29 2007
From: touch at ISI.EDU (Joe Touch)
Date: Fri, 18 May 2007 10:02:29 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <5aa7c1c39c11621281fa959621a1f11c@mac.com>
References: <4649A8B3.7080000@pobox.com> <5aa7c1c39c11621281fa959621a1f11c@mac.com>
Message-ID: <464DDC25.3030307@isi.edu>

rick jones wrote:
>> Perhaps a compromise would be to reduce the number of intermediate
>> hops that can be specified from 40 to say 2. That reduces the
>> "traffic multiplier" available for DoS, but allows users to select
>> between a handful of paths. Two or three paths is enough diversity to
>> get a "pretty good" route if the default BGP route is temporarily
>> congested.
>
> I'll ask a naive question from the peanut gallery - I take it that
> checking the source routes for duplicate IP's is insufficient to deal
> with the proposed problem?

Given that routers and end nodes can have multiple IP addresses and there's no definitive test for "equivalence" (i.e., which set lies at the same router), nope. ;-)

Joe

-- 
----------------------------------------
Joe Touch
Sr. Network Engineer, USAF TSAT Space Segment
From touch at ISI.EDU  Fri May 18 10:07:34 2007
From: touch at ISI.EDU (Joe Touch)
Date: Fri, 18 May 2007 10:07:34 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <5.2.1.1.2.20070518092859.00ac4d60@boreas.isi.edu>
References: <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu> <20070516234931.EA264872D5@mercury.lcs.mit.edu> <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu> <5.2.1.1.2.20070518092859.00ac4d60@boreas.isi.edu>
Message-ID: <464DDD56.5090405@isi.edu>

A note to all on the list:

Please keep the language and tone civil. Those who do not risk having their posts moderated (which means a 24+ hour delay on their posts). Rather than highlighting individuals involved, they have been contacted privately.

Thanks,

Joe (as list admin)

From lachlan.andrew at gmail.com  Thu May 17 20:00:16 2007
From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Thu, 17 May 2007 20:00:16 -0700
Subject: [e2e] It's all my fault
In-Reply-To: <5aa7c1c39c11621281fa959621a1f11c@mac.com>
References: <4649A8B3.7080000@pobox.com> <5aa7c1c39c11621281fa959621a1f11c@mac.com>
Message-ID: 

Greetings Rick,

On 17/05/07, rick jones wrote:
>> Perhaps a compromise would be to reduce the number of intermediate
>> hops that can be specified from 40 to say 2. That reduces the
>> "traffic multiplier" available for DoS, but allows users to select
>> between a handful of paths. Two or three paths is enough diversity to
>> get a "pretty good" route if the default BGP route is temporarily
>> congested.
>
> I'll ask a naive question from the peanut gallery - I take it that
> checking the source routes for duplicate IP's is insufficient to deal
> with the proposed problem?

That's right. Consider two networks connected by a single link. As long as alternate IP addresses are on either side of that link, there is a DoS on the routers on that link. They don't have to be identical IPs, or even have identical prefixes.

Cheers,
Lachlan

-- 
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820  Fax: +1 (626) 568-3603

From faber at ISI.EDU  Fri May 18 11:07:14 2007
From: faber at ISI.EDU (Ted Faber)
Date: Fri, 18 May 2007 11:07:14 -0700
Subject: [e2e] It's all my fault
In-Reply-To: 
References: <20070517164517.GA45601@hut.isi.edu>
Message-ID: <20070518180714.GB65382@hut.isi.edu>

On Fri, May 18, 2007 at 03:46:45AM -0700, Vadim Antonov wrote:
> On Thu, 17 May 2007, Ted Faber wrote:
>
> > On Wed, May 16, 2007 at 07:33:03PM -0700, Vadim Antonov wrote:
> > > The real role of the government in the history of the Internet was
> > > stalling, obfuscating, and supporting monopolism. If not for the
> > > government-propped monopoly of AT&T, we'd see widely available commercial
> > > data networks in early 80s, not in 90s - the technology was there.
> >
> > Governments do not support natural monopolies like the telecom network,
> > because they have no role in those monopolies.
>
> Now we've gotten into Alice's Wonderland, I guess. No role? Huh? FCC does
> not regulate telcos? Since when? Can I dig a cable into the ground
Can I dig in a cable into the ground > without asking various government busybodies first? What about the land > grants and easements for the cable plants? What about patents? What about > "lawful access" requirements? What about Public Utility Comissions and > their price-fixing? NO ROLE??? No role in their creation. I left the phrase out buy it's clear from the rest of the paragraph. If the market could control, them all that regulation would be unnecessary. The difference between our positions seems to be that you think that the government protects telecom monopolies for a piece of the action (creating/protecting a monopolist) rather than coercing an established and inevitable monopolist into taking actions counter to their interest in order to not be actively broken up. They're different in that the removal of regulation has very different effects - a protected monopolist (a sugar grower in the American South) is forced to compete (and likely lose); a threatened monopolist returns to control of a market, and service to customers and society likely degrades. > > > Significant economies of > > scale and high capital barriers to entry will shut other providers of > > similar services out of the market completely. Even without aggressive > > action by the providers, this leads directly to monopoly. That's a > > property of the market, not a government imposed attribute. > > Economics 101 - natural monopoly (aka single provider) and monopoly are > not the same. By far. The "natural monopoly" cannot exploit consumers by > raising prices or by reducing services using its "monopoly" position > because doing so will create opportunity for entrance by a smaller > competitor. Prefacing your statements with contempt makes them no more accurate. Again, a natural monopoly is a market that has significant economies of scale, large capital barriers to entry, and no close substitutes. It is a kind of market, not a kind of firm. It's probably not covered in your 101 text, though it could be; the concepts are not that difficult. Imagine that the market starts with an arbitratry number of firms, the first one to gain an edge will win the game with marginally correct play. As soon as one firm gets larger it can undercut another beyond the point of profitability and devour its customers. Then it is *more* efficient (economies of scale), and takes on the next one. Eventually it stands alone. Then it can very easily eat anyone else that appears by undercutting their prices and buying their capital. If anything this is worse in telecom, because the thing the firm sells is access to other customers. All the telecom regulation that exists interposes itself to make that process difficult - again it is competitors who are assisted, not the monopolist. It forces large telcos to give smaller competitors access to the larger firm's customers across their wires at rates the large firm cannot set. Without that, there would be one telecom company. That's what the term "natural monopoly" means; not that there happens to be one provider, but that without non-market action, a single provider will dominate that market by the properties of the product being sold. External actions prop up competitors or force monopolist actions. > > Real monopolies which depend on government enforcement of its "rights" can > deter competition and thus can exploit consumers. Any sizable telecom entity would be significantly more profitable without regulation. 
It is the government that forces them to share their plant with competitors rather than using it as a barrier to protect their revenue stream.

> In fact, scale is not an issue whatsoever. No matter how large a "naturally
> monopolistic" company is, it is always possible to borrow enough capital
> to create a comparably sized company. The history of Internet fiber glut
> demonstrates that pretty conclusively.

Capital's a market, too, and I don't believe an intelligent investor would decide that attempting to overthrow a provider serving a natural monopoly with all its natural advantages would be the best way to invest their money. What you suggest is not mathematically impossible, but I believe it to be extremely unlikely.

> > Furthermore, all recorded cases of natural monopoly have evidenced
> > aggressive action; providers in natural monopoly situations crush
> > competitors and resist changes to their market.
> [B == vigorous]
> "B" means innovation, price cutting, and otherwise serving customers
> better. This is _good_. If somebody serves customers much better than
> anyone else could, then it is the best possible outcome for customers.

You're assuming that the market would sustain more than one competitor indefinitely. A natural monopoly will not. It is to try to inject these aspects into a natural monopoly that the government props up competitors.

I'll beat this dead horse once more in the hope you'll read it somewhere:

Once I've laid my plant down, every call/packet that I collect cash for has a lower marginal cost than the previous one. The largest provider can spread that cost over more customers, which means lower costs per customer. Once my fixed costs are covered, per call/packet fees are largely profit. Mmmm economies of scale. I can undercut the rates of other providers until they lose all their customers or go bankrupt. If necessary, I can forgo any profit at all until competitors fail. As you've argued other places, a packet's a packet and a call's a call, so as long as I'm not actually incompetent my lower rates win. And, of course, I'll make it more expensive for competitors to access my customers, if I allow it at all.

Having secured that monopoly, these customers get monopoly rates and are a sink over which losses in markets one wishes to compete in can be amortized. Yes that's illegal - darn that government, the market would allow it.

A new competitor has to buy enough plant to compete, which is costly. Investors, seeing what happened to the last competitor, are probably hard to find, which is too bad. Building a telecom infrastructure is a high barrier to entry.

Governments regulate these guys because there's no incentive to innovate - or once the competitors are gone, to provide low price service. The threat is to the monopolist to give something back to the society or be broken up by the government, not a promise to ward off competitors in exchange for a piece of the action.

> > Why would they do differently?
>
> Yes, why would they when they can buy enough politicians to do a joe job on
> potential competition instead of competing fairly.

Fair competition means monopoly here. You start buying politicians to interfere when you know you'll lose in the market. I was looking for an example of this, but you were kind enough to bring up MCI below.

> > If US telecom were really deregulated tomorrow - no requirements to
> > share infrastructure, no limits on size, no service requirements -
> > there'd be one phone company in a decade at the most.
> I'm sure they were telling exactly the same nonsense when MCI tried to
> break the long-distance market open.

Without the government forcing AT&T to allow access to its customers/network for MCI, they had at best a niche market. This is a clear case of the government creating competition where the market would have killed it. AT&T owned the network and denied MCI access, in exactly the way a trucking company might deny access to a competitor (yes, yes, there are common carrier laws, but they're the result of the same process - government regulation of a natural monopoly for a social good). MCI invested in politicians and lawyers and forced access to that network. It's MCI's "rights" that are being created by the government here, and the threat was to AT&T.

> > It's hard to see how you can characterize this as monopoly protection.
>
> Monopoly protection requires actual or threatened violence. In the modern
> world only governments do that at the large scale.

And that threat - to the competitors - is completely absent here. It is the monopoly that is consistently threatened. The threat is not "do as we say or we'll lower a tariff and let your competitors at you," it's "do as we say or we'll break up your monopoly by force."

> > You couldn't lease a T1 before the government made AT&T lease you one -
> > an action I'm surprised you don't characterize as the government
> > stealing AT&T's capital.
>
> Yep. Considering that the government made AT&T a monopoly, that sure is a
> relief.

We completely disagree on whether that monopoly was "made" or "recognized as inevitable," but again, the threat is action against the monopolist, not lowering a government restriction on competition. If the monopolist sees being forced to sell services that it can provide to a competitor as protection, I assert that's a foolish monopolist.

> > It's a lot more difficult to build a nationwide (to say nothing of
> > worldwide) data network if you have to spend the capital to run the
> > lines.
>
> Hard, not impossible. Many companies have done that.

Name one that's been the *second* company. Remember, government subsidies - including letting a competitor use the monopolist's capital - are cheating.

> > Now, I don't think that the government had a coordinated plan to create
> > a new market, but without the (accidental) confluence of those actions,
> > the Internet would be unlikely to emerge.
>
> Governments do not create markets. They cannot. Markets are created by the
> acts of voluntary exchange between people, and precede governments by tens
> of thousands of years. (Heck, even apes do some trading among themselves.)

Of course they can. Governments have a lot of money and use it to buy things. Catalytic converters and smog check stations are unlikely free market outcomes. For that matter, there's this: http://www.washingtonpost.com/wp-dyn/articles/A64231-2005Mar1.html

-- 
Ted Faber
http://www.isi.edu/~faber  PGP: http://www.isi.edu/~faber/pubkeys.asc
Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG
From braden at ISI.EDU  Fri May 18 13:30:13 2007
From: braden at ISI.EDU (Bob Braden)
Date: Fri, 18 May 2007 13:30:13 -0700 (PDT)
Subject: [e2e] ATT and monopolies
Message-ID: <200705182030.NAA01169@gra.isi.edu>

Ok, OK, I can't resist.

My grandfather was a physician in Duluth, MN around the turn of the century. My father remembered that there were four separate phones in the front hall of their house. There had to be four phones, because there were four telephone companies in Duluth at the time. I suspect this was typical of American cities at the time.

Had the Internet developed without any regulation or initial government support, I wonder how many Internets we would have now? Probably at least the IBM Internet (SNA), the DEC Internet (DECnet), the Verizon Internet, the Microsoft Internet, ...

Bob Braden

From jnc at mercury.lcs.mit.edu  Fri May 18 13:33:19 2007
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri, 18 May 2007 16:33:19 -0400 (EDT)
Subject: [e2e] It's all my fault
Message-ID: <20070518203319.E0C6586AF3@mercury.lcs.mit.edu>

> From: "Lachlan Andrew"

>> The problem is that Baran had published a lengthy paper summarizing
>> his work in the IEEE Transactions on Networking .. in March 1964

> Didn't ToN start in the '90s? Was that TransComm, or was there an
> earlier ToN that I missed?

Ooops, my bad: it was actually the IEEE Transactions on Communications Systems (in case anyone was trying to look it up :-).

	Noel

From jtk at northwestern.edu  Fri May 18 14:25:03 2007
From: jtk at northwestern.edu (John Kristoff)
Date: Fri, 18 May 2007 16:25:03 -0500
Subject: [e2e] fault apportionment and mitigation
In-Reply-To: 
References: <20070516234931.EA264872D5@mercury.lcs.mit.edu> <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu>
Message-ID: <20070518212503.90C6A136C82@aharp.ittns.northwestern.edu>

On Fri, 18 May 2007 07:40:42 +0100
Jon Crowcroft wrote:

I have some practical experience mitigating DDoS attacks, but from the perspective of a DNS service provider. So my views may not be a good representation of all DDoS attacks, but I have seen the same botnets attack systems and networks other than the type I have direct operational responsibility for.

> some questions though:
> botnets -
> i) are they clustered on certain ISPs/ASes and

Very often so. Very large national home Internet service providers are common sources, and a particular botnet is often made up of many sources from a handful of them. However, in my judgment these large providers are not necessarily seeing disproportionate numbers of bots relative to any other sector. Same goes for every other sector/AS: they seem to be generally representative of their size. (A rough offline check for this kind of clustering is sketched below.) Response - the ability, willingness, and capability to mitigate - can differ widely, however.

> iv) dos target: is it mainly servers, or is it as often topological attacks?

Almost always I see that packet floods are destined to a specific end system that represents some user/customer server (usually http) or their DNS service. The target is directly related to the victim that the attacker is (almost surely) being paid to attack.
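To illustrate that offline clustering check: the following is a minimal, hypothetical sketch. It assumes you already have (prefix, origin-AS) pairs from a routing table dump and a list of flood source addresses; the table entries, addresses, and AS numbers below are made-up documentation-range values, not real data.

    # Hypothetical sketch: do flood sources cluster on a few origin ASes?
    # TABLE would come from a routing dump; these entries are illustrative.
    import ipaddress
    from collections import Counter

    TABLE = [
        (ipaddress.ip_network("198.51.100.0/24"), 64500),
        (ipaddress.ip_network("203.0.113.0/24"), 64501),
    ]

    def origin_as(addr):
        """Longest-prefix match of addr against TABLE; None if no route."""
        ip = ipaddress.ip_address(addr)
        matches = [(net, asn) for net, asn in TABLE if ip in net]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    def clustering(sources):
        """Per-AS source counts plus the share held by the top three ASes."""
        counts = Counter(origin_as(s) for s in sources)
        top3 = sum(n for _, n in counts.most_common(3))
        return counts, top3 / sum(counts.values())

    counts, share = clustering(["198.51.100.7", "198.51.100.9", "203.0.113.4"])
    print(counts, "top-3 ASes hold {:.0%} of sources".format(share))

A high top-three share would suggest real clustering on a few providers; a share close to those ASes' overall customer share would instead suggest the "representative of their size" pattern described above.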
> v) ditto scanning
>
> vi) when ISPs shut things down near a source, what is the sequence of
> take-down actions (detect/inform/warn/blackhole, etc., etc.) and what
> are the costs of false positives

Often it is either:

detect/verify report -> filter/blackhole -> wait for complaint

or:

detect/verify report -> filter/blackhole -> notify

> vii) how often is source spoofing an issue (e.g. would loose source
> routing make it much worse, really? :-)

It happens, but it is not that common in the attacks I've seen (out of the last dozen I can recall mitigating, maybe twice it happened, and in those particular cases I'm thinking of, they were coupled with a non-spoofed packet flood attack and the spoofing was easy to detect and filter).

John

From day at std.com  Fri May 18 14:23:31 2007
From: day at std.com (John Day)
Date: Fri, 18 May 2007 17:23:31 -0400
Subject: [e2e] It's all my fault
In-Reply-To: <20070518151626.A586287336@mercury.lcs.mit.edu>
References: <20070518151626.A586287336@mercury.lcs.mit.edu>
Message-ID: 

At 11:16 -0400 2007/05/18, Noel Chiappa wrote:
> > From: John Day
>
> Please excuse this last post, but I wanted to get in a plug for an
> un-deservedly forgotten piece of our history: CYCLADES (see below).
>
> >> what was unique about Baran's work was that he came up with the idea
> >> of breaking up user's messages into smaller pieces, and forwarding the
> >> pieces independently - something nobody before him had thought of.
>
> > I have read Baran's reports
>
> I'm not sure, from your comment, if you're disagreeing with the above
> observation?

I'm not either. ;-) I am sure that he proposed breaking it up into smaller pieces, I am just not sure about the next part. I wanted to get your take on it.

> > I can't tell if he is describing packet switching as in the ARPANet or
> > packet switching as in the CYCLADES.
>
> I'm not sure if he had thought it through to that level of detail at that
> stage, actually.

That is my impression. (And it wouldn't be surprising, given what I said about people making paradigm shifts being a little bit in both camps; it is a common phenomenon.) In some sense, if Baran had made the "half step" then the ARPANet implemented his view and CYCLADES, starting slightly later, took it the other half step. OR if Baran made the whole step then the ARPANet made a "half step" back and CYCLADES implemented Baran's view. ;-)

> > I am tending to the conclusion that Baran and Davies independently
> > invented packet switching.
>
> This is a complicated topic.
>
> The problem is that Baran had published a lengthy paper summarizing his work
> in the IEEE Transactions on Networking, a fairly significant open journal, in
> March 1964, and an abstract of the IEEE ToN paper had been published in IEEE
> Spectrum, which was of course very widely distributed (circulation about
> 160,000 in those days) in August '64. That was over a year before Davies'
> work (starting in very late 1965). The question is whether some of Baran's
> ideas had percolated at some semi-subconscious level - perhaps via a chance
> conversation with a colleague - into Davies' thinking. We'll just never know,
> and I suspect Davies himself couldn't know.

Did you see Davies' posthumous article on this? Yeah, I thought it was pretty accepted that they had each come to it independently. I will go read the Davies article again.

> I liken this to the question of the influence of Babbage on the first
> computers (circa '46-'47). Some people (e.g. Wilkes, IIRC) say "Babbage had
> been forgotten by then, I certainly wasn't influenced by his work".
> The problem is that there were others who were active in the field who
> did know of Babbage's work (definitely Aiken, who explicitly recognized
> Babbage in contemporaneous writings), and the early people did all know
> of each other's work, so it's hard to know what the subconscious/indirect
> influences of Babbage on the field as a whole were.

Right.

> > But the kind of connectionless networking and clean separation between
> > Network and Transport seems to have come from Pouzin. CYCLADES had a
> > very clean distinction between CIGALE and TS which the ARPANet did not
> > have.
>
> Absolutely. CYCLADES is a mostly-forgotten - and *very* undeservedly-so -
> piece of networking history, and it's quite clear that it's the single most
> important technical progenitor of the Internet.

I agree.

> The decision to make the hosts responsible for reliable delivery was one of
> the key innovations in going from the ARPANet to the Internet. For one, it
> made the switches so much simpler when they didn't have to take
> responsibility for delivery of the data - which they couldn't really, anyway,
> as we now understand, according to the end-end principle.

Right. I remember Louis making the argument that you were never going to convince the hosts to trust the network and assume that everything got there, so if they were doing all this work, then the network didn't have to work so hard! It was a classic Louis argument! ;-)

> When I was active on Wikipedia, I always meant to upgrade their CYCLADES
> article, but I never got to it, alas...
>
> > Networking happened when minicomputers come along and are cheap enough
> > that they can be dedicated to "non-productive" work
>
> Good point.
>
> > I hate to think what would have happened if at any time from 1970 to
> > 1990 some crusading journalist had figured out all of the non-DoD
> > activities going on the ARPANet/Internet and done an expose! The things
> > we were doing! The waste of tax payer dollars!
>
> I think there was some publicity, actually, but for some reason it didn't
> make a big splash.
>
> Amusing story: One thing that did make a bit of a hit was when some DoD
> domestic intelligence (on Viet Nam protestors) was moved over the Internet;
> the resulting newspaper headline (cut out and pasted on one of the MIT IMPs
> for many years) was: "Computer network accused of transmitting files"! :-)

;-) Oh, I was thinking of the other non-DoD uses the network was put to on a regular basis. The hours of "IM-ing" with Jim Calvin's Tenex teleconferencing program back in 71 and 72. He did it by hacking the link terminal command, and since Tenex was character-at-a-time, if two people started typing at the same time, their characters were interleaved! ;-) Or all of us using SAIL's AP wire program to follow the news on Agnew's resignation, the overthrow of Allende, etc.

But all of that was before your time.

Take care,
John

From avg at kotovnik.com  Fri May 18 14:30:06 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Fri, 18 May 2007 14:30:06 -0700 (PDT)
Subject: [e2e] It's all my fault
In-Reply-To: <464DAA56.3050405@reed.com>
Message-ID: 

On Fri, 18 May 2007, David P. Reed wrote:

> Adam Thierer is hardly an unbiased expert. Cato is like Cheney - it
> hires people to prove what they already take on faith.

And of course that invalidates their arguments - how? Sorry, this is just a demagogic cop-out: you cannot win an argument by dismissing opponents simply because you don't like what they say or because they spend their resources to get their message across.

--vadim

PS.
I was asked to move this discussion off-list. So, please write me directly; I will NOT be replying to the list.

From tvest at pch.net  Fri May 18 14:53:30 2007
From: tvest at pch.net (Tom Vest)
Date: Fri, 18 May 2007 17:53:30 -0400
Subject: [e2e] ATT and monopolies
In-Reply-To: <200705182030.NAA01169@gra.isi.edu>
References: <200705182030.NAA01169@gra.isi.edu>
Message-ID: <6EA244E4-A312-49A9-B552-6B2BD78FF7E9@pch.net>

On May 18, 2007, at 4:30 PM, Bob Braden wrote:

> Ok, OK, I can't resist.
>
> My grandfather was a physician in Duluth, MN around the turn of the
> century. My father remembered that there were four separate phones in the
> front hall of their house. There had to be four phones, because there
> were four telephone companies in Duluth at the time. I suspect this
> was typical of American cities at the time.
>
> Had the Internet developed without any regulation or initial government
> support, I wonder how many Internets we would have now? Probably at
> least the IBM Internet (SNA), the DEC Internet (DECnet), the Verizon
> Internet, the Microsoft Internet, ...
>
> Bob Braden

Welcome to the fuzzy side ;-)

I think we can safely say that AT&T probably solved your grandfather's first problem, by buying up and integrating all of the competing networks. At first I bet he was quite happy to need just one handset/billing relationship, but before too long somewhat less happy to have to suffer (the observed) rising prices, as well as the (unobserved) absence of new features and services that never emerged.

The FCC came into the picture in the 1930s to try to mitigate the first problem. We were stuck with the second problem until first the FCC Carterfone decision (1968) eliminated the incumbent's prerogative to deny permission to attach third-party devices, and eventually until the AT&T breakup (1984) divided the telecom market into territorially distinct monopoly "basic service" segments, plus an overlapping, extra-territorial, wide open "value added" service segment -- the latter of which encompassed "international", "long distance", and coincidentally, data.

The 1996 Telecom Act was an attempt to create the same breakthrough in the remaining bottleneck / local access segments. It was a happy time, one we are likely to remember fondly soon, now that most of the regulations that created the conditions for such great achievements have been abandoned. The anti-government, anti-regulation, pro-monopoly interests have largely had their way; we'll probably see how well this works soon enough.

TV

From davide+e2e at cs.cmu.edu  Fri May 18 14:54:49 2007
From: davide+e2e at cs.cmu.edu (Dave Eckhardt)
Date: Fri, 18 May 2007 17:54:49 -0400
Subject: [e2e] ATT and monopolies
In-Reply-To: <200705182030.NAA01169@gra.isi.edu>
Message-ID: <23558.1179525289@lunacy.ugrad.cs.cmu.edu>

> My father remembered that there were four separate phones in the
> front hall of their house. There had to be four phones, because
> there were four telephone companies in Duluth at the time. I
> suspect this was typical of American cities at the time.

Thus calling into question the idea that telephone service is a natural monopoly. In fact, the costs of laying wire, etc., were *not* so high that it would have been insane for real competition to exist, because it did--until AT&T talked the state and federal governments into supporting one system (except for obscure outlying areas they weren't interested in serving).
> Had the Internet developed without any regulation or initial government support, I wonder how many Internets we would have now? Probably at least the IBM Internet (SNA), the DEC Internet (DECnet), the Verizon Internet, the Microsoft Internet, ...

The four-phones situation was back when switching was expensive and painful. I expect that as the price came down it would have been possible to make cross-network calls...

...as, indeed, it is possible for me to place a call from my Verizon CDMA phone to my TA's Cingular GSM phone. The companies compete on things like calling plans, who has the cooler phones, who has fewer dropped calls, etc., without denying access to each other's customers.

While the actual path we took to get here involved the government making a monopoly and then splitting it up, it is by no means obvious either that only this path would have got us here or that all future networks need to follow that path. Somehow we have competing Ethernet vendors without (as far as I'm aware) government regulation.

Dave Eckhardt

From avg at kotovnik.com  Fri May 18 21:21:27 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Fri, 18 May 2007 21:21:27 -0700 (PDT)
Subject: [e2e] ATT and monopolies
In-Reply-To: <200705182030.NAA01169@gra.isi.edu>

On Fri, 18 May 2007, Bob Braden wrote:

> Had the Internet developed without any regulation or initial government support, I wonder how many Internets we would have now? Probably at least the IBM Internet (SNA), the DEC Internet (DECnet), the Verizon Internet, the Microsoft Internet, ...

And how long would it take for some company to produce a "multi-system" telephone... wait. They did that very successfully with TVs and VCRs - most of these in Asia can handle every system devised, so nobody's limited to the Never The Same Color thingie :)

Diversity is often annoying, but in the end it always comes out ahead.

--vadim

From pekka.nikander at nomadiclab.com  Sun May 20 08:04:54 2007
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Sun, 20 May 2007 18:04:54 +0300
Subject: [e2e] Economics of Standards in Information Networks
Message-ID: <66B55B8D-9089-4FDD-BD26-F9C9FF572DCA@nomadiclab.com>

For those who care, there are a few relatively recently published books that may help the economically minded people here to better understand so-called network externalities in the context of standardisation:

Tim Weitzel, Economics of Standards in Information Networks. Springer, 2004. ISBN 3790800767.

Kurt Geihs, Falk von Westarp and Wolfgang König, Networks: Standardization, Infrastructure and Applications. Springer, 2002. ISBN 3790814490.

Falk von Westarp, Modeling Software Markets: Empirical Analysis, Network Simulations, and Marketing Implications. Springer, 2003. ISBN 3790800090.

------

The Wealth of Networks, by Yochai Benkler (Yale University Press, 2006), seems to offer a liberal (but not libertarian) political analysis of the larger landscape (i.e. beyond standards), arguing rigorously for the commons-based (i.e. non-market-based) side of economics as an important factor for liberty and freedom.

[Ducking out to avoid the potential political flame war. My intention is not to start flaming but to allow those that care to educate themselves, in order to avoid more of the silly, religious-type argumentation. For me, both (micro)economics and political science seem to be difficult enough for a typical routerhead, like me.]
--Pekka Nikander

From dpreed at reed.com  Mon May 21 08:42:14 2007
From: dpreed at reed.com (David P. Reed)
Date: Mon, 21 May 2007 11:42:14 -0400
Subject: [e2e] So IPv6 is dead?
Message-ID: <4651BDD6.20804@reed.com>

http://www.arin.net/announcements/20070521.html

I still remember debating variable-length addressing and source routing in the 1970s TCP design days, and being told that 4 thousand million addresses would be enough for the life of the Internet.

Wann will man je verstehen? ("When will they ever learn?")

From dpreed at reed.com  Mon May 21 09:13:15 2007
From: dpreed at reed.com (David P. Reed)
Date: Mon, 21 May 2007 12:13:15 -0400
Subject: [e2e] ATT and monopolies
In-Reply-To: <23558.1179525289@lunacy.ugrad.cs.cmu.edu>
References: <23558.1179525289@lunacy.ugrad.cs.cmu.edu>
Message-ID: <4651C51B.6000502@reed.com>

Dave Eckhardt wrote:
> While the actual path we took to get here involved the government making a monopoly and then splitting it up, it is by no means obvious either that only this path would have got us here or that all future networks need to follow that path. Somehow we have competing Ethernet vendors without (as far as I'm aware) government regulation.

One can, at any time, create a non-interoperable network. Corporations do it all the time - they create a boundary at their corporate edge and block transit, entry, and exit of packets.

That is not the Internet. It's a private network based on the IP protocol stack. Things get muddier if there is limited interconnection. Then, the Internet would be best defined as the bounded system of endpoints that can each originate packets to each other that *will* be delivered with best efforts. It's hard to draw that boundary, but it exists.

Given this view, I don't think government regulations played a significant role in the growth of the Internet. We have had lots of private networks, many larger than the Internet. I know, for example, that Microsoft built a large X.25-based network in 1985 to provide non-Internet dialup information services for Windows. It was called MSN, and was designed to compete with the large private AOL network.

What the Internet had going for it was *scalability* and *interoperability*. Metcalfe's Law and Reed's Law and other "laws". Those created connectivity options that scaled faster than private networks could. Economists call these "network externalities" or "increasing returns to scale".

Gov't regulation *could* have killed the Internet's scalability. Easy ways would be to make interconnection of networks a point of control that was taxed or monitored (e.g. if trans-border connectivity were viewed as a point for US Customs to inspect, or if CALEA were implemented at peering points).

But lacking that, AOL and MSN just could not compete with the scalability of the Internet. That has nothing to do with competition to supply IP software stacks in operating systems, or competition among Ethernet hardware vendors.

However, increasing returns to scale is not Destiny. The Internet was not destined to become the sole network. But all the members of the Internet (people who can communicate with anyone else on it) would be nuts to choose a lesser network unless they suffer badly enough to outweigh the collective benefits.

Individualist hermits don't get this, I guess. If you want to be left alone, and don't depend on anyone else, there are no returns to scale for you at any scale. Grow your own bits in the backyard, make your own computers from sand, invent your own science from scratch. If the walls are high enough, perhaps you can survive without connectivity.
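To see why those scaling claims are so different, here is a toy calculation using the usual textbook statements of the two laws (value proportional to possible pairs for Metcalfe, to possible multi-member subgroups for Reed) -- illustrative only, since real value is obviously not this mechanical:

    def metcalfe(n):
        # pairwise connections: n choose 2
        return n * (n - 1) // 2

    def reed(n):
        # subgroups with at least two members: 2^n - n - 1
        return 2 ** n - n - 1

    for n in (10, 20, 30):
        print(n, metcalfe(n), reed(n))
    # pairs grow roughly as n^2, but the group count doubles with every
    # new member -- the kind of curve no closed private network can match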
From hnw at cs.wm.edu  Fri May 18 11:46:03 2007
From: hnw at cs.wm.edu (Haining Wang)
Date: Fri, 18 May 2007 14:46:03 -0400
Subject: [e2e] Call for Participation - IWQoS'2007

(Apologies if you have received this more than once)

========================================================================
CALL FOR PARTICIPATION

Fifteenth IEEE International Workshop on Quality of Service (IWQoS 2007)

Chicago, IL, USA, June 21-22, 2007
http://iwqos07.ece.ucdavis.edu/

IWQoS has emerged as the premier workshop on cutting-edge research and technologies on quality of service (QoS) since it was first established in 1994. Building on the successes of previous workshops, the objective of IEEE IWQoS 2007 is to bring together researchers, developers, and practitioners working in this area to discuss innovative research results and identify future directions and challenges. We will continue IWQoS's long-standing tradition of being highly interactive while maintaining the highest standards of competitiveness and excellence. This characteristic will be re-emphasized through a technical program consisting of keynote addresses, rigorously reviewed technical sessions (including both long and short papers), and stimulating panel discussions about controversial and cutting-edge topics. The panel and short-paper sessions will be highly interactive and leave much time and space for the audience to get involved.

We encourage you to check our Web site for the registration and advance program as well as up-to-date conference information:

1. Registration. The early registration deadline is 6/4. Please check out the registration online at http://iwqos07.ece.ucdavis.edu/registration.html

2. Hotel reservation. We have reserved a block of rooms at the Hilton Garden Inn. The group rate is USD 129 per night, USD 10 per extra person. The group code is "IWQoS". The reservation line is 847/475-6400 or 1-877-STAYHGI (782-9444). Please be sure to mention the group code to get the discounted rate. The CUT OFF DATE for this reservation is MAY 29. That is, the reservations and the rate are valid only till then. Therefore, PLEASE RESERVE YOUR ROOM ASAP. Note that it will be very hard to get any room after the deadline. For more information, check out the travel page at the IWQoS website: http://www.cs.northwestern.edu/~ychen/services/iwqos07-travel.html

From peyman at MIT.EDU  Mon May 21 13:32:54 2007
From: peyman at MIT.EDU (Peyman Faratin)
Date: Mon, 21 May 2007 16:32:54 -0400
Subject: [e2e] end2end-interest Digest, Vol 39, Issue 32
Message-ID: <411A6C00-4CB8-4258-98C2-267D5F7FE480@mit.edu>

David,

I am not sure whether the folks are building computers from sand in the playground of the Media Lab, but I can say that you are "inventing your own science from scratch".

Network externality is _not_ the same concept as increasing returns to scale. One is to do with the (desirable/undesirable) side effects of one agent's actions on another who is not party to the original contract (externalities); the other is to do with efficiency gains in production for a _single_ firm (where the average cost of production diminishes with increasing quantities produced). The marginal cost of checking these facts is insignificant in this day and age of Google and Wikipedia.

Regulation and economics of networks are non-trivial and require attention to the details of the arguments that people so freely misrepresent.
Government regulation did have a very significant impact on the Internet (through differential settlement structures between the interconnecting PSTN and early dialup ISPs). This government regulation of settlements in fact _helped_ Internet scaling, not to mention the public investment in the interchanges and the backbone. MSN was rolled out and could've tipped to become the dominant standard (as many other inferior technologies have done historically - VHS/Betamax, gasoline/steam, ...). See [1] and [2] for some more recent texts on the economics and regulation of the Internet.

Determination of causality in an (un)regulated economy is a very non-trivial task and, like all sciences, the validity of an economic (and the accompanying regulatory) hypothesis/proposition is conditioned on the semantics of the model primitives (externalities, returns to scale, etc). The devil is in the details.

best

Peyman

[1] J. E. Nuechterlein and P. J. Weiser, Digital Crossroads: American Telecommunications Policy in the Internet Age. MIT Press, Cambridge, MA, US, 2005.

[2] Handbook of Telecommunications Economics, Technology Evolution and the Internet, Vol. 2, S. K. Majumdar, I. Vogelsang and M. Cave (eds). Elsevier, 289-364, 2005.
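P.S. Since the two concepts keep getting conflated, a toy illustration (the numbers are invented purely for illustration):

    # increasing returns to scale: ONE firm's average cost falls with volume
    FIXED_COST, MARGINAL_COST = 1000.0, 2.0

    def average_cost(q):
        return FIXED_COST / q + MARGINAL_COST

    # network externality: a user's value depends on what OTHER users do,
    # independent of any producer's cost curve
    def value_to_one_user(n_others):
        return 5.0 * n_others

    for q in (10, 100, 1000):
        print("avg cost at q=%d: %.2f" % (q, average_cost(q)))
    for n in (10, 100, 1000):
        print("value with n=%d other users: %.0f" % (n, value_to_one_user(n)))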
From avg at kotovnik.com  Mon May 21 14:32:50 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Mon, 21 May 2007 14:32:50 -0700 (PDT)
Subject: [e2e] ATT and monopolies
In-Reply-To: <4651C51B.6000502@reed.com>

On Mon, 21 May 2007, David P. Reed wrote:

> Individualist hermits don't get this, I guess.

"Individualist hermits", of course, being the standard collectivist slur against those who think that voluntary uncoerced cooperation is both morally superior to and more productive than the enforced march to the better future directed by the caste of "intellectuals" with their armed thugs.

Take care,

--vadim

From dpreed at reed.com  Mon May 21 14:43:36 2007
From: dpreed at reed.com (David P. Reed)
Date: Mon, 21 May 2007 17:43:36 -0400
Subject: [e2e] end2end-interest Digest, Vol 39, Issue 32
In-Reply-To: <411A6C00-4CB8-4258-98C2-267D5F7FE480@mit.edu>
References: <411A6C00-4CB8-4258-98C2-267D5F7FE480@mit.edu>
Message-ID: <46521288.7010200@reed.com>

Insulting the Media Lab as a playground is rather unnecessary, and since I claimed no credibility from where I work, it really sounds rather stupid as a rhetorical device. Does it make you feel superior?

I got my terminology regarding network externalities and increasing returns from discussions and writings with economists and business school professors. It's very possible that I'm wrong in my usage - and I'm happy to be corrected.

However, I didn't claim that network externalities were the *same* as increasing returns to scale. My fault for a non-parallel construction in that sentence, which might suggest that I thought they were the same thing. They are different, and *both* apply to my argument.

I was using the economist's term "network externalities", not the regulatory-law term. Sorry if I was confusing. If you are instead taking the opportunity for gratuitous insult, my skin is thick.
From lynne at telemuse.net  Mon May 21 14:57:10 2007
From: lynne at telemuse.net (Lynne Jolitz)
Date: Mon, 21 May 2007 14:57:10 -0700
Subject: [e2e] So IPv6 is dead?
In-Reply-To: <4651BDD6.20804@reed.com>
Message-ID: <001801c79bf2$fb9fe720$8d01a8c0@telemuse.net>

Dave, I remember that debate as well. But the whole genesis of why 32 bits was good enough was an (underjustified) view of the use of networks rather than an understanding of how sparse addresses were actually employed.
Everybody knows hash tables work best mostly empty - the same may be true with address blocks, because they are allocated in routable units. So what this bulletin says is that we are now out of sparse space and into dense space; i.e., if one did a fractal map of the four-billion-address space it would have little left unallocated. If true, this is an interesting juncture for ARIN and the IPv6 community.

The counterpush at the time was for 64-bit object identifiers (for an unrelated project) - a ludicrously overblown number. For fun Ross Harvey calculated that 2^64 printed punchcards stacked one on the other would reach farther than the earth-sun distance. So one could go overboard in the opposite direction to little real purpose as well.

The presumption of over 200 (254, or 252 for the annoyingly picky) Class A networks, each with about 16 million hosts (16*1024*1024-2 for the pointlessly obsessive), was the most hilarious, because nobody could explain how you could deal with a single network with 16 million hosts, much less some 200 of them. The claim was that a phone company using X.25 PADs might have 16 million subscribers connected in an odd configuration, but it was never to my knowledge deployed because of cost considerations. Actually, when it became possible with ISDN, it wasn't considered desirable by Pac Bell (Scott Adams was still in San Ramon then, BTW!) either as a business or technically (too much of a load for them to feel comfortable).

The concerns expressed over the exhaustion of IPv4 address space are similar to the concerns expressed over the exhaustion of telephone numbers. The assumption was that everyone had to have multiple cell phone numbers plus landlines plus separate computer lines and so forth, so the estimates ran from 4-10 lines per person. Area codes were split to accommodate new growth, and the press began to run stories about how we would run out of telephone numbers, contributing to a general hysteria. Companies began to over-order on blocks of phone numbers to "build in" room for growth on their switches. In one Internet portal company where I managed the datacenter, I also had to budget this item, even though I noticed my staff was increasingly using mobile devices and advocated a single-number-to-mobile redirect policy. This number peaked in the late 1990s and has since fallen as technologies were developed to combine voice/data, and landlines are wholesale abandoned for purely mobile devices.

Like the overbuying of phone lines in the 1990s, startups are often encouraged to budget for /19s even though the number of IP addresses actually used is very small, because the security demands of monitoring and securing open ports on such a large number of IPs overwhelm the IT staff, who in turn go to NAT (sometimes they really go overboard and reduce too much). As the IPv4 address space is used up and grows more expensive, perhaps there will be a similar collapse, where there are plenty of scattered small blocks which can be bartered among service providers. In this case, the ARIN announcement may simply be the peak before the drop. So this is also an exciting opportunity for anyone who likes to make wagers.

Lynne Jolitz

----
We use SpamQuiz. If your ISP didn't make the grade try http://lynne.telemuse.net
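P.S. Ross's punchcard figure is easy to sanity-check (assuming a standard card thickness of about 0.007 inches, ~0.18 mm - the exact value hardly matters at this scale):

    CARD_M = 0.18e-3          # ~0.007 in punchcard thickness, in meters
    AU_M = 1.496e11           # mean earth-sun distance, in meters

    stack = 2 ** 64 * CARD_M
    print(stack / AU_M)       # ~2.2e4: the stack is over twenty thousand
                              # times the earth-sun distance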
From dpreed at reed.com  Mon May 21 15:00:19 2007
From: dpreed at reed.com (David P. Reed)
Date: Mon, 21 May 2007 18:00:19 -0400
Subject: [e2e] ATT and monopolies
Message-ID: <46521673.1040700@reed.com>

Wow. I'm learning a lot about full-cocked, hair-trigger rage.

Vadim Antonov wrote:
> On Mon, 21 May 2007, David P. Reed wrote:
> > Individualist hermits don't get this, I guess.
> "Individualist hermits", of course, being the standard collectivist slur against those who think that voluntary uncoerced cooperation is both morally superior to and more productive than the enforced march to the better future directed by the caste of "intellectuals" with their armed thugs.
> Take care,
> --vadim

From avg at kotovnik.com  Mon May 21 15:12:50 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Mon, 21 May 2007 15:12:50 -0700 (PDT)
Subject: [e2e] ATT and monopolies
In-Reply-To: <46521673.1040700@reed.com>

On Mon, 21 May 2007, David P. Reed wrote:

A slur (technically, a strawman)...

> Wow. I'm learning a lot about full-cocked, hair-trigger rage.

...followed by ascribing an unbalanced mental state to an opponent (technically, ad hominem).

--vadim

From randy at psg.com  Mon May 21 15:32:36 2007
From: randy at psg.com (Randy Bush)
Date: Mon, 21 May 2007 18:32:36 -0400
Subject: [e2e] end2end-interest Digest, Vol 39, Issue 32
In-Reply-To: <46521288.7010200@reed.com>
References: <411A6C00-4CB8-4258-98C2-267D5F7FE480@mit.edu> <46521288.7010200@reed.com>
Message-ID: <46521E04.8020207@psg.com>

if 1/10 as much engineering or science were exchanged here as useless insults, this might be a worthwhile discussion.

randy

From rbriscoe at jungle.bt.co.uk  Mon May 21 16:10:27 2007
From: rbriscoe at jungle.bt.co.uk (Bob Briscoe)
Date: Tue, 22 May 2007 00:10:27 +0100
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4649CA54.1050000@reed.com>
Message-ID: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk>

David,

Going back to your opening posting in this thread...

At 15:57 15/05/2007, David P. Reed wrote:
> I call for others to join me in constructing the next Internet, not as an extension of the current Internet, because that Internet is corrupted by people who do not value innovation, connectivity, and the ability to absorb new ideas from the user community.

So, how do we make an Internet that can evolve to meet all sorts of future social and economic desires, except it mustn't evolve away from David Reed's original desires for it, and it mustn't evolve towards the desires of those who invest in it? Tough design brief :)

My sarcasm is only intended to prevent you wasting years of your life on this project without questioning whether the problem is with your aspirations, not with the Internet...

Perhaps it would help _not_ to think of suppression of innovation as a failure. Innovation isn't an end in itself. People don't want innovation to the exclusion of all else. People want a balance between innovative new stuff and uninterrupted, cheap, robust, hassle-free enjoyment of previous innovations.
Surely the real requirement is for a distributed computing internetwork that can be temporarily or locally closed to milk the fruits of an innovation without having to be permanently and ubiquitously closed. That is, locally open or locally closed by policy control. That's a heroic research challenge in its own right - and not impossible - here are some case studies that have (sometimes unconsciously) achieved this:

A desire to embed _only_ openness into the architecture, to the exclusion of thinking about how to do closedness, is the problem, not the solution. So, I for one won't be joining you in this venture, even though my initial reflex action would be (and always was) openness. I'd ask you to reconsider too.

If you disagree with this 'Tussle in Cyberspace' argument, I think you ought to say why, as I've not heard a good argument against it.

> To save argument, I am not arguing that the IP layer could not evolve. I am arguing that the current research community and industry community that support the IP layer *will not* allow it to evolve.

You don't need to start out deciding that, whatever the solution, it won't be an evolution from where we are. That doesn't need to be decided until you know what the solution might look like.

> But that need not matter. If necessary, we can do this inefficiently, creating a new class of routers that sit at the edge of the IP network and sit in end user sites. We can encrypt the traffic, so that the IP monopoly (analogous to the ATT monopoly) cannot tell what our layer is doing, and we can use protocols that are more aggressively defensive since the IP layer has indeed gotten very aggressive in blocking traffic and attempting to prevent user-to-user connectivity.

If this is what you want you don't need a new Internet. You already have the power to encrypt and the power to be aggressively defensive with the current Internet (as your TOR and Joost examples demonstrate).

You want to use the infrastructure those nasty routerheads have invested in, presumably to benefit from the network effect their investments (and your previous inventiveness) helped to create. And if they try to stop you, are they not justified? What is the difference then between your traffic and an attack - from /their/ point of view?

Or are you claiming a higher moral right to abuse the policies they impose on their networks because you have honourable intentions, in /your/ opinion? Universal connectivity isn't a human right that trumps their policies. It's just something you (& I) care about a lot. Isn't this getting close to an analogy with animal rights activists conspiring to kill vivisectionists?

Reversing this, what if someone launches a DoS attack against an unforeseen vulnerability in your new Internet? Would your architecture never allow it to be blocked, because that would damage universal connectivity?

I think you need to take a step back and reconsider the aspersions you're casting on routerheads. They understand the value of universal connectivity too. But they also understand the higher value of some connectivities than others. Given the tools they have at their disposal right now, the best they can do is block some stuff to keep other stuff going. It's as much the fault of you and me that they have no other option as it is their fault for blocking stuff.

You are blaming operators for acting in their own self-interest. Shouldn't you blame the designers of the architecture for not expecting operators to act in their own interests?
Again, what is your argument against 'Tussle in Cyberspace'?

Bob

From sm at mirapoint.com  Mon May 21 16:22:43 2007
From: sm at mirapoint.com (Sam Manthorpe)
Date: Mon, 21 May 2007 16:22:43 -0700 (PDT)
Subject: [e2e] end2end-interest Digest, Vol 39, Issue 32
In-Reply-To: <46521288.7010200@reed.com>
Message-ID: <20070521160650.J64318-100000@factoria.mirapoint.com>

Folks, no judgement on any of the opinions set forth, but I tune into Bill O'Reilly if I want to watch a slugfest. I humbly suggest sticking to end2end interest without the personal stuff. This thread is making me start to automatically delete e2e emails. Maybe an e2e-bad-attitude list is in order :-) It might be fun, actually.

Cheers,
-- Sam
The devil is in the details. > > > > best > > > > Peyman > > > > [1] H. E. Nuechterlein and P.J. Weiser (2005) /Digital Crossroads: > > American Telecommunications Policy in the Internet Age/, MIT Press, > > Cambridge, MA, US, 2005 > > > > [2] Handbook of Telecommunications Economics, Technology Evolution and > > the Internet, Vol.2, S.K. Majumdar, I Vogelsang and M. Cave (eds), > > Elsevier, 289—364, 2005. > > > >>> > >> One can, at any time, create a non-interoperable network. Corporations > >> do it all the time - they create a boundary at their corporate edge and > >> block transit, entry, and exit of packets. > >> > >> That is not the Internet. It's a private network based on the IP > >> protocol stack. Things get muddier if there is limited > >> interconnection. Then, the Internet would be best defined as the > >> bounded system of endpoints that can each originate packets to each > >> other that *will* be delivered with best efforts. It's hard to draw > >> that boundary, but it exists. > >> > >> Given this view, I don't think government regulations played a > >> significant role in the growth of the Internet. We have had lots of > >> private networks, many larger than the Internet. I know, for example, > >> that Microsoft built a large x.25 based network in 1985 to provide > >> non-Internet dialup information services for Windows. It was called > >> MSN, and was designed to compete with the large private AOL network. > >> > >> What the Internet had going for it was *scalability* and > >> *interoperability*. Metcalfe's Law and Reed's Law and other "laws". > >> Those created connectivity options that scaled faster than private > >> networks could. Economists calle these "network externalites" or > >> "increasing returns to scale". > >> > >> Gov't regulation *could* have killed the Internet's scalability. Easy > >> ways would be to make interconnection of networks a point of control > >> that was taxed or monitored (e.g. if trans-border connectivity were > >> viewed as a point for US Customs do inspect or if CALEA were implemented > >> at peering points). > >> > >> But lacking that, AOL and MSN just could not compete with the > >> scalability of the Internet. > >> > >> That has nothing to do with competition to supply IP software stacks in > >> operating systems, or competition among Ethernet hardware vendors. > >> > >> However, increasing returns to scale is not Destiny. The Internet was > >> not destined to become the sole network. But all the members of the > >> Internet (people who can communicate with anyone else on it) would be > >> nuts to choose a lesser network unless they suffer badly enough to > >> outweigh the collective benefits. > >> > >> Individualist hermits don't get this, I guess. If you want to be left > >> alone, and don't depend on anyone else, there are no returns to scale > >> for you at any scale. Grow your own bits in the backyard, make your > >> own computers from sand, invent your own science from scratch. If the > >> walls are high enough, perhaps you can survive without connectivity. 
From avg at kotovnik.com  Mon May 21 16:39:32 2007
From: avg at kotovnik.com (Vadim Antonov)
Date: Mon, 21 May 2007 16:39:32 -0700 (PDT)
Subject: [e2e] end2end-interest Digest, Vol 39, Issue 32
In-Reply-To: <46521E04.8020207@psg.com>

On Mon, 21 May 2007, Randy Bush wrote:

> if 1/10 as much engineering or science were exchanged here as useless insults, this might be a worthwhile discussion.

Agreed.
However, the issue with Source Routing is in many respects similar to the issue of Quality of Service features - there are many ways to devise a network which supports the concept, but the problem is not in the engineering; the problem is that there is a fundamental philosophical disconnect between the design goals and the reality of what users and ISPs want.

The common utility-based arguments rest on the assumption that there is indeed a way to engineer some quasi-economic system which makes reasonable guesses about what particular communications are worth to the users. Some proposed metrics of worth include bandwidth (resulting in bandwidth-reservation schemes), the ability to select a path, etc. The reality, of course, is that we simply cannot know. For example, a specific 50-byte text message may be worth much more to a particular user than a specific multimegabit video stream. So it is nearly useless to try to optimize based on fixed metrics which cannot be directly translated into the explicit goals of actors. These optimizations have unknown (or, even worse, unknowable) benefit, while imposing real costs on specific actors. The only way to learn if something makes economic sense is to see if people are actually using it and are willing to pay.

For better or worse, standards (even non-coercive standards, like Internet protocol specifications) have the power to compel vendors to implement things even if nobody uses them - just for the sake of standards compliance and the associated marketing advantage. Since SR has been around for decades, widely deployed and well-known, and still nobody uses it for anything but diagnostics, I think the question of whether it should remain in the standard should be re-examined. Removing it would relieve router and host software vendors of the pressure to implement it, resulting in cheaper and faster hardware, and would reduce network maintenance costs (because of improved security and because simpler equipment is more reliable).

This position, of course, runs against the claim that there is a need for more academic research (and, by implication, more grants for research) in this field. I think there isn't - there is a healthy natural evolution of technologies and business models in this marketplace, so any really worthwhile idea will surely be implemented and tested by those who are willing to take risks with their own money.

--vadim
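PS. For concreteness, since few people have ever actually seen one: the IPv4 loose source route option of RFC 791 is just a type byte (131), a length, a pointer, and a list of hops that every router on the listed path must inspect and rewrite in the forwarding path. A minimal sketch (the hop addresses are documentation examples, not real gateways):

    import socket
    import struct

    def lsrr_option(hops):
        # IPv4 Loose Source Route option (RFC 791, type 131).
        # Layout: type, length, pointer (starts at 4, the first address
        # slot), then the route as 4-byte addresses. Each listed gateway
        # overwrites the slot the pointer names with its own address and
        # advances the pointer by 4 -- per-packet work for every router.
        route = b"".join(socket.inet_aton(h) for h in hops)
        return struct.pack("!BBB", 131, 3 + len(route), 4) + route

    opt = lsrr_option(["192.0.2.1", "198.51.100.7"])
    assert opt[1] == 11   # 3 header bytes + two 4-byte hops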
From tvest at pch.net  Mon May 21 18:09:28 2007
From: tvest at pch.net (Tom Vest)
Date: Mon, 21 May 2007 21:09:28 -0400
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk>
References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk>

Being a big fan and frequent user/abuser of the tussle concept, let me be the first person to observe some obvious problems that follow from using it as a normative principle:

1. Although the concept of tussle is inherently recursive, it's typically only used (e.g., by network architects and systems theory people) to discuss the upper elements of the protocol/service stack. Too often people forget, or maybe fail to notice, that the Internet itself only exists in its "current canonical form" in places when & where a prior/foundational tussle over control of communications facilities/infrastructure inputs resulted in certain sorts of outcomes. In places where all or almost all of the interfaces are hidden/controlled by a single monolithic entity (e.g., hierarchical/horizontal infrastructure segments within a territorial monopoly PSTN), tussle may still exist, but it has approximately zero impact/significance to outsiders.

2. As soon as "tusslers" become aware of the idea, they tend to incorporate it, rhetorically if not operationally, into their future actions. Granting that I am no game theory expert (and would love to hear a better-informed comparison here), this seems like just another example of an iterative bargaining game, a la the Prisoner's Dilemma (a toy one-shot version is in the P.S. below). An appeal to the reasonableness of a "tussle-friendly outcome" is just as likely as not to be a gambit to "win" a larger piece of the pie... unless maybe the appeal is coming from someone you already trust for some unrelated reason.

Bottom line: tussle provides a great descriptive framework for understanding how, when, and why things change (or don't change), and would be a fine architectural guide for a monolithic Supreme Being who has prior knowledge of what would be good to select as the criteria for winning in any particular tussle instance -- but as soon as you have two Semi-Supreme Beings they end up stuck in the same bargaining game described so crudely above...

Regards all,

TV
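P.S. For fellow game-theory non-experts, the one-shot version of that bargaining game, with the textbook Prisoner's Dilemma payoffs (the numbers are the conventional ones, nothing networking-specific):

    # each player is better off defecting whatever the other does, which is
    # why one-shot "appeals to cooperate" can be cheap talk
    PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def best_response(their_move):
        return max("CD", key=lambda me: PAYOFF[(me, their_move)])

    print(best_response("C"), best_response("D"))  # D D: defection dominates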
From dpreed at reed.com  Mon May 21 18:25:49 2007
From: dpreed at reed.com (David P. Reed)
Date: Mon, 21 May 2007 21:25:49 -0400
Subject: [e2e] Time for a new Internet Protocol
References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk>
Message-ID: <4652469D.8040301@reed.com>

I'm now completely confused. Perhaps those who understand the "tussle" principle could tease out these concepts in a way that the rest of us can understand? A small start would be explaining in what way "tussle is inherently recursive".
From tvest at pch.net Mon May 21 19:32:15 2007
From: tvest at pch.net (Tom Vest)
Date: Mon, 21 May 2007 22:32:15 -0400
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4652469D.8040301@reed.com>
References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk> <4652469D.8040301@reed.com>
Message-ID: <97B67348-C6C7-4065-AEC8-C9DB997F44B4@pch.net>

On May 21, 2007, at 9:25 PM, David P. Reed wrote:

> I'm now completely confused. Perhaps those who understand the
> "tussle" principle could tease out these concepts in a way that the
> rest of us can understand? A small start would be explaining in
> what way "tussle is inherently recursive"?

Since I just made the idea up, I'll be happy to clarify -- perhaps that'll make it easier for others to spot the flaws in my logic.

The idea is a nested game, where two tusslers are tussling over the exploitation/control of a given interface, the value of which is determined by the outcome of a more basic/encompassing game, one in which two tusslers are tussling.... repeat ad infinitum.

Replace the word "game" with your favorite approximation of the OSI/protocol stack, and you arrive at what I had in mind. In fact, the point I was flogging to excess on the list last week was an application of this idea/explanatory framework to the origins of the Internet, and its relationship to the timing and location of telecom regulatory changes (e.g., in 1968, 1976, 1984, 1994, 1996) that slowly expanded, and then multiplied, the functional and spatial domain within which it was possible, and useful, to provision "independent" IP networks, routing blobs, ASes, etc. -- meaning operationally independent from each other as well as from the underlying telecom facilities owner(s), who previously were the only institutions that could make use of the facilities platform.

Happy to provide more details/illustrations if that would help ;-)

TV

> Tom Vest wrote:
>> Being a big fan and frequent user/abuser of the tussle concept,
>> let me be the first person to observe some obvious problems that
>> follow from using it as a normative principle: [...]
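The nested-game idea above can be made concrete with a small simulation. The sketch below is an illustration added here, not code from the thread: it plays an iterated Prisoner's Dilemma at each layer of a stack and scales each layer's stakes by how much cooperation the layer below actually achieved. The payoff matrix, strategies, and layer count are all hypothetical choices, not anything specified by the posters.

    # A minimal sketch of "tussle is inherently recursive": an iterated
    # Prisoner's Dilemma per layer, where the surplus realized at one
    # layer sets the stakes of the tussle at the layer above it.
    # All numbers and strategies are hypothetical.

    PAYOFF = {  # (my move, their move) -> (my payoff, their payoff)
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }

    def tit_for_tat(history):
        # cooperate first, then copy the opponent's previous move
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def iterated_pd(strat_a, strat_b, rounds=50):
        history, score_a, score_b = [], 0, 0
        for _ in range(rounds):
            a = strat_a(history)
            b = strat_b([(mb, ma) for ma, mb in history])  # B's view of play
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            history.append((a, b))
        return score_a, score_b

    def nested_tussle(layers=3, rounds=50):
        best = 2 * 3 * rounds  # joint surplus if both always cooperated
        stakes = 1.0
        for layer in range(layers):
            a, b = iterated_pd(tit_for_tat, always_defect, rounds)
            print(f"layer {layer}: stakes x{stakes:.2f}, "
                  f"payoffs ({a * stakes:.0f}, {b * stakes:.0f})")
            stakes *= (a + b) / best  # a bad tussle below shrinks the pie above

    nested_tussle()

Running it shows the point of the analogy: a defect-heavy outcome at the facilities layer leaves very little worth tussling over at the layers built on top of it.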
From day at std.com Mon May 21 20:05:07 2007
From: day at std.com (John Day)
Date: Mon, 21 May 2007 23:05:07 -0400
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4652469D.8040301@reed.com>
References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk> <4652469D.8040301@reed.com>
Message-ID:

I think what he was trying to say (taking it from the other direction) was that the Internet succeeded by completely breaking with the status quo of what constituted a network in 1970 and never cooperating with it. To save the Internet you will have to do the same thing. Cooperation (tussling) will only lead to being coopted and ultimate failure. This is what the big power players are very good at.

Take care,
John

At 21:25 -0400 2007/05/21, David P. Reed wrote:
>I'm now completely confused. Perhaps those who understand the
>"tussle" principle could tease out these concepts in a way that the
>rest of us can understand? A small start would be explaining in
>what way "tussle is inherently recursive"?
>[...]
From lynne at telemuse.net Mon May 21 20:46:02 2007
From: lynne at telemuse.net (Lynne Jolitz)
Date: Mon, 21 May 2007 20:46:02 -0700
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To:
Message-ID: <002001c79c23$b800d840$8d01a8c0@telemuse.net>

Wonderful discussion Tom! And very apropos. One additional item. You are presuming that tussle works symmetrically. It doesn't necessarily have to. Tussle can have a hidden unfair advantage element depending on who begins the game, for example. Symmetry in this case is not maintained. There are other examples. Physics is a good teacher.

Lynne Jolitz.

----
We use SpamQuiz. If your ISP didn't make the grade try http://lynne.telemuse.net

> -----Original Message-----
> From: end2end-interest-bounces at postel.org
> [mailto:end2end-interest-bounces at postel.org]On Behalf Of Tom Vest
> Sent: Monday, May 21, 2007 6:09 PM
> To: Bob Briscoe
> Cc: David P. Reed; end2end-interest list
> Subject: Re: [e2e] Time for a new Internet Protocol
>
> Being a big fan and frequent user/abuser of the tussle concept, let
> me be the first person to observe some obvious problems that follow
> from using it as a normative principle: [...]
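Jolitz's asymmetry point also has a textbook illustration. The sketch below (added here, not from the thread) uses the simplest sequential bargaining game, the ultimatum game, in which two otherwise identical players get radically different shares purely because one of them moves first; the pie size and the responder's acceptance threshold are hypothetical values.

    # A minimal sketch of the point that tussle need not be symmetric:
    # in a one-shot ultimatum game, whoever moves first can claim nearly
    # the whole pie, so move order alone breaks the symmetry between
    # otherwise identical players. All numbers are hypothetical.

    def ultimatum(pie=100, responder_min=1):
        # the proposer offers the smallest share a rational responder accepts
        offer = responder_min
        return pie - offer, offer  # (first mover's share, second mover's share)

    print(ultimatum())         # (99, 1)
    print(ultimatum()[::-1])   # same game with roles swapped: (1, 99)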
From touch at ISI.EDU Mon May 21 21:11:23 2007
From: touch at ISI.EDU (Joe Touch)
Date: Mon, 21 May 2007 21:11:23 -0700
Subject: [e2e] All things in moderation - please
In-Reply-To: <464DDD56.5090405@isi.edu>
References: <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu> <20070516234931.EA264872D5@mercury.lcs.mit.edu> <5.1.0.14.2.20070517205046.0375a900@boreas.isi.edu> <5.2.1.1.2.20070518092859.00ac4d60@boreas.isi.edu> <464DDD56.5090405@isi.edu>
Message-ID: <46526D6B.8080801@isi.edu>

Hi, all,

Another reminder. Self moderation would be appreciated. If necessary, other moderation can be (and has been, FWIW) imposed.

Joe (as list admin)

Joe Touch wrote:
> A note to all on the list:
>
> Please keep the language and tone civil. Those who do not risk having
> their posts moderated (which means 24+ hour delay on their posts).
>
> Rather than highlighting individuals involved, they have been contacted
> privately.
>
> Thanks,
>
> Joe (as list admin)

From huitema at windows.microsoft.com Mon May 21 23:14:35 2007
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Mon, 21 May 2007 23:14:35 -0700
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <4652469D.8040301@reed.com>
References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk> <4652469D.8040301@reed.com>
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D0640498E6FC@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> I'm now completely confused. Perhaps those who understand the
> "tussle" principle could tease out these concepts in a way that the
> rest of us can understand?

Clark, D., Wroclawski, J., Sollins, K., Braden, R., "Tussle in Cyberspace: Defining Tomorrow's Internet". ACM SIGCOMM 2002, August 2002.
http://www.acm.org/sigs/sigcomm/sigcomm2002/papers/tussle.pdf

-- Christian Huitema

From Jon.Crowcroft at cl.cam.ac.uk Tue May 22 02:23:00 2007
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Tue, 22 May 2007 10:23:00 +0100
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To: <97B67348-C6C7-4065-AEC8-C9DB997F44B4@pch.net>
References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk> <4652469D.8040301@reed.com> <97B67348-C6C7-4065-AEC8-C9DB997F44B4@pch.net>
Message-ID:

Tussles:

Network architectures have levers that interface between the consumer and the producer and, in the internet case, between competitors. These levers implement tussles.

Some examples of levers:

1. Congestion control is a lever which matches demand and supply on a short time frame, while trying to create (however illusory) some notion of fairness/transparency (neutrality?) as in Frank Kelly's dual model (or the Caltech optimization formulation).

2. BGP policy control is a lever that controls ingress/egress/transit and creates ways for businesses to compete or cooperate (peering or customer/provider) at the aggregate level (allegedly - of course, many policies are no such thing).

3. Obviously, recursive levers exist, such as the interaction between TCP and TE, or the interaction between IGP and BGP (Hot Potato versus other policy).

4. Since levers are coupled (e.g. a change of route changes latency, and selects different bottlenecks, so changes TCP throughput), mutual recursion between tussles is obviously a given.

Change the architecture (e.g. do source routing) and you may change the levers (allow users to bypass BGP or TE), and you may make some tussles collapse (remove a possible market), or you may improve the market efficiency (e.g. by giving more information so that the market operates better), depending on the details.

Of course all this assumes that a market (in networks) is a good thing, which some people have asserted, but some people may just be (in the words of Some Like it Hot) getting too big for their boots...

Some reasons markets are inefficient:

information hiding (BGP is good at this, which may be good or bad)
mismatch in timescales (TE v. TCP)
cartels (NANOG, anyone?)
shortages, esp. false scarcity (address space is a pretty good example)
over regulation
under regulation
etc etc

Some reasons to be cheerful, part 3.

Most people in the world don't use the internet, so there's a big opportunity to build something useful for the 4/5 of the world's population for whom all this chat is irrelevant and other things (buildings and food, maybe peace) matter more right now.

Some countries that are quite big don't subscribe to markets being the only way to do things, and some of the same countries build their own quite good and quite cheap routers (can you say China).

The weather is quite good and I'm going on vacation... ciao.

In missive <97B67348-C6C7-4065-AEC8-C9DB997F44B4 at pch.net>, Tom Vest typed:

>>Since I just made the idea up, I'll be happy to clarify -- perhaps
>>that'll make it easier for others to spot the flaws in my logic.
>>
>>The idea is a nested game, where two tusslers are tussling over the
>>exploitation/control of a given interface, the value of which is
>>determined by the outcome of a more basic/encompassing game, one in
>>which two tusslers are tussling.... repeat ad infinitum.
>>
>>Replace the word "game" with your favorite approximation of the
>>OSI/protocol stack, and you arrive at what I had in mind. [...]

cheers

jon
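Crowcroft's first lever -- congestion control as a match between demand and supply -- has a standard formalization in Kelly's network utility maximization: each source s picks a rate x_s to maximize the sum of utilities U_s(x_s) subject to link capacities, and the dual variables act as congestion prices. The sketch below is an illustration added here, not code from the thread: it runs the primal-dual dynamics for two flows with weighted log utilities sharing one link, and the capacity, weights, and step sizes are made-up values.

    # A minimal sketch of Kelly's dual model: two flows share one link of
    # capacity C, each with utility U_s(x) = w_s * log(x) (proportional
    # fairness). Rates chase U_s'(x_s) = price; the link price p rises
    # whenever demand exceeds capacity. All constants are hypothetical.

    C = 10.0                   # link capacity
    w = [1.0, 2.0]             # per-flow utility weights
    x = [1.0, 1.0]             # per-flow rates
    p = 0.1                    # link congestion price (dual variable)
    GAMMA, KAPPA = 0.05, 0.05  # primal and dual step sizes

    for _ in range(5000):
        # primal step: each flow moves toward marginal utility = price
        x = [max(1e-6, xs + GAMMA * (ws / xs - p)) for xs, ws in zip(x, w)]
        # dual step: the price rises iff aggregate demand exceeds capacity
        p = max(1e-6, p + KAPPA * (sum(x) - C))

    # Equilibrium is x_s = w_s / p with sum(x) = C, i.e. capacity split in
    # proportion to the weights: roughly [3.33, 6.67] at price p ~ 0.3.
    print([round(xs, 2) for xs in x], round(p, 3))

This is the sense in which the lever "matches demand and supply on a short time frame": the price is the architecture's bargaining signal, and changing how it is computed, or who gets to see it, changes who wins the tussle.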
From dpreed at reed.com Tue May 22 05:28:53 2007
From: dpreed at reed.com (David P. Reed)
Date: Tue, 22 May 2007 08:28:53 -0400
Subject: [e2e] Time for a new Internet Protocol
In-Reply-To:
References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk> <4652469D.8040301@reed.com> <97B67348-C6C7-4065-AEC8-C9DB997F44B4@pch.net>
Message-ID: <4652E205.1030407@reed.com>

Can tussles be quantified? In principle if not in practice? Levers in physics can, which is why I am asking.

The reason for asking is to understand whether the "tussle" idea is like the "end to end argument" (which was originally derived from observation of its use in practice, proposed as a general structure for thinking about designs and modularity rather than as a quantitative tool, but evolved to become a principle of sorts, and even came to be quantified in the language of "real options" by Mark Gaynor, and in other terms in the language of "system dynamics" by Charlie Fine's Clockspeed concept).

I think it would be great if the tussle concept could migrate from a descriptive term to a term of art or even quantified analysis. But since I don't understand the term well enough to use it correctly in the manner meant by its advocates, I urge them (not just the original authors) to consider it.

Jon Crowcroft wrote:
> Tussles:
>
> Network architectures have levers that
> interface between the consumer and the producer
> and in the internet case, between competitors -
>
> These levers implement tussles [...]
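The quantification Reed asks about has at least one concrete form on the end-to-end side of his comparison. In Gaynor's real-options reading of the end-to-end argument (and in Baldwin and Clark's treatment of modularity), an open architecture is valuable because it lets many independent experiments run in parallel with only the best adopted, so its value grows like the expected maximum of N draws rather than the value of a single draw. The sketch below is an illustration added here, not from the thread, and the standard-normal payoff distribution is a hypothetical modeling choice.

    # A minimal sketch of the real-options view of openness: if an open
    # architecture permits n independent experiments and only the best is
    # adopted, its value scales like E[max of n draws]. Payoffs are modeled
    # as standard normals purely for illustration.

    import random

    def option_value(n, trials=20000):
        # Monte-Carlo estimate of E[max of n i.i.d. N(0,1) payoffs]
        total = 0.0
        for _ in range(trials):
            total += max(random.gauss(0.0, 1.0) for _ in range(n))
        return total / trials

    for n in (1, 2, 4, 8, 16):
        print(n, round(option_value(n), 2))
    # roughly: 0.00, 0.56, 1.03, 1.42, 1.77 -- the growing premium is one
    # way to put a number on what an architecture's openness is worth.

Whether "tussle" admits a similar quantification is exactly the open question in the message above; this only shows that the end-to-end argument, at least, acquired one.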
> >>>>> > >>>>> You want to use the infrastructure those nasty routerheads have > >>>>> invested in, presumably to benefit from the network effect their > >>>>> investments (and your previous inventiveness) helped to create. > >>>>> And if they try to stop you, are they not justified? What is the > >>>>> difference then between your traffic and an attack - from /their/ > >>>>> point of view? > >>>>> > >>>>> Or are you claiming a higher moral right to abuse the policies > >>>>> they impose on their networks because you have honourable > >>>>> intentions, in /your/ opinion? Universal connectivity isn't a > >>>>> human right that trumps their policies. It's just something you > >>>>> (& I) care about a lot. Isn't this getting close to an analogy > >>>>> with animal rights activists conspiring to kill vivisectionists? > >>>>> > >>>>> Reversing this, what if someone launches a DoS attack against an > >>>>> unforeseen vulnerability in your new Internet? Would your > >>>>> architecture never allow it to be blocked, because that would > >>>>> damage universal connectivity? > >>>>> > >>>>> I think you need to take a step back and reconsider the > >>>>> aspersions you're casting on routerheads. They understand the > >>>>> value of universal connectivity too. But they also understand the > >>>>> higher value of some connectivities over others. Given the tools > >>>>> they have at their disposal right now, the best they can do is > >>>>> block some stuff to keep other stuff going. It's as much the > >>>>> fault of you and me that they have no other option, as it is > >>>>> their fault for blocking stuff. > >>>>> > >>>>> You are blaming operators for acting in their own self-interest. > >>>>> Shouldn't you blame the designers of the architecture for not > >>>>> expecting operators to act in their own interests? Again, what is > >>>>> your argument against 'Tussle in Cyberspace'? > >>>>> > >>>>> > >>>>> Bob > >>>>> > >>>>> > >>>>> > >>>> > >>>> > >> > > cheers > > jon > > > From peyman at MIT.EDU Tue May 22 11:23:11 2007 From: peyman at MIT.EDU (Peyman Faratin) Date: Tue, 22 May 2007 14:23:11 -0400 Subject: [e2e] end2end-interest Digest, Vol 39, Issue 32 In-Reply-To: <46521288.7010200@reed.com> References: <411A6C00-4CB8-4258-98C2-267D5F7FE480@mit.edu> <46521288.7010200@reed.com> Message-ID: No, it does not make me feel superior. In fact it has a negative effect to see that after 35 years the networking community has failed to produce thoughtful insights into a problem as fundamental as the economics and regulation of networks. The arguments offered are often quasi-economic, ad-hoc, private and inconsistent in both terminology and historical facts. Not scientific, and more akin to Scientology. Economics and regulation of institutions (not just markets) have preoccupied society for centuries, and concepts of natural monopoly, externalities, elasticities, return to scale, incentives, discrimination, utility, costs, marginalism, institutions, risk, market structure, market power, etc. have come to represent phenomena, and to be captured as primitives of models, that have a _shared_ meaning that permits some semblance of scientific method to be applied. Clearly, economics is not complete, has methodological problems, and has a long way to go, but as academics it is our responsibility to bring clarity to problems through the scientific method. Your posting was not only ambiguous in its usage of the concepts, but was also incomplete and incorrect in its interpretation of historical facts.
As I mentioned, there was asymmetric regulation of PSTN-IP overlay settlements that helped IP in the early days, before infrastructure-based competition could begin and the new sunk-cost economics could begin to play out. In fact even this is an incomplete picture. Usage-insensitive pricing, interconnection standards (at layers 2 and 3), an installed base of elastic applications (email and http), economies of scale of transit contracts, competitive backbones, and emerging competition in access technology markets are just a few other first-order reasons why IP scaled, in addition to the relative marginal costs induced by asymmetric regulation. Some even argue that lack of security was a first-order contributor to scaling. My post was intended to bring clarity to the discussion and provide references to those interested, and not be an attack. I simply do not share many of your economic conclusions. Ontology is the first place disciplines meet. Economic colleagues tell me that anti-trust lawyers have now, after two or so decades, come to finally understand the _basic_ concept of price discrimination. The truth is that the science of network economics (or for that matter any substantive discipline) is not unlike the economics of the infrastructure itself - it has high fixed costs of learning the science, a cost which is often not recoverable (i.e. it is sunk) given CS departments do not incentivize truly multi-disciplinary research. So people substitute low-cost alternatives. It was never my intention to be flaming people on this list. It is a reflection on the state of our science, or lack thereof. best Peyman p.s. again, some useful texts [1] H. E. Nuechterlein and P.J. Weiser (2005): Digital Crossroads: American Telecommunications Policy in the Internet Age, MIT Press, Cambridge, MA, US [2] Handbook of Telecommunications Economics, Technology Evolution and the Internet, Vol. 2, S.K. Majumdar, I. Vogelsang and M. Cave (eds), Elsevier, 289-364, 2005 [3] J. Tirole (1988): Industrial Organization, MIT Press, Cambridge, MA, US [4] J.J. Laffont and J. Tirole (2001): Competition in Telecommunications, MIT Press, Cambridge, MA, US. On May 21, 2007, at 5:43 PM, David P. Reed wrote: > Insulting the Media Lab as a playground is rather unnecessary, and > since I claimed no credibility from where I work, it really sounds > rather stupid as a rhetorical device. Does it make you feel superior? > > I got my terminology regarding network externalities and increasing > returns from discussions and writings with economists and business > school professors. It's very possible that I'm wrong in my usage - > and I'm happy to be corrected. > > However, I didn't claim that network externalities were the *same* > as increasing returns to scale. My fault for a non-parallel > construction in that sentence, which might suggest that I thought > that they were the same thing. They are different, and *both* > apply to my argument. > > I was using the economist term "network externalities" not the > regulatory law term. Sorry to be confusing if I was. If you are > instead taking the opportunity for gratuitous insult, my skin is > thick. > > > > Peyman Faratin wrote: >>> >> >> David >> >> I am not sure whether the folks are building computers from sand >> in the playground of the media lab, but I can say that you are >> "inventing your own science from scratch". >> Network externality is _not_ the same concept as increasing return >> to scale.
One is to do with (desirable/undesirable) side-effects >> of actions of one agent on another not in the original contract >> (externalities) and the other is to do with efficiency gains in >> production of a _single_ firm (where the average cost of >> production diminishes with increasing quantities produced). The >> marginal cost of checking these facts is insignificant in this day >> and age of google and wikipedia. >> Regulation and economics of networks are non-trivial and require >> attention to the details of the arguments that people so freely >> misrepresent. Government regulation did have a very significant >> impact on the Internet (through differential settlement structures >> between interconnecting PSTN and early dialup ISPs). This >> government regulation of settlements in fact _helped_ Internet >> scaling, not to mention their public investment in the >> interchanges and the backbone. MSN was rolled out and could've >> tipped to become the dominant standard (as many other inferior >> technologies have done historically - VHS/Betamax, >> gasoline/steam, ...). See [1] and [2] for some more recent texts on the >> economics and regulation of the Internet. >> Determination of causality in an (un)regulated economy is a very >> non-trivial task and, like all sciences, the validity of an >> economic (and the accompanying regulatory) hypothesis/proposition >> is conditioned on the semantics of the model primitives >> (externalities, returns to scale, etc). The devil is in the details. >> >> best >> >> Peyman >> >> [1] H. E. Nuechterlein and P.J. Weiser (2005): /Digital Crossroads: >> American Telecommunications Policy in the Internet Age/, MIT >> Press, Cambridge, MA, US >> >> [2] Handbook of Telecommunications Economics, Technology Evolution >> and the Internet, Vol. 2, S.K. Majumdar, I. Vogelsang and M. Cave >> (eds), Elsevier, 289-364, 2005 >> >>>> >>> One can, at any time, create a non-interoperable network. >>> Corporations do it all the time - they create a boundary at their >>> corporate edge and block transit, entry, and exit of packets. >>> >>> That is not the Internet. It's a private network based on the >>> IP protocol stack. Things get muddier if there is limited >>> interconnection. Then, the Internet would be best defined as >>> the bounded system of endpoints that can each originate packets >>> to each other that *will* be delivered with best efforts. It's >>> hard to draw that boundary, but it exists. >>> >>> Given this view, I don't think government regulations played a >>> significant role in the growth of the Internet. We have had >>> lots of private networks, many larger than the Internet. I >>> know, for example, that Microsoft built a large X.25-based >>> network in 1985 to provide non-Internet dialup information >>> services for Windows. It was called MSN, and was designed to >>> compete with the large private AOL network. >>> >>> What the Internet had going for it was *scalability* and >>> *interoperability*. Metcalfe's Law and Reed's Law and other >>> "laws". Those created connectivity options that scaled faster >>> than private networks could. Economists call these "network >>> externalities" or "increasing returns to scale". >>> >>> Gov't regulation *could* have killed the Internet's >>> scalability. Easy ways would be to make interconnection of >>> networks a point of control that was taxed or monitored (e.g. if >>> trans-border connectivity were viewed as a point for US Customs >>> to inspect or if CALEA were implemented at peering points). >>> >>> But lacking that, AOL and MSN just could not compete with the >>> scalability of the Internet. >>> >>> That has nothing to do with competition to supply IP software >>> stacks in operating systems, or competition among Ethernet >>> hardware vendors. >>> >>> However, increasing returns to scale is not Destiny. The >>> Internet was not destined to become the sole network. But all >>> the members of the Internet (people who can communicate with >>> anyone else on it) would be nuts to choose a lesser network >>> unless they suffer badly enough to outweigh the collective benefits. >>> >>> Individualist hermits don't get this, I guess. If you want to >>> be left alone, and don't depend on anyone else, there are no >>> returns to scale for you at any scale. Grow your own bits in >>> the backyard, make your own computers from sand, invent your own >>> science from scratch. If the walls are high enough, perhaps you >>> can survive without connectivity.

From rbriscoe at jungle.bt.co.uk Tue May 22 15:17:52 2007 From: rbriscoe at jungle.bt.co.uk (Bob Briscoe) Date: Tue, 22 May 2007 23:17:52 +0100 Subject: [e2e] Time for a new Internet Protocol In-Reply-To: <4652E205.1030407@reed.com> References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk> <4652469D.8040301@reed.com> <97B67348-C6C7-4065-AEC8-C9DB997F44B4@pch.net> Message-ID: <5.2.1.1.2.20070522220029.047a84c8@pop3.jungle.bt.co.uk> David, At 13:28 22/05/2007, David P. Reed wrote: >Can tussles be quantified? In principle if not in practice? Levers in >physics can, which is why I am asking. The question "what class of players can control a lever?" isn't quantitative, but below I point to meagre attempts I've made in the past to make this more scientific. >The reason for asking is to understand whether the "tussle" idea is like >the "end to end argument" (which was originally derived from observation >of its use in practice, proposed as a general structure for thinking about >designs and modularity, not as a quantitative tool, but evolved to become >a principle of sorts, and even to be quantified in the language of "real >options" by Mark Gaynor, and in other words in the language of "system >dynamics" by Charlie Fine's Clockspeed concept). I have tried to start making the articulation of tussle more exact in my own small way. It relates (loosely) to clockspeed actually, tho I hadn't seen Charlie's work when I wrote it. My early thoughts on a QoS/congestion control example (a tech report actually mostly written in Mar 2002 before the tussle paper was published) are here: Market Managed Multi-service Internet (M3I): Architecture Principles In particular section 2.3.3 "Control assumptions in QoS technologies" A synopsis: * A protocol allows a party without inherent control to influence the party with inherent control. * Between signals, the party with inherent control is in charge. * We can place each QoS architecture's assumptions about who is in control on this spectrum, based on the time granularity of the protocols (per packet, per flow, per SLA etc.) * and more insights, but you can read them yourself... * But first the architecture has to be 'designed for tussle', which means that the main players in conflict both have access to the same control information, with the same timeliness, and both have the /ability/ to control some feature (a good example is my own re-feedback, which was of course explicitly designed with this intent). * Then, once either of two conflicting players can take control, outside forces can determine what actually happens in different parts of the net at different times. Where "outside forces" = market selection, govt regulation, social control etc.
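For reference, the two scaling "laws" invoked in the quoted text above are usually stated as follows (a standard formulation in my own notation, not taken from the thread; n is the number of members of the network):

\[
V_{\mathrm{Metcalfe}}(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \sim n^2 ,
\qquad
V_{\mathrm{Reed}}(n) \propto 2^n - n - 1 ,
\]

counting the possible pairwise links and the possible multi-member subgroups respectively. Either way, value per member grows with n, which is the "increasing returns to scale" being claimed: a network twice the size is more than twice as valuable, so members of the larger open internetwork have ever less reason to defect to a smaller private one.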
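A reader's toy sketch of the control spectrum in that synopsis (the mechanism names and timescales below are illustrative assumptions, not figures from the M3I report):

# Place each QoS mechanism on a control spectrum by the time granularity
# of its protocol signals. Between signals, the party with inherent
# control is in charge, so finer-grained signalling gives the other
# party more frequent influence.
SIGNAL_INTERVAL_SECONDS = {
    "per-packet congestion marking (ECN-style)": 1e-3,
    "per-flow admission/reservation (RSVP-style)": 1e0,
    "per-SLA renegotiation (monthly contract)": 30 * 24 * 3600.0,
}

def control_spectrum():
    """Return mechanisms sorted from finest to coarsest signal granularity."""
    return sorted(SIGNAL_INTERVAL_SECONDS.items(), key=lambda kv: kv[1])

for mechanism, interval in control_spectrum():
    print(f"{mechanism}: ~{interval:g}s between signals; between signals, "
          f"the party with inherent control is in charge")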
Altho the paper title is "Market Managed..." I was careful to ensure this was not the only option (so Jon's concern that not every big country in the world likes markets is catered for). But, we're actually a long way off an analytical understanding. Unlike the e2e principle, the original tussle paper doesn't really even have a good concrete example (there are some high-level arm-wavy examples, but not anything like as concrete as the TCP reliability example in the e2e principle paper). And certainly not enough to be able to guide any debate about what tussles are _not_ worth designing the architecture around. Or how we might trade off the cost of allowing for tussle against the benefit it might bring - e.g. is it worth the cost to feed end systems routing information so they can make source routing choices? Can a cut-down subset serve sufficiently? etc. To add to Jon's examples, in my original posting to you I gave a link to slideware I did back in 2004 to give some case studies. There are examples of DoS control, message routing systems, QoS and so on in there. Some others I've come across since: - the choice of link technologies that IP originally enabled (related to TV's point about tussles having already taken place below IP) - SIP end-end vs SIP proxy mode - IPSec transport vs tunnel mode - source routing vs router routing - and so on. The current Internet architecture has some key parts that I would contend aren't designed for tussle: - only network operators can route (e.g. by preventing end systems seeing routing information) - only end systems can control load (e.g. by hiding path feedback information from routers). >I think it would be great if the tussle concept could migrate from a >descriptive term to a term of art or even quantified analysis. But since >I don't understand the term enough to use it correctly in the manner meant >by its advocates, I urge them (not just the original authors) to consider it. We've just put a proposal in to the EU Future Internet call (to get our hands on those tax euros Vadim so loves ;), which includes developing analytical techniques to be able to assess how well an architecture is designed for tussle. And also to learn lessons from the success or otherwise of case studies from the past where tussle has intuitively been included in a design (or not). And whether/how/why this correlates with successful outcomes. So watch this space. Cheers Bob
>Jon Crowcroft wrote: >>Tussles: >> >>Network architectures have levers that interface between the consumer and >>the producer >>and in the internet case, between competitors - >>These levers implement tussles >> >>Some examples of levers: >> >>1. Congestion control is a lever which matches demand and supply on a >>short time frame, while trying to create (however illusory) some notion >>of fairness/transparency (neutrality?) >>as in Frank Kelly's dual model (or the Caltech optimization formulation) >> >>2. BGP policy control is a lever that controls ingress/egress/transit and >>creates ways for businesses to compete or cooperate (peering or >>customer/provider) >>at the aggregate level (allegedly - of course, many policies are no such >>thing) >> >>3. Obviously, recursive levers exist, such as the interaction between TCP >>and TE, >>or the interaction between IGP and BGP (Hot Potato versus other policy). >> >>4. Since levers are coupled (e.g. change of route changes latency, and >>selects different bottlenecks, so changes TCP throughput), >>mutual recursion between tussles is obviously a given. >> >>Change the architecture (e.g. do source routing) >>and you may change the levers (allow users to bypass BGP or TE) >>and you may make some tussles collapse (remove a possible market) or you >>may improve the market efficiency (e.g. by giving more information so >>that the market operates better) >>depending on the details. >> >>Of course all this assumes that a market (in networks) is a good thing >>which some people have asserted, but some people may just be (in the >>words of Some Like it Hot) getting too big for their boots... >> >>Some reasons markets are inefficient: >> information hiding (BGP is good at this, which may be good or bad) >> mismatch in timescales (TE v. TCP) >> cartels (NANOG, anyone?) >> shortages, esp. false scarcity (address space is a pretty good example) >> over-regulation >> under-regulation >> etc etc >> >>Some reasons to be cheerful, part 3: >> >> Most people in the world don't use the internet so there's a big >> opportunity to build something useful for the 4/5 of the world's >> population where all this chat is irrelevant and other things >> (buildings and food, maybe peace) matter more right now >> >> Some countries that are quite big don't subscribe to markets >> being the only way to do things >> and some of the same countries build their own quite good and >> quite cheap routers (can you say China) >> >> The weather is quite good and I'm going on vacation...ciao. >> >> >>In missive <97B67348-C6C7-4065-AEC8-C9DB997F44B4 at pch.net>, Tom Vest typed: >> >> >>On May 21, 2007, at 9:25 PM, David P. Reed wrote: >> >> >> >>> I'm now completely confused. Perhaps those who understand the >> >>> "tussle" principle could tease out these concepts in a way that the >> >>> rest of us can understand? A small start would be explaining in >> >>> what way "tussle is inherently recursive"? >> >> >> >>Since I just made the idea up, I'll be happy to clarify -- perhaps >> >>that'll make it easier for others to spot the flaws in my logic. >> >> >> >>The idea is a nested game, where two tusslers are tussling over the >> >>exploitation/control of a given interface, the value of which is >> >>determined by the outcome of a more basic/encompassing game, one in >> >>which two tusslers are tussling.... repeat ad infinitum. >> >> >> >>Replace the word "game" with your favorite approximation of >> >>OSI/protocol stack, and you arrive at what I had in mind. >> >> >> >>[...]
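Jon's first lever above name-checks Frank Kelly's dual model; for readers who don't have it to hand, the underlying network utility maximisation problem is usually stated as follows (standard notation, mine rather than anything posted in the thread):

\[
\max_{x \ge 0} \; \sum_{s} U_s(x_s)
\quad \text{subject to} \quad
\sum_{s \,:\, l \in r(s)} x_s \;\le\; c_l \quad \text{for every link } l,
\]

where \(x_s\) is the rate of source \(s\) on route \(r(s)\), \(U_s\) is a concave utility, and \(c_l\) is the capacity of link \(l\). The dual decomposes this between the players: each link \(l\) generates a congestion price \(\lambda_l\) from its own load, and each source independently solves

\[
\max_{x_s \ge 0} \; U_s(x_s) - x_s \sum_{l \in r(s)} \lambda_l ,
\]

which is the precise sense in which congestion control is a lever matching demand (utility) to supply (capacity) on a short timescale.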
____________________________________________________________________________ Bob Briscoe, Networks Research Centre, BT Research B54/77 Adastral Park,Martlesham Heath,Ipswich,IP5 3RE,UK. +44 1473 645196 From rbriscoe at jungle.bt.co.uk Wed May 30 11:21:51 2007 From: rbriscoe at jungle.bt.co.uk (Bob Briscoe) Date: Wed, 30 May 2007 19:21:51 +0100 Subject: [e2e] Time for a new Internet Protocol In-Reply-To: References: <5.2.1.1.2.20070521191233.05982d30@pop3.jungle.bt.co.uk> Message-ID: <5.2.1.1.2.20070530191238.045f0b28@pop3.jungle.bt.co.uk> Tom, Been away from mail... At 02:09 22/05/2007, Tom Vest wrote: >Being a big fan and frequent user/abuser of the tussle concept, let >me be the first person to observe some obvious problems that follow >from using it as a normative principle: > >1. Although the concept of tussle is inherently recursive, it's >typically only used (e.g., by network architects and systems theory >people) to discuss the upper elements of the protocol/service stack. >Too often people forget, or maybe fail to notice, that the Internet >itself only exists in its "current canonical form" in places when & >where a prior/foundational tussle over control of communications >facilities/infrastructure inputs resulted in certain sorts of >outcomes. In places where all or almost all of the interfaces are >hidden/controlled by a single monolithic entity (e.g., like >hierarchical/horizontal infrastructure segments within a territorial >monopoly PSTN), tussle may still exist, but it has approximately zero >impact/significance to outsiders. See slide 24 of the tussle case studies I linked to earlier in the thread: The layered tiles and the 'while loop' represent your recursion - previous infrastructure products (architectures) on which the present tussle is being played out; the next one will play out on this one. >2. As soon as "tusslers" become aware of the idea, they tend to >incorporate it, rhetorically if not operationally, into their future >actions.
Granting that I am no game theory expert (and would love to >hear a better-informed comparison here), this seems like just another >example of an iterative bargaining game, à la the Prisoner's Dilemma. >An appeal to the reasonableness of a "tussle-friendly outcome" is >just as likely as not to be a gambit to "win" a larger piece of the >pie... unless maybe the appeal is coming from someone you already >trust for some unrelated reason. Yes. In my case, you just have to work out whether I'm doing that or not :) I work for a telco, but I've got a beard, and I have been seen with a little black badge with a big red 'A' in a circle. But maybe I've sold out? Maybe I don't know I've sold out? Maybe I'm a double agent? Triple? Bob ____________________________________________________________________________ Bob Briscoe, Networks Research Centre, BT Research B54/77 Adastral Park,Martlesham Heath,Ipswich,IP5 3RE,UK. +44 1473 645196