From jeanjour at comcast.net Thu May 2 06:22:42 2013 From: jeanjour at comcast.net (John Day) Date: Thu, 2 May 2013 09:22:42 -0400 Subject: [e2e] Port numbers in the network layer? In-Reply-To: <517DE887.5060709@isi.edu> References: <5176DFFE.50404@isi.edu> <1366853677.483822482@apps.rackspace.com> <51797A0E.5050608@isi.edu> <517B1F0C.3080604@isi.edu> <517DE887.5060709@isi.edu> Message-ID: Sorry, I got a bit busy and then this got lost on my desktop. At 8:27 PM -0700 4/28/13, Joe Touch wrote: >On 4/26/2013 9:11 PM, John Day wrote: >>At 5:42 PM -0700 4/26/13, Joe Touch wrote: >>>On 4/26/2013 3:30 PM, John Day wrote: >>>>At 11:46 AM -0700 4/25/13, Joe Touch wrote: >>>>>One issue is the need for ports; there's a summary of that here: >>>>> >>>>>http://tools.ietf.org/html/draft-ietf-tsvwg-port-use-01 >>>>> >>>>>Its use evolved to overload: >>>>> >>>>> - part of the stream identifier used to associate groups >>>>> of packets (with IP addresses) >>>> >>>>The source and destination port-ids form a connection(flow)-identifier. >>> >>>In the Internet, the IP addresses are part of that connection ID, >>>called the socket pair. >> >>I was speaking generally. With point to point links, you can have >>multiple connections but no need for addresses. But a connection-id is >>still necessary. > >If you want to go that far off the map, connection-id would be >needed only if that multipoint link supported different connections, >either concurrently or in series. No, one does not need to go as far as multipoint links. All that is needed is multiple applications trying to use the same media on a point-to-point link. In that case, a connection-id or flow-id is required; however, addresses are not required. It is true that if it is a multi-drop or multi-point link, then addresses are required. Basically, addresses are necessary if "the wire" ;-) has more than 2 ends, as wireless clearly does! > >>Strictly speaking, the proper way to define connection-id is the >>concatenation of the port-ids to form a connection-id that is unique >>within the scope of the pair of source/destination addresses. > >Your definition implies that a connection ID differentiates >connections only within one pair of source/destination addresses. >That definition precludes connections that span multiple addresses, >e.g., striping, or those that can shift addresses. Not in the least; owing to the distinctions noted earlier in this thread, so-called striping and changing addresses are handled quite elegantly. > >>Given that IP is in a different layer than TCP that would seem to be the >>definition that is consistent with that construction. > >The TCP header includes the IP pseudoheader - notably for the >purpose of connection identification. So TCP is already inconsistent >with your proposed definition (it includes addresses in its >connection ID, not merely as context for its connection ID), and >consistent with the way I already described connections. Actually not. That is one of the more interesting aspects. The argument for the pseudoheader has always been a bit shaky. No other Transport Protocol except UDP, either within or outside the IETF, found the need for it. There have been many discussions about removing it, but as you know tradition has always won out. But putting that aside, separating IP from TCP caused more problems than it solved. Frankly, the range of experience with this class of protocols and a careful analysis of their structure indicate that TCP was split in the wrong direction.
It would be much more productive to separate it along lines of control and data, i.e. data transfer and feedback control. Then UDP is simply a degenerate case. > >>According to this socket pair definition then, is the connection id a >>Network Layer identifier or a Transport Layer identifier? > >Transport layer ID that is based on a transport header that subsumes >a subset of the network header. Huh? Next you are going to tell me that it is "small, green and split three ways"! ;-) It is also worth noting that, for security reasons, it is important that identifiers shared between layers not be carried in protocol and that identifiers used by a protocol machine to distinguish flows be identifiers it created. >>>>> - an identifier for the upper layer protocol (service) >>>>> (the dest port, in all UDP packets or in the TCP SYN) >>>> >>>>Actually, it does not identify the upper protocol or application, but a >>>>path to the upper protocol or application. It is significant that this >>>>identified the type but there was no means (unless application specific) >>>>to establish communication with a particular instance of an application. >>> >>>No protocol has a "means" to initiate connection with anything that >>>isn't waiting for it on the other end. In this case, it would be an >>>application listening on the socket. The assumption is both that the >>>application is listening AND that the service (application protocol) >>>is as expected. >> >>Do you mean the something has to be listening before the requesting >>application initiates the connection? In that case, it is not true. It >>is possible to do and there are protocols operating today that do it. >>Admittedly, they are not Internet protocols but that doesn't matter. > >Something has to listen in order for communication to proceed. That >event need not precede the request to initiate from the other end, >but it does precede the connection. I tried to be careful in how I worded that. It is true that something must respond to a request for connection and create a binding between the requested application and the flow that may be created. It is not necessary for the application to have done anything. >... >>>>>Similarly we could allocate a new ethertype to "IPv4:TCP" or >>>>>"IPv4:SCTP". >>>> >>>>What about IPv4:TCP1 thru IPv4:TCP1000? >>> >>>I was clearly trying to extrapolate only different protocol stack >>>combinations. If you're referring to variants of TCP, those could >>>either be in a single ethertype or 1000 different ones. If you're >>>referring to specific connections, that's a different semantic that >>>doesn't map to my example about ethertypes. >> >>If Protocol-id is to identify the Protocol in the layer above (and not >>just the syntax), then I would assume that I should be able to have >>multiple instances of the same protocol. > >It might be informative for you to explain that logic. See below. >>For example, I might want a >>different one for different security domains or something like that. So >>why not have a few hundred of the same protocol? > >It's a protocol-type, not a protocol-ID; types to not typically >indicate instances. If you want an instance ID at that layer (i.e., >within IP to demux different TCP instances rather than connections >within one instance) you would need another field for that purpose. Ahhh, so the "protocol-id" field in IP is not a protocol-id field. Actually, that was my point. Given how it is used, it can't be a protocol-id field.
That it is used to identify the kind of protocol in the layer above, i.e. as you say, the type. But why is it necessary to identify the "protocol-type"? Why are there such fields in IP and Ethernet? "Type" is not really required for multiplexing. There are much more flexible ways to identify flows for multiplexing. Why does the receiver need to know the type of protocol? Perhaps so it knows how to interpret the header in the layer above? > >>>>>So any "mix and match" architecture needs to have some indication of >>>>>what the particular mix is, but it need not be cascaded layer-by-layer. >>>> >>>>If I understand what you mean by "some indication" I would have to say >>>>no it is not required. >>> >>>"some indication" means either a preexisting agreement at the >>>endpoints or a label either in-band or out-of-band. You can't >>>differentiate various types of stack combinations without any >>>information. >> >>You would be surprised what can happen. > >Assertions do not surprise me; only counterexamples. > >;-) Indeed. But this email is already too long. ;-) I think there is a reference someplace for this. I will have to find it. Take care, John >Joe From touch at isi.edu Thu May 2 10:13:58 2013 From: touch at isi.edu (Joe Touch) Date: Thu, 02 May 2013 10:13:58 -0700 Subject: [e2e] Port numbers in the network layer? In-Reply-To: References: <5176DFFE.50404@isi.edu> <1366853677.483822482@apps.rackspace.com> <51797A0E.5050608@isi.edu> <517B1F0C.3080604@isi.edu> <517DE887.5060709@isi.edu> Message-ID: <51829ED6.5070301@isi.edu> Hi, John, On 5/2/2013 6:22 AM, John Day wrote: > Sorry, I got a bit busy and then this got lost on my desktop. ... >> If you want to go that far off the map, connection-id would be needed >> only if that multipoint link supported different connections, either >> concurrently or in series. > > No, one does not need to go as far as multipoint links. Agreed; I mistyped; I meant "only if that point-to-point link". > All that is > needed are multiple applications trying to use the same media on a > point-to-point link. In that case, a connection-id or flow-id is > required; however, addresses are not required. It is true, that if it > is a multi-drop or multi-point link, then addresses are required. > Basically, addresses are necessary, if 'the wire" ;-) has more than 2 > ends! as wireless clearly does. Wireless includes point-to-point links, in which the information is either collimated (e.g., space-based laser links) or otherwise restricted within the link layer (e.g., CDMA). But I think we now agree - connection ID is a connection demultiplexer within a context. It is required only if there is more than one connection to a given endpoint. An endpoint ID is an endpoint demultiplexer, and it is required only if there is more than one receiver on a link. >>> Strictly speaking, the proper way to define connection-id is the >>> concatenation of the port-ids to form a connection-id that is unique >>> within the scope of the pair of source/destination addresses. >> >> Your definition implies that a connection ID differentiates >> connections only within one pair of source/destination addresses. That >> definition precludes connections that span multiple addresses, e.g., >> striping, or those that can shift addresses. > > Not in the least, owing to the distinctions noted earlier in this > thread, so-called striping and changing addresses are handled quite > elegantly. 
In order to stripe connections or shift them across links or endpoints within a link, a connection ID needs to be context-independent, i.e., unique across the potentially shared uses. That's a distinct feature of some, but not all, connection IDs. >>> Given that IP is in a different layer than TCP that would seem to be the >>> definition that is consistent with that construction. >> >> The TCP header includes the IP pseudoheader - notably for the purpose >> of connection identification. So TCP is already inconsistent with your >> proposed definition (it includes addresses in its connection ID, not >> merely as context for its connection ID), and consistent with the way >> I already described connections. > > Actually not. That is one of the more interesting aspects. > > The argument for the pseudoheader has always been a bit shaky. No other > Transport Protocol except UDP either within or outside the IETF found > the need for it. There have been many discussions about removing it, > but as you know tradition has always won out. The pseudoheader is an artifact of the TCP/IP split, which isn't as clean as often claimed. It is used in other transport protocols built on IP, e.g., DCCP and SCTP. It is not a matter of tradition; it is deeply entrenched with the notion of endpoint and that this notion exists at two different layers that share at least some of the context (IP addresses). > But putting that aside, separating IP from TCP caused more problems than > it solved. Frankly, looking at the range of experience with this class > of protocols and a careful analysis of their structure indicates TCP was > split in the wrong direction. It would be much more productive to > separate it along lines of control and data. i.e. data transfer and > feedback control. Then UDP is simply a degenerate case. I'm not sure why you consider TCP not to have separate control and data, but if it's not distinct enough, consider the other protocols cited above. >>> According to this socket pair definition then, is the connection id a >>> Network Layer identifier or a Transport Layer identifier? >> >> Transport layer ID that is based on a transport header that subsumes a >> subset of the network header. > > Huh? Next you are going to tell me that it is "small, green and split > three ways"! ;-) The transport layer flow is *defined* as the socket pair, which is defined in TCP (and used in other Internet transports) as combining the transport header context (port pair) with the IP context (IP address pair), the latter of which is part of the pseudoheader for that reason. > It is also worth noting that it is important that for security reasons, > that identifiers shared between layers not be carried in protocol and > that identifiers used by a protocol machine to distinguish flows be > identifiers it created. If the IDs are not carried within the messages, how are multiplexed messages to be demultiplexed? >>>>>> - an identifier for the upper layer protocol (service) >>>>>> (the dest port, in all UDP packets or in the TCP SYN) >>>>> >>>>> Actually, it does not identify the upper protocol or application, >>>>> but a >>>>> path to the upper protocol or application. It is significant that >>>>> this >>>>> identified the type but there was no means (unless application >>>>> specific) >>>>> to establish communication with a particular instance of an >>>>> application. >>>> >>>> No protocol has a "means" to initiate connection with anything that >>>> isn't waiting for it on the other end. 
In this case, it would be an >>>> application listening on the socket. The assumption is both that the >>>> application is listening AND that the service (application protocol) >>>> is as expected. >>> >>> Do you mean the something has to be listening before the requesting >>> application initiates the connection? In that case, it is not true. It >>> is possible to do and there are protocols operating today that do it. >>> Admittedly, they are not Internet protocols but that doesn't matter. >> >> Something has to listen in order for communication to proceed. That >> event need not precede the request to initiate from the other end, but >> it does precede the connection. > > I tried to be careful in how I worded that. It is true that something > must respond to a request for connection and create a binding between > the requested application and the flow that may be created. There is > not necessary for the application to have done anything. In the context of a transport protocol, the layer above that interacts with the protocol to create flows, respond to them, and send/receive messages is defined as the "application". That may or may not be L7. ... >>> For example, I might want a >>> different one for different security domains or something like that. So >>> why not have a few hundred of the same protocol? >> >> It's a protocol-type, not a protocol-ID; types to not typically >> indicate instances. If you want an instance ID at that layer (i.e., >> within IP to demux different TCP instances rather than connections >> within one instance) you would need another field for that purpose. > > Ahhh, so the "protocol-id" field in IP is not a protocol-id field. > Actually, that was my point. That depends on your parsing of "-id". I think you interpret "protocol-id" as meaning "protocol-instance-identifier", where the more common interpretation (AFAICT) is "protocol-type-identifier". That's because we don't typically have current cases with multiple instances of a single protocol. We have flows, but that's a different thing. There could be multiple protocol instances each with multiple flows. So I interpret "protocol-id" as I think most people do. > Given how it is used, it can't be a protocol-id field. It can't be a protocol instance field. But you haven't explained why this is important or useful. That's a separate thread, most likely. > That it is used > to identify the kind of protocol in the layer above, i.e. as you say, > the type. But why is it necessary to identify the "protocol-type"? Why > are there such fields in IP and Ethernet? "Type" is not really required > for multiplexing. There are much more flexible ways to identify flows > for multiplexing. Type is required to demultiplex different upper layer protocols (here, network layers) that share a lower layer protocol (here, ethernet). Again, type is distinct from flow. > Why does the receiver need to know the type of protocol? Perhaps so it > knows how to interpret the header in the layer above? Yes, that's what I said several times ;-) The only way around that is to indicate it out-of-band, e.g., in a different layer, and tie it to identifiers (demultiplexing of type or instance) at other layers or to a physical entity (a single physical link that isn't demuxed). The latter is more like a circuit. Joe From jeanjour at comcast.net Thu May 2 16:42:05 2013 From: jeanjour at comcast.net (John Day) Date: Thu, 2 May 2013 19:42:05 -0400 Subject: [e2e] Port numbers in the network layer? 
In-Reply-To: <51829ED6.5070301@isi.edu> References: <5176DFFE.50404@isi.edu> <1366853677.483822482@apps.rackspace.com> <51797A0E.5050608@isi.edu> <517B1F0C.3080604@isi.edu> <517DE887.5060709@isi.edu> <51829ED6.5070301@isi.edu> Message-ID: At 10:13 AM -0700 5/2/13, Joe Touch wrote: >Hi, John, > >On 5/2/2013 6:22 AM, John Day wrote: >>Sorry, I got a bit busy and then this got lost on my desktop. >... >>>If you want to go that far off the map, connection-id would be needed >>>only if that multipoint link supported different connections, either >>>concurrently or in series. >> >>No, one does not need to go as far as multipoint links. > >Agreed; I mistyped; I meant "only if that point-to-point link". Right. > >>All that is >>needed are multiple applications trying to use the same media on a >>point-to-point link. In that case, a connection-id or flow-id is >>required; however, addresses are not required. It is true, that if it >>is a multi-drop or multi-point link, then addresses are required. >>Basically, addresses are necessary, if 'the wire" ;-) has more than 2 >>ends! as wireless clearly does. > >Wireless includes point-to-point links, in which the information is >either collimated (e.g., space-based laser links) or otherwise >restricted within the link layer (e.g., CDMA). Indeed, wireless can be point-to-point as well. > >But I think we now agree - connection ID is a connection >demultiplexer within a context. It is required only if there is more >than one connection to a given endpoint. An endpoint ID is an >endpoint demultiplexer, and it is required only if there is more >than one receiver on a link. Excellent. > >>>>Strictly speaking, the proper way to define connection-id is the >>>>concatenation of the port-ids to form a connection-id that is unique >>>>within the scope of the pair of source/destination addresses. >>> >>>Your definition implies that a connection ID differentiates >>>connections only within one pair of source/destination addresses. That >>>definition precludes connections that span multiple addresses, e.g., >>>striping, or those that can shift addresses. >> >>Not in the least, owing to the distinctions noted earlier in this >>thread, so-called striping and changing addresses are handled quite >>elegantly. > >In order to stripe connections or shift them across links or >endpoints within a link, a connection ID needs to be >context-independent, i.e., unique across the potentially shared >uses. That's a distinct feature of some, but not all, connection IDs. Alas, but you are right. The constraints of software are sufficiently weak that unlike those of physics which are more constraining, it is still often possible to do it wrong. > >>>>Given that IP is in a different layer than TCP that would seem to be the >>>>definition that is consistent with that construction. >>> >>>The TCP header includes the IP pseudoheader - notably for the purpose >>>of connection identification. So TCP is already inconsistent with your >>>proposed definition (it includes addresses in its connection ID, not >>>merely as context for its connection ID), and consistent with the way >>>I already described connections. >> >>Actually not. That is one of the more interesting aspects. >> >>The argument for the pseudoheader has always been a bit shaky. No other >>Transport Protocol except UDP either within or outside the IETF found >>the need for it. There have been many discussions about removing it, >>but as you know tradition has always won out. 
> >The pseudoheader is an artifact of the TCP/IP split, which isn't as >clean as often claimed. > >It is used in other transport protocols built on IP, e.g., DCCP and SCTP. > >It is not a matter of tradition; it is deeply entrenched with the >notion of endpoint and that this notion exists at two different >layers that share at least some of the context (IP addresses). "Deeply entrenched," indeed. (Some would say a synonym for tradition.) Strongly reasoned, less so. As recently described by its author, it is this binding that thwarts mobility. Indeed this "notion" exists in two different layers. In fact, it is a general property of all layers. But I fail to see how that is an argument to require a pseudo-header. > >>But putting that aside, separating IP from TCP caused more problems than >>it solved. Frankly, looking at the range of experience with this class >>of protocols and a careful analysis of their structure indicates TCP was >>split in the wrong direction. It would be much more productive to >>separate it along lines of control and data. i.e. data transfer and >>feedback control. Then UDP is simply a degenerate case. > >I'm not sure why you consider TCP not to have separate control and >data, but if it's not distinct enough, consider the other protocols >cited above. It is far from distinct. The two are tightly bound by header if nothing else. Yes, most other protocols have not adopted that approach. But the coupling between the two is very very loose. > >>>>According to this socket pair definition then, is the connection id a >>>>Network Layer identifier or a Transport Layer identifier? >>> >>>Transport layer ID that is based on a transport header that subsumes a >>>subset of the network header. >> >>Huh? Next you are going to tell me that it is "small, green and split >>three ways"! ;-) > >The transport layer flow is *defined* as the socket pair, which is >defined in TCP (and used in other Internet transports) as combining >the transport header context (port pair) with the IP context (IP >address pair), the latter of which is part of the pseudoheader for >that reason. I realize this is the definition, and I would even agree with including the same error-check-code, if they were in the same layer. But since it is said they aren't, I fail to see the need and I definitely see the constraints. Is there some condition in which a router might put a TCP packet on a different IP packet? The only reason I can see for having it. > >>It is also worth noting that it is important that for security reasons, >>that identifiers shared between layers not be carried in protocol and >>that identifiers used by a protocol machine to distinguish flows be >>identifiers it created. > >If the IDs are not carried within the messages, how are multiplexed >messages to be demultiplexed? This was previously covered. > >>>>>>> - an identifier for the upper layer protocol (service) >>>>>>> (the dest port, in all UDP packets or in the TCP SYN) >>>>>> >>>>>>Actually, it does not identify the upper protocol or application, >>>>>>but a >>>>>>path to the upper protocol or application. It is significant that >>>>>>this >>>>>>identified the type but there was no means (unless application >>>>>>specific) >>>>>>to establish communication with a particular instance of an >>>>>>application. >>>>> >>>>>No protocol has a "means" to initiate connection with anything that >>>>>isn't waiting for it on the other end. In this case, it would be an >>>>>application listening on the socket. 
The assumption is both that the >>>>>application is listening AND that the service (application protocol) >>>>>is as expected. >>>> >>>>Do you mean the something has to be listening before the requesting >>>>application initiates the connection? In that case, it is not true. It >>>>is possible to do and there are protocols operating today that do it. >>>>Admittedly, they are not Internet protocols but that doesn't matter. >>> >>>Something has to listen in order for communication to proceed. That >>>event need not precede the request to initiate from the other end, but >>>it does precede the connection. >> >>I tried to be careful in how I worded that. It is true that something >>must respond to a request for connection and create a binding between >>the requested application and the flow that may be created. There is >>not necessary for the application to have done anything. > >In the context of a transport protocol, the layer above that >interacts with the protocol to create flows, respond to them, and >send/receive messages is defined as the "application". That may or >may not be L7. There is no L7 or L6. Those were fictions that were dispensed with in 1983. >... >>>>For example, I might want a >>>>different one for different security domains or something like that. So >>>>why not have a few hundred of the same protocol? >>> >>>It's a protocol-type, not a protocol-ID; types to not typically >>>indicate instances. If you want an instance ID at that layer (i.e., >>>within IP to demux different TCP instances rather than connections >>>within one instance) you would need another field for that purpose. >> >>Ahhh, so the "protocol-id" field in IP is not a protocol-id field. >>Actually, that was my point. > >That depends on your parsing of "-id". > >I think you interpret "protocol-id" as meaning >"protocol-instance-identifier", where the more common interpretation >(AFAICT) is "protocol-type-identifier". > >That's because we don't typically have current cases with multiple >instances of a single protocol. We have flows, but that's a >different thing. There could be multiple protocol instances each >with multiple flows. > >So I interpret "protocol-id" as I think most people do. Excellent. That was my point. The field has the wrong name. > >>Given how it is used, it can't be a protocol-id field. > >It can't be a protocol instance field. But you haven't explained why >this is important or useful. That's a separate thread, most likely. Actually no. It isn't useful and was only important to make the point that it was a "protocol-type" as opposed to merely a "protocol-id." If there was a "protocol-instance-id" I would think it would be more useful if one left the protocol out of it entirely. Of course, then it would be a port-id. ;-) > >>That it is used >>to identify the kind of protocol in the layer above, i.e. as you say, >>the type. But why is it necessary to identify the "protocol-type"? Why >>are there such fields in IP and Ethernet? "Type" is not really required >>for multiplexing. There are much more flexible ways to identify flows >>for multiplexing. > >Type is required to demultiplex different upper layer protocols >(here, network layers) that share a lower layer protocol (here, >ethernet). > >Again, type is distinct from flow. Most definitely. That is my point. > >>Why does the receiver need to know the type of protocol? Perhaps so it >>knows how to interpret the header in the layer above? > >Yes, that's what I said several times ;-) Excellent! then you agree! 
The purpose of the protocol-id field in IP is to identify the syntax of the encapsulated protocol. >The only way around that is to indicate it out-of-band, e.g., in a >different layer, and tie it to identifiers (demultiplexing of type >or instance) at other layers or to a physical entity (a single >physical link that isn't demuxed). The latter is more like a circuit. Not of my concern. Good discussion, Joe. Take care, John >Joe From touch at isi.edu Thu May 2 16:58:26 2013 From: touch at isi.edu (Joe Touch) Date: Thu, 02 May 2013 16:58:26 -0700 Subject: [e2e] Port numbers in the network layer? In-Reply-To: References: <5176DFFE.50404@isi.edu> <1366853677.483822482@apps.rackspace.com> <51797A0E.5050608@isi.edu> <517B1F0C.3080604@isi.edu> <517DE887.5060709@isi.edu> <51829ED6.5070301@isi.edu> Message-ID: <5182FDA2.6030003@isi.edu> Hi, John, On 5/2/2013 4:42 PM, John Day wrote: ... >>> The argument for the pseudoheader has always been a bit shaky. No other >>> Transport Protocol except UDP either within or outside the IETF found >>> the need for it. There have been many discussions about removing it, >>> but as you know tradition has always won out. >> >> The pseudoheader is an artifact of the TCP/IP split, which isn't as >> clean as often claimed. >> >> It is used in other transport protocols built on IP, e.g., DCCP and SCTP. >> >> It is not a matter of tradition; it is deeply entrenched with the >> notion of endpoint and that this notion exists at two different layers >> that share at least some of the context (IP addresses). > > "Deeply entrenched," indeed. (Some would say a synonym for tradition.) > Strongly reasoned, less so. As recently described by its author, it is > this binding that thwarts mobility. > > Indeed this "notion" exists in two different layers. In fact, it is a > general property of all layers. But I fail to see how that is an > argument to require a pseudo-header. It's an argument that it IS required in all current Internet transports. It is NOT an argument that it has to be required in a new transport protocol in the Internet, or in other new stacks. ... >>>>> According to this socket pair definition then, is the connection id a >>>>> Network Layer identifier or a Transport Layer identifier? >>>> >>>> Transport layer ID that is based on a transport header that subsumes a >>>> subset of the network header. >>> >>> Huh? Next you are going to tell me that it is "small, green and split >>> three ways"! ;-) >> >> The transport layer flow is *defined* as the socket pair, which is >> defined in TCP (and used in other Internet transports) as combining >> the transport header context (port pair) with the IP context (IP >> address pair), the latter of which is part of the pseudoheader for >> that reason. > > I realize this is the definition, and I would even agree with including > the same error-check-code, if they were in the same layer. But since it > is said they aren't, I fail to see the need and I definitely see the > constraints. > > Is there some condition in which a router might put a TCP packet on a > different IP packet? The only reason I can see for having it. A router should never do anything to the contents of an IP payload, IMO. However, isn't that the definition of how a NAT works? ... >> So I interpret "protocol-id" as I think most people do. > > Excellent. That was my point. The field has the wrong name. Yes, but when we're talking about existing protocols, the name is there.
New protocols might use more accurate terms, just as IPv6 has a "hopcount" instead of a "TTL". ... > If there was a "protocol-instance-id" I would think it would be more > useful if one left the protocol out of it entirely. Of course, then it > would be a port-id. ;-) The destination port ID in a SYN encodes the service protocol, which is a protocol-type-id. It isn't an instance-id; the instance-id is the combination of the IP addresses and port numbers taken as a set. >>> Why does the receiver need to know the type of protocol? Perhaps so it >>> knows how to interpret the header in the layer above? >> >> Yes, that's what I said several times ;-) > > Excellent! then you agree! The purpose of the protocol-id field in IP > is identify the syntax of the encapsulated protocol. Yes. Same for the destination port of the initial SYN in TCP. ... > Good discussion, Joe. Back at ya' ;-) Joe From jeanjour at comcast.net Sat May 4 15:25:21 2013 From: jeanjour at comcast.net (John Day) Date: Sat, 4 May 2013 18:25:21 -0400 Subject: [e2e] Port numbers in the network layer? In-Reply-To: <1367581994.059624141@apps.rackspace.com> References: <5176DFFE.50404@isi.edu> <1366853677.483822482@apps.rackspace.com> <51797A0E.5050608@isi.edu> <517B1F0C.3080604@isi.edu> <517DE887.5060709@isi.edu> <51829ED6.5070301@isi.edu> <5182FDA2.6030003@isi.edu> <1367581994.059624141@apps.rackspace.com> Message-ID: Thanks, Dave. Those emails clarify a lot. At 7:53 AM -0400 5/3/13, dpreed at reed.com wrote: >The binding of a pseudoheader does not thwart mobility. The >binding that thwarts mobility was the binding between the 32-bit >address and network physical topology. The actual authors of the >pseudoheader *idea* which was proposed in the same Marina del Rey >meeting where the split of layers was sketched (by about 5 of us).to >create TCP, UDP, .... on top of a common IP actually occurred at a >time when routing was not *defined* to be prefix-based. In fact, at >that same meeting, source routing (with route services providing >routes on demand along the way to be cached) - separate from the >32-bit address space, which was expected to be "managerial" not >topological under all proposals - was under active discussion as >*the* ultimate routing approach. The alternative that those of us >focused on "internetworking" and not just ARPANET improvement were >also considering was some form of "hash based" routing at gateways >(i.e. totally non-physical). > > > >Source routing, like the pseudoheader, was very much a modular element of IP. > > > >Now this rewrite of history that implies that conflating addressing >and routing was inherent, so that "mobility" was omitted is just not >valid. > > > >I do agree technically that there might have been a better "endpoint >identifier" on an end-to-end basis than the pseudoheader turned out >to be (especially due to the much later decision to conflate >endpoint-identifier with route that got made by default at BBN, >perhaps just to get the thing bootstrapped). > > > >I can provide more evidence of this: for example, it was expected >that MIT's address space included mobile devices and subnets (we >were doing packet radio and LANs), as did Xerox PARC's PUP. We were >planning to handle efficient routing to these across alternative >paths by using the source-routing mechanism. (as you may know I >wrote, with Jerry Saltzer, a well-known paper on why source routing >was good). > > > >But hey, no one wants to understand the actual design. 
In fact, a >bunch of idiots call UDP the "unreliable" datagram protocol, which >was not at all the point. Instead it was a demultiplexing layer >that enabled a range of useful non-circuit oriented (M2M) protocols >to be designed. > > > >But the inmates grabbed hold of the asylum at some point in the >history of IETF. Rather ignorant ones, who did not ever read >Parnas, did not understand control theory, imagined that TCP was >"sender controlled", etc. > > > > > > > > > >On Thursday, May 2, 2013 7:58pm, "Joe Touch" said: > > > Hi, John, >> >> On 5/2/2013 4:42 PM, John Day wrote: >> ... >> >>> The argument for the pseudoheader has always been a bit shaky. No >> other >> >>> Transport Protocol except UDP either within or outside the IETF >> found >> >>> the need for it. There have been many discussions about removing >> it, >> >>> but as you know tradition has always won out. >> >> >> >> The pseudoheader is an artifact of the TCP/IP split, which isn't as >> >> clean as often claimed. >> >> >> >> It is used in other transport protocols built on IP, e.g., DCCP and >> SCTP. >> >> >> >> It is not a matter of tradition; it is deeply entrenched with the >> >> notion of endpoint and that this notion exists at two different layers >> >> that share at least some of the context (IP addresses). >> > >> > "Deeply entrenched," indeed. (Some would say a synonym for tradition.) >> > Strongly reasoned, less so. As recently described by its author, it is >> > this binding that thwarts mobility. >> > >> > Indeed this "notion" exists in two different layers. In fact, it is a >> > general property of all layers. But I fail to see how that is an >> > argument to require a pseudo-header. >> >> It's an argument that it IS required in all current Internet transports. >> It is NOT an argument that it has to be required in a new transport > > protocol in the Internet, or in other new stacks. >> >> ... >> >>>>> According to this socket pair definition then, is the >> connection id a >> >>>>> Network Layer identifier or a Transport Layer identifier? >> >>>> >> >>>> Transport layer ID that is based on a transport header that >> subsumes a >> >>>> subset of the network header. >> >>> >> >>> Huh? Next you are going to tell me that it is "small, green and >> split >> >>> three ways"! ;-) >> >> >> >> The transport layer flow is *defined* as the socket pair, which is >> >> defined in TCP (and used in other Internet transports) as combining >> >> the transport header context (port pair) with the IP context (IP >> >> address pair), the latter of which is part of the pseudoheader for >> >> that reason. >> > >> > I realize this is the definition, and I would even agree with including >> > the same error-check-code, if they were in the same layer. But since it >> > is said they aren't, I fail to see the need and I definitely see the >> > constraints. >> > >> > Is there some condition in which a router might put a TCP packet on a >> > different IP packet? The only reason I can see for having it. >> >> A router should never do anything to the contents of an IP payload, IMO. >> >> However, isn't that the definition of how a NAT works? >> >> ... >> >> So I interpret "protocol-id" as I think most people do. >> > >> > Excellent. That was my point. The field has the wrong name. >> >> Yes, but when we're talking about existing protocols, the name is there. >> New protocols might use more accurate terms, just as IPv6 has a >> "hopcount" instead of a "TTL". >> >> ... 
>> > If there was a "protocol-instance-id" I would think it would be more >> > useful if one left the protocol out of it entirely. Of course, then it >> > would be a port-id. ;-) >> >> The destination port ID in a SYN encodes the service protocol, which is >> a protocol-type-id. It isn't an instance-id; the instance-id is the >> combination of the IP addresses and port numbers taken as a set. >> >> >>> Why does the receiver need to know the type of protocol? Perhaps so >> it >> >>> knows how to interpret the header in the layer above? >> >> >> >> Yes, that's what I said several times ;-) >> > >> > Excellent! then you agree! The purpose of the protocol-id field in IP >> > is identify the syntax of the encapsulated protocol. >> >> Yes. Same for the destination port of the initial SYN in TCP. >> >> ... >> > Good discussion, Joe. >> >> Back at ya' ;-) >> >> Joe >> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130504/1412ef2f/attachment.html From jnc at mercury.lcs.mit.edu Wed May 8 11:52:37 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 8 May 2013 14:52:37 -0400 (EDT) Subject: [e2e] Port numbers in the network layer? Message-ID: <20130508185237.618B018C190@mercury.lcs.mit.edu> > At 7:53 AM -0400 5/3/13, dpreed at reed.com wrote: > (especially due to the much later decision to conflate > endpoint-identifier with route that got made by default at BBN, perhaps > just to get the thing bootstrapped) I see that differently... The thing is that path selection, to scale, just has to have a namespace in which things can be aggregated. (You just can't give everyone a complete map of every last physical asset in the entire network.) IP _only had one namespace available_. So, yeah, as the network grew it had to be organized in a way that was aggregatable. That started to happen early on, with subnets, although it didn't come into full flower until later, with CIDR. Yes, the very early network did have addresses that (at the 'network' level) were flat - but even then, hosts were not free to move from network to network and keep their 'identity' - because internet-wide path selection didn't (couldn't?) track individual hosts. It was inevitable that that aggregation would ensue as the network got larger - as a direct, and unavoidable consequence of the fact that IPv4 had only one namespace. A number of related issues were pointed out by Jerry in the paper that eventually became RFC-1498, and those too pointed the same way: not enough namespaces. Noel From jeanjour at comcast.net Wed May 8 13:45:33 2013 From: jeanjour at comcast.net (John Day) Date: Wed, 8 May 2013 16:45:33 -0400 Subject: [e2e] Port numbers in the network layer? In-Reply-To: <20130508185237.618B018C190@mercury.lcs.mit.edu> References: <20130508185237.618B018C190@mercury.lcs.mit.edu> Message-ID: Wasn't this obvious? This is why they were called network "addresses" and not names. The parallel was to OSs. As Shoch put it, Application names indicate "what" and addresses indicate "where" and routes were "how to get there." He didn't quite have the whole picture but close enough for this discussion. Application names are suppose to be location-independent. Except on broken OSs, you don't need to know what medium a file is on. Addresses are suppose to be location-dependent, where given two addresses you should be able to tell if they are "near" each other for some definition of "near." 
(not necessarily implying physical location, that while it can be useful is just the naive first thought.) Although street addresses often have this property to varying degrees. ;-) Chicago more so than Boston! ;-) Although as I finally realized, application names are location-dependent too, just with a different meaning of "location." At 2:52 PM -0400 5/8/13, Noel Chiappa wrote: > > At 7:53 AM -0400 5/3/13, dpreed at reed.com wrote: > > > (especially due to the much later decision to conflate > > endpoint-identifier with route that got made by default at BBN, perhaps > > just to get the thing bootstrapped) > >I see that differently... > >The thing is that path selection, to scale, just has to have a namespace in >which things can be aggregated. (You just can't give everyone a complete map >of every last physical asset in the entire network.) IP _only had one >namespace available_. So, yeah, as the network grew it had to be organized in >a way that was aggregatable. > >That started to happen early on, with subnets, although it didn't come into >full flower until later, with CIDR. Yes, the very early network did have >addresses that (at the 'network' level) were flat - but even then, hosts were >not free to move from network to network and keep their 'identity' - because >internet-wide path selection didn't (couldn't?) track individual hosts. > >It was inevitable that that aggregation would ensue as the network got larger >- as a direct, and unavoidable consequence of the fact that IPv4 had only one >namespace. > >A number of related issues were pointed out by Jerry in the paper that >eventually became RFC-1498, and those too pointed the same way: not enough >namespaces. > > Noel From touch at isi.edu Fri May 10 08:27:46 2013 From: touch at isi.edu (Joe Touch) Date: Fri, 10 May 2013 08:27:46 -0700 Subject: [e2e] Port numbers in the network layer? In-Reply-To: References: <20130508185237.618B018C190@mercury.lcs.mit.edu> Message-ID: <1835A012-7FDA-4363-9F5E-262C4278F427@isi.edu> On May 8, 2013, at 1:45 PM, John Day wrote: > Application names are suppose to be location-independent. Except on > broken OSs, you don't need to know what medium a file is on. Agreed; this, however, is one of the key failures of the "slice" model of network virtualization. It binds network interfaces and to OS components (slivers), and maps slivers to virtual networks. That inherently inhibits gateways - devices (or slivers) that bridge traffic between different VNs. > Addresses are suppose to be location-dependent, where given two > addresses you should be able to tell if they are "near" each other > for some definition of "near." You're conflating "address" with a knowledge of the topology of its location space. Location spaces need not be Euclidean or even continuous. Consider street addresses in Tokyo; two addresses on the same street typically satisfy no spatial "nearness" metric (the numbers are sometimes assigned in the order they are built, so 'near' is a temporal metric, rather than spatial). And not all addresses support aggregation. Finally, addresses *are* names. A "name" is just an identifier; once you assign meaning, it becomes something else - an endpoint identifier, a location identifier, etc. Joe From jeanjour at comcast.net Fri May 10 15:22:05 2013 From: jeanjour at comcast.net (John Day) Date: Fri, 10 May 2013 18:22:05 -0400 Subject: [e2e] Port numbers in the network layer? 
In-Reply-To: <1835A012-7FDA-4363-9F5E-262C4278F427@isi.edu> References: <20130508185237.618B018C190@mercury.lcs.mit.edu> <1835A012-7FDA-4363-9F5E-262C4278F427@isi.edu> Message-ID: At 8:27 AM -0700 5/10/13, Joe Touch wrote: >On May 8, 2013, at 1:45 PM, John Day wrote: > >> Application names are suppose to be location-independent. Except on >> broken OSs, you don't need to know what medium a file is on. > >Agreed; this, however, is one of the key failures of the "slice" >model of network virtualization. It binds network interfaces and to >OS components (slivers), and maps slivers to virtual networks. That >inherently inhibits gateways - devices (or slivers) that bridge >traffic between different VNs. Boy, you are right about that! It is *one* of the failures. ;-) > >> Addresses are suppose to be location-dependent, where given two >> addresses you should be able to tell if they are "near" each other >> for some definition of "near." > >You're conflating "address" with a knowledge of the topology of its >location space. No, I am conflating address with the homeomorphism used to create the address space. > >Location spaces need not be Euclidean or even continuous. No one suggested they did. The peculiar thing about examples is that they are *examples* and represent only a single point in what could be a very large space. They don't necessarily have all the properties of all the other examples from the same space. Very peculiar. >Consider street addresses in Tokyo; two addresses on the same street >typically satisfy no spatial "nearness" metric (the numbers are >sometimes assigned in the order they are built, so 'near' is a >temporal metric, rather than spatial). > >And not all addresses support aggregation. Actually, this is incorrect. All addresses support aggregation (given the previous definitions I used). Although not all aggregations are useful for finding routes or establishing "physical nearness." > >Finally, addresses *are* names. A "name" is just an identifier; once >you assign meaning, it becomes something else - an endpoint >identifier, a location identifier, etc. Absolutely. An address is a type of name. Nothing new there. That has been well accepted as long as I have been around. John > >Joe From detlef.bosau at web.de Fri May 24 08:34:27 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 24 May 2013 17:34:27 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard Message-ID: <519F8883.8060604@web.de> O.k., perhaps this is for all readers with grey hair (if there is still hair at all....) and grey beards ;-) When I read the original catenet work by Cerf, the Catenet employed link layer flow control. To my understanding, this was abandoned when the ARPAnet turned into the Internet (in 1981?). After this change, the link layer flow control was replaced by a "silent discard" of packets which cannot be accepted for delivery. Is this correct? What was the reason for this decision and have there been any alternative approaches? 
-- ------------------------------------------------------------------ Detlef Bosau Galileistraße 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From jeanjour at comcast.net Fri May 24 10:16:07 2013 From: jeanjour at comcast.net (John Day) Date: Fri, 24 May 2013 13:16:07 -0400 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <519F8883.8060604@web.de> References: <519F8883.8060604@web.de> Message-ID: The nature of the link layer protocol is mostly determined by the characteristics of the media. In the early networks, it was not uncommon for the link layer to be a protocol that looked something like what we think of as HDLC, i.e. ack/retransmission with fixed (usually small) window size. In these early fixed window protocols, ack and flow control were seen as all part of the window scheme. The quality of the lines pretty much dictated the use of these sorts of protocols over long distances. As the data rate increased, the delay imposed by this class of protocols made them inadequate. With the advent of LANs, these HDLC-like protocols were not really required. But to say they went completely out of use by 1981 would probably be incorrect. In the US, some of the longer runs of the ARPANET, such as Illinois to Utah, were relatively error free. At the same time, short runs like Rome, NY to Cambridge had much higher error rates. It would certainly be incorrect to conclude that a decision was made not to use them, especially since the ARPANET was not turned off until 1990 or thereabouts. At 5:34 PM +0200 5/24/13, Detlef Bosau wrote: >O.k., perhaps this is for all readers with grey >hair (if there is still hair at all....) and >grey beards ;-) > > >When I read the original catenet work by Cerf, >the Catenet employed link layer flow control. > >To my understanding, this was abandoned when the >ARPAnet turned into the Internet (in 1981?). >After this change, the link layer flow control >was replaced by a "silent discard" of packets >which cannot be accepted for delivery. > >Is this correct? > >What was the reason for this decision and have >there been any alternative approaches? > >-- >------------------------------------------------------------------ >Detlef Bosau >Galileistraße 30 >70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 >detlef.bosau at web.de http://www.detlef-bosau.de From jnc at mercury.lcs.mit.edu Fri May 24 10:48:32 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 24 May 2013 13:48:32 -0400 (EDT) Subject: [e2e] Historical question: Link layer flow control / silent discard Message-ID: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> > From: Detlef Bosau First, 'flow control' these days is taken to mean 'end-end' - i.e. the original source not sending data faster than the ultimate destination can use it. This has always been handled at the transport layer - in the case of TCP, by the TCP window. Rate control based on the ability of the _network_ to carry traffic (which is what you seem to be asking about below) is now usually called 'congestion control'. I don't recall how carefully we differentiated between the two back then, although I am quite sure we already understood the difference. > When I read the original catenet work by Cerf, the Catenet employed > link layer flow control.
I'm not sure that's quite correct: without checking the documents for myself, I suspect they would have understood that if the source is connected to network A (a fast network), and the next hop is network B (a slow network), the link layer flow control on network B would be of no use in slowing down the host - since it's not connected to network B. As I recall, we thought ICMP Source Quench would be the way congestion control would be propagated back to the host. > To my understanding, this was abandoned when the ARPAnet turned into > the Internet (in 1981?). After this change, the link layer flow control > was replaced by a "silent discard" of packets which cannot be accepted for > delivery. > Is this correct? > What was the reason for this decision Source quench turned out not to work (for reasons I don't recall clearly any more - Google will probably turn some things up). Possibly we just didn't understand enough about congestion control at that early stage to make it work 'well'. I don't recall when we stopped trying to use it - I think it was a little later than the cutover, actually. We then ran without any congestion control at all for a while, and that caused massive problems. Finally Van Jacobson turned up and saved the day. His approach turned out to only need packet drops as a congestion signal, so SQ was not needed any more (and IIRC it has been deprecated). > have there been any alternative approaches? Well, there have been some alternatives explored; I'm not sure how widely any are used. RED detects incipient congestion, and 'signals' it by dropping packets (dropped packets being the typical 'congestion signal' post-Van). I classify this as a different approach because although the _signal_ is the same, the _triggering mechanism_ is different. ECN is another approach. It's different from the early Source Quench stuff in that i) I don't think it waits until queues are full (as SQ did), and ii) it doesn't return a separate packet to the source. This is all from (dim) memory. Google will probably turn up more. Noel From detlef.bosau at web.de Fri May 24 11:03:36 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 24 May 2013 20:03:36 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: <519FAB78.9090700@web.de> On 24.05.2013 19:48, Noel Chiappa wrote: > > From: Detlef Bosau > > First, 'flow control' these days is taken to mean 'end-end' - i.e. the > original source not sending data faster than the ultimate destination can use That's certainly correct for TCP. I've not read that many papers on hop-by-hop flow control - although I think it could well make sense. > it. This has always been handled at the transport layer - in the case of TCP, In the case of TCP. > by the TCP window. Rate control based on the ability of the _network_ to > carry traffic (which is what you seem to be asking about below) is now > usually called 'congestion control'. ;-) As you may know, I'm putting a little question mark behind end-to-end congestion control. > > I don't recall how carefully we differentiated between the two back then, > although I am quite sure we already understood the difference. No doubt here. > > When I read the original catenet work by Cerf, the Catenet employed > > link layer flow control.
> > I'm not sure that's quite correct: without checking the documents for myself, > I suspect they would have understood that if the source is connected to > network A (a fast network), and the next hop is network B (a slow network), > the link layer flow control on network B would be of no use in slowing down the > host - since it's not connected to network B. > > As I recall, we thought ICMP Source Quench would be the way congestion > control would be propagated back to the host. > And (this is again history ;-)) why was this abandoned? (Does anyone happen to recall the reasons?) -- ------------------------------------------------------------------ Detlef Bosau Galileistraße 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Fri May 24 11:16:07 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 24 May 2013 20:16:07 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <519F8883.8060604@web.de> Message-ID: <519FAE67.5040000@web.de> On 24.05.2013 19:16, John Day wrote: > The nature of the link layer protocol is mostly determined by the > characteristics of the media. In the early networks, it was not > uncommon for the link layer to be a protocol that looked something > like what we think of as HDLC, i.e. ack/retransmission with fixed > (usually small) window size. In these early fixed window protocols, > ack and flow control were seen as all part of the window scheme. The > quality of the lines pretty much dictated the use of these sorts of > protocols over long distances. As the data rate increased, the delay > imposed by this class of protocols made them inadequate. However, the delay could be kept quite small - as we see in TCP flow control. > > With the advent of LANs, these HDLC-like protocols were not really > required. At least in CSMA/CD Ethernet, it is implicitly given by the MAC scheme: There can only be one packet on the medium. > > It would certainly be incorrect to conclude that a decision was > made not to use them, especially since the ARPANET was not turned off > until 1990 or thereabouts. As I said, I would like to understand these decisions, particularly as many of them are not self-evident and perhaps may be questioned. -- ------------------------------------------------------------------ Detlef Bosau Galileistraße 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From braden at isi.edu Fri May 24 12:22:40 2013 From: braden at isi.edu (Bob Braden) Date: Fri, 24 May 2013 12:22:40 -0700 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: <519FBE00.3020901@isi.edu> On 5/24/2013 10:48 AM, Noel Chiappa wrote: > > > Source quench turned out not to work (for reasons I don't recall clearly any > more - Noel, Jon Postel really wanted Source Quench to work. Walt Prue, under Jon Postel's direction, made an exhaustive study of possible source quench algorithms in their RFC 1016, July 1987, with the hopeful title: "Something a Host Could Do with Source Quench: The Source Quench Introduced Delay (SQuID)". The following sentence from that document says it all: "All of our algorithms oscillate, some worse than others."
Dave Clark is fond of saying that the distinctive property of research is that it is allowed to fail. Source Quench was a part of the TCP/IP development that failed. Another indication of its failure is that a SQ may go to a host that is not causing the congestion. > Google will probably turn some things up). Possibly we just didn't > understand enough about congestion control at that early stage to make it > work 'well'. > > I don't recall when we stopped trying to use it - I think it was a little > later than the cutover, actually. AFAIK,no one ever used Source Quench, because of its apparent defects. Sending a packet when there is an overload must be a losing strategy! > > We then ran without any congestion control at all for a while, and that > caused massive problems. Finally Van Jacobsen turned up and saved the day. > His approach turned out to only need packet drops as a congestion signal, > so SQ was not needed any more (and IIRC it has been deprecated). Yes, although we (and I think you were involved) did not have the courage to deprecate SQ when we wrote Host Requirements, a couple of years after 1016. I suspect SQ was finally killed in Router Requirements, RFC 1812, in 1995. Bob Braden From vern at ee.lbl.gov Fri May 24 13:31:46 2013 From: vern at ee.lbl.gov (vern@ee.lbl.gov) Date: Fri, 24 May 2013 13:31:46 -0700 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <519FBE00.3020901@isi.edu> (Fri, 24 May 2013 12:22:40 PDT). Message-ID: <20130524203146.AE8D42C4022@rock.ICSI.Berkeley.EDU> > AFAIK,no one ever used Source Quench, because of its apparent > defects. FWIW, in section 6.2 of ftp://ftp.ee.lbl.gov/papers/vp-tcpanaly-sigcomm97.ps.gz there's discussion of inferring that way back in the Dark Ages (mid-1990s) multiple TCP implementations did indeed respond to Source Quench. Vern From jeanjour at comcast.net Fri May 24 14:09:13 2013 From: jeanjour at comcast.net (John Day) Date: Fri, 24 May 2013 17:09:13 -0400 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: At 1:48 PM -0400 5/24/13, Noel Chiappa wrote: > > From: Detlef Bosau > >First, 'flow control' these days is taken to mean 'end-end' - i.e. the >original source not sending data faster than the ultimate destination can use >it. This has always been handled at the tranport layer - in the case of TCP, >by the TCP window. Rate control based on the ability of the _network_ to >carry traffic (which is what you seem to be asking about below) is now >usually called 'congestion control'. I am sorry Noel, but I have to disagree at least with the attitude. Flow control can occur at any layer and pretty much has. The fact that it may not currently doesn't mean it never will. This is precisely how the craft mentality that permeates this field got started. One of the things I impress on my students (with examples) is that old solutions never go away, they return, often with somewhat different parameters. At today's data rates, it is unlikely to see flow control at the data link layer, or elsewhere in between, but who knows! And why I define flow control as feedback co-located with the resource being controlled and congestion control as feedback not co-located with the resource and try to stay away from the more vocational definitions. 
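That distinction can be made concrete with a small sketch. The model below is illustrative only (the buffer size, rates and delay are arbitrary assumptions, and it is not meant to describe any particular protocol): when the feedback is co-located with the resource - classic per-hop credit or window flow control - the sender can never overrun the buffer, because it only transmits against credits the resource has explicitly granted; when the feedback has to travel back to a sender that is not co-located with the resource, it necessarily arrives late and the buffer can overshoot.

def credit_flow_control(ticks=50, buffer_size=16, service=2):
    """Co-located feedback: credits are granted for exactly the space that is free."""
    occupancy, credits, peak = 0, buffer_size, 0
    for _ in range(ticks):
        send = credits                 # the sender may use every granted credit, no more
        credits = 0
        occupancy += send
        served = min(service, occupancy)
        occupancy -= served
        credits += served              # freed buffer space is handed back as new credit
        peak = max(peak, occupancy)
    return peak                        # by construction never exceeds buffer_size

def delayed_feedback(ticks=50, buffer_size=16, service=2, delay=6):
    """Remote feedback: 'buffer is filling' signals reach the sender `delay` ticks late."""
    occupancy, rate, peak = 0, 1, 0
    signals = []                       # ticks at which pending signals will arrive
    for t in range(ticks):
        if signals and signals[0] <= t:
            signals.pop(0)
            rate = max(1, rate // 2)
        else:
            rate += 1
        occupancy = max(0, occupancy + rate - service)
        if occupancy > buffer_size:
            signals.append(t + delay)
        peak = max(peak, occupancy)
    return peak                        # routinely overshoots buffer_size

print("peak occupancy with co-located credits:", credit_flow_control())
print("peak occupancy with delayed feedback  :", delayed_feedback())

The first loop simply cannot lose data; the second has to be engineered carefully (or, post-Van, allowed to drop) precisely because its feedback is remote.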
> >I don't recall how carefully we differentiated between the two back then, >although I am quite sure we already understood the difference. Yes, it was quite clear at the time that flow control at the data link layer had a different purpose than flow control at the Transport layer. (I should probably qualify that. At least, it was quite clear to some people. I have had a few surprises of late of things that I thought were quite clear early on and seem to have been totally missed.) > > > When I read the original catenet work by Cerf, the Catenet employed > > link layer flow control. > >I'm not sure that's quite correct: without checking the documents for myself, >I suspect they would have understood that if the source is connected to >network A (a fast network), and the next hop is network B (a slow network), >the link layer flow control on network B would be no use in slowing down the >host - since it's not connected to network B. At the very least all of the networks at the time (ARPANET, NPL, CYCLADES, etc.) used a data link protocol that did retransmission and flow control. By the end of the 70s, it was probably disappearing. IEN#1 (1977) conjectures that ingress flow control at gateways is probably going to be important. By this time, quite a lot of research had been done on congestion control in these kinds of networks. Take care, John > >As I recall, we thought ICMP Source Quench would be the way congestion >control would be propogated back to the host. > > > To my understanding, this was abandoned when the ARPAnet turned into > > the Internet (in 1981?). After this change, the link layer flow control > > was replaced by a "silent discard" of packets which cannot be >accepted for > > delivery. > > Is this correct? > > What was the reason for this decision > >Source quench turned out not to work (for reasons I don't recall clearly any >more - Google will probably turn some things up). Possibly we just didn't >understand enough about congestion control at that early stage to make it >work 'well'. > >I don't recall when we stopped trying to use it - I think it was a little >later than the cutover, actually. > >We then ran without any congestion control at all for a while, and that >caused massive problems. Finally Van Jacobsen turned up and saved the day. >His approach turned out to only need packet drops as a congestion signal, >so SQ was not needed any more (and IIRC it has been deprecated). > > > have there been any alternative approaches? > >Well, there have been some alternatives explored, I'm not sure how widely >any are used. > >RED detects incipient congestion, and 'signals' it by dropping packets >(dropped packets being the typical 'congestion signal' post-Van). I classify >this as a different approach because although the _signal_ is the same, >the _triggering mechanism_ is different. > >ECN is another approach. It's different from the early Source Quench stuff >in that i) I don't think it waits until queues are full (as SQ did), and >it doesn't return a separate packet to the source. > >This is all from (dim) memory. Google will probably turn up more. 
> > Noel
From touch at isi.edu Fri May 24 14:10:57 2013 From: touch at isi.edu (Joe Touch) Date: Fri, 24 May 2013 14:10:57 -0700 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <519F8883.8060604@web.de> References: <519F8883.8060604@web.de> Message-ID: <519FD761.7050909@isi.edu> FWIW, history questions may be more usefully discussed here: internet-history at postel.org Joe On 5/24/2013 8:34 AM, Detlef Bosau wrote: > O.k., perhaps this is for all readers with grey hair (if there is still > hair at all....) and grey beards ;-) > > > When I read the original catenet work by Cerf, the Catenet employed link > layer flow control. > > To my understanding, this was abandoned when the ARPAnet turned into the > Internet (in 1981?). After this change, the link layer flow control was > replaced by a "silent discard" of packets which cannot be accepted for > delivery. > > Is this correct? > > What was the reason for this decision and have there been any > alternative approaches? >
From jon.crowcroft at cl.cam.ac.uk Sat May 25 01:45:39 2013 From: jon.crowcroft at cl.cam.ac.uk (Jon Crowcroft) Date: Sat, 25 May 2013 09:45:39 +0100 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <519F8883.8060604@web.de> References: <519F8883.8060604@web.de> Message-ID: if by catenet you mean the specific deployed technologies of the arpanet, milnet, arpa packet radio, satnet, and IPSS, then the answer is a) no, it didn't specify link layer flow control but b) yes, some of the links had flow control but i) it's sometimes a bad idea - due to poor interactions between nested control loops but ii) occasionally, it helps, but you have to get quite lucky... for example, IP over X.25 (treating X.25's layer 3 packet protocol as a link layer in true cavalier fashion, as one does), causes weird things to happen to TCP's end2end behaviour due to the sudden step functions in measured RTT up and down as the link layer does odd stuff - this hurt badly in the UK academic early IP deployments which had to run this way.. on the other hand, there was nice work at bell labs (debasis mitra et al) that showed you could get optimal traffic distribution in a homogeneous enough network by using link layer flow control to spread out traffic load....but it does NOT work in a catenet (or internet) when all the links are very heterogeneous....you're better off doing multipath and e2e flow (and congestion control) - of course, we don't have much catenet/internet layer multipath yet, which is a shame as it was in the original thinking and has re-emerged recently with lots of nice results that show it would benefit in many places (edge, core, and data center nets) - for references on multipath, see http://nrg.cs.ucl.ac.uk/mptcp/ but maybe I am just treading on the toes of giants again.... bald and grey j. On Fri, May 24, 2013 at 4:34 PM, Detlef Bosau wrote: > O.k., perhaps this is for all readers with grey hair (if there is still > hair at all....) and grey beards ;-) > > > When I read the original catenet work by Cerf, the Catenet employed link > layer flow control. > > To my understanding, this was abandoned when the ARPAnet turned into the > Internet (in 1981?). After this change, the link layer flow control was > replaced by a "silent discard" of packets which cannot be accepted for > delivery. > > Is this correct? > > What was the reason for this decision and have there been any alternative > approaches?
> > -- > ------------------------------------------------------------------ > Detlef Bosau > Galileistraße 30 > 70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 > detlef.bosau at web.de http://www.detlef-bosau.de > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130525/70896739/attachment.html
From detlef.bosau at web.de Sat May 25 03:39:48 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 25 May 2013 12:39:48 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <519FBE00.3020901@isi.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> <519FBE00.3020901@isi.edu> Message-ID: <51A094F4.3010004@web.de> On 24.05.2013 21:22, Bob Braden wrote: > On 5/24/2013 10:48 AM, Noel Chiappa wrote: >> >> >> Source quench turned out not to work (for reasons I don't recall >> clearly any >> more - > > Noel, > > Jon Postel really wanted Source Quench to work. Walt Prue under Jon > Postel's direction > made an exhaustive study of possible source quench > algorithms in their RFC 1016, July 1987, with the hopeful title: > > "Something a Host Could Do with Source Quench: > The Source Quench Introduced Delay (SQuID)" > > The following sentence from that document says it all: > > "All of our algorithms oscillate, some worse than others." Source quench made the sender decrease its rate, correct? So, with source quench, congestion control was achieved rate-based instead of window-based as in VJCC? Is this understanding correct?
From l.wood at surrey.ac.uk Sat May 25 05:46:05 2013 From: l.wood at surrey.ac.uk (l.wood@surrey.ac.uk) Date: Sat, 25 May 2013 13:46:05 +0100 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <519FBE00.3020901@isi.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu>, <519FBE00.3020901@isi.edu> Message-ID: <290E20B455C66743BE178C5C84F12408223F494E8E@EXMB01CMS.surrey.ac.uk> Bob Braden said: > Sending a packet when there is an overload must be a > losing strategy! (cough) ECN (cough) which is basically source quench, and about as useful as a chocolate teapot. Lloyd Wood http://sat-net.com/L.Wood/
From jwbensley at gmail.com Sat May 25 07:27:12 2013 From: jwbensley at gmail.com (James Bensley) Date: Sat, 25 May 2013 15:27:12 +0100 Subject: [e2e] Network Research Message-ID: Hello everyone, I am currently undertaking a research project as part of a master's degree in advanced networking. I want the input of the community and industry at large. I have created a small on-line survey and would be very grateful to anyone that could give 3 minutes to fill it out. You will be benefiting networking research so I'm sure you are all wanting to participate; the survey is here: https://docs.google.com/forms/d/1lqigAHYHEgLLHr2kifiyBwgJ9Nw5AFS6d_XVXfhKkTw/viewform When the research project is complete I will be uploading it for all and sundry to read and distribute. All results will be anonymised! If you want a copy of the anonymised results, or have any queries about the research or the survey, please email me off-list to save on-list noise. Kind regards, James P.S. Apologies if you are on multiple mailing lists and receive this email multiple times!
From jnc at mercury.lcs.mit.edu Sat May 25 08:41:15 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 25 May 2013 11:41:15 -0400 (EDT) Subject: [e2e] Historical question: Link layer flow control / silent discard Message-ID: <20130525154115.BF7C618C16A@mercury.lcs.mit.edu> > From: Bob Braden > The following sentence from that document says it all: > "All of our algorithms oscillate, some worse than others." That just means we didn't have a good enough control theorist working with us! :-) (My memory of control theory is very dim now, but I seem to recall that _any_ system with feedback will oscillate unless damped properly... and usually even when damped, there is _some_ oscillation, it's just controlled to the point that it's not harmful, but instead is helpful, as 'probing'.) But seriously, I do think we probably still don't really understand if SQ would work or not. (Although the ECN mechanism explores a somewhat similar space, it depends on the packet getting to the ultimate destination, and then back, successfully, to get the congestion signal back to the originating host). > Sending a packet when there is an overload must be a losing strategy! But the SQ travels over a path which is not (yet) congested... > From: John Day > I define flow control as feedback co-located with the resource being > controlled and congestion control as feedback not co-located with the > resource That's an interesting - and perhaps useful - distinction, but I fear we are trying to put too many meanings onto too few terms. I still maintain that because of the grubby low-level engineering distinctions (different causes, different mechanisms, etc.) it's useful to distinguish between end-end control at the transport layer, and indications of congestion on the path, at the internetwork layer. I guess I was committing the same sin as you, in re-purposing existing terms (flow and congestion control) to the cases I was interested in distinguishing. Maybe instead we need a whole new taxonomy/terminology for this area? Noel
From detlef.bosau at web.de Sat May 25 09:45:18 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 25 May 2013 18:45:18 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <20130525154115.BF7C618C16A@mercury.lcs.mit.edu> References: <20130525154115.BF7C618C16A@mercury.lcs.mit.edu> Message-ID: <51A0EA9E.5010202@web.de> On 25.05.2013 17:41, Noel Chiappa wrote: > > From: Bob Braden > > > The following sentence from that document says it all: > > "All of our algorithms oscillate, some worse than others." > > That just means we didn't have a good enough control theorist working with > us! :-) > > (My memory of control theory is very dim now, but I seem to recall that _any_ > system with feedback will oscillate unless damped properly... and usually Exactly. So the best idea possible is to avoid unnecessary feedback :-) (In German: "Steuern ist besser als regeln" - roughly, "open-loop control is better than closed-loop control". Because I always mix up open- and closed-loop etc.,
perhaps someone could help me out here ;-)) -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From lachlan.andrew at gmail.com Sat May 25 16:34:05 2013 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 26 May 2013 09:34:05 +1000 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: On 25 May 2013 07:09, John Day wrote: > At today's data > rates, it is unlikely to see flow control at the data link layer, or > elsewhere in between, but who knows! At 10Gbps, it is in FCoE: http://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet http://en.wikipedia.org/wiki/Ethernet_flow_control However, perhaps that isn't the link layer. I've always wondered why switched ethernet (which does the ISO layer-3 tasks of addressing, routing over multiple point-to-point links and buffering) is called a "link layer" by the internet community... Cheers, Lachlan -- Lachlan Andrew Centre for Advanced Internet Architectures (CAIA) Swinburne University of Technology, Melbourne, Australia Ph +61 3 9214 4837 From mattmathis at google.com Sun May 26 09:50:00 2013 From: mattmathis at google.com (Matt Mathis) Date: Sun, 26 May 2013 09:50:00 -0700 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: On Sat, May 25, 2013 at 4:34 PM, Lachlan Andrew wrote: > I've always wondered why > switched ethernet (which does the ISO layer-3 tasks of addressing, > routing over multiple point-to-point links and buffering) is called a > "link layer" by the internet community... > It's called "feature creep". Ethernet 2.0 (predates 802.*) was clearly a link layer. IEEE keeps adding stuff. Many Internet purists complained. We now are paying for doing nearly everything twice: once in silicon by way of the IEEE and once in the Internet proper. As far as the internet is concerned, the extra complexity in the lower layer is mostly a waste. However there are a couple of really important exceptions: there is a multiplicative scale increase from doing both routing and switching. Today a single router with dozens of interfaces connected to switching fabrics can have hundreds or thousands of peers. This is in part how the Internet itself beats Moore's law. (The switches typically use MPLS) Otherwise there would be some key router components that need to beat Moore's law squared: router buffer memory size needs to scale with the data rate, router FIB memory needs to scale with routing table size, and access time for both also needs to scale with data rate. We at least partially ease the scale pressure on these components by increasing the Internet branchiness rather than path lengths. As for link layer flow control, it doesn't work. You can buy it today in many technologies, but if you turn it on, you will rediscover head-of-line blocking and global resource contention. e.g. If you have one overloaded exit, the cascaded backlogs at each stage through the fabric guarantee congestion and/or resource starvation everywhere, even if the rest of the fabric is otherwise lightly loaded. Thanks, --MM-- The best way to predict the future is to create it. - Alan Kay Privacy matters! 
We know from recent events that people are using our services to speak in defiance of unjust governments. We treat privacy and security as matters of life and death, because for some users, they are. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130526/5f14bb99/attachment.html
From anoop at alumni.duke.edu Sun May 26 13:48:37 2013 From: anoop at alumni.duke.edu (Anoop Ghanwani) Date: Sun, 26 May 2013 13:48:37 -0700 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: On Sun, May 26, 2013 at 9:50 AM, Matt Mathis wrote: > > > As for link layer flow control, it doesn't work. You can buy it today in > many technologies, but if you turn it on, you will rediscover head-of-line > blocking and global resource contention. e.g. If you have one overloaded > exit, the cascaded backlogs at each stage through the fabric guarantee > congestion and/or resource starvation everywhere, even if the rest of the > fabric is otherwise lightly loaded. > > I don't know that I would say that LLFC doesn't work. It works for what it is designed to do... reacting to microsecond-scale, very short-term congestion to guarantee a near-lossless link or network. It is definitely not designed to handle sustained overloads... end-to-end flow control works best there. This presentation discusses some of these issues on slides 16-23. http://www.c3.lanl.gov/cac2007/presentations/hendel.pdf Anoop -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130526/4c60cedb/attachment.html
From detlef.bosau at web.de Sun May 26 23:11:28 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 27 May 2013 08:11:28 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: <51A2F910.6010208@web.de> To my understanding, these slides are about meshes?
From detlef.bosau at web.de Sun May 26 23:22:55 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 27 May 2013 08:22:55 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: <51A2FBBF.9060504@web.de> On 26.05.2013 18:50, Matt Mathis wrote: > On Sat, May 25, 2013 at 4:34 PM, Lachlan Andrew wrote: > >> I've always wondered why >> switched ethernet (which does the ISO layer-3 tasks of addressing, >> routing over multiple point-to-point links and buffering) is called a >> "link layer" by the internet community... >> > It's called "feature creep". Ethernet 2.0 (predates 802.*) was clearly a > link layer. IEEE keeps adding stuff. Many Internet purists complained. > We now are paying for doing nearly everything twice: once in silicon by > way of the IEEE and once in the Internet proper. As far as the internet > is concerned, the extra complexity in the lower layer is mostly a waste. I wouldn't talk about feature creep. Actually, we're talking about the good old end-to-end discussion: where are things to be done? A prominent example is the embarrassing BIC/CUBIC discussion. When there is a "long fat link" along the path, why do we need any kind of "estimation" to use it, or even to detect it? Hence: why are the end points responsible for "estimating" a proper congestion window,
when at the same time the network operator knows about the LFN link, and we can replace a - sometimes helpless, sometimes educated - guess with sound knowledge? And why is scheduling of TCP flows left to a self-scheduling mechanism, which may have some scalability issues and some memory issues, as we discussed and agreed some weeks ago, when we have working scheduling algorithms? I'm still not convinced that we're doing the right things at the right places here!
From braden at isi.edu Tue May 28 09:51:42 2013 From: braden at isi.edu (Bob Braden) Date: Tue, 28 May 2013 09:51:42 -0700 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> Message-ID: <51A4E09E.5030501@isi.edu> On 5/25/2013 4:34 PM, Lachlan Andrew wrote: > However, perhaps that isn't the link layer. I've always wondered why > switched ethernet (which does the ISO layer-3 tasks of addressing, > routing over multiple point-to-point links and buffering) is called a > "link layer" by the internet community... Cheers, Lachlan Ah, interesting point. Of course an Ethernet, whether switched or bussed, is a network. It has addressing, routing, and flow control. Ethernet, or rather its predecessor X.25, was at the core of the OSI 7 layer model. But the pesky internetwork crowd came along and said, "we need a network of networks, so we need a new end-end INTERnetwork protocol; let's call it IP." So what are the poor OSI devotees, who believe in One Network to Rule them All, to do? We IETFers are pragmatists, not layer model purists (as illustrated by our lack of embarrassment about splitting the layers with MPLS, IPsec, TLS, etc.). We became careless and smudged over the network/internetwork distinction. So, IETFers generally refer to the Internetwork layer as the "network layer." Then the Internet protocol stack sort of looks like the OSI stack, and there is an illusion that the OSI stack has something to do with reality. If IP is the "network layer" (we are too lazy to say "internetwork layer"), then what is the Ethernet? In IP land, it is a subnet(work). But for those who believe that IP is really our network layer, the next layer below IP was dubbed the Link Layer, because it seems to correspond to the OSI Data Link Layer. That is the answer to your question. Note that the Internet's Link Layer should not be (but often is) confused with the OSI Data Link Layer. It contains "everything" in Vint's famous phrase "IP over everything". Section 1.1.3 of Host Requirements RFC 1122 defines the internetwork layer model carefully. And MAP was the only internet community member who bothered to straighten this out. (You are an Internet pioneer iff you know who MAP was). Bob Braden
From detlef.bosau at web.de Tue May 28 12:25:55 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Tue, 28 May 2013 21:25:55 +0200 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <51A4E09E.5030501@isi.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> <51A4E09E.5030501@isi.edu> Message-ID: <51A504C3.4070005@web.de> On 28.05.2013 18:51, Bob Braden wrote: > > > So what are the poor OSI devotees, who believe in One Network to Rule > them All, to do? > The point is: the OSI model is a *reference model*. I apologize if I'm carrying coals to Newcastle here, but it took me some time to accept this myself. There is no need to fit anything into the OSI model and - vice versa - the OSI model may differ from reality.
Sometimes, things are done twice. That's annoying. Sometimes, things are missing, then it's a "problem". > We IETFers are pragmatists, not layer model purists. (as illustrated > by our lack of > embarassment about splitting the layers with MPLS, IPsec, TLS, etc.) > We became careless and smudged over > the network/internetwork distinction. So, IETFers generally refer to > the Internetwork layer as the "network layer." Then the Internet > protocol stack sort of looks like the OSI stack, and there is an > illusion that the OSI stack has something to do with reality. > > If IP is the "network layer" (we are too lazy to say "internetwork > layer"), then what is the Ethernet? In IP land, it is a subnet(work). But > for those who believe that IP is really our network layer, then the > next layer below IP was dubbed the Link Layer, because it > seems to correspond to the OSI Data Link Layer. That is the answer to > your question. However, we have to make sure that mechanism on the "IP internetworking layer" and the "Ethernet network layer" do not interfere. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130528/1f854b80/attachment.html From jeanjour at comcast.net Tue May 28 14:02:57 2013 From: jeanjour at comcast.net (John Day) Date: Tue, 28 May 2013 17:02:57 -0400 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <51A4E09E.5030501@isi.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> <51A4E09E.5030501@isi.edu> Message-ID: Just for the record and then I will let this discussion go on, but X.25 was not at the core of the OSI Model. It is true that there were some people Bob's age (we called them the old guard) who thought they wanted to X.25 products and say it was OSI, but no one else had any intention of doing OSI products with X.25. Nor was it at the core of OSI Network Layer. Most of those people would have said CLNP was the core of the OSI Network Layer. In fact, OSI was designed to handle multiple network technologies, which is why the Network Layer was structured the way it was. OSI allowed both connection-oriented and connectionless operation. The fact that no one ever defined for OSI a connection-oriented network layer protocol but did define a Connectionless Network Layer Protocol (CLNP) speaks for itself. End of myth-busting. Take care, John At 9:51 AM -0700 5/28/13, Bob Braden wrote: >On 5/25/2013 4:34 PM, Lachlan Andrew wrote: >>However, perhaps that isn't the link layer. I've always wondered >>why switched ethernet (which does the ISO layer-3 tasks of >>addressing, routing over multiple point-to-point links and >>buffering) is called a "link layer" by the internet community... >>Cheers, Lachlan > >Ah, interesting point. Of course an Ethernet, whether switched or >bussed, is a network. It has addressing, routing, >and flow control. Ethernet, or rather its predecessor X.25, was at >the core of the OSI 7 layer model. >But the pesky internetwork crowd came along and said, we need a >network of networks, so we need a new end-end >INTERnetwork protocol, let's call it IP." > >So what are the poor OSI devotees, who believe in One Network to >Rule them All, to do? 
> >We IETFers are pragmatists, not layer model purists. (as illustrated >by our lack of >embarassment about splitting the layers with MPLS, IPsec, TLS, etc.) >We became careless and smudged over >the network/internetwork distinction. So, IETFers generally refer to >the Internetwork layer as the "network layer." Then the Internet >protocol stack sort of looks like the OSI stack, and there is an >illusion that the OSI stack has something to do with reality. > >If IP is the "network layer" (we are too lazy to say "internetwork >layer"), then what is the Ethernet? In IP land, it is a >subnet(work). But >for those who believe that IP is really our network layer, then the >next layer below IP was dubbed the Link Layer, because it >seems to correspond to the OSI Data Link Layer. That is the answer >to your question. > >Note that the Internet's Link Layer should not be (but often is) >confused with the OSI Data Link Layer. It co ntains >"everything" in Vint's famous phrase " I P over everything". > >Section 1.1.3 of Host Requirements RFC 1122 defines the internetwork >layer model carefully. >And MAP was the only internet community member who bothered to >straighten this out. >(You are an Internet pioneer iff you know who MAP was). > >Bob Braden From barney at databus.com Tue May 28 18:12:14 2013 From: barney at databus.com (Barney Wolff) Date: Tue, 28 May 2013 21:12:14 -0400 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> <51A4E09E.5030501@isi.edu> Message-ID: <20130529011214.GA79172@pit.databus.com> Some poor deluded folks (I among them) did implement ISO TP classes 1 & 3 over X.25 . Years later, replaced (me again) by TP0 over TCP. As this was at Western Union and then ATT, perhaps excusable as Bellheadness, or youthful folly. On Tue, May 28, 2013 at 05:02:57PM -0400, John Day wrote: > Just for the record and then I will let this discussion go on, but > X.25 was not at the core of the OSI Model. It is true that there > were some people Bob's age (we called them the old guard) who thought > they wanted to X.25 products and say it was OSI, but no one else had > any intention of doing OSI products with X.25. Nor was it at the > core of OSI Network Layer. Most of those people would have said CLNP > was the core of the OSI Network Layer. In fact, OSI was designed to > handle multiple network technologies, which is why the Network Layer > was structured the way it was. OSI allowed both connection-oriented > and connectionless operation. The fact that no one ever defined for > OSI a connection-oriented network layer protocol but did define a > Connectionless Network Layer Protocol (CLNP) speaks for itself. > > End of myth-busting. From jeanjour at comcast.net Tue May 28 20:47:33 2013 From: jeanjour at comcast.net (John Day) Date: Tue, 28 May 2013 23:47:33 -0400 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <20130529011214.GA79172@pit.databus.com> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> <51A4E09E.5030501@isi.edu> <20130529011214.GA79172@pit.databus.com> Message-ID: Why?!! No one took anything but Class 4 seriously. Class 0 was to keep CCITT SGVIII quiet, Class 1 was to placate CCITT SGVII, Class 2 was for the European users who had to buy X.25 and thought (wrongly, which we knew but couldn't prove at the time) that it would be simpler because X.25 was "kind of" reliable (not), Class 3 was to keep the Germans quiet. 
Class 4 was an improved INWG96 that incorporated at least partially Watson's results from delta-t. INWG96 was basically the transport protocol for CYCLADES. The day Western Union and CSC were awarded the AutoDinII contract, I predicted it would fail. After all, CSC were the ones that tried to use the NCP control channel to send mail and built a copy of the ARPANET that ran 7 times slower! That takes talent. Take care, John At 9:12 PM -0400 5/28/13, Barney Wolff wrote: >Some poor deluded folks (I among them) did implement ISO TP classes 1 & 3 >over X.25 . Years later, replaced (me again) by TP0 over TCP. As this >was at Western Union and then ATT, perhaps excusable as Bellheadness, or >youthful folly. > >On Tue, May 28, 2013 at 05:02:57PM -0400, John Day wrote: >> Just for the record and then I will let this discussion go on, but >> X.25 was not at the core of the OSI Model. It is true that there >> were some people Bob's age (we called them the old guard) who thought >> they wanted to X.25 products and say it was OSI, but no one else had >> any intention of doing OSI products with X.25. Nor was it at the >> core of OSI Network Layer. Most of those people would have said CLNP >> was the core of the OSI Network Layer. In fact, OSI was designed to >> handle multiple network technologies, which is why the Network Layer >> was structured the way it was. OSI allowed both connection-oriented >> and connectionless operation. The fact that no one ever defined for >> OSI a connection-oriented network layer protocol but did define a >> Connectionless Network Layer Protocol (CLNP) speaks for itself. >> >> End of myth-busting. From touch at isi.edu Wed May 29 08:31:58 2013 From: touch at isi.edu (Joe Touch) Date: Wed, 29 May 2013 08:31:58 -0700 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> <51A4E09E.5030501@isi.edu> Message-ID: <51A61F6E.5000209@isi.edu> On 5/28/2013 2:02 PM, John Day wrote: > Just for the record and then I will let this discussion go on, but X.25 > was not at the core of the OSI Model. FWIW, there was an implementation of ISO - ISODE (the ISO development environment). UPenn was snail-mailing out 9-track tapes and 8mm cassettes back in the early 90's when I was there. I still have one of the enamel pins. It implemented layers 3-6, and could be configured to run over X.25 - thus the possible confusion that X.25 was its L2. Joe From jeanjour at comcast.net Wed May 29 09:04:41 2013 From: jeanjour at comcast.net (John Day) Date: Wed, 29 May 2013 12:04:41 -0400 Subject: [e2e] Historical question: Link layer flow control / silent discard In-Reply-To: <51A61F6E.5000209@isi.edu> References: <20130524174832.4B57218C11C@mercury.lcs.mit.edu> <51A4E09E.5030501@isi.edu> <51A61F6E.5000209@isi.edu> Message-ID: OSI divided the Network Layer into 3 sub-layers (not all of which were present for all networks): 3a Subnet Access, 3b Subnet Dependent, and 3c Subnet Independent. (see the Internal Organization of the Network Layer, ISO 8648). X.25 was (according to its title) 3a Subnet Access. The PTTs had the "foresight" ;-) to call it a Data-Terminating-Equipment (DTE) to Data Communicating Equipment (DCE) interface. (Don't you love the nomenclature!) X.25 was just at the boundary of the network. In other words, Host to Network protocol, the equivalent of BBN1822! ;-) So OSI took them at their word. ;-) If the network had X.25, then it was at 3a. 
Whether a PTT used X.25 internal to its network was its business and not within the purview of CCITT. I believe most X.25 networks at the time heavily modified it beyond what the Recommendation said. (CCITT's habit of defining its recommendations as the interfaces between boxes is why I refer to this as the beads-on-a-string model! boxes strung together with a wire!) With X.25, LAPB (also known as HDLC) was the Data Link Layer. CLNP was 3c, Subnet Independent. One can think of 3a/3b as a traditional network layer for networks that had that; and 3c/Transport as the Internet Layer. 3c addresses were global, while addresses in 3a/3b were only unambiguous within the network. Think of 3a/3b as points of attachment addresses, and 3c as node addresses. (see the Saltzer paper RFC 1498 for background on this) Take care, John At 8:31 AM -0700 5/29/13, Joe Touch wrote: >On 5/28/2013 2:02 PM, John Day wrote: >>Just for the record and then I will let this discussion go on, but X.25 >>was not at the core of the OSI Model. > >FWIW, there was an implementation of ISO - ISODE (the ISO >development environment). UPenn was snail-mailing out 9-track tapes >and 8mm cassettes back in the early 90's when I was there. I still >have one of the enamel pins. > >It implemented layers 3-6, and could be configured to run over X.25 >- thus the possible confusion that X.25 was its L2. > >Joe
From jshen_cad at yahoo.com.cn Thu May 30 02:21:39 2013 From: jshen_cad at yahoo.com.cn (Jing Shen) Date: Thu, 30 May 2013 17:21:39 +0800 (CST) Subject: [e2e] why does pppoe connection has session timeout ? Message-ID: <1369905699.60517.YahooMailNeo@web15208.mail.cnb.yahoo.com> hi, maybe this is not the right place to ask this question; if it is intrusive, please accept my apology. Working with access networks, there is always a max session timeout for each pppoe or dial-up link. According to the documentation of the broadband access server, a default max session timeout value is defined. In the radius protocol document, there is also a max session timeout attribute. In my experience, no matter how we set up a dialup connection (ppp over fiber or ppp over the telephone network), the internet connection will have a max session time; no matter what we do, after that time the ppp connection will be cut by the access server. So why does the protocol designer set this parameter? Jing ' spamcontrol ' -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130530/89eb5d6c/attachment.html
From detlef.bosau at web.de Thu May 30 04:43:13 2013 From: detlef.bosau at web.de (Detlef Bosau) Date: Thu, 30 May 2013 13:43:13 +0200 Subject: [e2e] why does pppoe connection has session timeout ? In-Reply-To: <1369905699.60517.YahooMailNeo@web15208.mail.cnb.yahoo.com> References: <1369905699.60517.YahooMailNeo@web15208.mail.cnb.yahoo.com> Message-ID: <51A73B51.7060905@web.de> On 30.05.2013 11:21, Jing Shen wrote: > hi, > > maybe this is not the right place to ask this question; if it is intrusive, please accept my apology. > > Working with access networks, there is always a max session timeout for each pppoe or dial-up link. > According to the documentation of the broadband access server, a default max session timeout value is defined. > > In the radius protocol document, there is also a max session timeout attribute. > > In my experience, no matter how we set up a dialup connection (ppp over fiber or ppp over the telephone network), > the internet connection will have a max session time;
no matter what we do, after that time the ppp connection will be > cut by the access server. > > So why does the protocol designer set this parameter? > > Jing > Either a pppoe server keeps every pppoe conversation it has ever started - more precisely: answered - forever, or you need a way to shut down a pppoe conversation which is no longer needed. Neither Ethernet nor an Internet is "reliable", so a "shut down request" by the client may be lost. Therefore you introduce a timeout. What you may be mixing up, however, is the timeout for a ppp connection (don't ask me what it is called, I would have to look it up), i.e. the time until a server shuts down a connection if no packets arrive from the peer, with the maximum connection time used by some German providers to discriminate dial-up connections from permanent ones. The former has technical reasons, the latter economic ones. > > > > ' spamcontrol' -- ------------------------------------------------------------------ Detlef Bosau Galileistraße 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de
From debarshisanyal at gmail.com Fri May 31 09:07:54 2013 From: debarshisanyal at gmail.com (Debarshi Sanyal) Date: Fri, 31 May 2013 21:37:54 +0530 Subject: [e2e] Research on game-theoretic protocols for MAC layer in wireless networks Message-ID: Hi, I joined this group more than a year ago and often find the discussions quite interesting. Over the past decade, a lot of work has been done on game-theoretic models of telecom networks but I haven't heard much on this from this group. I am particularly interested in game theory applied to wireless networks. Among other works, I found the game-theoretic MAC protocols for wireless networks developed by Lijun Chen at Caltech and Mung Chiang at Princeton ( http://www.princeton.edu/~chiangm/publicationsselect.html) very elegant and powerful. It would be nice if people shared their opinions on the current state of the art in game-theoretic models (especially at the MAC layer) of wireless networks and their future prospects in real life. Are the models practical? Are the algorithms efficient? Will we really have these models embedded in real networks? Regards, Debarshi Kumar Sanyal -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130531/d0a8876e/attachment.html
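For readers who have not run into this class of models, their flavour can be conveyed with a very small example. The game below is a generic slotted-Aloha-style access game with a per-transmission cost; the utility function, the parameters v and c, and the function names are illustrative assumptions for this sketch, not the formulation used in the papers Debarshi cites. Each of n stations picks a transmission probability; a slot is useful only if exactly one station transmits, and because the utility is linear in a station's own probability, the symmetric Nash equilibrium has a closed form.

def symmetric_nash_p(n, v=1.0, c=0.1):
    """Interior symmetric equilibrium of U_i = p_i * ((1 - p)**(n - 1) * v - c):
    every station is indifferent, i.e. (1 - p)**(n - 1) * v == c."""
    if c >= v:
        return 0.0                     # transmitting never pays off
    return 1.0 - (c / v) ** (1.0 / (n - 1))

def throughput(n, p):
    """Expected fraction of slots carrying exactly one transmission."""
    return n * p * (1.0 - p) ** (n - 1)

for n in (2, 5, 10, 50):
    p = symmetric_nash_p(n)
    print(f"n={n:3d}  p*={p:.3f}  equilibrium throughput={throughput(n, p):.3f}")

Whether stations playing such an equilibrium use the channel efficiently, and whether real radios can be trusted (or incentivised) to play it at all, is exactly the kind of question raised above and taken up in the replies that follow.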
If any of you are working in this area, I would love to hear from you whether you find these papers interesting / useful. Thanks, Debarshi On 31 May 2013 21:37, Debarshi Sanyal wrote: > Hi, > > I joined this group more than a year ago and often find the discussions > quite interesting. > > Over the past decade, a lot of work has been done on game-theoretic models > of telecom networks but I haven't heard much on this from this group. I am > particularly interested in game-theory applied to wireless networks. Among > other works, I found the game-theoretic MAC protocols for wireless networks > developed by Lijun Chen at Caltech and Mung Chiang at Princeton ( > http://www.princeton.edu/~chiangm/publicationsselect.html) very elegant > and powerful. > > It would be nice if people share their opinions on the current state of > the art in game-theoretic models (especially at MAC layer) of wireless > networks and their future prospects in real life. Are the models practical? > Are the algorithms efficient? Will we really have these models embedded in > real networks? > > > > Regards, > Debarshi Kumar Sanyal > > -- Regards, Debarshi -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130531/7be477f3/attachment.html From ratul at microsoft.com Fri May 31 09:46:06 2013 From: ratul at microsoft.com (Ratul Mahajan) Date: Fri, 31 May 2013 16:46:06 +0000 Subject: [e2e] Research on game-theoretic protocols for MAC layer in wireless networks In-Reply-To: References: Message-ID: Debarshi - Some of us went down that path a while ago, and our experience is captured here: Experiences applying game theory to system design Ratul Mahajan, Maya Rodrig, David Wetherall, John Zahorjan Workshop on Practice and theory of incentives in networked systems (PINS), 2004 In a nutshell, we didn't find it easy/productive to arrive at practical designs based on game theoretic models. But we did find it useful to analyze our designs (at a high-level, not formally) based on some of the models. Cheers. From: end2end-interest-bounces at postel.org [mailto:end2end-interest-bounces at postel.org] On Behalf Of Debarshi Sanyal Sent: Friday, May 31, 2013 9:08 AM To: end2end-interest at postel.org Subject: [e2e] Research on game-theoretic protocols for MAC layer in wireless networks Hi, I joined this group more than a year ago and often find the discussions quite interesting. Over the past decade, a lot of work has been done on game-theoretic models of telecom networks but I haven't heard much on this from this group. I am particularly interested in game-theory applied to wireless networks. Among other works, I found the game-theoretic MAC protocols for wireless networks developed by Lijun Chen at Caltech and Mung Chiang at Princeton (http://www.princeton.edu/~chiangm/publicationsselect.html) very elegant and powerful. It would be nice if people share their opinions on the current state of the art in game-theoretic models (especially at MAC layer) of wireless networks and their future prospects in real life. Are the models practical? Are the algorithms efficient? Will we really have these models embedded in real networks? Regards, Debarshi Kumar Sanyal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130531/ca424b7d/attachment.html From debarshisanyal at gmail.com Fri May 31 23:19:56 2013 From: debarshisanyal at gmail.com (Debarshi Sanyal) Date: Sat, 1 Jun 2013 11:49:56 +0530 Subject: [e2e] Research on game-theoretic protocols for MAC layer in wireless networks In-Reply-To: References: Message-ID: Hi Ratul, Thanks for the paper. Appreciate that beneath the apparent elegance and simplicity of Nash equilibrium, there lie umpteen computational challenges. So I wonder whether game theory with its ubiquity in Science and Economics can truly help in system design especially wireless networks. Waiting for more comments and experience reports... -Debarshi On 31 May 2013 22:16, Ratul Mahajan wrote: > ** ** > > ** ** > > Debarshi ?**** > > ** ** > > Some of us went down that path a while ago, and our experience is captured > here:**** > > ** ** > > *Experiences applying game theory to system design* > Ratul Mahajan, Maya Rodrig, David Wetherall, John Zahorjan > Workshop on Practice and theory of incentives in networked systems (PINS), > 2004**** > > ** ** > > In a nutshell, we didn?t find it easy/productive to arrive at practical > designs based on game theoretic models. But we did find it useful to > analyze our designs (at a high-level, not formally) based on some of the > models.**** > > ** ** > > Cheers.**** > > ** ** > > *From:* end2end-interest-bounces at postel.org [mailto: > end2end-interest-bounces at postel.org] *On Behalf Of *Debarshi Sanyal > *Sent:* Friday, May 31, 2013 9:08 AM > *To:* end2end-interest at postel.org > *Subject:* [e2e] Research on game-theoretic protocols for MAC layer in > wireless networks**** > > ** ** > > Hi,**** > > ** ** > > I joined this group more than a year ago and often find the discussions > quite interesting.**** > > ** ** > > Over the past decade, a lot of work has been done on game-theoretic models > of telecom networks but I haven't heard much on this from this group. I am > particularly interested in game-theory applied to wireless networks. Among > other works, I found the game-theoretic MAC protocols for wireless networks > developed by Lijun Chen at Caltech and Mung Chiang at Princeton ( > http://www.princeton.edu/~chiangm/publicationsselect.html) very elegant > and powerful.**** > > ** ** > > It would be nice if people share their opinions on the current state of > the art in game-theoretic models (especially at MAC layer) of wireless > networks and their future prospects in real life. Are the models practical? > Are the algorithms efficient? Will we really have these models embedded in > real networks?**** > > ** ** > > ** ** > > ** ** > > Regards, > Debarshi Kumar Sanyal**** > > ** ** > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20130601/71b1eea9/attachment.html
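Debarshi's closing point about computation can be illustrated with the same toy access game sketched earlier in this thread (again an assumption-laden sketch, not any published algorithm): even in that tiny game, simultaneous best responses never converge - they flip between "always transmit" and "never transmit" - while a damped update only hovers near the symmetric equilibrium.

def best_response(p_others, n, v=1.0, c=0.1):
    """Utility is linear in a station's own probability, so the best reply is 0 or 1."""
    return 1.0 if (1.0 - p_others) ** (n - 1) * v > c else 0.0

def iterate(n=5, steps=40, damping=0.1, p0=0.2):
    """Everyone plays p; each step moves p a fraction `damping` toward the best reply."""
    p, trace = p0, [p0]
    for _ in range(steps):
        p = (1.0 - damping) * p + damping * best_response(p, n)
        trace.append(p)
    return trace

print("pure best response  :", [round(x, 2) for x in iterate(damping=1.0)[:8]])
print("damped best response:", [round(x, 2) for x in iterate(damping=0.1)[-5:]])

Some form of damping or carefully chosen utility shaping is what turns the abstract equilibrium into something a distributed MAC could actually track; as Ratul's experience report above suggests, that step is where much of the engineering difficulty lives.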