From frasker at hotmail.com  Wed Aug  2 03:34:30 2006
From: frasker at hotmail.com (F. Bouy)
Date: Wed, 02 Aug 2006 10:34:30 +0000
Subject: [e2e] TCP ACK HeuristicII
Message-ID: <BAY113-F5A15D46D06258CA5E2419B5520@phx.gbl>

Hi,

I implemented a downscaled TCP NewReno with the ACK heuristic based on RFC
3782. I notice that a timeout very likely triggers subsequent timeouts. Upon
debugging the code, I notice that the ACK heuristic successfully avoids
re-entering fast retransmit (the three duplicate ACKs are due to the
unnecessary retransmission of three packets), but the packets in flight
quickly drain out because the code handling the duplicate ACKs does not
inject new packets into the network. Consequently, a timeout occurs.

Although it is common sense that one should inflate cwnd by three MSS
whenever the ACK heuristic determines that there was an unnecessary
retransmission, RFC 3782 says nothing about cwnd apart from "what not
to do".
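For what it's worth, the "common sense" fix described above can be sketched as follows: inflate cwnd by one MSS per duplicate ACK (as ordinary fast recovery does) so that new segments keep the window from draining to a timeout. This is only a sketch of one plausible behavior, not something RFC 3782 mandates; the class and field names (cwnd, flight_size) are invented for illustration.

```python
# Sketch of duplicate-ACK handling when the NewReno ACK heuristic
# (RFC 3782) decides NOT to re-enter fast retransmit.  Hypothetical
# names; byte-counted windows for simplicity.

MSS = 1460

class NewRenoSketch:
    def __init__(self):
        self.cwnd = 10 * MSS        # congestion window, in bytes
        self.flight_size = 10 * MSS # bytes currently in flight
        self.dup_acks = 0

    def on_duplicate_ack(self, heuristic_says_spurious):
        self.dup_acks += 1
        if heuristic_says_spurious:
            # The dup ACKs come from our own unnecessary retransmissions;
            # do not retransmit again, but inflate cwnd by one MSS per
            # dup ACK so new data can be sent and the window does not
            # drain out, which is what triggers the timeout above.
            self.cwnd += MSS
            self.try_send_new_data()

    def try_send_new_data(self):
        # Stand-in for transmitting new segments while the window allows.
        while self.flight_size + MSS <= self.cwnd:
            self.flight_size += MSS
```

After three spurious dup ACKs, cwnd and the data in flight have both grown by three MSS, so the connection keeps the ACK clock running instead of stalling.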



From jtk at northwestern.edu  Wed Aug  2 15:17:05 2006
From: jtk at northwestern.edu (John Kristoff)
Date: Wed, 2 Aug 2006 17:17:05 -0500
Subject: [e2e] What if there were no well known numbers?
Message-ID: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>

Could the removal of well known numbers actually be a rousing change
more fundamental to the Internet architecture than anything we've seen
before, even more so than commercialization, Microsoft Windows
implementation nuances, NATs and multihoming?  Indulge me for a moment.

There is an Internet Draft that has as part of its file name
"no-more-well-known-ports".  The basic idea is that DNS SRV lookups
should be used to determine a unique port with which to get service
from the intended destination server.

In some ways this approach is appealing.  I thought it might be a
nice way to slow the tide of arbitrary protocol port filtering and
hamper common remote attacks against a particular well known service.

Looking ahead a bit however, if this were widely implemented, what
other outcomes might we see given some time?  DNS would become
increasingly important of course.  Maybe even enough for a small
boom market within that sector.  I can envision companies selling
boxes that "mangle" or proxy SRV responses in the name of some
defined site policy.

In short, couldn't this, wouldn't this, lead to a rapid rise in DNS-
based walled gardens (or if you prefer the quick and steady rise of
a fractured root, eventual modus operandi) as everyone moves to
replace their udp/tcp packet manglers with RR-scrubbers?

Am I way off here?

John

From dpreed at reed.com  Wed Aug  2 19:39:27 2006
From: dpreed at reed.com (David P. Reed)
Date: Wed, 02 Aug 2006 22:39:27 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
Message-ID: <44D161DF.9070805@reed.com>

In summary, the security emperor would be exposed to the elements, as 
follows:

well-known ports are indeed a bad idea, at least in my personal opinion.

In my ancient proposed protocol (which I dropped in favor of helping 
build TCP), called DSP, the concept was that an endpoint's name did not 
contain a separate "host" boundary.  Instead it was the name of a 
process port.   One could not tell by the endpoint-name structure 
whether two named endpoints were on the same machine.

The advantage of endpoint unique-naming was that it increased 
"information hiding" and therefore enhanced modularity.  The same would 
be true if well-known ports were dropped.

The concept of "well-known ports" was built for a world in which there 
was no lookup function, no DNS - in fact for a world where people typed 
addresses in octal, long before there was a naming service (even before 
the hosts file).

Then operating systems API types decided that they should only allow 
processes of certain privileges to listen on certain predefined ports.   
This would have been unnecessary if all services were offered on 
randomly chosen endpoint addresses that were cataloged upon creation in 
a DNS-like service.  

But you asked what would go wrong.   Here's what would break.   The 
entire Internet security community operates under the assumption that 
port numbers have magical, mysterious properties such that one can 
"block ports" and achieve perfect security.

In fact, blocking ports achieves no security to speak of.   But you'd be 
threatening to expose the Emperor's nakedness with this proposal.


John Kristoff wrote:
> Could the removal of well known numbers actually be a rousing change
> more fundamental to the Internet architecture than anything we've seen
> before, even more so than commercialization, Microsoft Windows
> implementation nuances, NATs and multihoming?  Indulge me for a moment.
>
> There is an Internet Draft that has as part of its file name
> "no-more-well-known-ports".  The basic idea is that DNS SRV lookups
> should be used to determine a unique port with which to get service
> from the intended destination server.
>
> In some ways this approach is appealing.  I thought it might be a
> nice way to slow the tide of arbitrary protocol port filtering and
> hamper common remote attacks against a particular well known service.
>
> Looking ahead a bit however, if this were widely implemented, what
> other outcomes might we see given some time?  DNS would become
> increasingly important of course.  Maybe even enough for a small
> boom market within that sector.  I can envision companies selling
> boxes that "mangle" or proxy SRV responses in the name of some
> defined site policy.
>
> In short, couldn't this, wouldn't this, lead to a rapid rise in DNS-
> based walled gardens (or if you prefer the quick and steady rise of
> a fractured root, eventual modus operandi) as everyone moves to
> replace their udp/tcp packet manglers with RR-scrubbers?
>
> Am I way off here?
>
> John
>
>
>   


From huitema at windows.microsoft.com  Wed Aug  2 21:18:47 2006
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Wed, 2 Aug 2006 21:18:47 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D161DF.9070805@reed.com>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D161DF.9070805@reed.com>
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D064017E316B@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> In fact, blocking ports achieves no security to speak of.   But you'd
> be threatening to expose the Emperor's nakedness with this proposal.

Blocking ports is a "black list" approach, i.e. mark something as
dangerous, and then block it. Many edge firewalls follow a "white list"
approach, i.e. mark something as innocuous and then allow it. In that
case, being able to quickly identify the application actually enhances
connectivity.

Of course, I am well aware of the games that can be played, e.g. running
HTTP on some random port number, or running some random application on
port 80...
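The black-list versus white-list distinction above can be sketched as two trivial port-filter policies (purely illustrative; real firewalls match on far more than a destination port, and these class names are invented):

```python
# Black-list vs white-list port filtering, as contrasted above.

class BlackList:
    """Mark something as dangerous, then block it (default-allow)."""
    def __init__(self, blocked):
        self.blocked = set(blocked)
    def allows(self, port):
        return port not in self.blocked

class WhiteList:
    """Mark something as innocuous, then allow it (default-deny)."""
    def __init__(self, permitted):
        self.permitted = set(permitted)
    def allows(self, port):
        return port in self.permitted
```

Note the defaults: the black list is default-allow and the white list default-deny, which is why, under a white-list edge firewall, being able to quickly identify the application (and open its port) enhances connectivity rather than restricting it.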

-- Christian Huitema

From fergdawg at netzero.net  Wed Aug  2 21:44:43 2006
From: fergdawg at netzero.net (Fergie)
Date: Thu, 3 Aug 2006 04:44:43 GMT
Subject: [e2e] What if there were no well known numbers?
Message-ID: <20060802.214446.19107.586678@webmail50.lax.untd.com>

An embedded and charset-unspecified text was scrubbed...
Name: not available
Url: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060803/5b23a45f/attachment.ksh

From spencer at mcsr-labs.org  Thu Aug  3 05:15:24 2006
From: spencer at mcsr-labs.org (Spencer Dawkins)
Date: Thu, 3 Aug 2006 07:15:24 -0500
Subject: [e2e] What if there were no well known numbers?
References: <20060802.214446.19107.586678@webmail50.lax.untd.com>
Message-ID: <06ce01c6b6f6$80032e60$4f087c0a@china.huawei.com>

Hi, Fergie,

I was confused when I read this the first time, so kept reading. I think I 
understand where you're coming from now. Please let me try to restate...


> Not responding necessarily to Christian, but more to the fallacy
> that blocking ports (paraphrased) "...doesn't achieve anything."
>
> That's a ridiculous assumption.
>
> When threat intelligence is gleaned in (near) real-time, and
> aged appropriately (bad stuff is taken off-line), blocking it
> (or perhaps, access to it, as the case may be) achieves a great
> deal. Depending on what you want to achieve.

You're coming from previous experience where people closed down specific 
ports, based on attacks that were exploiting the availability of specific 
ports.

If this is what you are saying, I agree. Detecting 135/TCP scans was the 
documented detection method for Blaster, for example.

I think the "...doesn't achieve anything" is looking a bit further down the 
road, and a bit further from side to side going down the road:

- attacks are forced onto the same (usually open) ports as well-known 
applications, as network administrators move to "white lists" for ports, and

- as more and more application protocols are port-agile, you have less and 
less clue about what the traffic actually is, if you care about more than 
"is this an attack?".

with "everything over port 80" being the terminal condition (there is only 
one port that you can count on, so all application protocols and all attacks 
use port 80).

Does this make sense?

Thank you,

Spencer 



From Jon.Crowcroft at cl.cam.ac.uk  Thu Aug  3 05:45:13 2006
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Thu, 03 Aug 2006 13:45:13 +0100
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: Message from "Spencer Dawkins" <spencer@mcsr-labs.org> of "Thu,
	03 Aug 2006 07:15:24 CDT."
	<06ce01c6b6f6$80032e60$4f087c0a@china.huawei.com> 
Message-ID: <E1G8cZe-0007mt-00@mta1.cl.cam.ac.uk>

with IPv6, who needs ports?

From fergdawg at netzero.net  Thu Aug  3 06:34:10 2006
From: fergdawg at netzero.net (Fergie)
Date: Thu, 3 Aug 2006 13:34:10 GMT
Subject: [e2e] What if there were no well known numbers?
Message-ID: <20060803.063439.793.56436@webmail17.lax.untd.com>

An embedded and charset-unspecified text was scrubbed...
Name: not available
Url: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060803/ffc7654d/attachment.ksh

From fergdawg at netzero.net  Thu Aug  3 06:36:29 2006
From: fergdawg at netzero.net (Fergie)
Date: Thu, 3 Aug 2006 13:36:29 GMT
Subject: [e2e] What if there were no well known numbers?
Message-ID: <20060803.063647.793.56464@webmail17.lax.untd.com>

An embedded and charset-unspecified text was scrubbed...
Name: not available
Url: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060803/156bb7a2/attachment.ksh

From spencer at mcsr-labs.org  Thu Aug  3 07:23:33 2006
From: spencer at mcsr-labs.org (Spencer Dawkins)
Date: Thu, 3 Aug 2006 09:23:33 -0500
Subject: [e2e] What if there were no well known numbers?
References: <20060803.063647.793.56464@webmail17.lax.untd.com>
Message-ID: <008601c6b708$67121ee0$0600a8c0@china.huawei.com>

Thanks for following up - I understand better now, and think we are in 
agreement.


> -- Jon Crowcroft <Jon.Crowcroft at cl.cam.ac.uk> wrote:
>
>>with IPv6, who needs ports?
>>
>
> Well, true that. :-)
>
> As I mentioned in an earlier missive, it doesn't really matter
> one way or the other. I'm fairly certain that the only reason(s)
> that we have them now is due to legacy, and out of convenience.
> Not necessarily in that order. :-)
>
> - ferg 



From jtk at northwestern.edu  Thu Aug  3 07:51:11 2006
From: jtk at northwestern.edu (John Kristoff)
Date: Thu, 3 Aug 2006 09:51:11 -0500
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D161DF.9070805@reed.com>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D161DF.9070805@reed.com>
Message-ID: <20060803145112.F208D136C82@aharp.ittns.northwestern.edu>

On Wed, 02 Aug 2006 22:39:27 -0400
"David P. Reed" <dpreed at reed.com> wrote:

[...]
> In fact, blocking ports achieves no security to speak of.   But you'd
> be threatening to expose the Emperor's nakedness with this proposal.

The thread immediately went to a place I wasn't expecting and actually
had not intended it to go.  I think the basic premise is that filtering
by magic numbers, be they ports, protocols or even some pattern match,
is an exercise in futility, and a large number of people believe that
(though certainly not all).  Though this does raise another point I had
originally wanted to raise.  What if well known numbers, and even the
protocol semantics themselves, at least those that traditionally matter
on an end-to-end basis, were used in unexpected ways?  So for example,
what if I start setting up systems in which TCP is IP protocol 17 or I
rewrite my TCP stacks so that window is effectively hard coded to
infinity and ACKs are only used to pander to the middle boxes that want
to see them?  It might not be very nice, but what protocol police are
going to stop me from doing this?  I think the exposure of nakedness
you described would likely be the outcome again if these sorts of things
ever got off the ground in any significant way.

> > In short, couldn't this, wouldn't this, lead to a rapid rise in DNS-
> > based walled gardens (or if you prefer the quick and steady rise of
> > a fractured root, eventual modus operandi) as everyone moves to
> > replace their udp/tcp packet manglers with RR-scrubbers?

So I'd like to try to highlight the specific point on the use of DNS
that I was trying to make.  I had taken a quick look for previous
discussion on this (sorry, always bad form to do that _after_ a post)
and realized I forgot about Joe Touch's draft-touch-tcp-portnames.
He already mentions the challenge of autonomy with SRV records.  It
certainly seems like the widespread use of SRV records could well be
the sort of fundamental change end2end lovers would fear most.  Or
maybe I'm just here to stir up some noise on the otherwise unusually
quiet end2end list?  :-)

John

From huitema at windows.microsoft.com  Thu Aug  3 09:50:19 2006
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Thu, 3 Aug 2006 09:50:19 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060803.063439.793.56436@webmail17.lax.untd.com>
References: <20060803.063439.793.56436@webmail17.lax.untd.com>
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D064017E3295@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> In other words, what if Microsoft loc-srv/epmap did not use tcp/135?
> What if it just used any various port at any time? 

The end point mapper enables RPC based applications to run on variable
ports, allocated when the application starts. So, it already enables the
absence of "well known port numbers" for these applications. But to do
that, you need a rendezvous service at some well known location...
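The endpoint-mapper pattern described here, an application binding an ephemeral port and registering it with a rendezvous service at one well-known location, can be sketched as follows. This is an in-process toy with invented names; the real thing (epmap) is of course a network service, itself reachable on the well-known tcp/135.

```python
import socket

# Minimal endpoint-mapper sketch: services bind to an OS-chosen
# ephemeral port and register it with a mapper; clients look the
# port up by service name instead of relying on a well-known number.

class EndpointMapper:
    def __init__(self):
        self._registry = {}
    def register(self, service_name, port):
        self._registry[service_name] = port
    def lookup(self, service_name):
        return self._registry.get(service_name)

def start_service(mapper, name):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
    s.listen(1)
    port = s.getsockname()[1]  # discover which port we actually got
    mapper.register(name, port)
    return s, port
```

The well-known number has not disappeared, it has just moved: clients still need to find the mapper itself, which is exactly the bootstrapping point made above.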

-- Christian Huitema 


From saikat at cs.cornell.edu  Thu Aug  3 11:14:41 2006
From: saikat at cs.cornell.edu (Saikat Guha)
Date: Thu, 03 Aug 2006 19:14:41 +0100
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <70C6EFCDFC8AAD418EF7063CD132D064017E3295@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
References: <20060803.063439.793.56436@webmail17.lax.untd.com>
	<70C6EFCDFC8AAD418EF7063CD132D064017E3295@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>
Message-ID: <1154628881.21466.58.camel@localhost.localdomain>

On Thu, 2006-08-03 at 09:50 -0700, Christian Huitema wrote:
> > In other words, what if Microsoft loc-srv/epmap did not use tcp/135?
> > What if it just used any various port at any time? 
> 
> The end point mapper enables RPC based applications to run on variable
> ports, allocated when the application starts. So, it already enables the
> absence of "well known port numbers" for these applications. But to do
> that, you need a rendezvous service at some well known location...

In a private network, one can get around the "well known
location"/rendezvous bootstrapping limitation with some local broadcast
-- UPnP, Apple's Rendezvous/Zeroconf, and even DHCP to some extent
perhaps. That further reduces the problem to that of fishing out well
known packet formats on the wire.

Across network boundaries, however, it starts looking more like Plutarch
with interstitial functions/middleboxes translating
configuration-requests on one side to that on the other side.

-- 
Saikat

From saikat at cs.cornell.edu  Thu Aug  3 11:25:24 2006
From: saikat at cs.cornell.edu (Saikat Guha)
Date: Thu, 03 Aug 2006 19:25:24 +0100
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060803.063647.793.56464@webmail17.lax.untd.com>
References: <20060803.063647.793.56464@webmail17.lax.untd.com>
Message-ID: <1154629524.21466.69.camel@localhost.localdomain>

On Thu, 2006-08-03 at 13:36 +0000, Fergie wrote:
> -- Jon Crowcroft <Jon.Crowcroft at cl.cam.ac.uk> wrote:
> 
> >with IPv6, who needs ports?
> >
> 
> Well, true that. :-)
> 
> As I mentioned in an earlier missive, it doesn't really matter
> one way or the other. I'm fairly certain that the only reason(s)
> that we have them now is due to legacy, and out of convenience.
> Not necessarily in that order. :-)

Actually, if there are no well-known ports then there must presumably be
some mechanism to discover what dynamic port the app is using. 

Why stop at ports then? The same mechanism can be used to discover what
IP address that app is using. 
With end-to-end discovery, who needs static IP(v6) addresses?

-- 
Saikat

From touch at ISI.EDU  Thu Aug  3 15:20:00 2006
From: touch at ISI.EDU (Joe Touch)
Date: Thu, 03 Aug 2006 15:20:00 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
Message-ID: <44D27690.30507@isi.edu>



John Kristoff wrote:
> Could the removal of well known numbers actually be a rousing change
> more fundamental to the Internet architecture than anything we've seen
> before, even more so than commercialization, Microsoft Windows
> implementation nuances, NATs and multihoming?  Indulge me for a moment.
> 
> There is an Internet Draft that has as part of its file name
> "no-more-well-known-ports".

There's a somewhat related one called "draft-touch-tcp-portnames-00.txt".

> The basic idea is that DNS SRV lookups
> should be used to determine a unique port with which to get service
> from the intended destination server.

The above document explains why SRV records are not, IMO, a viable
alternative. They add an extra round trip of delay for first use which
can be avoided, and they endorse using the DNS as a place in which to
register names which are fundamentally under control of the endpoint anyway.

It also explains why 'portmapper'-like solutions may be better in
keeping control at the endsystem, but still require additional round
trip times.

As to blocking/opening ports based on number, that makes the assumption
that port number has meaning outside the two endpoints of a connection,
which it does not.

Joe


From Mikael.Latvala at nokia.com  Thu Aug  3 23:24:44 2006
From: Mikael.Latvala at nokia.com (Mikael.Latvala@nokia.com)
Date: Fri, 4 Aug 2006 09:24:44 +0300
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
Message-ID: <326A98184EC61140A2569345DB3614C1032B8F5A@esebe105.NOE.Nokia.com>

Getting back to the original question I would say no.

First of all, these RR-scrubbers as you call them would restore the
original network programming paradigm, i.e. e2e address transparency,
which would make a huge difference when one considers all the problems
these udp/tcp packet manglers have created.

I don't know what you are referring to when you say walled gardens. If
you mean architectures similar to WAP, then I would say that the use of
SRV RRs would not lead to a rapid rise in DNS-based walled gardens. If
you mean "protected" intranets, then maybe, depending on how one
interprets the word protected. One could easily complement basic SRV RR
service with some kind of an authorization mechanism where the service
provider could determine to which party the DNS server can expose the
binding between a logical service name and a port number.

IMHO the use of SRV RRs, which should be encouraged anyways, would lead
to a new breed of NAT boxes which provide service mapping using DNS SRV
RR in addition to traditional address mangling and possible firewall
functionality.

/Mikael

>Looking ahead a bit however, if this were widely implemented, 
>what other outcomes might we see given some time?  DNS would 
>become increasingly important of course.  Maybe even enough 
>for a small boom market within that sector.  I can envision 
>companies selling boxes that "mangle" or proxy SRV responses 
>in the name of some defined site policy.
>
>In short, couldn't this, wouldn't this, lead to a rapid rise 
>in DNS- based walled gardens (or if you prefer the quick and 
>steady rise of a fractured root, eventual modus operandi) as 
>everyone moves to replace their udp/tcp packet manglers with 
>RR-scrubbers?
>
>Am I way off here?
>
>John
>

From day at std.com  Fri Aug  4 07:30:30 2006
From: day at std.com (John Day)
Date: Fri, 4 Aug 2006 10:30:30 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
Message-ID: <a06230975c0f90a488af7@[10.0.1.3]>

You are absolutely correct.   Well-known sockets were a kludge.  An 
expediency so that we could test the 3 applications we had and get 
ready for a demo.  They weren't intended to last forever, or even 
very long.  But since we went 20 years with no new applications, a 
lot of people began to make up myths as to why they were a great 
idea.  We have to guard those stone tablets, after all.

Well-known sockets are just one indication that what we have is an 
unfinished demo.

Take care,
John


At 17:17 -0500 2006/08/02, John Kristoff wrote:
>Could the removal of well known numbers actually be a rousing change
>more fundamental to the Internet architecture than anything we've seen
>before, even more so than commercialization, Microsoft Windows
>implementation nuances, NATs and multihoming?  Indulge me for a moment.
>
>There is an Internet Draft that has as part of its file name
>"no-more-well-known-ports".  The basic idea is that DNS SRV lookups
>should be used to determine a unique port with which to get service
>from the intended destination server.
>
>In some ways this approach is appealing.  I thought it might be a
>nice way to slow the tide of arbitrary protocol port filtering and
>hamper common remote attacks against a particular well known service.
>
>Looking ahead a bit however, if this were widely implemented, what
>other outcomes might we see given some time?  DNS would become
>increasingly important of course.  Maybe even enough for a small
>boom market within that sector.  I can envision companies selling
>boxes that "mangle" or proxy SRV responses in the name of some
>defined site policy.
>
>In short, couldn't this, wouldn't this, lead to a rapid rise in DNS-
>based walled gardens (or if you prefer the quick and steady rise of
>a fractured root, eventual modus operandi) as everyone moves to
>replace their udp/tcp packet manglers with RR-scrubbers?
>
>Am I way off here?
>
>John


From braden at ISI.EDU  Fri Aug  4 09:53:45 2006
From: braden at ISI.EDU (Bob Braden)
Date: Fri, 04 Aug 2006 09:53:45 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D27690.30507@isi.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
Message-ID: <5.1.0.14.2.20060804095006.00ab4e58@boreas.isi.edu>


John Kristoff wrote:



> > The basic idea is that DNS SRV lookups
> > should be used to determine a unique port with which to get service
> > from the intended destination server.


I recall a discussion of this idea in the IAB in the early 1980s.  The argument
that won out was that we did not believe people would keep SRV records
current, and as a result things would break badly and often if we depended
upon them.

Bob Braden


From salman at cs.columbia.edu  Fri Aug  4 12:21:58 2006
From: salman at cs.columbia.edu (Salman Abdul Baset)
Date: Fri, 4 Aug 2006 15:21:58 -0400 (EDT)
Subject: [e2e] OS Implementation of Byte Counting during Congestion Avoidance
Message-ID: <Pine.GSO.4.58.0608041514330.27587@disco.cs.columbia.edu>

I was wondering if OSes implement byte counting during congestion
avoidance. From my investigation it appears:

1) Linux
   2.6.16 and higher implement RFC 3465 Appropriate Byte Counting during
   slow start. However, it does not implement byte counting during
   congestion avoidance.

2) Windows XP
   Does not implement byte counting during slow start and congestion
   avoidance.

3) Windows Vista
   Will implement RFC 3465 which means byte counting during slow start.
   Not sure about byte counting during CA.

4) MacOS, FreeBSD, OpenBSD?
   Does anyone who uses these OSes know if they implement byte counting
   during slow-start and congestion avoidance?
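For reference, here is a sketch of what RFC 3465 byte counting looks like in both phases: in slow start cwnd grows by min(bytes_acked, L) per ACK, with the limit L = 2*SMSS, and in congestion avoidance cwnd grows by one SMSS once a full cwnd of data has been acknowledged. Variable names are illustrative, not taken from any of the stacks mentioned above.

```python
# Sketch of RFC 3465 Appropriate Byte Counting, both phases.

SMSS = 1460
L = 2 * SMSS   # RFC 3465 slow-start increase limit per ACK

class AbcSender:
    def __init__(self, cwnd, ssthresh):
        self.cwnd = cwnd
        self.ssthresh = ssthresh
        self.bytes_acked = 0   # accumulator for congestion avoidance

    def on_ack(self, acked):
        """acked: number of previously-unacknowledged bytes this ACK covers."""
        if self.cwnd < self.ssthresh:
            # Slow start: grow by bytes acked, capped at L per ACK,
            # so delayed ACKs no longer slow the opening of cwnd.
            self.cwnd += min(acked, L)
        else:
            # Congestion avoidance with byte counting: one SMSS of
            # growth per full cwnd of acknowledged data.
            self.bytes_acked += acked
            if self.bytes_acked >= self.cwnd:
                self.bytes_acked -= self.cwnd
                self.cwnd += SMSS
```

The congestion-avoidance branch is the part the stacks surveyed above mostly omit; they apply ABC in slow start only.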

Thanks
Salman

From braden at ISI.EDU  Fri Aug  4 12:46:39 2006
From: braden at ISI.EDU (Bob Braden)
Date: Fri, 04 Aug 2006 12:46:39 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D161DF.9070805@reed.com>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
Message-ID: <5.1.0.14.2.20060804124410.00b02e50@boreas.isi.edu>

At 10:39 PM 8/2/2006 -0400, David P. Reed wrote:


>The concept of "well-known ports" was built for a world in which there was 
>no lookup function, no DNS - in fact for a world where people typed 
>addresses in octal, long before there was a naming service (even before 
>the hosts file).


Well, not exactly.  It was (deliberately) built for a world in which we did
not want increased communication fragility resulting from DNS lookup
failures.  Maybe that concern no longer seems real, but it was the concern.

Bob Braden


From jnc at mercury.lcs.mit.edu  Fri Aug  4 13:08:56 2006
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri,  4 Aug 2006 16:08:56 -0400 (EDT)
Subject: [e2e] What if there were no well known numbers?
Message-ID: <20060804200856.6D46F872EF@mercury.lcs.mit.edu>

    > From: Bob Braden <braden at ISI.EDU>

    > At 10:39 PM 8/2/2006 -0400, David P. Reed wrote:

    >> The concept of "well-known ports" was built for a world in which there
    >> was no lookup function, no DNS - in fact for a world where people
    >> typed addresses in octal, long before there was a naming service (even
    >> before the hosts file).

    > Well, not exactly. It was (deliberately) built for a world in which we
    > did not want increased communication fragility resulting from DNS
    > lookup failures.

Does it have to be one (no infrastructure) or the other (fragility)? I always
thought it was some of both.

In any event, I got the impression that TCP pretty much just followed NCP's
lead on this. Is there anyone here who was around for the NCP design who can
comment on what NCP's reasons were for well-known ports? My guess would be
lack of infrastructure (as DPR points out, that was before there was even
HOSTS.TXT).

	Noel

From huitema at windows.microsoft.com  Fri Aug  4 14:23:44 2006
From: huitema at windows.microsoft.com (Christian Huitema)
Date: Fri, 4 Aug 2006 14:23:44 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060804200856.6D46F872EF@mercury.lcs.mit.edu>
References: <20060804200856.6D46F872EF@mercury.lcs.mit.edu>
Message-ID: <70C6EFCDFC8AAD418EF7063CD132D064017E3AA6@WIN-MSG-21.wingroup.windeploy.ntdev.microsoft.com>

> In any event, I got the impression that TCP pretty much just followed
> NCP's lead on this. Is there anyone here who was around for the NCP
> design who can comment on what NCP's reasons were for well-known
ports?
> My guess would be lack of infrastructure (as DPR points out, that was
> before there was even HOSTS.TXT).

That was certainly not the only reason. At about the same time, the ISO
protocols were designed to use a "Transport Service Access Point
Identifier", with pretty much the semantic that Joe is proposing in
"draft-touch-tcp-portnames-00.txt". These protocols would negotiate a
pair of random identifiers for each connection. Some thought that having
port numbers that serve as both "end point identifier" and "connection
identifier" was a better design. Traffic analysis is definitely easier,
and there are advantages in some scenarios, e.g. simultaneous connect.

-- Christian Huitema 

From day at std.com  Fri Aug  4 15:26:58 2006
From: day at std.com (John Day)
Date: Fri, 4 Aug 2006 18:26:58 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <5.1.0.14.2.20060804124410.00b02e50@boreas.isi.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<5.1.0.14.2.20060804124410.00b02e50@boreas.isi.edu>
Message-ID: <a06230981c0f977fa40d6@[10.0.1.3]>

At 12:46 -0700 2006/08/04, Bob Braden wrote:
>At 10:39 PM 8/2/2006 -0400, David P. Reed wrote:
>
>>The concept of "well-known ports" was built for a world in which 
>>there was no lookup function, no DNS - in fact for a world where 
>>people typed addresses in octal, long before there was a naming 
>>service (even before the hosts file).
>
>
>Well, not exactly.  It was (deliberately) built for a world in which
>we did not want increased communication fragility resulting from DNS
>lookup failures.  Maybe that concern no longer seems real, but it was
>the concern.

DNS had not even been dreamed of when well-known sockets were put in 
place.  It was a quick fix to get ready for ICCC '72.  These are 
after the fact excuses to maintain the status quo.

John

From braden at ISI.EDU  Fri Aug  4 15:54:40 2006
From: braden at ISI.EDU (Bob Braden)
Date: Fri, 04 Aug 2006 15:54:40 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060804200856.6D46F872EF@mercury.lcs.mit.edu>
Message-ID: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>


>
>
>
>In any event, I got the impression that TCP pretty much just followed NCP's
>lead on this. Is there anyone here who was around for the NCP design who can
>comment on what NCP's reasons were for well-known ports? My guess would be
>lack of infrastructure (as DPR points out, that was before there was even
>HOSTS.TXT).


I was around during the NCP design, in the next building over from the CS
department where Crocker, Postel, Cerf, ... were laboring.  I attended
most of the NWG meetings and read the RFCs at the time.  So, here is
my opinion, but Steve Crocker himself is best qualified to answer this.

Remember that at the time there was no previous experience with designing
or implementing network protocols. (Well, I guess the Cyclades and maybe
the Cambridge folks were doing something, but it was not well known among
the UCLA grad students who designed NCP).  Crocker et al drew nice
schematic diagrams with process clouds communicating through ports.
How to name these ports?  Well, a 16 bit number (gosh, I forget,.. was it
an 8 bit number?) seemed like the most obvious thing to use, so that is
what they used.  A numeric port was a string of bits with no semantic
interpretation a priori, so it appealed to the reductionist approach that
was common in the early network designs.

Maybe this discussion should be posted to the history list.

Bob Braden

>         Noel


From braden at ISI.EDU  Fri Aug  4 16:07:09 2006
From: braden at ISI.EDU (Bob Braden)
Date: Fri, 04 Aug 2006 16:07:09 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <a06230981c0f977fa40d6@[10.0.1.3]>
References: <5.1.0.14.2.20060804124410.00b02e50@boreas.isi.edu>
	<20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<5.1.0.14.2.20060804124410.00b02e50@boreas.isi.edu>
Message-ID: <5.1.0.14.2.20060804160540.034094e0@boreas.isi.edu>


>
>
>DNS had not even been dreamed of when well-known sockets were put in 
>place.  It was a quick fix to get ready for ICCC '72.  These are after the 
>fact excuses to maintain the status quo.
>
>John

But there was a hosts.txt file, which served the same purpose.  My memory 
of ICCC 72 (yes, I was there) differs
from yours.  I recall Jon designing several versions of the ICP, which 
implicitly decided the issue of port
representation.

Bob


From day at std.com  Fri Aug  4 19:00:33 2006
From: day at std.com (John Day)
Date: Fri, 4 Aug 2006 22:00:33 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>
Message-ID: <a06230985c0f9aa661218@[10.0.1.3]>

I remember that we had already had conversations about 
application-names and network addresses and a directory.  I know that 
a lot of our thinking was using operating systems as a guide to how 
to do it.  But since we only had 3 applications and only one 
occurrence of each per host, and we needed to get something up and 
running, there wasn't time to do it right.  Perhaps we were having 
these conversations with people other than the UCLA people.  Maybe it 
was the Multics crowd.  I can believe that in the very very early 
days that was the logic, but by 1970 or so, we knew better.  From 
about then, I always considered well-known sockets to be the 
equivalent of "hard-wiring low core."

A kludge.

The port numbers in TCP and NCP function as a connection-identifier 
within the scope of the (src, dest) addresses, i.e., they distinguish 
multiple connections/flows between the same two points.  They do not 
identify applications.  The well-known idea is just an expedient 
convention.  It clearly doesn't generalize unless you are McKenzie 
who believed that Telnet and FTP were all you needed. ;-)

At 15:54 -0700 2006/08/04, Bob Braden wrote:
>>In any event, I got the impression that TCP pretty much just followed NCP's
>>lead on this. Is there anyone here who was around for the NCP design who can
>>comment on what NCP's reasons were for well-known ports? My guess would be
>>lack of infrastructure (as DPR points out, that was before there was even
>>HOSTS.TXT).
>
>
>I was around during the NCP design, in the next building over from the CS
>department where Crocker, Postel, Cerf, ... were laboring.  I attended
>most of the NWG meetings and read the RFCs at the time.  So, here is
>my opinion, but Steve Crocker himself is best qualified to answer this.
>
>Remember that at the time there was no previous experience with designing
>or implementing network protocols. (Well, I guess the Cyclades and maybe
>the Cambridge folks were doing something, but it was not well known among
>the UCLA grad students who designed NCP).  Crocker et al drew nice
>schematic diagrams with process clouds communicating through ports.
>How to name these ports?  Well, a 16 bit number (gosh, I forget,.. was it
>an 8 bit number?) seemed like the most obvious thing to use, so that is
>what they used.  A numeric port was a string of bits with no semantic
>interpretation a priori, so it appealed to the reductionist approach that
>was common in the early network designs.
>
>Maybe this discussion should be posted to the history list.
>
>Bob Braden
>
>>         Noel


From gds at best.com  Fri Aug  4 21:04:24 2006
From: gds at best.com (Greg Skinner)
Date: Sat, 5 Aug 2006 04:04:24 +0000
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <a06230985c0f9aa661218@[10.0.1.3]>;
	from day@std.com on Fri, Aug 04, 2006 at 10:00:33PM -0400
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>
	<a06230985c0f9aa661218@[10.0.1.3]>
Message-ID: <20060805040424.A84515@gds.best.vwh.net>

On Fri, Aug 04, 2006 at 10:00:33PM -0400, John Day wrote:
> I remember that we had already had conversations about 
> application-names and network addresses and a directory.  I know that 
> a lot of our thinking was using operating systems as a guide to how 
> to do it.  But since we only had 3 applications and only one 
> occurrence of each per host, and we needed to get something up and 
> running, there wasn't time to do it right.  Perhaps we were having 
> these conversations with people other than the UCLA people.  Maybe it 
> was the Multics crowd.  I can believe that in the very very early 
> days that was the logic, but by 1970 or so, we knew better.  From 
> about then, I always considered well-known sockets to be the 
> equivalent of "hard-wiring low core."
> 
> A kludge.
> 
> The port numbers in TCP and NCP function as a connection-identifier 
> within the scope of the (src, dest) addresses, i.e. it distinguishes 
> multiple connections/flows between the same two points.  They do not 
> identify applications.  The well-known idea is just an expedient 
> convention.  It clearly doesn't generalize unless you are McKenzie 
> who believed that Telnet and FTP were all you needed. ;-)

I wasn't involved in any of the ARPANET R&D, but I was able to piece
together a bit from the old RFCs.  The socket as connection identifier
made its debut in RFC 33.  It was an 8-bit field called AEN (Another
Eight-Bit Number).  The idea that there should be a "directory"
of sockets appeared in RFC 65.  Jon Postel posed the question of
whether standard protocols should have assigned/reserved socket
numbers in RFC 205.  The first call for well known socket numbers came
in RFC 322 (accompanying a concern about which socket numbers were
currently in use at which hosts).  JP proposed that he be the "czar"
for standard socket numbers in RFC 349.  So it seems as if well-known
sockets were an expediency, as you say, which was preserved in TCP and
UDP.

BTW, speaking of hosts.txt, RFC 226 contained the first cut at an
"official" hosts list.  (Before that, the RFC noted that each telnet
implementation provided its own list of host names.)

--gregbo

From day at std.com  Sat Aug  5 06:58:07 2006
From: day at std.com (John Day)
Date: Sat, 5 Aug 2006 09:58:07 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060805040424.A84515@gds.best.vwh.net>
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>
	<a06230985c0f9aa661218@[10.0.1.3]>
	<20060805040424.A84515@gds.best.vwh.net>
Message-ID: <a06230907c0fa544ebf42@[10.0.1.10]>

Well, I came to the ARPANet a little late.  We didn't get involved 
until late 1969 early 1970.  There are a couple of things I have 
noticed from my study of history and being involved in events that 
people want to write about:  What one surmises from the written 
record was going on and what was actually going on are often very 
much at odds.  First of all, the historians have a hard time not 
seeing the written record in terms of the outcome, which of course 
was completely unknown at the time of the events. And for those of us 
in the field, it is even harder.  (I have found that most physicists 
I have talked with, including a Nobel Laureate or two, have a hard time 
imagining doing physics at a time when many of our most obvious 
assumptions were not yet known.)

There was also a lot of independent invention going on.  For example, 
the socket as connection-id was probably invented independently 
several times.  It falls out pretty straightforwardly when you look 
at interprocess communication.  And remember, most of us were not 
data comm people, but operating system guys, so we were using OSs as 
the model for what the net should look like, especially since the 
goal was "resource sharing."

The other interesting thing which is fairly common in the history of 
science is that when there is a paradigm shift, there are usually 
individuals who can't get out of the old paradigm, ones with a foot 
in each as they struggle to make the transition, and people who are 
firmly in the new one.  Often the originator(s) of the shift are in 
the middle group, and some make the shift and some don't.

You can see this in the early networking stuff as we shift from the 
old beads-on-a-string deterministic model to the new stochastic 
layered model.  All of the early network proposals for packet 
switching are virtual circuit (Baran, Davies, Kleinrock's thesis, IMP 
subnet are a foot in both camps: virtual circuit but layered).  Then 
with Pouzin, we get the clean break to connectionless.  (For example, 
if you look at the kinds of things Baran and Roberts have always done 
over the years, they never really embraced connectionless but have 
stayed pretty close to virtual circuit in their efforts.)

This sort of thing happens all the time in science.  It is neat to 
watch the transitions.

But back to this issue: there was a lot more communication going on 
than just the RFCs, and little of it was written down or, if it was, has 
survived.  (A lot of us are pack rats but even we have limits!)  It 
is interesting that Postel was proposing a directory of sockets. 
Those RFCs aren't in the index.  I will look for them further later 
today.  But I know we were thinking in terms of a directory of 
application names before 72.  But as I said, we all realized that we 
had a lot of work to do before we could do that and we weren't there 
yet.  Have to see what MAP or Ari remember.  I remember being 
disappointed in 71 or 72 when it became apparent we weren't going to 
push for the right solution immediately.  I know we always looked at 
it as a stop gap, and I know I wasn't alone.

Take care,
John


At 4:04 AM +0000 8/5/06, Greg Skinner wrote:
>On Fri, Aug 04, 2006 at 10:00:33PM -0400, John Day wrote:
>>  I remember that we had already had conversations about
>>  application-names and network addresses and a directory.  I know that
>>  a lot of our thinking was using operating systems as a guide to how
>>  to do it.  But since we only had 3 applications and only one
>>  occurrence of each per host, and we needed to get something up and
>>  running, there wasn't time to do it right.  Perhaps we were having
>>  these conversations with people other than the UCLA people.  Maybe it
>>  was the Multics crowd.  I can believe that in the very very early
>>  days that was the logic, but by 1970 or so, we knew better.  From
>>  about then, I always considered well-known sockets to be the
>>  equivalent of "hard-wiring low core."
>>
>>  A kludge.
>>
>>  The port numbers in TCP and NCP function as a connection-identifier
>  > within the scope of the (src, dest) addresses, i.e. it distinguishes
>>  multiple connections/flows between the same two points.  They do not
>>  identify applications.  The well-known idea is just an expedient
>>  convention.  It clearly doesn't generalize unless you are McKenzie
>>  who believed that Telnet and FTP were all you needed. ;-)
>
>I wasn't involved in any of the ARPANET R&D, but I was able to piece
>together a bit from the old RFCs.  The socket as connection identifier
>made its debut in RFC 33.  It was a 8-bit field called AEN (Another
>Eight-Bit Number).  The idea that there should be a "directory"
>of sockets appeared in RFC 65.  Jon Postel posed the question of
>whether standard protocols should have assigned/reserved socket
>numbers in RFC 205.  The first call for well known socket numbers came
>in RFC 322 (accompanying a concern about which socket numbers were
>currently in use at which hosts).  JP proposed that he be the "czar"
>for standard socket numbers in RFC 349.  So it seems as if well-known
>sockets were an expediency, as you say, which was preserved in TCP and
>UDP.
>
>BTW, speaking of hosts.txt, RFC 226 contained the first cut at an
>"official" hosts list.  (Before that, the RFC noted that each telnet
>implementation provided its own list of host names.)
>
>--gregbo


From dhc2 at dcrocker.net  Sat Aug  5 08:29:05 2006
From: dhc2 at dcrocker.net (Dave Crocker)
Date: Sat, 05 Aug 2006 08:29:05 -0700
Subject: [e2e] [Fwd: Re: [Fwd: Re: What if there were no well known
	numbers?]]
Message-ID: <44D4B941.7000302@dcrocker.net>


I decided to check with my next-door neighbor from that period...

d/

-------- Original Message --------
Subject: Re: [Fwd: Re: [e2e] What if there were no well known numbers?]
Date: Sat, 5 Aug 2006 11:06:58 -0400
From: Steve Crocker <steve at shinkuro.com>
To: Dave Crocker <dhc2 at dcrocker.net>

The concept of well known ports appeared early in the network
evolution.  The original thought -- at least it was my thinking --
was there would be known port numbers for specific services, just as
there is today, and that the server side would then deal off the
connection to another port, thereby freeing the port for another
connection.  Thus, if we're talking about Telnet, there would be a
standard Telnet port and each client who wanted to connect to that
host would initiate a connection to the Telnet port.  The server host
would accept one of those connections and then move it to another
port on either the same or a different host.  The reconnection part
of this design was forcibly killed on August 5, 1970 -- exactly 36
years ago today! -- in an extraordinary phone call from Barry Wessler
to me because of push back from the community that bubbled up to him
at ARPA.  That left a hole in the design which was eventually filled
by ICP.

Ignoring the politics and specific design, the concept of a well
known port was, I believe, a simple and obvious idea that was
generally accepted as part of the initial framework.  I don't think
there was any great discussion or controversy, so there probably
wasn't much written about it.

Even when you bring DNS into the picture, although one could argue
that each host could establish its own choice of ports for various
services, e.g. host foo.xx could choose port 897 for http service
instead of port 80 and use a resource record in DNS to advertise this
fact, the DNS service itself would still need a standard port
number.  (Well, I suppose you could include the port number along
with the address in the A records corresponding to NS records or have
a service record accompany the A record, but there would still need
to be a well known port for DNS service at the root servers or,
perhaps, the distribution of their port numbers as part of the hints
file and priming response.)

Does this help?  Does this matter?

Steve


Steve Crocker
steve at shinkuro.com

Try Shinkuro's collaboration technology.  Visit www.shinkuro.com.  I
am steve!shinkuro.com.


On Aug 5, 2006, at 10:50 AM, Dave Crocker wrote:

> This is from a long thread on the End to End discussion list.  As I  
> recall, NCP
> didn't have "well known ports" but rather had an Initial Connection  
> Protocol
> that chose connection 'ports' dynamically.  So the only 'well known  
> port' was
> for ICP.
>
> d/
>
> -------- Original Message --------
> Subject: Re: [e2e] What if there were no well known numbers?
> Date: Fri,  4 Aug 2006 16:08:56 -0400 (EDT)
> From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
> To: end2end-interest at postel.org
> CC: jnc at mercury.lcs.mit.edu
>
>> From: Bob Braden <braden at ISI.EDU>
>
>> At 10:39 PM 8/2/2006 -0400, David P. Reed wrote:
>
>>> The concept of "well-known ports" was built for a world in which  
>>> there
>>> was no lookup function, no DNS - in fact for a world where people
>>> typed addresses in octal, long before there was a naming service  
>>> (even
>>> before the hosts file).
>
>> Well, not exactly. It was (deliberately) built for a world in  
>> which we
>> did not want increased communication fragility resulting from DNS
>> lookup failures.
>
> Does it have to be one (no infrastructure) or the other  
> (fragility)? I always
> thought it was some of both.
>
> In any event, I got the impression that TCP pretty much just  
> followed NCP's
> lead on this. Is there anyone here who was around for the NCP  
> design who can
> comment on what NCP's reasons were for well-known ports? My guess  
> would be
> lack of infrastructure (as DPR points out, that was before there  
> was even
> HOSTS.TXT).
>
> 	Noel
>
>
> -- 
>
>   Dave Crocker
>   Brandenburg InternetWorking
>   bbiw.net


-- 

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net

From shemminger at osdl.org  Sat Aug  5 09:27:31 2006
From: shemminger at osdl.org (Stephen Hemminger)
Date: Sat, 5 Aug 2006 09:27:31 -0700
Subject: [e2e] OS Implementation of Byte Counting during Congestion
 Avoidance
In-Reply-To: <Pine.GSO.4.58.0608041514330.27587@disco.cs.columbia.edu>
References: <Pine.GSO.4.58.0608041514330.27587@disco.cs.columbia.edu>
Message-ID: <20060805092731.57a8e846@localhost.localdomain>

On Fri, 4 Aug 2006 15:21:58 -0400 (EDT)
Salman Abdul Baset <salman at cs.columbia.edu> wrote:

> I was wondering if OSes implement byte counting during congestion
> avoidance. From my investigation it appears:
> 
> 1) Linux
>    2.6.16 and higher implement RFC 3465 Appropriate Byte Counting during
>    slow start. However, it does not implement byte counting during
>    congestion avoidance.

Minor correction:  Linux implements multiple congestion control algorithms.
The default (BIC) algorithm does not implement byte counting during congestion
avoidance, but if  Reno is used it does do ABC during all phases.
Also, ABC can be configured on/off if necessary.
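The distinction being discussed can be sketched as a toy model; this is illustrative Python, not code from Linux, BSD, or Windows, and the constants are assumptions:

```python
# Toy comparison of classic per-ACK packet counting vs. RFC 3465
# Appropriate Byte Counting (ABC) during slow start.
SMSS = 1460  # assumed sender maximum segment size, in bytes

def slowstart_packet_counting(cwnd, acks):
    # Classic slow start: cwnd grows by one SMSS per ACK received,
    # regardless of how many bytes each ACK actually covers.
    for _ in acks:
        cwnd += SMSS
    return cwnd

def slowstart_abc(cwnd, acks, L=2 * SMSS):
    # ABC: cwnd grows by the number of bytes newly acknowledged,
    # capped at L per ACK (RFC 3465 permits L up to 2*SMSS).
    for bytes_acked in acks:
        cwnd += min(bytes_acked, L)
    return cwnd

# A delayed-ACK receiver acking two full segments per ACK: packet
# counting grows cwnd by one SMSS per ACK, while ABC grows it by two,
# which is the mismatch ABC was designed to remove.
acks = [2 * SMSS] * 5
grow_pkt = slowstart_packet_counting(10 * SMSS, acks)
grow_abc = slowstart_abc(10 * SMSS, acks)
```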

From dhc2 at dcrocker.net  Sat Aug  5 09:37:35 2006
From: dhc2 at dcrocker.net (Dave Crocker)
Date: Sat, 05 Aug 2006 09:37:35 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
Message-ID: <44D4C94F.8020801@dcrocker.net>

Folks,

John Kristoff wrote:
> Could the removal of well known numbers actually be a rousing change
> more fundamental to the Internet architecture ...
>
>...The basic idea is that DNS SRV lookups
> should be used to determine a unique port with which to get service
> from the intended destination server.
>
> In some ways this approach is appealing.  I thought it might be a
> nice way to slow the tide of arbitrary protocol port filtering and
> hamper common remote attacks against a particular well known service.
>
> Looking ahead a bit however, if this were widely implemented, what
> other outcomes might we see given some time?  DNS would become
> increasingly important of course.  ...
> 
> In short, couldn't this, wouldn't this, lead to a rapid rise in DNS-
> based walled gardens (or if you prefer the quick and steady rise of
> a fractured root, eventual modus operandi) as everyone moves to
> replace their udp/tcp packet manglers with RR-scrubbers?


The thread seems to have veered off, into an interesting discussion of history.
 There were a few responses to the main line of your query, and I am hoping we
can consider it more extensively.

In what follows, I decided to make straight assertions, mostly to try to
engender focused responses.  I also decided to wander around the topic, a bit,
to see what might resonate...

At every layer of a protocol architecture, there is a means of distinguishing
services at the next layer up.  Without having done an exhaustive study, I will
nonetheless assert that there is always a field at the current layer for
distinguishing among the entities at the next layer up.

That field always has at least one standard, fixed value.  Whether it has more
than one is the interesting question, and depends on how the standard value(s)
get used.  (If there is only one, then it will be used as a dynamic redirector.)

The questions of how and where the information is encoded both strike me as
completely irrelevant, for any basic discussion about the topic you raise.  The
questions are obviously essential for matters of packet parsing and possibly for
some aspects of scaling, but irrelevant to any sort of information theoretic
perspective.  Whether the bits are interpreted as a number or an ascii string
does not matter.  Whether the field is distinct or part of some other
"addressing" field also does not matter.  They may well be extremely important
for human administration and/or encoding efficiency, but not for basic
capabilities.

(Caveat:  XNS had the equivalent of the port number be part of the network
address, and this had an impact on what information its routers used.  It took
some years before developers decided to have IP routers start paying attention
to port numbers...)

What *does* matter is how to know what values to use. This, in turn, creates a
bootstrapping/startup task.  I believe the deciding factor in solving that task
is when the binding is done to particular values.  Later binding gives more
flexibility -- and possibly better scaling and easier administration -- but at
the cost of additional mechanism and -- probably always -- complexity, extra
round-trips and/or reliability.

Which is better, polling or interrupts?  The answer, of course, is that it
depends.  It depends on the number of participants -- both total possible and
currently active -- and their activity pattern.  Similarly, the choice between
relatively static, pre-registration versus dynamic assignment depends upon how
many services are involved and how quickly things change.  (Hmmm.  Dynamic
assignment requires pre-registration too...)

In discussing the differences between email and instant messaging, I came to
believe that we need to make a distinction between "protocol" and "service".
The same protocol can be used for very different services, according to how it
is implemented and operated.  Some folks will remember that in the 70's, email
had an instant messaging function.  While it involved a different FTP command
than what we call email, the protocol was otherwise identical. Today, the
service distinctions are immediacy and reliability.  That is, email is reliable
push, except that delivery is into a mailbox rather than the screen, thereby
making the last hop be "pull". This creates the view that email is not
immediate. But it *is* reliable, in that a message survives most crashes by the
host holding the message.  IM is push all the way, but a message does not
survive a crash.  My point, here, is that these are implementation and operation
distinctions, rather than inherent differences in the exchange protocols.

If a "protocol" does not automatically define a "service" then what does?  In
the world of ports, it is the port number.  In the world of SRVs, it is the SRV.
Either way, they permit repurposing a protocol for different uses.

Observation:  Our specifications usually fail to make the distinction between
protocol and service, either by conflating them or by ignoring the latter.  I
suspect we regularly get into trouble because of this.  At the least, we tend to
carry implicit assumptions about the service that fail to consider likely evolution.

So...

To say "What if there were no well known numbers" we cannot mean "What if there
were no initialization rendezvous mechanism?"

I'll suggest that the question is really "What are the requirements for such a
mechanism that might be better than our current model?"

Are we seriously interested in trying to "trick" firewalls, by eliminating
predictable service identifiers?  Do we think that will really work?  Do we
really need to solve this "problem"?

Are there scaling, reliability or performance issues that suggest problems with
the current problem?

Eliot's lear-iana-no-more-well-known-ports seems to focus on the problem that we
have a large number of unused and/or defunct well-known ports and that using SRV
would be better.  While it is certainly true that the SRV name space is much
larger, it is also true that the performance, complexity and reliability
differences in using SRVs are massive.

In other words, what problem do we have or do we anticipate?

When we have some sense of that, we can consider how to solve it.
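For concreteness, the extra client-side step that SRV-based rendezvous adds can be sketched as below; the record tuples and host names are hypothetical, and this is only an approximation of the RFC 2782 selection rules (lowest priority first, weighted-random ordering within a priority class):

```python
import random

# Hypothetical SRV records as (priority, weight, port, target) tuples.
def order_srv(records, rng=random):
    # Approximate RFC 2782 client behavior: try lower priority values
    # first; within one priority class, order targets at random with
    # probability proportional to their weight.
    ordered = []
    by_priority = {}
    for rec in records:
        by_priority.setdefault(rec[0], []).append(rec)
    for prio in sorted(by_priority):
        group = by_priority[prio][:]
        while group:
            total = sum(r[1] for r in group)
            pick = rng.uniform(0, total) if total else 0
            running = 0
            for r in group:
                running += r[1]
                if running >= pick:
                    ordered.append(r)
                    group.remove(r)
                    break
    return ordered

records = [(10, 60, 5060, "a.example."), (10, 20, 5060, "b.example."),
           (20, 0, 5061, "backup.example.")]
# Whatever the weighted draw does, the priority-20 backup is tried last.
```

Note that this selection happens only after a successful DNS lookup, which is exactly the added round-trip and reliability dependency discussed above.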

d/
-- 

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net

From jassduec_sam at yahoo.com  Fri Aug  4 15:44:28 2006
From: jassduec_sam at yahoo.com (Saurabh Jain)
Date: Fri, 4 Aug 2006 15:44:28 -0700 (PDT)
Subject: [e2e] OS Implementation of Byte Counting during Congestion
	Avoidance
In-Reply-To: <Pine.GSO.4.58.0608041514330.27587@disco.cs.columbia.edu>
Message-ID: <20060804224428.44940.qmail@web55110.mail.re4.yahoo.com>


FreeBSD surely does it. At least the version (5.3),
which i looked at. I am not sure about NetBSD and
OpenBSD.

--- Salman Abdul Baset <salman at cs.columbia.edu> wrote:

> I was wondering if OSes implement byte counting
> during congestion
> avoidance. From my investigation it appears:
> 
> 1) Linux
>    2.6.16 and higher implement RFC 3465 Appropriate
> Byte Counting during
>    slow start. However, it does not implement byte
> counting during
>    congestion avoidance.
> 
> 2) Windows XP
>    Does not implement byte counting during slow
> start and congestion
>    avoidance.
> 
> 3) Windows Vista
>    Will implement RFC 3465 which means byte counting
> during slow start.
>    Not sure about byte counting during CA.
> 
> 3) MacOS, FreeBSD, OpenBSD?
>    Does anyone who uses these OSes know if they
> implement byte counting
>    during slow-start and congestion avoidance?
> 
> Thanks
> Salman
> 


__________________________________________________
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

From puddinghead_wilson007 at yahoo.co.uk  Fri Aug  4 06:15:08 2006
From: puddinghead_wilson007 at yahoo.co.uk (Puddinhead Wilson)
Date: Fri, 4 Aug 2006 14:15:08 +0100 (BST)
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <E1G8cZe-0007mt-00@mta1.cl.cam.ac.uk>
Message-ID: <20060804131508.26010.qmail@web25409.mail.ukl.yahoo.com>


--- Jon Crowcroft <Jon.Crowcroft at cl.cam.ac.uk> wrote:

> with IPv6, who needs ports?

...but then there would be well known numbers, we do
not want well known numbers!



From touch at ISI.EDU  Sat Aug  5 10:10:22 2006
From: touch at ISI.EDU (Joe Touch)
Date: Sat, 05 Aug 2006 10:10:22 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <20060805040424.A84515@gds.best.vwh.net>
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>	<a06230985c0f9aa661218@[10.0.1.3]>
	<20060805040424.A84515@gds.best.vwh.net>
Message-ID: <44D4D0FE.70703@isi.edu>

I had tried to develop a history for a ports usage doc I was working on;
it was posted to Internet-History a while ago. It's close to what Greg
found, but not exactly. Anyone care to help resolve the two where they
differ?

(cross-posted to Internet-history for obvious reasons)

Joe

Greg Skinner wrote:
> I wasn't involved in any of the ARPANET R&D, but I was able to piece
> together a bit from the old RFCs.  The socket as connection identifier
> made its debut in RFC 33.  It was a 8-bit field called AEN (Another
> Eight-Bit Number).  The idea that there should be a "directory"
> of sockets appeared in RFC 65. Jon Postel posed the question of
> whether standard protocols should have assigned/reserved socket
> numbers in RFC 205.  The first call for well known socket numbers came
> in RFC 322 (accompanying a concern about which socket numbers were
> currently in use at which hosts).  JP proposed that he be the "czar"
> for standard socket numbers in RFC 349.  So it seems as if well-known
> sockets were an expediency, as you say, which was preserved in TCP and
> UDP.
> 
> BTW, speaking of hosts.txt, RFC 226 contained the first cut at an
> "official" hosts list.  (Before that, the RFC noted that each telnet
> implementation provided its own list of host names.)

2. A Little History
The term "port" was first used in RFC33 to describe a simplex
communication path from a process. At a meeting described in RFC37, an
idea was described to decouple connections between processes and links
that they use as paths, and thus to include source and destination
socket identifiers in packets. RFC38 described this in detail, in which
processes might have more than one of these paths, and that more than
one may be active at a time. As a result, there was the need to add a
process identifier to the header of each message, so that the incoming
data could be demultiplexed to the appropriate process. RFC38 further
suggested that 32 bits would be used for these identifiers. RFC48
discusses the current notion of listening on a given port, but does not
discuss the issue of port determination.  RFC61 notes that the
challenge of knowing the appropriate port numbers is "left to the
processes" in general, but introduces the concept of a "well-known" port
for common services.
RFC76 addresses this issue more constructively, proposing a "telephone
book" by which an index would allow ports to be used by name, but still
assumes that both source and destination ports are fixed by such a
system. RFC333 suggests that the port pair, rather than an individual
port, would be used on both sides of the connection for demultiplexing
messages. This is the final view in RFC793 (and its predecessors,
including IEN 112), and brings us to their current meaning. RFC739
introduces the notion of generic reserved ports, used for groups of
protocols, such as ?any private RJE server?. Although the overall range
of such ports was (and remains) 16 bits, only the first 256 (high 8 bits
cleared) in the range were considered assigned.
RFC758 is the first to describe a list of such well-known ports, as well
as describing ranges used for different purposes:
         0-63      0-77      Network Wide Standard Function
         64-127    100-177   Hosts Specific Functions
         128-223   200-337   Reserved for Future Use
         224-255   340-377   Any Experimental Function
In RFC820, those range meanings disappeared, and a single list of
assignments is presented. By RFC900, they appeared as decimal numbers
rather than the octal ranges used previously. RFC1340 increased this
range from 0..255 to 0..1023, and began to list TCP and UDP port
assignments individually (although the assumption was, and remains, that
once assigned a port applies to all transport protocols, including TCP,
UDP, recently SCTP and DCCP, as well as ISO-TP4 for a brief period in
the early 1990s). RFC1340 also established the registered space, though
it notes that it is not controlled by the IANA at that point. The list
provided by RFC1700 in 1994 remained the standard until it was declared
replaced by an on-line version, as of RFC3232 in 2002.
The current IANA website (www.iana.org) indicates three ranges of port
assignments:
			0-1023		Well-Known (a.k.a. "system")
			1024-49151	Registered (a.k.a. "user")
			49152-65535	Dynamic/Private
Well-known encompasses the previous range of 0..255, which was expanded
to 0..1023. On some systems, use of these ports requires privileged
access, e.g., that the process run as "root". An additional range of
ports from 1024..49151 denotes non-privileged services, known as
"registered". Because both types of ports are registered, they are
sometimes distinguished by the terms "system" (Well-known) and "user"
(Registered).
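The three ranges listed above can be captured in a small classifier (a sketch for illustration; the range boundaries are from the IANA assignments described in the text):

```python
# Sketch: classify a 16-bit port number into the three IANA ranges.
def port_range(port):
    if not 0 <= port <= 65535:
        raise ValueError("port must fit in 16 bits")
    if port <= 1023:
        return "well-known (system)"
    if port <= 49151:
        return "registered (user)"
    return "dynamic/private"
```

For example, port 80 falls in the well-known range, 8080 in the registered range, and ephemeral ports picked from 49152 up fall in the dynamic/private range.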

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060805/67f9679c/signature.bin

From dhc2 at dcrocker.net  Sat Aug  5 20:41:45 2006
From: dhc2 at dcrocker.net (Dave Crocker)
Date: Sat, 05 Aug 2006 20:41:45 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44D4C94F.8020801@dcrocker.net>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>
Message-ID: <44D564F9.4060705@dcrocker.net>



Dave Crocker wrote:
> Are there scaling, reliability or performance issues that suggest problems with
> the current problem?

sigh.  of course, that should have been:

   current *model*.

d/
-- 

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net

From Jon.Crowcroft at cl.cam.ac.uk  Sat Aug  5 22:10:33 2006
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Sun, 06 Aug 2006 06:10:33 +0100
Subject: [e2e] What if there were no well known numbers?
Message-ID: <E1G9eX9-0002Gf-00@mta1.cl.cam.ac.uk>

history aside...

the reason this is a hot(ish) topic is that we moved from
having identifiers in packets to tell you where a packet should go 
(next hop, next subnet, host, process on host, etc etc)
to  mechanisms to STOP a packet going where you don't want it next.

As the arms race in attack and defense moves on, people have re-learned systems lessons 
(e.g. default off) and distributed systems lessons (e.g. early binding), so the way identifiers 
that can be used to make forwarding versus filtering decisions
ought to work for the majority case has changed.

this, aside from the additional functionality we've tried to heap on IP (e.g. mobility and multicast),
means that the semantics (such as they are) of ports and addresses are now so complex (and context dependent)
that you cannot say what a packet means at some point in the net (hence it is difficult to build systems to stop
ddos, topological attacks, spam, etc etc, but also it is difficult to build policy control for things like mobile
and multicast ip)...

from the point of view of ease of programming with defaults, well-knownness is a quick and dirty bootstrap that made
sense...but
well-knownness is just the other side of the coin of security through obscurity when it comes to nats, firewalls etc

anyhow, most of the apps people have built in this post-rfc-relevance era 
work with dynamic ports and work through nats so de facto, if not de jure,
we know we didn't need well known ports (or globally significant IP addresses:)
for the internet to work - but we probably did need them to bootstrap stuff while everyone was
getting used to the idea of an open programmable data net:)

having services that describe themselves by their actions might be cute (c.f all the programmes that will use TCP
behaviours to tell you what OS someone is running:) - can we take that idea further..? and retain low latency/small
number of packets to startup a service?

oh and I think we should have an explicit protocol for establishing a
capability to _receive_
ip packets

chrs

j.

From touch at ISI.EDU  Sun Aug  6 09:11:36 2006
From: touch at ISI.EDU (Joe Touch)
Date: Sun, 06 Aug 2006 09:11:36 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <E1G9eX9-0002Gf-00@mta1.cl.cam.ac.uk>
References: <E1G9eX9-0002Gf-00@mta1.cl.cam.ac.uk>
Message-ID: <44D614B8.8020303@isi.edu>



Jon Crowcroft wrote:
...
> oh and I think we should have an explicit protocol for establishing a
> capability to _receive_
> ip packets

We do. It's called attaching to the Internet. IMO, that means you're up
for receiving probes to determine whether to proceed.

This is what motivated my interpretation of the network neutrality
issue, that originally was presented at a workshop about a year ago:
http://www.isi.edu/touch/internet-rights/

Joe


From day at std.com  Sun Aug  6 21:26:35 2006
From: day at std.com (John Day)
Date: Mon, 7 Aug 2006 00:26:35 -0400
Subject: [e2e] What if there were no well known numbers? (history)
In-Reply-To: <20060805040424.A84515@gds.best.vwh.net>
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>
	<a06230985c0f9aa661218@[10.0.1.3]>
	<20060805040424.A84515@gds.best.vwh.net>
Message-ID: <a06230913c0fc6afd1bbc@[10.0.1.12]>

Just to set the record straight.  Sitting at Illinois, we immediately 
saw well-known sockets as, as I said, "hard-wiring low core" - a 
necessary kludge, because we didn't have time to do it right with 
application names, we didn't have all that many applications, and 
we could do it right later (those famous last words of many projects).

The story that is being painted here is that this view was in the 
vast minority, that most people saw well-known sockets as a quite 
reasonable general solution for the long term (as it has turned out 
until the web came along that is.)

I don't mind changing my view, although my perspective is more 
flattering of the ARPANet guys who did it than the picture being 
painted here.

Take care,
John


At 4:04 +0000 2006/08/05, Greg Skinner wrote:
>On Fri, Aug 04, 2006 at 10:00:33PM -0400, John Day wrote:
>>  I remember that we had already had conversations about
>>  application-names and network addresses and a directory.  I know that
>>  a lot of our thinking was using operating systems as a guide to how
>>  to do it.  But since we only had 3 applications and only one
>>  occurrence of each per host, and we needed to get something up and
>>  running, there wasn't time to do it right.  Perhaps we were having
>>  these conversations with people other than the UCLA people.  Maybe it
>>  was the Multics crowd.  I can believe that in the very very early
>>  days that was the logic, but by 1970 or so, we knew better.  From
>>  about then, I always considered well-known sockets to be the
>>  equivalent of "hard-wiring low core."
>>
>>  A kludge.
>>
>>  The port numbers in TCP and NCP function as a connection-identifier
>>  within the scope of the (src, dest) addresses, i.e. it distinguishes
>>  multiple connections/flows between the same two points.  They do not
>>  identify applications.  The well-known idea is just an expedient
>>  convention.  It clearly doesn't generalize unless you are McKenzie
>>  who believed that Telnet and FTP were all you needed. ;-)
>
>I wasn't involved in any of the ARPANET R&D, but I was able to piece
>together a bit from the old RFCs.  The socket as connection identifier
>made its debut in RFC 33.  It was an 8-bit field called AEN (Another
>Eight-Bit Number).  The idea that there should be a "directory"
>of sockets appeared in RFC 65.  Jon Postel posed the question of
>whether standard protocols should have assigned/reserved socket
>numbers in RFC 205.  The first call for well known socket numbers came
>in RFC 322 (accompanying a concern about which socket numbers were
>currently in use at which hosts).  JP proposed that he be the "czar"
>for standard socket numbers in RFC 349.  So it seems as if well-known
>sockets were an expediency, as you say, which was preserved in TCP and
>UDP.
>
>BTW, speaking of hosts.txt, RFC 226 contained the first cut at an
>"official" hosts list.  (Before that, the RFC noted that each telnet
>implementation provided its own list of host names.)
>
>--gregbo


From day at std.com  Sun Aug  6 21:51:28 2006
From: day at std.com (John Day)
Date: Mon, 7 Aug 2006 00:51:28 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44D4C94F.8020801@dcrocker.net>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>
Message-ID: <a06230917c0fc6ced8fdc@[10.0.1.12]>

>
>The thread seems to have veered off, into an interesting discussion 
>of history.
>  There were a few responses to the main line of your query, and I am hoping we
>can consider it more extensively.
>
>In what follows, I decided to make straight assertions, mostly to try to
>engender focused responses.  I also decided to wander around the topic, a bit,
>to see what might resonate...
>
>At every layer of a protocol architecture, there is a means of distinguishing
>services at the next layer up.  Without having done an exhaustive 
>study, I will
>nonetheless assert that there is always a field at the current layer, for
>identifying among the entities at the next layer up.

Be careful here.  This is not really true.  The protocol-id field in 
IP for example identifies the *syntax* of the protocol in the layer 
above.  (It can't be the protocol because there could be more than 
one instance of the same protocol in the same system.  Doesn't happen 
often but it can happen.) Port-ids or sockets identify an instance of 
communication, not a  particular service.  Again, the well-known 
socket approach only works as long as there is only one instance of 
the protocol in the layer above and certainly only one instance of a 
"service." (We were lucky in the early work that this was true.)

>
>That field always has at least one standard, fixed value.  Whether it has more
>than one is the interesting question, and depends on how the standard value(s)
>get used.  (If there is only one, then it will be used as a dynamic 
>redirector.)

Actually, if you do it right, no one standard value is necessary at 
all.  You do have to know the name of the application you want to 
communicate with, but you needed to know that anyway.

>The questions of how and where the information is encoded both strike me as
>completely irrelevant, for any basic discussion about the topic you 
>raise.  The
>questions are obviously essential for matters of packet parsing and 
>possibly for
>some aspects of scaling, but irrelevant to any sort of information theoretic
>perspective.  Whether the bits are interpreted as a number or an ascii string
>does not matter.  Whether the field in distinct or part of some other
>"addressing" field also does not matter. They well might be 
>extremely important
>for human administration and/or encoding efficiency, but not for basic
>capabilities.

Agreed. Bits are bits.

>(Caveat:  XNS had the equivalent of the port number be part of the network
>address and this had an impact on what information its routers used.  It took
>some years before developers decided to have IP routers start 
>paying attention
>to port number...)
>
>What *does* matter is how to know what values to use. This, in turn, creates a
>bootstrapping/startup task.  I believe the deciding factor in 
>solving that task
>is when the binding is done to particular values.  Later binding gives more
>flexibility -- and possibly better scaling and easier administration -- but at
>the cost of additional mechanism and -- probably always -- complexity, extra
>round-trips and/or reliability.

Not really. But again, you have to do it the right way.  There are a 
lot of ways to do it that do require all sorts of extra stuff.

>Which is better, polling or interrupts?  The answer, of course, is that it
>depends.  It depends on the number of participants -- both total possible and
>currently active -- and their activity pattern.  Similarly, the choice between
>relatively static, pre-registration versus dynamic assignment depends upon how
>many services are involved and how quickly things change.  (Hmmm.  Dynamic
>assignment requires pre-registration too...)

Well, it depends on other things as well.  For example, if the 
identifiers being assigned are supposed to be location-dependent, then 
the assigner has to be able to interpret the location of the entity 
having the identifier assigned to it, so that it can assign the 
correct location-dependent identifier.

>In discussing the differences between email and instant messaging, I came to
>believe that we need to make a distinction between "protocol" and "service".
>The same protocol can be used for very different services, according to how it

Excellent.  This is very important. This is a fundamental principle 
of computer science: The idea that the black box abstracts the 
machinations of the box itself and that how the box accomplishes its 
function can be changed without the "user" of the box knowing it.

>is implemented and operated.  Some folks will remember that in the 70's, email
>had an instant messaging function.  While it involved a different FTP command
>than what we call email, the protocol was otherwise identical. Today, the

C'mon Dave, you know better.  MAIL and MLFL were there for the same 
purpose (sending mail).  MAIL wasn't for IM, it was because the TIPs 
(and others) didn't have a file system to act as a mailbox. The first 
IM on the 'Net was a hack by Jim Calvin in 72 to the Tenex command 
that let you link two terminals together so both users saw what the 
other typed.  Okay, you could construe it as being for IM, but that 
was after the fact. Not the reason it was created.  (MAIL sent mail 
on the Telnet connection of FTP; while MLFL opened a data connection 
and sent like any other file.)

>service distinctions are immediacy and reliability.  That is, email 
>is reliable
>push, except that delivery is into a mailbox rather than the screen, thereby
>making the last hop be "pull". This creates the view that email is not
>immediate. But it *is* reliable, in that a message survives most 
>crashes by the
>host holding the message.  IM is push all the way, but a message does not
>survive a crash.  My point, here, is that these are implementation 
>and operation
>distinctions, rather than inherent differences in the exchange protocols.

Dave, email is not reliable. Email is connectionless.  (Okay, it is 
much more reliable than UDP, but it is still not reliable.)  General 
rule:  To be reliable, if there is relaying at layer N, then there 
must be end-to-end error control at layer N+1.  Mail relays but there 
is no end-to-end error control over it.  (Receipt confirmation is the 
equivalent of the D-bit in X.25 and it wasn't e2e either.) There are 
crashes that mail won't survive.

>
>If a "protocol" does not automatically define a "service" then what does?  In
>the world of ports, it is the port number.  In the world of SRVs, it 
>is the SRV.
>Either way, they permit repurposing a protocol for different uses.

Yes, a protocol defines a service regardless of whether the designers 
thought it did.

>Observation:  Our specifications usually fail to make the distinction between
>protocol and service, either by conflating them or by ignoring the latter.  I
>suspect we regularly get into trouble because of this.  At the 
>least, we tend to
>carry implicit assumptions about the service that fail to consider 
>likely evolution.

Most definitely.  This is something the IETF is very bad at.  Much of 
the IPv6 work in changing all of the "other" specifications happened 
because there were no clean service definitions that other specs 
could refer to, rather than to the protocol itself.  If we write code 
the way we write specs, it could explain a lot.  ;-)

One should write the service definition before writing the protocol. 
This is something that OSI understood. There was a service definition 
for every protocol spec, including the unit-data protocols. (Frankly, 
I don't know how to write a protocol spec without writing the API 
first. But then I was raised on finite-state machines!)

>So...
>
>To say "What if there were no well known numbers" we cannot mean 
>"What if there
>were no initialization rendezvous mechanism?"

No, because they aren't necessary for that.

>I'll suggest that the question is really "What are the requirements for such a
>mechanism that might be better than our current model?"
>
>Are we seriously interested in trying to "trick" firewalls, by eliminating
>predictable service identifiers?  Do we think that will really work?  Do we
>really need to solve this "problem"?

Firewalls and NATs are a red herring in all of this.  To paraphrase 
Bucky Fuller, NATs only break broken architecture.

>Are there scaling, reliability or performance issues that suggest 
>problems with
>the current problem?
>
>Eliot's lear-iana-no-more-well-known-ports seems to focus on the 
>problem that we
>have a large number of unused and/or defunct well-known ports and 
>that using SRV
>would be better.  While it is certainly true that the SRV name space is much
>larger, it is also true that the performance, complexity and reliability
>differences in using SRVs are massive.
>
>In other words, what problem do we have or do we anticipate?
>
>When we have some sense of that, we can consider how to solve it.

SRVs are just one more band-aid.  It is time to stop with the 
band-aids and figure out what the "answer" is. We need to know the 
goal we should at least approximate. Incremental change without 
knowing where you are going is being lost.  The general rule when you 
are lost is to stay put and let the rescuers find you. What? There 
are no rescuers!  Hmmmm.
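For concreteness, the SRV alternative discussed above (RFC 2782) replaces a well-known port with a DNS lookup on a name of the form `_service._proto.domain`. A sketch of the client side, with the DNS I/O itself omitted and the weighted-random selection of RFC 2782 simplified to a deterministic choice (domain names here are illustrative):

```python
# Sketch: SRV-style service rendezvous instead of a well-known port.

def srv_name(service, proto, domain):
    """The query name an RFC 2782 SRV lookup uses, e.g. _http._tcp.example.com."""
    return "_%s._%s.%s" % (service, proto, domain)

def pick_target(records):
    """Pick the most preferred SRV record: lowest priority wins; among
    equal priorities, highest weight (RFC 2782 actually specifies a
    weighted random choice; simplified here).  Each record is a
    (priority, weight, port, target) tuple as returned by a resolver."""
    return min(records, key=lambda r: (r[0], -r[1]))
```

The extra DNS round trip before the first data packet is exactly the latency/complexity cost weighed in the discussion above.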

Take care,
John

From Jon.Crowcroft at cl.cam.ac.uk  Sun Aug  6 18:54:45 2006
From: Jon.Crowcroft at cl.cam.ac.uk (Jon Crowcroft)
Date: Mon, 07 Aug 2006 02:54:45 +0100
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: Message from Joe Touch <touch@ISI.EDU> 
	of "Sun, 06 Aug 2006 09:11:36 PDT." <44D614B8.8020303@isi.edu> 
Message-ID: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>

In missive <44D614B8.8020303 at isi.edu>, Joe Touch typed:
 
 >>> oh and I think we should have an explicit protocol for establishing a
 >>> capability to _receive_
 >>> ip packets

 >>We do. It's called attaching to the Internet. IMO, that means you're up
 >>for receiving probes to determine whether to proceed.

 >>This is what motivated my interpretation of the network neutrality
 >>issue, that originally was presented at a workshop about a year ago:
 >>http://www.isi.edu/touch/internet-rights/

Joe

a capability to receive would indicate _who_ you want to receive
packets _from_.

what we have now is the right to be bombarded or not. different.

the problem with many capability based systems is that they require
the sender to get a capability to send to a receiver, which either
moves the problem to the capability server for the receiver, which
means that then gets bombarded (see papers on denial-of-capability
attacks)
or else puts the receiver at the mercy of a 3rd party (probably
distributed and overprovisioned) capability service - i.e. less net
neutrality

requiring a receiver to control who can speak to them as a
fundamental part of connectivity (I have a paper i might submit to a
hot workshop about one approach to
the implementation details for this if i can get around to it...)
is an altogether more neutral scheme...


 cheers

   jon


From dhc2 at dcrocker.net  Mon Aug  7 00:55:26 2006
From: dhc2 at dcrocker.net (Dave Crocker)
Date: Mon, 07 Aug 2006 00:55:26 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230917c0fc6ced8fdc@[10.0.1.12]>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net> <a06230917c0fc6ced8fdc@[10.0.1.12]>
Message-ID: <44D6F1EE.2000902@dcrocker.net>



John Day wrote:
>> At every layer of a protocol architecture, there is a means of 
>> distinguishing services at the next layer up.  Without having done an
>> exhaustive study, I will nonetheless assert that there is always a field at
>> the current layer, for identifying among the entities at the next layer up.
> 
> Be careful here.  This is not really true.  The protocol-id field in IP for
> example identifies the *syntax* of the protocol in the layer above. (It can't

It distinguishes rather more than differentiating among syntaxes. You can't
seriously mean that it does not distinguish different semantics at the next
layer up?


> be the protocol because there could be more than one instance of the same
> protocol in the same system.  Doesn't happen often but it can happen.)
> Port-ids or sockets identify an instance of communication, not a  particular
> service.  Again, the well-known socket approach only works as long as there
> is only one instance of the protocol in the layer above and certainly only
> one instance of a "service." (We were lucky in the early work that this was
> true.)

The fact that I used "service" in one sentence and "entity" in another provides
a strong hint that I was speaking in general terms about the regular use of a
multiplexing mechanism, without trying to be highly precise about what is being
multiplexed.  For one draft of my note I was, in fact, tempted to use the term
"clients", since each of these things treats the next layer down as a service...


>> That field always has at least one standard, fixed value.  Whether it has
>> more than one is the interesting question, and depends on how the standard 
>> value(s) get used.  (If there is only one, then it will be used as a
>> dynamic redirector.)
> 
> Actually, if you do it right, no one standard value is necessary at all.  You
> do have to know the name of the application you want to communicate with, but
> you needed to know that anyway.

Since I suspect that whatever you have in mind carries an implicit standard
value, I'll have to ask for a concrete and detailed example of needing none.


>> What *does* matter is how to know what values to use. This, in turn, 
>> creates a bootstrapping/startup task.  I believe the deciding factor in
>> solving that task is when the binding is done to particular values.  Later
>> binding gives more flexibility -- and possibly better scaling and easier
>> administration -- but at the cost of additional mechanism and -- probably
>> always -- complexity, extra round-trips and/or reliability.
> 
> Not really. But again, you have to do it the right way.  There are a lot of
> ways to do it that do require all sorts of extra stuff.

Example?


>>      Similarly, the choice 
>> between relatively static, pre-registration versus dynamic assignment
>> depends upon how many services are involved and how quickly things change.
>> (Hmmm. Dynamic assignment requires pre-registration too...)
> 
> Well, it depends on other things as well. 

Since the "depends" was meant to be the important point, I'm entirely happy to
have the list of dependencies be longer than I cited.


> For example, if the identifiers
> being assigned are suppose to be location-dependent, then the assigner has to
> be able to interpret the location of the entity having the identifier

Hmmm. I think you are confusing how a layer uses multiplexing values with my
point that there are such things, and that they involve some amount of
pre-registration.

In the case of locations, this very much holds true.  After all, the location
references do not mean much if they are not pre-registered.

(And then I suppose you'll invoke global coordinates system as an example of no
prior registration; and I'll respond that the algorithm for using that system is
nothing but a pre-registration of the entire set of values...)


>> is implemented and operated.  Some folks will remember that in the 70's,
>> email had an instant messaging function.  While it involved a different FTP
>>  command than what we call email, the protocol was otherwise identical.
>> Today, the
> 
> C'mon Dave, you know better.  MAIL and MLFL were there for the same purpose

Wrong commands.

I meant:

>          MAIL SEND TO TERMINAL (MSND)
> 
>             This command is like the MAIL command, except that the data
>             is displayed on the addressed user's terminal, if such
>             access is currently allowed, otherwise an error is returned.
> 
>          MAIL SEND TO TERMINAL OR MAILBOX (MSOM)
> 
>             This command is like the MAIL command, except that the data
>             is displayed on the addressed user's terminal, if such
>             access is currently allowed, otherwise the data is placed in
>             the user's mailbox.
> 
>          MAIL SEND TO TERMINAL AND MAILBOX (MSAM)
> 
>             This command is like the MAIL command, except that the data
>             is displayed on the addressed user's terminal, if such
>             access is currently allowed, and, in any case, the data is
>             placed in the user's mailbox.
> 

Very popular with MIT, as I recall.


>> service distinctions are immediacy and reliability.  That is, email is 
>> reliable push, except that delivery is into a mailbox rather than the
>> screen, thereby making the last hop be "pull". This creates the view that
>> email is not immediate. But it *is* reliable, in that a message survives
>> most crashes by the host holding the message.  IM is push all the way, but
>> a message does not survive a crash.  My point, here, is that these are
>> implementation and operation distinctions, rather than inherent differences
>> in the exchange protocols.
> 
> Dave, email is not reliable. Email is connectionless.

Well, yeah, reliability is relative.  TCP is a long way from perfectly reliable,
too.  (I class connection-vs-connectionless as irrelevant, though of course
there has to be some sort of state maintained.)

But I'm going to take umbrage with your taking umbrage, since I stated the
specific kind of reliability I meant, namely surviving a crash of the system
holding the message, and was using it to distinguish between email and IM
service models.

(I forgot to add reference to the fact that IM typically does not have a retry
-- although some now do -- whereas email has rather more of it than some would
like...)


> One should write the service definition before writing the protocol. This is
> something that OSI understood. 

Sort of.  They understood that something along these lines was needed, but what
got supplied was not all that helpful, since most of the services were too
complex, etc.

(But, then, I usually say that TCP and OSI are exactly the same, except for all
the details.)


> SRVs are just one more band-aid.  

It can be argued quite strongly that much of the Internet's success has been by
the careful application of first-aid techniques.  Since the resulting splints,
bandages, etc. seem to last some number of decades, and since other techniques
for developing services seem to fail in an open environment, I'm not overly
inclined to make derogatory comments about the particulars that have succeeded.

None of which precludes looking for alternatives, of course...


> It is time to stop with the band-aids and
> figure out what the "answer" is. We need to know the goal we should at least
> approximate. 

That sounds consonant with what I suggested.

d/


-- 

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net

From touch at ISI.EDU  Mon Aug  7 06:03:00 2006
From: touch at ISI.EDU (Joe Touch)
Date: Mon, 07 Aug 2006 06:03:00 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
Message-ID: <44D73A04.1080003@isi.edu>



Jon Crowcroft wrote:
> In missive <44D614B8.8020303 at isi.edu>, Joe Touch typed:
>  
>  >>> oh and I think we should have an explicit protocol for establishing a
>  >>> capability to _receive_
>  >>> ip packets
> 
>  >>We do. It's called attaching to the Internet. IMO, that means you're up
>  >>for receiving probes to determine whether to proceed.
> 
>  >>This is what motivated my interpretation of the network neutrality
>  >>issue, that originally was presented at a workshop about a year ago:
>  >>http://www.isi.edu/touch/internet-rights/
> 
> Joe
> 
> a capability to receive would indicate _who_ you want to receive
> packets _from_.

All forms of communication are bootstrapped by first determining if you
are the intended receiver. Making the receiver initiate that process
only relabels the endpoints; the receiver now needs to initiate
communication with new parties. The net result is that senders can no
longer reach any new parties. That's a very uninteresting network, IMO.

> what we have now is the right to be bombarded or not. different.
> 
> the problem with many capability based systems is that they require
> the sender to get a capability to send to a receiver, which either
> moves the problem to the capability server for the receiver, which
> means that then gets bombarded (see papers on denial-of-capability
> attacks)
> or else puts the receiver at the mercy of a 3rd party (probably
> distributed and overprovisioned) capability service - i.e. less net
> neutrality
> 
> requiring a receiver to control who can speak to them as a
> fundamental part of connectivity (I have a paper i might submit to a
> hot workshop about one approach to
> the implementation details for this if i can get around to it...)
> is an altogether more neutral scheme...

That'd be interesting only if it could show how to initiate
communication with a new party without reversing the labels.

Joe


From day at std.com  Mon Aug  7 07:05:32 2006
From: day at std.com (John Day)
Date: Mon, 7 Aug 2006 10:05:32 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44D6F1EE.2000902@dcrocker.net>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net> <a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D6F1EE.2000902@dcrocker.net>
Message-ID: <a0623091dc0fcec9f35de@[10.0.1.12]>

At 0:55 -0700 2006/08/07, Dave Crocker wrote:
>John Day wrote:
>>>  At every layer of a protocol architecture, there is a means of
>>>  distinguishing services at the next layer up.  Without having done an
>>>  exhaustive study, I will nonetheless assert that there is always a field at
>>>  the current layer, for identifying among the entities at the next layer up.
>>
>>  Be careful here.  This is not really true.  The protocol-id field in IP for
>>  example identifies the *syntax* of the protocol in the layer 
>>above. (It can't
>
>It distinguishes rather more than differentiating among syntaxes. You can't
>seriously mean that it does not distinguish different semantics at the next
>layer up?
>

Nope.  When looking at this one has to consider what is the minimal 
amount one needs to know. All you *have* to know is enough that the 
layer above can interpret the bits.  Any semantics are the layer 
above's business.  As I said, this goes with the fact that the field 
does not allow designating more than one instance of the same 
protocol.  If it did, then it would involve more.  But then it would 
be a different field.

>  > be the protocol because there could be more than one instance of the same
>>  protocol in the same system.  Doesn't happen often but it can happen.)
>>  Port-ids or sockets identify an instance of communication, not a  particular
>>  service.  Again, the well-known socket approach only works as long as there
>>  is only one instance of the protocol in the layer above and certainly only
>>  one instance of a "service." (We were lucky in the early work that this was
>>  true.)
>
>The fact that I used "service" in one sentence and "entity" in 
>another provides
>a strong hint that I was speaking in general terms about the regular use of a
>multiplexing mechanism, without trying to be highly precise about 
>what is being
>multiplexed.  For one draft of my note I was, in fact, tempted to use the term
>"clients", since each of these things treats the next layer down as 
>a service...
>

Yes, of course it does.  Gosh, "service" and "entity": it is 
beginning to sound like the language in the old (and wrong) OSI 
Model. ;-) Well-known sockets are a convention we have imposed and 
not a necessary part of the structure.  It has specific limitations. 
We were lucky in our early choices of applications that did not 
reveal the limitations.  All of our early applications were one per 
host.  This is also part of what led to our mistaken infatuation 
with naming hosts.


>  >> That field always has at least one standard, fixed value.  Whether it has
>>>  more than one is the interesting question, and depends on how the standard
>>>  value(s) get used.  (If there is only one, then it will be used as a
>>>  dynamic redirector.)
>>
>>  Actually, if you do it right, no one standard value is necessary 
>>at all.  You
>>  do have to know the name of the application you want to 
>>communicate with, but
>>  you needed to know that anyway.
>
>Since I suspect that whatever you have in mind carries an implicit standard
>value, I'll have to ask for a concrete and detailed example of needing none.

Long story.  Currently with the editors.  Catch me off line and I can 
walk you through it.  Basically it all comes down to trying to patch 
an unfinished demo.

It is funny, but I have been finding that we have been acting like 
freshman engineers: not reducing the algebra before doing the 
arithmetic; and skipping steps because we think we know what they are 
and we don't. (Speaking metaphorically of course.)

>
>>>  What *does* matter is how to know what values to use. This, in turn,
>>>  creates a bootstrapping/startup task.  I believe the deciding factor in
>>>  solving that task is when the binding is done to particular values.  Later
>>>  binding gives more flexibility -- and possibly better scaling and easier
>>>  administration -- but at the cost of additional mechanism and -- probably
>>>  always -- complexity, extra round-trips and/or reliability.
>>
>  > Not really. But again, you have to do it the right way.  There are a lot of
>>  ways to do it that do require all sorts of extra stuff.
>
>Example?

See above.

>
>>>       Similarly, the choice
>>>  between relatively static, pre-registration versus dynamic assignment
>>>  depends upon how many services are involved and how quickly things change.
>>>  (Hmmm. Dynamic assignment requires pre-registration too...)
>>
>>  Well, it depends on other things as well.
>
>Since the "depends" was meant to be the important point, I'm entirely happy to
>have the list of dependencies be longer than I cited.
>
>
>  For example, if the identifiers
>>  being assigned are suppose to be location-dependent, then the 
>>assigner has to
>>  be able to interpret the location of the entity having the identifier
>
>Hmmm. I think you are confusing how a layer uses multiplexing values with my
>point that there are such things, and that they involve some amount of
>pre-registration.
>
>In the case of locations, this very much holds true.  After all, the location
>references do not mean much if they are not pre-registered.

I think we have different ideas of pre-register.  For example, 
so-called MAC addresses are pre-registered, but they aren't 
addresses.  They are serial numbers.

>(And then I suppose you'll invoke global coordinates system as an 
>example of no
>prior registration; and I'll respond that the algorithm for using 
>that system is
>nothing but a pre-registration of the entire set of values...)

Nope, nothing so arithmetic.  The assigner must have in mind a 
topological space from which it draws the identifiers it assigns; 
whether you consider that pre-registration, I can't say.

>
>>>  is implemented and operated.  Some folks will remember that in the 70's,
>>>  email had an instant messaging function.  While it involved a different FTP
>>>   command than what we call email, the protocol was otherwise identical.
>>>  Today, the
>>
>>  C'mon Dave, you know better.  MAIL and MLFL were there for the same purpose
>
>Wrong commands.
>
>I meant:
>
>>           MAIL SEND TO TERMINAL (MSND)
>>
>>              This command is like the MAIL command, except that the data
>>              is displayed on the addressed user's terminal, if such
>>              access is currently allowed, otherwise an error is returned.
>>
>>           MAIL SEND TO TERMINAL OR MAILBOX (MSOM)
>>
>>              This command is like the MAIL command, except that the data
>>              is displayed on the addressed user's terminal, if such
>>              access is currently allowed, otherwise the data is placed in
>>              the user's mailbox.
>>
>>           MAIL SEND TO TERMINAL AND MAILBOX (MSAM)
>>
>>              This command is like the MAIL command, except that the data
>>              is displayed on the addressed user's terminal, if such
>>              access is currently allowed, and, in any case, the data is
>>              placed in the user's mailbox.
>  >

These were not in RFC 542, which was the original inclusion of mail 
in FTP. These came much later, if they were ever in FTP.

>Very popular with MIT, as I recall.
>
>
>>>  service distinctions are immediacy and reliability.  That is, email is
>>>  reliable push, except that delivery is into a mailbox rather than the
>>>  screen, thereby making the last hop be "pull". This creates the view that
>>>  email is not immediate. But it *is* reliable, in that a message survives
>>>  most crashes by the host holding the message.  IM is push all the way, but
>>>  a message does not survive a crash.  My point, here, is that these are
>>>  implementation and operation distinctions, rather than inherent differences
>>>  in the exchange protocols.
>>
>>  Dave, email is not reliable. Email is connectionless.
>
>Well, yeah, reliability is relative.  TCP is a long way from 
>perfectly reliable,
>too.  (I class connection-vs-connectionless as irrelevant, though of course
>there has to be some sort of state maintained.)

The more useful distinction is whether processing the current PDU is 
affected by processing previous PDUs.  Each mail message is 
independent of the others.

>But I'm going to take umbrage with your taking umbrage, since I stated the
>specific kind of reliability I meant, namely surviving a crash of the system
>holding the message, and was using it to distinguish between email and IM
>service models.
>
>(I forgot to add reference to the fact that IM typically does not have a retry
>-- although some now do -- whereas email has rather more of it than some would
>like...)

The retries are not e2e.  They are hop-by-hop. Isn't the difference 
that IM is more like IP, whereas mail is more like a message switch? 
In IP, if a router crashes, you don't assume that it will have the 
packets it was buffering when it crashed, whereas with a message 
switch you do.

>
>>  One should write the service definition before writing the protocol. This is
>>  something that OSI understood.
>
>Sort of.  They understood that something along these lines was 
>needed, but what
>got supplied was not all that helpful, since most of the services were too
>complex, etc.
>
>(But, then, I usually say that TCP and OSI are exactly the same, 
>except for all
>the details.)

Well, sort of.  There were some significant differences. Most of the 
complexity in OSI was because it was two architectures under one 
umbrella fighting each other tooth and nail: the European dedication 
to a connection-oriented model and the US dedication to a 
connectionless model.  How the Europeans ever thought X.25 could 
support the applications of the 90s, I never understood, but they did 
everything they could to put it in, which is where the complexity 
comes from.

If you pull out the connectionless side, i.e. (CLNP, TP4, Fast-byte, 
ACSE, CMIP) it takes the old ARPANet/Internet model one step further. 
(It still wasn't right, but it was another step.)  In particular, 
network addresses did not name subnetwork points of attachment as 
they do in IP.  Realizing that applications and application protocols 
were distinct, etc. CMIP was more powerful and yielded smaller 
implementations than SNMP, (although HEMS would have been better than 
either one) etc.  But as I say, it still wasn't right.

>
>>  SRVs are just one more band-aid. 
>
>It can be argued quite strongly that much of the Internet's success 
>has been by
>the careful application of first-aid techniques.  Since the resulting splints,
>bandages, etc. seem to last some number of decades, and since other techniques
>for developing services seem to fail in an open environment, I'm not overly
>inclined to make derogatory comments about the particulars that have 
>succeeded.
>
>None of which precludes looking for alternatives, of course...

One can also argue that much of the Internet's success has been the 
liberal application of Moore's Law. It is really Moore's Law that has 
allowed the band-aids to do the job.  But what concerns me are 
band-aids that don't really go anywhere, and we all know what happens 
to systems that are the sum of a large number of band-aids.

This approach of relying on splints and band-aids is fine for a 
research network, but not for something the world economy rests on. 
The thing that scares me is that the band-aids have made the Internet 
more like DOS than Multics; we would settle for Unix, but we don't 
have that either (and I don't see that anyone realizes that is the 
problem).

>
>>  It is time to stop with the band-aids and
>>  figure out what the "answer" is. We need to know the goal we should at least
>>  approximate.
>
>That sounds consonant with what I suggested.

I would hope so.  ;-)

Take care,
John
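For readers who have not looked at the draft under discussion: the SRV mechanism the thread keeps returning to replaces a well-known port with a DNS lookup, and RFC 2782 specifies how a client picks among the returned records. Below is a minimal sketch of that selection rule (lowest priority wins; weighted random choice among equals). The record values are invented for illustration, and the RFC's special ordering of zero-weight records is simplified away.

```python
import random

def select_srv(records, rng=random):
    """RFC 2782 target selection, simplified: keep only the records
    with the lowest priority, then pick one at random in proportion
    to its weight (uniformly if all weights are zero)."""
    best = min(r["priority"] for r in records)
    candidates = [r for r in records if r["priority"] == best]
    total = sum(r["weight"] for r in candidates)
    if total == 0:
        return rng.choice(candidates)
    pick = rng.randint(0, total - 1)
    running = 0
    for r in candidates:
        running += r["weight"]
        if pick < running:
            return r

# Invented records, as if returned for a lookup of _ftp._tcp.example.com:
records = [
    {"priority": 10, "weight": 60, "port": 2021, "target": "a.example.com"},
    {"priority": 10, "weight": 40, "port": 2022, "target": "b.example.com"},
    {"priority": 20, "weight": 0,  "port": 21,   "target": "backup.example.com"},
]
chosen = select_srv(records)
```

The client then connects to (chosen["target"], chosen["port"]) rather than to a fixed well-known port; the priority-20 backup record would be tried only after the priority-10 hosts fail, a retry loop this sketch omits.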

From touch at ISI.EDU  Mon Aug  7 07:37:10 2006
From: touch at ISI.EDU (Joe Touch)
Date: Mon, 07 Aug 2006 07:37:10 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D74B64.6060808@cs.utk.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D74B64.6060808@cs.utk.edu>
Message-ID: <44D75016.9060307@isi.edu>



Keith Moore wrote:
>> All forms of communication are bootstrapped by first determining if you
>> are the intended receiver. Making the receiver initiate that process
>> only relabels the endpoints; the receiver now needs to initiate
>> communication with new parties. The net result is that senders can no
>> longer reach any new parties. That's a very uninteresting network, IMO.
> 
> only if the network required receivers to specify senders on an
> individual basis.
> 
> today, most servers want to listen to all incoming traffic that is
> intended for the host and destination port.  but there is no particular
> reason to burden the network to carry traffic to the server that the
> server will discard.

You know who you don't want to talk to (who, which ports, etc.). Pushing
that filtering as far out as possible is certainly useful, but also well
known.

When you change your mind or add a new protocol, you need to open that
firewall up and let stuff in. There are two cases:

	1- you know who you're expecting
	2- you don't know who you're expecting

1 is vanishingly uninteresting; sure, it works for a fixed subset (e.g.,
within an enterprise or VPN).

2 is the only interesting case for a few reasons:

	- it is THE case that makes the Internet work
		the Internet being the open subset; you don't need
		to inform everyone to join

	- it requires informing everyone you're joining the net so
	they can decide whether to let you in
		such informing presents the same kind of
		unsolicited communication I've described as
		fundamental

	- it's the only case for which there is an extant solution

As to authentication of source, that just pushes the problem of
unsolicited load to the authentication infrastructure.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060807/cae97470/signature.bin

From dpreed at reed.com  Mon Aug  7 08:24:09 2006
From: dpreed at reed.com (David P. Reed)
Date: Mon, 07 Aug 2006 11:24:09 -0400
Subject: [e2e] What if there were no well known numbers? (history)
In-Reply-To: <a06230913c0fc6afd1bbc@[10.0.1.12]>
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>	<a06230985c0f9aa661218@[10.0.1.3]>	<20060805040424.A84515@gds.best.vwh.net>
	<a06230913c0fc6afd1bbc@[10.0.1.12]>
Message-ID: <44D75B19.8040605@reed.com>

Sitting in MIT's computing milieu, well-known sockets seemed like an 
unscalable kludge to us as well (hardwiring low core would capture the 
feeling we had).   I'm sure the comms guys (who never really get the 
idea of abstraction and virtualization, even today) thought the idea of 
hard-bound numbers would be perfectly fine (911 is perfectly perfect for 
emergency services in the phone network, right?  :-)  until you realize 
that that architectural choice combines calls for cats in trees and 
terrorist emergencies through the same unspecialized clerks in some 
outsourced call center 250 miles away).   It really is amazing to me that 
RFCs still are being updated to add new well-known port numbers for 
ridiculously parochial purposes.

But it's fair to remember that the Internet experiment was an elephant 
that few of the participants saw all sides of.  People focused on 
networking rarely considered the application and systems architecture 
questions as serious.


John Day wrote:
> Just to get the record straight.  Sitting at Illinois, we immediately 
> saw well-known sockets as I said, "hard-wiring low core" but a 
> necessary kludge because we didn't have time to do it right with 
> application names and we didn't have all that many applications and we 
> could do it right later (those famous last words of many projects).
>
> The story that is being painted here is that this view was in the vast 
> minority, that most people saw well-known sockets as a quite 
> reasonable general solution for the long term (as it has turned out 
> until the web came along that is.)
>
> I don't mind changing my view, although my perspective is more 
> flattering of the ARPANet guys who did it than the picture being 
> painted here.
>
> Take care,
> John
>
>
> At 4:04 +0000 2006/08/05, Greg Skinner wrote:
>> On Fri, Aug 04, 2006 at 10:00:33PM -0400, John Day wrote:
>>>  I remember that we had already had conversations about
>>>  application-names and network addresses and a directory.  I know that
>>>  a lot of our thinking was using operating systems as a guide to how
>>>  to do it.  But since we only had 3 applications and only one
>>>  occurrence of each per host, and we needed to get something up and
>>>  running, there wasn't time to do it right.  Perhaps we were having
>>>  these conversations with people other than the UCLA people.  Maybe it
>>>  was the Multics crowd.  I can believe that in the very very early
>>>  days that was the logic, but by 1970 or so, we knew better.  From
>>>  about then, I always considered well-known sockets to be the
>>>  equivalent of "hard-wiring low core."
>>>
>>>  A kludge.
>>>
>>>  The port numbers in TCP and NCP function as connection-identifiers
>>>  within the scope of the (src, dest) addresses, i.e. they distinguish
>>>  multiple connections/flows between the same two points.  They do not
>>>  identify applications.  The well-known idea is just an expedient
>>>  convention.  It clearly doesn't generalize unless you are McKenzie
>>>  who believed that Telnet and FTP were all you needed. ;-)
>>
>> I wasn't involved in any of the ARPANET R&D, but I was able to piece
>> together a bit from the old RFCs.  The socket as connection identifier
>> made its debut in RFC 33.  It was an 8-bit field called AEN (Another
>> Eight-Bit Number).  The idea that there should be a "directory"
>> of sockets appeared in RFC 65.  Jon Postel posed the question of
>> whether standard protocols should have assigned/reserved socket
>> numbers in RFC 205.  The first call for well known socket numbers came
>> in RFC 322 (accompanying a concern about which socket numbers were
>> currently in use at which hosts).  JP proposed that he be the "czar"
>> for standard socket numbers in RFC 349.  So it seems as if well-known
>> sockets were an expediency, as you say, which was preserved in TCP and
>> UDP.
>>
>> BTW, speaking of hosts.txt, RFC 226 contained the first cut at an
>> "official" hosts list.  (Before that, the RFC noted that each telnet
>> implementation provided its own list of host names.)
>>
>> --gregbo
>
>
>

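Day's point quoted above, that port numbers identify connections rather than applications, is exactly what a TCP demultiplexing step implements. A toy sketch, with invented addresses and port numbers:

```python
# A port pair only names a connection between two hosts; nothing in
# the 4-tuple itself says which application sits behind port 80.
connections = {}  # (src_ip, src_port, dst_ip, dst_port) -> connection state

def demux(src_ip, src_port, dst_ip, dst_port):
    """Find the connection an arriving segment belongs to, if any."""
    return connections.get((src_ip, src_port, dst_ip, dst_port))

# Two simultaneous connections between the same pair of hosts,
# told apart only by the client's ephemeral port:
connections[("10.0.0.1", 51000, "10.0.0.2", 80)] = "conn-A"
connections[("10.0.0.1", 51001, "10.0.0.2", 80)] = "conn-B"
```

The "well-known" half of the pair is pure convention: the table lookup would work identically if the listener had registered any other number, which is the opening that SRV records try to exploit.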

From dpreed at reed.com  Mon Aug  7 08:50:38 2006
From: dpreed at reed.com (David P. Reed)
Date: Mon, 07 Aug 2006 11:50:38 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D73A04.1080003@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
Message-ID: <44D7614E.6050202@reed.com>

Joe/Jon - "stop - you're both right!" (from an old TV commercial).

Ultimately there are two essential parties to any message - the sender 
and the receiver - and a collection of unessential but helpful third 
parties (the network components that may help facilitate that 
communication).

The sender and receiver make a JOINT decision to send and to accept the 
sending.   Neither comes first in any essential way.

Jon is focused on delegating to third parties the job of blocking 
unwanted sends.  Joe is focused on delegating to third parties the job 
of presenting opportunities to receive to anyone who wants to.

Neither of you wants the third parties to become first parties - 
deciding on their own which communications should happen.  (I posit 
this, though it is evident only implicitly in your views).   Of course 
there are third parties who very much view it as their right to decide 
both who can send and who can receive messages.  (Verizon, for example, 
wants an exclusive franchise to decide who can receive broadband in the 
areas they provide service for and also an exclusive franchise to decide 
who can provide content that is received; governments want to provide 
non-discretionary policy controls as well, focused on blocking 
communications whose content the government is not allowed to read).

But it is important to be skeptical of the idea that "the network" 
(which is hopefully a collection of independent and autonomous networks, 
at least in the case of the Internet) can provide any sort of 
non-discretionary guarantees of protection.   At best (end-to-end 
argument here) we ought to be able to make it easy for joint decisions 
to exchange messages to happen, and the number of non-joint decisions 
that are unwanted should be kept to a dull roar.


Joe Touch wrote:
> Jon Crowcroft wrote:
>   
>> In missive <44D614B8.8020303 at isi.edu>, Joe Touch typed:
>>  
>>  >>> oh and I think we should have an explicit protocol for establishing a
>>  >>> capability to _receive_
>>  >>> ip packets
>>
>>  >>We do. It's called attaching to the Internet. IMO, that means you're up
>>  >>for receiving probes to determine whether to proceed.
>>
>>  >>This is what motivated my interpretation of the network neutrality
>>  >>issue, that originally was presented at a workshop about a year ago:
>>  >>http://www.isi.edu/touch/internet-rights/
>>
>> Joe
>>
>> a capability to receive would indicate _who_ you want to receive
>> packets _from_.
>>     
>
> All forms of communication are bootstrapped by first determining if you
> are the intended receiver. Making the receiver initiate that process
> only relabels the endpoints; the receiver now needs to initiate
> communication with new parties. The net result is that senders can no
> longer reach any new parties. That's a very uninteresting network, IMO.
>
>   
>> what we have now is the right to be bombarded or not. different.
>>
>> the problem with many capability based systems is that they require
>> the sender to get a capability to send to a receiver, which either
>> moves the problem to the capability server for the receiver, which
>> means that then gets bombarded (see papers on denial-of-capability
>> attacks),
>> or else puts the receiver at the mercy of a 3rd party (probably
>> distributed and overprovisioned) capability service - i.e. less net
>> neutrality
>>
>> requiring a receiver to control who can speak to them as a
>> fundamental part of connectivity (I have a paper i might submit to a
>> hot workshop about one approach to
>> the implementation details for this if i can get around to it...)
>> is an altogether more neutral scheme...
>>     
>
> That'd be interesting only if it could show how to initiate
> communication with a new party without reversing the labels.
>
> Joe
>
>   


From craig at aland.bbn.com  Mon Aug  7 08:52:12 2006
From: craig at aland.bbn.com (Craig Partridge)
Date: Mon, 07 Aug 2006 11:52:12 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: Your message of "Mon, 07 Aug 2006 10:05:32 EDT."
	<a0623091dc0fcec9f35de@[10.0.1.12]> 
Message-ID: <20060807155212.0103867@aland.bbn.com>


In message <a0623091dc0fcec9f35de@[10.0.1.12]>, John Day writes:

>they do in IP.  Realizing that applications and application protocols 
>were distinct, etc. CMIP was more powerful and yielded smaller 
>implementations than SNMP, (although HEMS would have been better than 
>either one) etc.  But as I say, it still wasn't right.

[Aside -- smaller isn't always better, though it is often a hint.
Recall, SNMP launched because, at the start, it was smaller than CMIP/HEMS.
One issue was that SNMP left out functions that CMIP/HEMS anticipated
were needed.]

Speaking of bandaids, paradigm problems and the like, pardon me while I
toss a pebble into the pond.  I think, in network management, we're
in danger of institutionalizing MIB variables for another generation
of technology (perhaps calling them "objects", but same deal).

Yet when faced with things that are, fundamentally, collections of
software dip switches, I note that we seem to have no nice elegant
programming paradigm that allows us to create abstractions that hide
these switches (rather than expose them as 1,000s of MIB variables).
The knowledge plane was an attempt to create something intelligent
that mediated between this miasma of data and the typical clients that
want to know what is happening (with only selected squalid details).
But I keep wondering if there's a better paradigm for those virtual
dip switches.

And I think this problem is orthogonal to the other problem in network
management -- namely that we still have silo-based management environments
(e.g. ops different from install different from maintenance...).  Though
if we had a paradigm that solved both it would be nice.

Off soapbox...

Craig

From dhc2 at dcrocker.net  Mon Aug  7 10:25:54 2006
From: dhc2 at dcrocker.net (Dave Crocker)
Date: Mon, 07 Aug 2006 10:25:54 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060807155212.0103867@aland.bbn.com>
References: <20060807155212.0103867@aland.bbn.com>
Message-ID: <44D777A2.7080408@dcrocker.net>



Craig Partridge wrote:
> Yet when faced with things that are, fundamentally, collections of
> software dip switches, I note that we seem to have no nice elegant
> programming paradigm that allows us to create abstractions that hide
> these switches (rather than expose them as 1,000s of MIB variables).
> The knowledge plane was an attempt to create something intelligent
> that mediated between this miasma of data and the typical clients that
> want to know what is happening (with only selected squalid details).
> But I keep wondering if there's a better paradigm for those virtual
> dip switches.

I suspect one aspect of this is that the switches are specified based on 
what can be exposed, rather than with much idea of how the information 
will get used.

Perhaps it would help to define usage scenarios for sets of switches.  Each
scenario becomes a possible point of aggregation.

d/
-- 

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net

From day at std.com  Mon Aug  7 10:38:58 2006
From: day at std.com (John Day)
Date: Mon, 7 Aug 2006 13:38:58 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060807155212.0103867@aland.bbn.com>
References: <20060807155212.0103867@aland.bbn.com>
Message-ID: <a0623092bc0fd29406b8a@[10.0.1.12]>

At 11:52 -0400 2006/08/07, Craig Partridge wrote:
>In message <a0623091dc0fcec9f35de@[10.0.1.12]>, John Day writes:
>
>>they do in IP.  Realizing that applications and application protocols
>>were distinct, etc. CMIP was more powerful and yielded smaller
>>implementations than SNMP, (although HEMS would have been better than
>>either one) etc.  But as I say, it still wasn't right.
>
>[Aside -- smaller isn't always better, though it is often a hint.
>Recall, SNMP launched because, at the start, it was smaller than CMIP/HEMS.
>One issue was that SNMP left out functions that CMIP/HEMS anticipated
>were needed.]

Not really.  I believe you once told me that HEMS was smaller than 
SNMP and another source which is very reliable has told me that CMIP 
is smaller than SNMP, primarily because it takes less code to 
implement scope and filter than lexicographical order.  Which I can 
believe.

Also, by the late 80s we had enough experience with the IEEE 802 
management protocol to know that the SNMP approach was too simple. Or 
as I usually said, it was so simple it was too complex to use.

>Speaking of bandaids, paradigm problems and the like, pardon me while I
>toss a pebble into the pond.  I think, in network management, we're
>in danger of institutionalizing MIB variables for another generation
>of technology (perhaps calling them "objects", but same deal).
>
>Yet when faced with things that are, fundamentally, collections of
>software dip switches, I note that we seem to have no nice elegant
>programming paradigm that allows us to create abstractions that hide
>these switches (rather than expose them as 1,000s of MIB variables).
>The knowledge plane was an attempt to create something intelligent
>that mediated between this miasma of data and the typical clients that
>want to know what is happening (with only selected squalid details).
>But I keep wondering if there's a better paradigm for those virtual
>dip switches.
>
>And I think this problem is orthogonal to the other problem in network
>management -- namely that we still have silo-based management environments
>(e.g. ops different from install different from maintenance...).  Though
>if we had a paradigm that solved both it would be nice.

I completely agree.  The MIB structures we have come up with do not 
have enough commonality.  Not only does this lead to the 1000s of MIB 
variables, but there is no way to impose consistent behavior on the 
network.  To do that we need commonality.  We actually did this at 
Motorola in the late 80s. We had a common MIB structure where 
probably 75% of the MIBs for a wide range of devices (everything from 
T-1, LANs, routers, and old stat muxes) was common.  But we couldn't 
get them to allow us to submit to the standards organizations.

But vendors don't want commonality across devices.

Take care,
John

From dpreed at reed.com  Mon Aug  7 12:11:03 2006
From: dpreed at reed.com (David P. Reed)
Date: Mon, 07 Aug 2006 15:11:03 -0400
Subject: [e2e] MIB variables vs. ?
In-Reply-To: <44D777A2.7080408@dcrocker.net>
References: <20060807155212.0103867@aland.bbn.com>
	<44D777A2.7080408@dcrocker.net>
Message-ID: <44D79047.6040203@reed.com>

The standard flexible and elegant alternative to complex data structures 
(such as MIBs) is a "message-based protocol" that talks to objects that 
can be migrated across the network by copying the object (code & data 
together) or accessed across the network by moving the message to the data.

One can model such a thing after the core Smalltalk VM, or the Self VM, 
for example.   Unlike data structures, objects are inherently 
abstraction-building tools.

Of course you could just put the data structures into ASCII rather than 
ASN.1, and then have the worst of all worlds by using XML to represent 
your MIB equivalent.
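To make the contrast concrete: the object style described above lets one message set a coherent group of "dip switches" that a MIB would expose one variable at a time. A deliberately tiny sketch; the class, the message names, and the switch values are all invented for illustration:

```python
class Interface:
    """An object that hides its 'dip switches' behind high-level
    messages, instead of exposing each switch as a MIB variable."""
    def __init__(self):
        # the raw switches a MIB would expose individually
        self._admin_up = False
        self._speed = 10
        self._duplex = "half"

    def handle(self, message):
        """One abstract message sets a consistent group of switches."""
        if message == "bring-up-fast":
            self._admin_up, self._speed, self._duplex = True, 1000, "full"
        elif message == "shut-down":
            self._admin_up = False
        return (self._admin_up, self._speed, self._duplex)

ifc = Interface()
state = ifc.handle("bring-up-fast")
```

The manager never sees, and so can never mis-set, an inconsistent combination of admin state, speed, and duplex; that invariant lives in the object, which is the abstraction-building property flat variable collections lack.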






From day at std.com  Mon Aug  7 12:00:18 2006
From: day at std.com (John Day)
Date: Mon, 7 Aug 2006 15:00:18 -0400
Subject: [e2e] What if there were no well known numbers? (history)
In-Reply-To: <44D75B19.8040605@reed.com>
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>
	<a06230985c0f9aa661218@[10.0.1.3]>
	<20060805040424.A84515@gds.best.vwh.net>
	<a06230913c0fc6afd1bbc@[10.0.1.12]> <44D75B19.8040605@reed.com>
Message-ID: <a06230933c0fd3e0c4b73@[10.0.1.12]>

O, good!  Glad to hear we weren't the only ones!  I would have 
guessed that, and since I spent a lot of time talking to MAP and 
Pogran, it isn't surprising.  ;-)  As I said earlier, I had always 
thought that the saving grace of the ARPANet was it was done by OS 
jocks, not data comm guys.  And like you say, we were still trying to 
figure a lot of this out and some people were in one camp or another 
or somewhere on the line.  It is hard to explain to people today, how 
what seems so obvious now was far from it in 1970!

Take care,
John

At 11:24 -0400 2006/08/07, David P. Reed wrote:
>Sitting in MIT's computing milieu, well-known sockets seemed like an 
>unscalable kludge to us as well (hardwiring low core would capture 
>the feeling we had).   I'm sure the comms guys (who never really get 
>the idea of abstraction and virtualization, even today) thought the 
>idea of hard-bound numbers would be perfectly fine (911 is perfectly 
>perfect for emergency services in the phone network, right?  :-) 
>until you realize that that architectural choice combines calls for 
>cats in trees and terrorist emergencies through the same 
>unspecialized clerks in some outsourced call center 250 miles away). 
>It really is amazing to me that RFCs still are being updated to add 
>new well-known port numbers for ridiculously parochial purposes.
>
>But it's fair to remember that the Internet experiment was an 
>elephant that few of the participants saw all sides of.  People 
>focused on networking rarely considered the application and systems 
>architecture questions as serious.
>
>
>John Day wrote:
>>Just to get the record straight.  Sitting at Illinois, we 
>>immediately saw well-known sockets as I said, "hard-wiring low 
>>core" but a necessary kludge because we didn't have time to do it 
>>right with application names and we didn't have all that many 
>>applications and we could do it right later (those famous last 
>>words of many projects).
>>
>>The story that is being painted here is that this view was in the 
>>vast minority, that most people saw well-known sockets as a quite 
>>reasonable general solution for the long term (as it has turned out 
>>until the web came along that is.)
>>
>>I don't mind changing my view, although my perspective is more 
>>flattering of the ARPANet guys who did it than the picture being 
>>painted here.
>>
>>Take care,
>>John
>>
>>
>>At 4:04 +0000 2006/08/05, Greg Skinner wrote:
>>>On Fri, Aug 04, 2006 at 10:00:33PM -0400, John Day wrote:
>>>>  I remember that we had already had conversations about
>>>>  application-names and network addresses and a directory.  I know that
>>>>  a lot of our thinking was using operating systems as a guide to how
>>>>  to do it.  But since we only had 3 applications and only one
>>>>  occurrence of each per host, and we needed to get something up and
>>>>  running, there wasn't time to do it right.  Perhaps we were having
>>>>  these conversations with people other than the UCLA people.  Maybe it
>>>>  was the Multics crowd.  I can believe that in the very very early
>>>>  days that was the logic, but by 1970 or so, we knew better.  From
>>>>  about then, I always considered well-known sockets to be the
>>>>  equivalent of "hard-wiring low core."
>>>>
>>>>  A kludge.
>>>>
>>>>  The port numbers in TCP and NCP function as a connection-identifier
>>>>  within the scope of the (src, dest) addresses, i.e. it distinguishes
>>>>  multiple connections/flows between the same two points.  They do not
>>>>  identify applications.  The well-known idea is just an expedient
>>>>  convention.  It clearly doesn't generalize unless you are McKenzie
>>>>  who believed that Telnet and FTP were all you needed. ;-)
>>>
>>>I wasn't involved in any of the ARPANET R&D, but I was able to piece
>>>together a bit from the old RFCs.  The socket as connection identifier
>>>made its debut in RFC 33.  It was an 8-bit field called AEN (Another
>>>Eight-Bit Number).  The idea that there should be a "directory"
>>>of sockets appeared in RFC 65.  Jon Postel posed the question of
>>>whether standard protocols should have assigned/reserved socket
>>>numbers in RFC 205.  The first call for well known socket numbers came
>>>in RFC 322 (accompanying a concern about which socket numbers were
>>>currently in use at which hosts).  JP proposed that he be the "czar"
>>>for standard socket numbers in RFC 349.  So it seems as if well-known
>>>sockets were an expediency, as you say, which was preserved in TCP and
>>>UDP.
>>>
>>>BTW, speaking of hosts.txt, RFC 226 contained the first cut at an
>>>"official" hosts list.  (Before that, the RFC noted that each telnet
>>>implementation provided its own list of host names.)
>>>
>>>--gregbo


From craig at aland.bbn.com  Mon Aug  7 13:54:16 2006
From: craig at aland.bbn.com (Craig Partridge)
Date: Mon, 07 Aug 2006 16:54:16 -0400
Subject: [e2e] MIB variables vs. ?
In-Reply-To: Your message of "Mon, 07 Aug 2006 15:11:03 EDT."
	<44D79047.6040203@reed.com> 
Message-ID: <20060807205416.BC7AB67@aland.bbn.com>


Hi Dave:

My view is that objects don't buy much unless you find some way to
create a hierarchy (larger, more abstract objects composed of the little
ones).

That is, in the ultimate case, each object matches a DIP switch.  And if
that's true, the difference between a message across the network to an
object saying "set yourself to theta" and a message that says "SET 
dip-switch theta" is purely the kind of difference that people used to
use to caricature Smalltalk.

Craig


In message <44D79047.6040203 at reed.com>, "David P. Reed" writes:

>The standard flexible and elegant alternative to complex data structures 
>(such as MIBs) is a "message-based protocol" that talks to objects that 
>can be migrated across the network by copying the object (code & data 
>together) or accessed across the network by moving the message to the data.
>
>One can model such a thing after the core Smalltalk VM, or the Self VM, 
>for example.   Unlike data structures, objects are inherently 
>abstraction-building tools.
>
>Of course you could just put the data structures into ASCII rather than 
>ASN.1, and then have the worst of all worlds by using XML to represent 
>your MIB equivalent.
>
>
>
>

From day at std.com  Mon Aug  7 14:24:19 2006
From: day at std.com (John Day)
Date: Mon, 7 Aug 2006 17:24:19 -0400
Subject: [e2e] MIB variables vs. ?
In-Reply-To: <44D79047.6040203@reed.com>
References: <20060807155212.0103867@aland.bbn.com>
	<44D777A2.7080408@dcrocker.net> <44D79047.6040203@reed.com>
Message-ID: <a06230936c0fd5f180a29@[10.0.1.12]>

At 15:11 -0400 2006/08/07, David P. Reed wrote:
>The standard flexible and elegant alternative to complex data 
>structures (such as MIBs) is a "message-based protocol" that talks 
>to objects that can be migrated across the network by copying the 
>object (code & data together) or accessed across the network by 
>moving the message to the data.
>
>One can model such a thing after the core Smalltalk VM, or the Self 
>VM, for example.   Unlike data structures, objects are inherently 
>abstraction-building tools.
>
>Of course you could just put the data structures into ASCII rather 
>than ASN.1, and then have the worst of all worlds by using XML to 
>represent your MIB equivalent.

And XML in ASCII has been suggested (groan).  Actually, if SNMP were 
using PER instead of BER it would be much faster, with less code and 
fewer bits.

But the real problem is the lack of consistency and commonality across 
MIBs.  This is a major stumbling block to network management.  Until we 
have that, there will not be much improvement. But as we know, that 
will be fought tooth and nail.  That is why the whole SNMP security 
issue was such a red herring.
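
The size difference behind the PER-vs-BER remark can be seen with a toy
encoding of a single integer.  This is only an illustration, not a
conformant codec: `ber_int` follows BER's tag-length-value shape, and
`per_int` mimics aligned PER's constrained-whole-number idea (offset from
the lower bound in just enough octets for the declared range).

```python
def ber_int(v: int) -> bytes:
    """Toy BER INTEGER: tag 0x02, a length octet, then minimal
    two's-complement content octets."""
    n = max(1, (v.bit_length() + 8) // 8)  # room for a sign bit
    content = v.to_bytes(n, "big", signed=True)
    return bytes([0x02, len(content)]) + content


def per_int(v: int, lo: int, hi: int) -> bytes:
    """Toy aligned-PER-style constrained integer: the offset from the
    lower bound, in the fewest octets the range (lo..hi) needs.
    No tag, no length octet -- the constraint carries that information."""
    n = max(1, ((hi - lo).bit_length() + 7) // 8)
    return (v - lo).to_bytes(n, "big")
```

For a value like 5931 constrained to 0..65535, the toy PER form is two
octets where the BER form is four: the tag and length octets vanish
because both ends already know the type and its bounds.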

Take care,
John


From puddinghead_wilson007 at yahoo.co.uk  Mon Aug  7 00:47:13 2006
From: puddinghead_wilson007 at yahoo.co.uk (Puddinhead Wilson)
Date: Mon, 7 Aug 2006 08:47:13 +0100 (BST)
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <E1G9eX9-0002Gf-00@mta1.cl.cam.ac.uk>
Message-ID: <20060807074713.98851.qmail@web25406.mail.ukl.yahoo.com>

> oh and I think we should have an explicit protocol
> for establishing a
> capability to _receive_
> ip packets

Atlas Shrugged...
I think it's 15 years too late!
(I loved the good old ZX Spectrum 128k, my 1st ever
comp! I could actually use bit maps to create fonts in
Hindi and French using simple pokes and peeks); today
I hear it is a whole new industry!





		

From perfgeek at mac.com  Mon Aug  7 18:33:53 2006
From: perfgeek at mac.com (rick jones)
Date: Mon, 7 Aug 2006 18:33:53 -0700
Subject: [e2e] What if there were no well known numbers? (history)
In-Reply-To: <44D75B19.8040605@reed.com>
References: <5.1.0.14.2.20060804154539.00b13e40@boreas.isi.edu>
	<a06230985c0f9aa661218@[10.0.1.3]>
	<20060805040424.A84515@gds.best.vwh.net>
	<a06230913c0fc6afd1bbc@[10.0.1.12]> <44D75B19.8040605@reed.com>
Message-ID: <eafb8f4a5be8f2af91bca035735bd6d2@mac.com>

> 911 is perfectly perfect for emergency services in the phone network, 
> right?  :-)  until you realize that that architectural choice combines 
> calls for cats in trees and terrorist emergencies through the same 
> unspecialized clerks in some outsourced call center 250 miles away

The first part sounded right, but the second seems to be mixing 
architecture and implementation dictated by economic considerations?

rick jones
lives in a place with "311" service for non-emergency calls to the 
cops...

http://homepage.mac.com/perfgeek


From touch at ISI.EDU  Tue Aug  8 06:34:33 2006
From: touch at ISI.EDU (Joe Touch)
Date: Tue, 08 Aug 2006 06:34:33 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230917c0fc6ced8fdc@[10.0.1.12]>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>	<44D4C94F.8020801@dcrocker.net>
	<a06230917c0fc6ced8fdc@[10.0.1.12]>
Message-ID: <44D892E9.9060403@isi.edu>

John Day wrote:
>>
>> The thread seems to have veered off, into an interesting discussion of
>> history.
>>  There were a few responses to the main line of your query, and I am
>> hoping we
>> can consider it more extensively.
>>
>> In what follows, I decided to make straight assertions, mostly to try to
>> engender focused responses.  I also decided to wander around the
>> topic, a bit,
>> to see what might resonate...
>>
>> At every layer of a protocol architecture, there is a means of
>> distinguishing
>> services at the next layer up.  Without having done an exhaustive
>> study, I will
>> nonetheless assert that there is always a field at the current layer, for
>> identifying among the entities at the next layer up.
> 
> Be careful here.  This is not really true.  The protocol-id field in IP
> for example identifies the *syntax* of the protocol in the layer above. 
> (It can't be the protocol because there could be more than one instance
> of the same protocol in the same system.  Doesn't happen often but it
> can happen.) Port-ids or sockets identify an instance of communication,
> not a  particular service.

They currently do both for the registered numbers, at least as a
convention, although individual host-pairs override that protocol
meaning by a-priori (usually out-of-band) agreement.

> Again, the well-known socket approach only
> works as long as there is only one instance of the protocol in the layer
> above and certainly only one instance of a "service." (We were lucky in
> the early work that this was true.)

It's possible for that instance to hand off other instances, as is done
with FTP.

>> That field always has at least one standard, fixed value.  Whether it
>> has more
>> than one is the interesting question, and depends on how the standard
>> value(s)
>> get used.  (If there is only one, then it will be used as a dynamic
>> redirector.)
> 
> Actually, if you do it right, no one standard value is necessary at
> all.  You do have to know the name of the application you want to
> communicate with, but you needed to know that anyway.

That value must be 'standard' between the two endpoints, but as others
have noted, need not have meaning to anyone else along the path.

...
>> What *does* matter is how to know what values to use. This, in turn,
>> creates a
>> bootstrapping/startup task.  I believe the deciding factor in solving
>> that task
>> is when the binding is done to particular values.  Later binding gives
>> more
>> flexibility -- and possibly better scaling and easier administration
>> -- but at
>> the cost of additional mechanism and -- probably always -- complexity,
>> extra
>> round-trips and/or reliability.
> 
> Not really. But again, you have to do it the right way.  There are a lot
> of ways to do it that do require all sorts of extra stuff.

The key question is "what is late bound". IMO, we could really use
something that decouples protocol identifier from instance (e.g.,
process demultiplexing) identifier.

>> In discussing the differences between email and instant messaging, I
>> came to
>> believe that we need to make a distinction between "protocol" and
>> "service".
>> The same protocol can be used for very different services, according
>> to how it

This argues for three fields: demux ID (still needed), protocol, and
service name.

At that point, we could allow people to use HTTP for DNS exchanges if
they _really_ wanted, rather than the DNS protocol. I'm not sure that's
the point of the exercise, but modularity is a good idea.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 259 bytes
Desc: not available
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060808/afa769c2/signature.bin

From touch at ISI.EDU  Tue Aug  8 08:46:39 2006
From: touch at ISI.EDU (Joe Touch)
Date: Tue, 08 Aug 2006 08:46:39 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44D8AEA2.8040306@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu> <44D8AEA2.8040306@cs.utk.edu>
Message-ID: <44D8B1DF.1060604@isi.edu>



Keith Moore wrote:
>>> Port-ids or sockets identify an instance of communication,
>>> not a  particular service.
>>
>> They currently do both for the registered numbers, at least as a
>> convention, although individual host-pairs override that protocol
>> meaning by a-priori (usually out-of-band) agreement.
> 
> I think of port numbers identifying a distinguished service or default
> instance of a service.
> 
> e.g. A host can have web servers (i.e. something that serves web pages
> for a web browser), using the HTTP protocol, on any number of ports. The
> web server running HTTP on port 80 is the default instance, the one that
> people get if they don't specify anything more than the name of the
> service (which is a DNS name but not necessarily the name of a single
> host).

Existing well-known port allocations indicate both protocol and version;
that means that there are multiple 'default instances' in that case
(e.g., NFS).

> A host can also use HTTP to provide things other than web servers, and a
> host can have web servers running other protocols such as FTP.  So we
> have service names, host names, services, protocols, and ports - each
> subtly different than those next to it.

A few questions:

- how are service names different from services?

- why does the service name differ from the protocol?
	protocols should indicate the next-layer up only, IMO

	transport should indicate how to parse the next layer,
	e.g., to indicate "HTTP". HTTP already provides for ways
	to indicate the next layer, which is similar to what
	others call 'semantics', e.g.: ftp:, http:, etc. If
	you want to do DNS over HTTP, define a "dns:" type, IMO.

Ports really indicate which instance of a protocol at a host, IMO - but
supporting that in TCP requires redefining the 'socket pair' to be a
pair of triples: "host, protocol, port" (rather than the current "host,
port").
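
A toy sketch of that keying difference (hypothetical names and plain
dictionaries, nothing like real TCP internals): a demux table keyed on
(host, protocol, port) lets two protocols share one port number, which a
table keyed on the current (host, port) pair cannot express.

```python
# Toy demux tables -- a sketch of the keying difference only.
pair_demux = {}    # today's keying: (host, port)
triple_demux = {}  # proposed keying: (host, protocol, port)


def bind_pair(host, port, handler):
    """Bind a handler under the current (host, port) socket-pair keying."""
    key = (host, port)
    if key in pair_demux:
        raise ValueError("port %d already bound on %s" % (port, host))
    pair_demux[key] = handler


def bind_triple(host, protocol, port, handler):
    """Bind a handler under the proposed (host, protocol, port) keying."""
    key = (host, protocol, port)
    if key in triple_demux:
        raise ValueError("instance already bound")
    triple_demux[key] = handler


bind_pair("10.0.0.1", 80, "http-server")
# A second protocol on the same port is ambiguous under (host, port):
# bind_pair("10.0.0.1", 80, "dns-server") would raise ValueError.

# Under the triple, two protocols share port 80 unambiguously:
bind_triple("10.0.0.1", "http", 80, "http-server")
bind_triple("10.0.0.1", "dns", 80, "dns-server")
```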

However, although there are many who want to consider multiple instances
of the same protocol, it's not clear how a source would know which
instance to talk to. IMO, instances are basically bound to the
destination IP address, and if you want multiple instances, get multiple
addresses - because the same resolution that determines host determines
instance, IMO.

I.e., instance indication and selection is rife with problems.

...
>> The key question is "what is late bound". IMO, we could really use
>> something that decouples protocol identifier from instance (e.g.,
>> process demultiplexing) identifier.
> 
> We could also use something that decouples service from protocol.  (do
> we really want to be stuck with HTTP forever as the only way to get web
> pages?  SMTP as the only way to transmit mail?)  How many layers do we
> want?

We do in HTTP. We might be able to use that in other protocols, but
that's a decision for those protocols, not TCP, IMO.

Joe


From touch at ISI.EDU  Tue Aug  8 09:03:37 2006
From: touch at ISI.EDU (Joe Touch)
Date: Tue, 08 Aug 2006 09:03:37 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D7614E.6050202@reed.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com>
Message-ID: <44D8B5D9.2070802@isi.edu>



David P. Reed wrote:
> Joe/Jon - "stop - you're both right!" (from an old TV commercial).
> 
> Ultimately there are two essential parties to any message - the sender
> and the receiver - and a collection of unessential but helpful third
> parties (the network components that may help facilitate that
> communication).
> 
> The sender and receiver make a JOINT decision to send and to accept the
> sending.   Neither comes first in any essential way.

Base case: neither sender nor receiver has decided to do anything.

Can you show how to proceed in a way where the receiver doesn't have to
make the decision to accept SOMETHING first, notably before the sender
knows?

Can you show how to proceed where the sender knows what the receiver is
willing to accept, notably without the receiver silently being willing
to accept incoming coordination messages?

Can you show how the receiver can indicate its willingness and
particular parameters that doesn't involve having the receiver send
_somebody_ a message?

In the absence of the above, I'll presume that ALL communication initiates
as follows:

- receiver is open to attack
- sender initiates a message
- receiver decides whether to respond

I.e., fixed receiver, variable sender. All other variants presume this
as a coordination mechanism, and allow 'receiver initiation' only on
subordinate channels that are coordinated this way first.

If you have an alternate example - notably that starts with ZERO shared
knowledge (since that's how we bootstrap, and without that nothing
happens), it'd be _very_ useful to see.

Joe


From dpreed at reed.com  Tue Aug  8 09:48:21 2006
From: dpreed at reed.com (David P. Reed)
Date: Tue, 08 Aug 2006 12:48:21 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D8B5D9.2070802@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
Message-ID: <44D8C055.9060903@reed.com>

Deep philosophical question, Joe.   What does it mean to receive or send?

Consider a human infant.   When born it is physically in an 
environment.   However in terms of speech it is neither sending nor 
receiving messages.

Which does it do first?   In fact, it probably starts by sending.   
Eventually sending (cries, kicks, smiles) provoke responses that seem to 
be correlated with sensed input.

Or maybe it starts by receiving.   But it is NOT "open to attack" 
because the messages that arrive are not acted upon in a predictable 
way.   Only after 12-18 months does a parent teach the child what 
messages it must act upon in order to get fed, etc.

The underlying philosophical question is the difference between energy 
impinging on a computer and its willingness to act upon it.

My computer cannot be attacked unless it is running a program that 
causes it to ACT upon incoming data.   Merely being connected to 
incoming data does not make it vulnerable.

Similarly, a sender cannot cause my computer to do anything predictable 
or interesting unless it can predict what impinging energy structures 
will cause predictable actions.

Thus putting responsibility on a "3rd party" to protect a receiver or 
limit a sender is a long way from the point where communications is 
turned on or enabled.

The step of installing Windows or Linux on the computer (with device 
drivers) is the first step.   If you install Windows you increase your 
risk hugely.   Though Linux with a crappy device driver is just as 
easily killed - a malformed packet can cause code to be executed in the 
kernel in many cases, since the device driver executes in the kernel 
address space.


From touch at ISI.EDU  Tue Aug  8 10:26:48 2006
From: touch at ISI.EDU (Joe Touch)
Date: Tue, 08 Aug 2006 10:26:48 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D8C055.9060903@reed.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com>
Message-ID: <44D8C958.9020809@isi.edu>



David P. Reed wrote:
> Deep philosophical question, Joe.   What does it mean to receive or send?
> 
> Consider a human infant.   When born it is physically in an
> environment.   However in terms of speech it is neither sending nor
> receiving messages.
> 
> Which does it do first?   In fact, it probably starts by sending.  
> Eventually sending (cries, kicks, smiles) provoke responses that seem to
> be correlated with sensed input.

In a two-party system, "receiver open to input" always precedes "sender
issues message".

I.e., both the parent and child are open to input first. THEN either of
them sends.

In both cases, receiving precedes sending.

> 
> Or maybe it starts by receiving.   But it is NOT "open to attack"
> because the messages that arrive are not acted upon in a predictable
> way.  

It can be attacked by overloading the input (e.g., send noise). That
attack prevents it from proceeding.

> Only after 12-18 months does a parent teach the child what
> messages it must act upon in order to get fed, etc.
> 
> The underlying philosophical question is the difference between energy
> impinging on a computer and its willingness to act upon it.
> 
> My computer cannot be attacked unless it is running a program that
> causes it to ACT upon incoming data.   Merely being connected to
> incoming data does not make it vulnerable.

Talk about philosophy. What does it mean to be connected to incoming
data and NOT act on it? That's basically not receiving it.

> Similarly, a sender cannot cause my computer to do anything predictable
> or interesting unless it can predict what impinging energy structures
> will cause predictable actions.

Sure it can. You can send to it, watch what it does (whether it responds
or not) and adjust your input accordingly. This is what both parent and
child already do.

> Thus putting responsibility on a "3rd party" to protect a receiver or
> limit a sender is a long way from the point where communications is
> turned on or enabled.

That '3rd party' isn't 3rd anything. That party is a receiver, who needs
to be told something by the two parties in the communication. Then IT is
open to attack as well.

> The step of installing Windows or Linux on the computer (with device
> drivers) is the first step.   If you install Windows you increase your
> risk hugely.   Though Linux with a crappy device driver is just as
> easily killed - a malformed packet can cause code to be executed in the
> kernel in many cases, since the device driver executes in the kernel
> address space.

The risks are statistical: a crappy OS that is not widely deployed is
probably nearly as secure as a good OS that is widely deployed. The
issue is both that the receiver is open to attack and that the sender
knows it (otherwise, which OS does the sender attack?)

I don't see how that has any bearing on this discussion, though.

Joe


From dpreed at reed.com  Tue Aug  8 10:55:08 2006
From: dpreed at reed.com (David P. Reed)
Date: Tue, 08 Aug 2006 13:55:08 -0400
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D8C958.9020809@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
Message-ID: <44D8CFFC.3060004@reed.com>

Perhaps the following will clarify my point, Joe.

I happily proceed unscathed in 130 decibels of audio noise and 1 kW/sq. 
meter of photonic noise, because I do not choose to interpret that noise 
as a requirement to act.   On the other hand, I believe that much of 
that noise is present because the senders presume that I will act on it 
(advertisements on screens and billboards, for example).

So that reinforces your point, Joe, that the receiver opens himself to 
"attack," whatever that is.

But we do ask our neighbors not to run unmuffled motorcycles, to keep 
their use of obscene language to a minimum, and not to shout fire in 
crowded theaters.  Similarly, we choose to expect our neighbors not to 
put up ugly structures, not to shine projected images into our bedroom 
windows at night, etc. Even though they can.

So that reinforces Jon's point, that restraint matters because we are in 
the world together.

I tend to believe that your view, Joe, was historically applicable in a 
world where communications was rare.   Now one must assume that one is 
connected in numerous ways, in order just to exist (for example, one 
would be stupid not to be connected to the Windows Update service when 
running Windows.   Computers are no longer IN ANY SENSE self-contained).

Jon's view is far more applicable in a world where connection is NOT 
optional, and connection is pervasive.

Today, computers exist in a world with billboards, honking taxis, and 
other metaphorical "city" concepts of communications.   Messages are 
omnipresent, and must be explicitly blocked rather than explicitly 
requested.

A baby does not "choose" to be exposed to signals.   It is inherently 
exposed.


Joe Touch wrote:
> David P. Reed wrote:
>   
>> Deep philosophical question, Joe.   What does it mean to receive or send?
>>
>> Consider a human infant.   When born it is physically in an
>> environment.   However in terms of speech it is neither sending nor
>> receiving messages.
>>
>> Which does it do first?   In fact, it probably starts by sending.  
>> Eventually sending (cries, kicks, smiles) provoke responses that seem to
>> be correlated with sensed input.
>>     
>
> In a two-party system, "receiver open to input" always precedes "sender
> issues message".
>
> I.e., both the parent and child are open to input first. THEN either of
> them sends.
>
> In both cases, receiving precedes sending.
>
>   
>> Or maybe it starts by receiving.   But it is NOT "open to attack"
>> because the messages that arrive are not acted upon in a predictable
>> way.  
>>     
>
> It can be attacked by overloading the input (e.g., send noise). That
> attack prevents it from proceeding.
>
>   
>> Only after 12-18 months does a parent teach the child what
>> messages it must act upon in order to get fed, etc.
>>
>> The underlying philosophical question is the difference between energy
>> impinging on a computer and its willingness to act upon it.
>>
>> My computer cannot be attacked unless it is running a program that
>> causes it to ACT upon incoming data.   Merely being connected to
>> incoming data does not make it vulnerable.
>>     
>
> Talk about philosophy. What does it mean to be connected to incoming
> data and NOT act on it? That's basically not receiving it.
>
>   
>> Similarly, a sender cannot cause my computer to do anything predictable
>> or interesting unless it can predict what impinging energy structures
>> will cause predictable actions.
>>     
>
> Sure it can. You can send to it, watch what it does (whether it responds
> or not) and adjust your input accordingly. This is what both parent and
> child already do.
>
>   
>> Thus putting responsibility on a "3rd party" to protect a receiver or
>> limit a sender is a long way from the point where communications is
>> turned on or enabled.
>>     
>
> That '3rd party' isn't 3rd anything. That party is a receiver, who needs
> to be told something by the two parties in the communication. Then IT is
> open to attack as well.
>
>   
>> The step of installing Windows or Linux on the computer (with device
>> drivers) is the first step.   If you install Windows you increase your
>> risk hugely.   Though Linux with a crappy device driver is just as
>> easily killed - a malformed packet can cause code to be executed in the
>> kernel in many cases, since the device driver executes in the kernel
>> address space.
>>     
>
> The risks are statistical: a crappy OS that is not widely deployed is
> probably nearly as secure as a good OS that is widely deployed. The
> issue is both that the receiver is open to attack and that the sender
> knows it (otherwise, which OS does the sender attack?)
>
> I don't see how that has any bearing on this discussion, though.
>
> Joe
>
>   


From touch at ISI.EDU  Tue Aug  8 11:08:08 2006
From: touch at ISI.EDU (Joe Touch)
Date: Tue, 08 Aug 2006 11:08:08 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D8CFFC.3060004@reed.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<44D8CFFC.3060004@reed.com>
Message-ID: <44D8D308.5020601@isi.edu>



David P. Reed wrote:
> Perhaps the following will clarify my point, Joe.
> 
> I happily proceed unscathed  in 130 decibels of audio noise and 1 kW/sq.
> meter of photonic noise, because I do not choose to interpret that noise
> as a requirement to act.   On the other hand, I believe that much of
> that noise is present because the senders presume that I will act on it
>(advertisements on screens and billboards, for example).
> 
> So that reinforces your point, Joe, that the receiver opens himself to
> "attack" whatever that is. 
> But we do ask our neighbors not to run unmuffled motorcycles, to keep
> their use of obscene language to a minimum, and not to shout fire in
> crowded theaters.  Similarly, we choose to expect our neighbors not to
> put up ugly structures, not to shine projected images into our bedroom
> windows at night, etc. Even though they can.
> 
> So that reinforces Jon's point, that restraint matters because we are in
> the world together.

Restraint applies generally, agreed.

> I tend to believe that your view, Joe, was  historically applicable in a
> world where communications was rare.   Now one must assume that one is
> connected in numerous ways, in order just to exist (for examples, one
> would be stupid not to be connected to the Windows Update service when
> running Windows.   Computers are no longer IN ANY SENSE self-contained).
> 
> Jon's view is far more applicable in a world where connection is NOT
> optional, and connection is pervasive.
> 
> Today, computers exist in a world with billboards, honking taxis, and
> other metaphorical "city" concepts of communications.   Messages are
> omnipresent, and must be explicitly blocked rather than explicitly
> requested.

The example above was very useful.

If you plug your ears (eyes, etc. - i.e., block), you are NOT open to
new communication. You have no way to bootstrap.

Restraint applies in many places:

1) senders must restrain themselves from initiating communication

2) senders not doing so should be 'punished'
	i.e., 'boy who cried wolf' reaction. note the important
	analogy - when he really DID see a wolf, nobody listened.

3) receivers can rate limit, triage, etc.
	you can change the channel when a commercial comes on

Receivers are inherently passive. To do otherwise makes them senders,
subject to sender rules. To plug their inputs renders them deaf, period.

Joe



	



From toby.rodwell at dante.org.uk  Tue Aug  8 11:32:19 2006
From: toby.rodwell at dante.org.uk (Toby Rodwell)
Date: Tue, 08 Aug 2006 19:32:19 +0100
Subject: [e2e] Unexpected small reduction in cwnd during iperf test
Message-ID: <44D8D8B3.9060109@dante.org.uk>

Can anyone think of circumstances in which a TCP instance might reduce
cwnd by a small amount, without there being any change in ssthresh?  I
detected this with a script that periodically (every 0.25 seconds)
collects TCP stats.  The TCP transfer in question was an iperf test
running between a measurement point in a New York PoP and
another in Budapest.  netstat (run before and after the test) confirms
that there was no packet loss, but there were two retransmitted
segments, which presumably was the event that initially set ssthresh to
4740, and which I assume was caused by reordering/dup ACKs.

Time (s)	
15.50 	 cwnd:5931	 ssthresh:4740	 	
15.75 	 cwnd:5931	 ssthresh:4740	 	
16.00 	 cwnd:5931	 ssthresh:4740
...
23.75 	 cwnd:5931	 ssthresh:4740	 	
24.00 	 cwnd:5926	 ssthresh:4740 <==GLITCH?	
24.25 	 cwnd:5927	 ssthresh:4740	 	
24.50 	 cwnd:5928	 ssthresh:4740	 	
24.75 	 cwnd:5929	 ssthresh:4740	 	
25.00 	 cwnd:5930	 ssthresh:4740	 	
25.25 	 cwnd:5931	 ssthresh:4740	 	
25.50 	 cwnd:5931	 ssthresh:4740	 	

The hosts both run Linux kernel 2.6.13, using BIC congestion control.

regards
Toby

-- 
______________________________________________________________________

Toby Rodwell
Network Engineer

DANTE - www.dante.net

Tel: +44 (0)1223 371 300
Fax: +44 (0)1223 371 371
Email: toby.rodwell at dante.org.uk
PGP Key ID: 0xB864694E

City House, 126-130 Hills Road
Cambridge CB2 1PQ
UK
_____________________________________________________________________



From salman at cs.columbia.edu  Tue Aug  8 12:37:58 2006
From: salman at cs.columbia.edu (Salman Abdul Baset)
Date: Tue, 8 Aug 2006 15:37:58 -0400 (EDT)
Subject: [e2e] Unexpected small reduction in cwnd during iperf test
In-Reply-To: <mailman.1.1155063600.5012.end2end-interest@postel.org>
References: <mailman.1.1155063600.5012.end2end-interest@postel.org>
Message-ID: <Pine.GSO.4.58.0608081524150.8160@dynasty.cs.columbia.edu>

I can think of two possible scenarios:

2.6.13 implements RFC 2861, which suggests that cwnd should be reduced
if the application's packet rate drops. But that does not seem to be the
case in your scenario.

You may want to check the TCP send and receive buffer sizes. It may so
happen that the receiver ran out of buffer space because packets were
reordered, and so the sender had to reduce its window.

A possible reason for retransmission without packet loss is a timeout. You
may want to check its value. You may also want to do indirect packet
counting at the sender and receiver using tcpdump and wc.

Regards,
Salman

>
> Message: 6
> Date: Tue, 08 Aug 2006 19:32:19 +0100
> From: Toby Rodwell <toby.rodwell at dante.org.uk>
> Subject: [e2e] Unexpected small reduction in cwnd during iperf test
> To: end2end-interest at postel.org
> Message-ID: <44D8D8B3.9060109 at dante.org.uk>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Can anyone think in what circumstances a TCP instance might reduce cwnd
> by a small amount, without there being any change in ssthresh?  I
> detected this with a script that periodically (every 0.25 seconds)
> collects TCP stats.  The TCP transfer in question was an iperf test
> running between a measurement point in a New York PoP and
> another in Budapest.  netstat (run before and after the test) confirms
> that there was no packet loss, but there were two re-transmitted
> segments, which presumably was the event which initially set ssthresh as
> 4740, and I assume was caused by reordering/dup ACKs.
>
> Time (s)
> 15.50 	 cwnd:5931	 ssthresh:4740
> 15.75 	 cwnd:5931	 ssthresh:4740
> 16.00 	 cwnd:5931	 ssthresh:4740
> ...
> 23.75 	 cwnd:5931	 ssthresh:4740
> 24.00 	 cwnd:5926	 ssthresh:4740 <==GLITCH?
> 24.25 	 cwnd:5927	 ssthresh:4740
> 24.50 	 cwnd:5928	 ssthresh:4740
> 24.75 	 cwnd:5929	 ssthresh:4740
> 25.00 	 cwnd:5930	 ssthresh:4740
> 25.25 	 cwnd:5931	 ssthresh:4740
> 25.50 	 cwnd:5931	 ssthresh:4740
>
> The hosts are both Linux kernel's 2.6.13, using BIC congestion control.
>
> regards
> Toby
>
> --
> ______________________________________________________________________
>
> Toby Rodwell
> Network Engineer
>
> DANTE - www.dante.net
>
> Tel: +44 (0)1223 371 300
> Fax: +44 (0)1223 371 371
> Email: toby.rodwell at dante.org.uk
> PGP Key ID: 0xB864694E
>
> City House, 126-130 Hills Road
> Cambridge CB2 1PQ
> UK
> _____________________________________________________________________

From weixl at caltech.edu  Tue Aug  8 12:58:51 2006
From: weixl at caltech.edu (Xiaoliang (David) Wei)
Date: Tue, 8 Aug 2006 12:58:51 -0700
Subject: [e2e] Unexpected small reduction in cwnd during iperf test
References: <44D8D8B3.9060109@dante.org.uk>
Message-ID: <00a101c6bb25$127f3b30$ec2dd783@baybridge>

Hi Toby,

> Can anyone think in what circumstances a TCP instance might reduce cwnd
> by a small amount, without there being any change in ssthresh?  I
> detected this with a script that periodically (every 0.25 seconds)
> collects TCP stats.  The TCP transfer in question was an iperf test
> running between a measurement point in a New York PoP and
> another in Budapest.  netstat (run before and after the test) confirms
> that there was no packet loss, but there were two re-transmitted

I wonder how netstat can tell whether there was packet loss? Do you mean
"no packet loss at the local NIC"?

> Time (s)
> 15.50 cwnd:5931 ssthresh:4740
> 15.75 cwnd:5931 ssthresh:4740
> 16.00 cwnd:5931 ssthresh:4740
> ...
> 23.75 cwnd:5931 ssthresh:4740
> 24.00 cwnd:5926 ssthresh:4740 <==GLITCH?

As the cwnd before this glitch had been held at 5931 for several RTTs (I
assume the RTT in your case is on the order of 100ms across the
Atlantic), I guess one possibility is that there is some ACK packet
reordering.

In this case, Linux TCP enters the Disorder state. Since the ACKs arrive
out of order and carry no SACK information (only the ACKs are disordered),
the sender cannot send any new packets, and the actual number of packets
in flight keeps decreasing. When the in-order ACK finally comes back, the
number of packets in flight is several packets smaller than the
congestion window. Upon the arrival of this in-order ACK, the sender
exits the Disorder state and tcp_moderate_cwnd() is called, reducing the
congestion window to the number of packets in flight plus 3...
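The moderation step described here can be sketched roughly as follows (a paraphrase of the idea behind Linux's tcp_moderate_cwnd(), not the kernel code itself; the in-flight count of 5923 is an assumed illustration, not a measured value):

```python
def moderate_cwnd(snd_cwnd, packets_in_flight, max_burst=3):
    """Clamp cwnd to what is actually outstanding plus a small burst
    allowance -- roughly what happens on exiting the Disorder state."""
    return min(snd_cwnd, packets_in_flight + max_burst)

# If, say, only 5923 segments were in flight when the in-order ACK
# arrived, cwnd would be clamped from 5931 to 5926 -- the "glitch" value.
print(moderate_cwnd(5931, 5923))  # -> 5926
```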

Just one explanation... not sure if it applies in your case. More
measurements of the TCP congestion-avoidance state would help, I think.
Or, if the experiment is repeatable and you could capture a tcpdump trace
during these events, that would also help.

> 24.25 cwnd:5927 ssthresh:4740
> 24.50 cwnd:5928 ssthresh:4740
> 24.75 cwnd:5929 ssthresh:4740
> 25.00 cwnd:5930 ssthresh:4740
> 25.25 cwnd:5931 ssthresh:4740
> 25.50 cwnd:5931 ssthresh:4740
>
> The hosts are both Linux kernel's 2.6.13, using BIC congestion control.
...


-David
---------------------------------------------------------
Xiaoliang (David) Wei
http://davidwei.org    Graduate Student, Netlab, Caltech
====================================== 


From saikat at cs.cornell.edu  Tue Aug  8 13:31:44 2006
From: saikat at cs.cornell.edu (Saikat Guha)
Date: Wed, 09 Aug 2006 02:01:44 +0530
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <44D8C958.9020809@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com>  <44D8C958.9020809@isi.edu>
Message-ID: <1155069104.3406.149.camel@localhost.localdomain>

On Tue, 2006-08-08 at 10:26 -0700, Joe Touch wrote:
> In a two-party system, "receiver open to input" always precedes "sender
> issues message".

When talking about the present Internet, does the "two-party system"
refer to only the sending host and the receiving host? What about the
corresponding middles -- the corporate firewall that most of us are
behind, or the NAT that many home users are behind?

Agreed that the receiver must be "open to input" before the sender sends
the message, but the receiver need not be open-to-input _for any and all
possible senders_ -- it can be open-to-input for a trusted middle entity
that can vet the sender's message and relay it to the receiver. (The
middle entity here is open-to-input for all, and suitably protected.)

In keeping with the parent-child analogy, I would consider that similar
to the relationship between my host and my firewall -- not that
interesting since both are open-to-input from both. The parent can,
however, shield the child from a _stranger_ either proactively or
reactively: in the proactive case, the child is not open-to-input from
the stranger until the parent introduces the two.

> In both cases, receiving precedes sending.

Is it _necessary and required_ to be able to receive *from anyone and
everyone* before someone can send to you?

My 2c.
-- 
Saikat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 189 bytes
Desc: This is a digitally signed message part
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060809/c135c8d3/attachment.bin

From toby.rodwell at dante.org.uk  Tue Aug  8 15:45:07 2006
From: toby.rodwell at dante.org.uk (Toby Rodwell)
Date: Tue, 08 Aug 2006 23:45:07 +0100
Subject: [e2e] Unexpected small reduction in cwnd during iperf test
In-Reply-To: <00a101c6bb25$127f3b30$ec2dd783@baybridge>
References: <44D8D8B3.9060109@dante.org.uk>
	<00a101c6bb25$127f3b30$ec2dd783@baybridge>
Message-ID: <44D913F3.2090302@dante.org.uk>

Hi David,
Many thanks for your reply.

Xiaoliang (David) Wei wrote:
> Hi Toby,
> 
>> ...  The TCP transfer in question was an iperf test
>> running between a measurement point in a New York PoP and
>> another in Budapest.  netstat (run before and after the test) confirms
>> that there was no packet loss, but there were two re-transmitted
> 
> I wonder how netstat could tell if there is packet loss? You mean "no
> packet loss at the local NIC?"
"netstat -s" on the sender shows the total number of TCP segments lost -
since there was no difference between the values before and after the
test, I'm confident there was no packet loss.
>  
> as the cwnd before this glitch has been kept to 5931 unchanged for
> several RTTs (I assume the RTT in your case is in the order of 100ms
> across the Atlantic), I guess one possibility is that there is some ack
> packet reordering.
It is certainly possible (almost likely) that ACKs are being reordered,
but I hadn't realised that would have any knock-on effect - I'd always
thought that a disordered 'late' ACK would be ignored, as the disordered
'early' ACK would have acknowledged its sequence space.
...
> 
> just one explanation... not sure if it is of your case. More
> measurements on the TCP congestion avoidance state is helpful, I think.
> Or, if the experiment is repeatable and you could have the tcpdump trace
> during this events, it is also helpful.
The experiment is certainly repeatable, but the end hosts are not
particularly powerful, and I don't think they could collect tcpdump
traces (even just the headers) at 650 Mbps without adversely affecting
the result.

regards
Toby

-- 
______________________________________________________________________

Toby Rodwell
Network Engineer

DANTE - www.dante.net

Tel: +44 (0)1223 371 300
Fax: +44 (0)1223 371 371
Email: toby.rodwell at dante.org.uk
PGP Key ID: 0xB864694E

City House, 126-130 Hills Road
Cambridge CB2 1PQ
UK
_____________________________________________________________________

From perfgeek at mac.com  Tue Aug  8 19:47:13 2006
From: perfgeek at mac.com (rick jones)
Date: Tue, 8 Aug 2006 19:47:13 -0700
Subject: [e2e] Unexpected small reduction in cwnd during iperf test
In-Reply-To: <44D913F3.2090302@dante.org.uk>
References: <44D8D8B3.9060109@dante.org.uk>
	<00a101c6bb25$127f3b30$ec2dd783@baybridge>
	<44D913F3.2090302@dante.org.uk>
Message-ID: <c11897a6f68b86e94ee93393ea3b4fe7@mac.com>

> "netstat -s" on the sender shows total number of TCP segments lost -

IIRC, it will show the number of TCP segments (and/or octets)
_retransmitted_, which _should_ correlate with segments lost, but that is
not guaranteed.
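Comparing counter snapshots from before and after a run is easy to script; a minimal sketch, assuming snapshots of `netstat -s` text have been captured (the counter wording varies between netstat versions, so the pattern here is deliberately loose, and the sample values are hypothetical):

```python
import re

def count_retransmits(netstat_s_output):
    """Extract the TCP 'segments retransmitted' counter from `netstat -s`
    text.  Wording differs across netstat versions, so match loosely;
    returns 0 if no such counter line is found."""
    m = re.search(r"(\d+)\s+segments?\s+retransmit", netstat_s_output, re.I)
    return int(m.group(1)) if m else 0

# Hypothetical snapshots taken before and after a test run:
before = "12345 segments retransmitted"
after = "12347 segments retransmitted"
print(count_retransmits(after) - count_retransmits(before))  # -> 2
```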

rick jones
there is no rest for the wicked, yet the virtuous have no pillows


From pekka.nikander at nomadiclab.com  Tue Aug  8 21:49:44 2006
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Wed, 9 Aug 2006 07:49:44 +0300
Subject: [e2e] About the primitives and their value (was: What if there were
	no well known numbers?)
In-Reply-To: <44D8D308.5020601@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<44D8CFFC.3060004@reed.com> <44D8D308.5020601@isi.edu>
Message-ID: <4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>

Joe,

> Receivers are inherently passive. To do otherwise makes them senders,
> subject to sender rules. To plug their inputs renders them deaf,  
> period.

A communication network is as transparent as it is made to be.  It is  
not necessarily like "ether", open to anyone's noise.  Agreed, the  
original Internet used to be open for anyone to send anything to  
anyone, and that had great _value_ for the community.  However, that  
is only _one_ example of network design.

If I take your "ether"-like fully transparent network, then I must  
agree with you.  In such a network a receiver is simply passive and  
must receive whatever any sender sends to it.

However, put the first bridge or router there, and you have to make  
the choice of making the box fully transparent or _not_.  You can  
make the box a "firewall", allowing the "receiver" to instruct the box  
about what information it wants to receive by default, and what not.   
Hence, once you give up your fully-open network abstraction, stating  
that "receivers are inherently passive" becomes a mere tautology.

I have seen network designs where "to receive" is an active  
primitive:  the "senders" are "passively" offering data or services  
in the network, and a "receiver" must actively ask for such a piece  
of data or service in order to get it.  The point is that the  
"receiver" can only send the request to the "network", not to the  
"sender".  End-to-end data transfer will only take place if there is  
both a willing sender and a willing receiver.  That is, unless there  
is already someone willing to provide the piece of data or service,  
the request to receive will go nowhere.  In other words, such a  
network is _not_ designed to support the "send this datagram to this  
recipient" primitive, but on two primitives like: "I am willing to  
send/offer/distribute this <whatever>" and "I want to receive/get/ 
consume this <whatever>".

[You can argue that in the previous paragraph I've got the roles  
wrong, that the data or service providers are "receivers" willing to  
receive requests and the clients are "senders" sending the requests.   
Sure, one can see the situation as such if one wills, but it doesn't  
change the point:  the requests are only delivered if there is  
someone actively willing to act upon them.]
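The pair of primitives can be illustrated with a toy rendezvous model (a sketch of the idea only, with invented names, not any deployed design): data moves only where a willing offer and a willing request for the same item meet at the network.

```python
class Rendezvous:
    """Toy model of a network built on 'I am willing to offer <x>' and
    'I want to get <x>' primitives: a bare send has no effect, and
    delivery happens only where offer and interest intersect."""

    def __init__(self):
        self.offers = {}        # name -> payload offered to the network
        self.interests = set()  # names that receivers have asked for

    def offer(self, name, payload):
        """Sender passively offers data; delivered only if already wanted."""
        self.offers[name] = payload
        return self._match(name)

    def want(self, name):
        """Receiver actively requests; delivered only if already offered."""
        self.interests.add(name)
        return self._match(name)

    def _match(self, name):
        if name in self.offers and name in self.interests:
            return self.offers[name]
        return None  # the request "goes nowhere" without a counterpart

net = Rendezvous()
print(net.offer("weather/helsinki", "sunny"))  # None: no willing receiver yet
print(net.want("weather/helsinki"))            # 'sunny': offer meets interest
```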

Sure, such a design would not be the Internet as we used to know it,  
nor as we know it today.  So, the question comes down to relative  
values: what is the relative value of a network based on the "send  
datagram to recipient" primitive vs. other types of networks based  
on other primitives?  Since there _are_ active boxes in the network,  
we do have the choice of selecting the primitives.  We should not be  
bound to the "receivers are inherently passive" thinking; while true  
in an ether-like shared medium, such thinking is meaningless as soon  
as you put an active box between the "sender" and the "receiver".

Hence, IMHO, what is needed is some kind of (micro)economic  
understanding of the different systems.  There will be at least three  
different types of parties (senders, receivers, network elements),  
with different interests and incentives.  The communication  
primitives, and the protocols implementing the primitives, will  
regulate what kind of agreements the parties will be able to  
negotiate and will affect the relative negotiating power of the  
parties (cf. Lessig's "Code".)  It would be great if we got the  
"regulation-through-protocol-design" right enough.

The only thing that is somewhat clear to me at this time is that the  
current primitive, "send this datagram to this recipient",  
independent of the recipient's stated willingness or unwillingness to  
receive it, gets the incentives wrong, i.e., results in an  
undesirable dynamic Nash equilibrium.

--Pekka N.


From Brunner at netlab.nec.de  Wed Aug  9 01:14:41 2006
From: Brunner at netlab.nec.de (Marcus Brunner)
Date: Wed, 9 Aug 2006 10:14:41 +0200
Subject: [e2e] About the primitives and their value (was: What if there
	wereno well known numbers?)
In-Reply-To: <4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
Message-ID: <6D28EBC684A4D94096217AD2FE400873A9C4C4@venus.office>


> Hence, IMHO, what is needed is some kind of (micro)economic 
> understanding of the different systems.  There will be at 
> least three different types of parties (senders, receivers, 
> network elements), with different interests and incentives.  
> The communication primitives, and the protocols implementing 
> the primitives, will regulate what kind of agreements the 
> parties will be able to negotiate and will affect the 
> relative negotiating power of the parties (cf. Lessig's 
> "Code".)  It would be great if we got the 
> "regulation-through-protocol-design" right enough.

Pekka,

I agree with your observation, but does this really need to be reflected
in the protocol design; isn't it more an interface-to-the-network type
of question? I would agree that at least some additional protocol
machinery would be required for implementing some of the expressed
scheme.

Finally, I think it heavily depends on the applications and environment
where the different types of primitives are useful. I'm not sure the
"willing to offer and want to get" type of primitives are the only
useful ones, so various flavours of those might be the right way to go,
with the caveat of getting a pretty complex interface to the network.

Marcus

From weixl at caltech.edu  Wed Aug  9 03:35:17 2006
From: weixl at caltech.edu (Xiaoliang (David) Wei)
Date: Wed, 9 Aug 2006 03:35:17 -0700
Subject: [e2e] Unexpected small reduction in cwnd during iperf test
References: <44D8D8B3.9060109@dante.org.uk>
	<00a101c6bb25$127f3b30$ec2dd783@baybridge>
	<44D913F3.2090302@dante.org.uk>
Message-ID: <02dd01c6bb9f$81832e90$ec2dd783@baybridge>

Thanks, Toby.

>> as the cwnd before this glitch has been kept to 5931 unchanged for
>> several RTTs (I assume the RTT in your case is in the order of 100ms
>> across the Atlantic), I guess one possibility is that there is some ack
>> packet reordering.
> It is certainly possible (almost likely) that ACKs are being reordered,
> but I hadn't realised that would have any knock on effect - I'd always
> thought that a disordered 'late' ACK would be ignored as the disordered
> 'early' ACK would have acknowledged its sequence space.

I think you are right... I was thinking of reordering of data packets.
(But on the other hand, as SACK is enabled, the number of packets in
flight should not decrease in the event of data packet reordering.)
Sorry...

-David 


From pekka.nikander at nomadiclab.com  Wed Aug  9 04:01:23 2006
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Wed, 9 Aug 2006 14:01:23 +0300
Subject: [e2e] About the primitives and their value (was: What if there
	wereno well known numbers?)
In-Reply-To: <6D28EBC684A4D94096217AD2FE400873A9C4C4@venus.office>
References: <6D28EBC684A4D94096217AD2FE400873A9C4C4@venus.office>
Message-ID: <EBB40931-ADB3-44DD-B993-9276616A034E@nomadiclab.com>

Marcus,
>> Hence, IMHO, what is needed is some kind of (micro)economic
>> understanding of the different systems.  There will be at
>> least three different types of parties (senders, receivers,
>> network elements), with different interests and incentives.
>> The communication primitives, and the protocols implementing
>> the primitives, will regulate what kind of agreements the
>> parties will be able to negotiate and will affect the
>> relative negotiating power of the parties (cf. Lessig's
>> "Code".)  It would be great if we got the
>> "regulation-through-protocol-design" right enough.
>
> I agree with your observation, but does this really need to be
> reflected in the protocol design; isn't it more an interface to the
> network type of question? I would agree that at least some additional
> protocol machinery would be required for implementing some of the
> expressed scheme.
>
> Finally, I think it heavily depends on the applications and
> environment where the different types of primitives are useful. I'm
> not sure the "willing to offer and want to get" type of primitives
> are the only useful ones, so various flavours of those might be the
> right way to go, with the caveat of getting a pretty complex
> interface to the network.

I am trying to look here at primitives that are as fundamental and as  
simple as the current IP datagram service.  With raw IP, you send a  
datagram to the destination, and the datagram will go there if it can,  
independent of whether the recipient is interested in getting it at all.

For the current IP case, we do have additional protocol machinery to  
make this simple primitive possible: all the routing protocols plus  
all the local protocols such as ARP, NDP, and DHCP.  And then we have  
all the peering or transit agreements between the ISPs plus all the  
service agreements between ISPs and companies and consumers buying  
"Internet access".

Now, the current scheme creates an economic equilibrium in which there  
are strong incentives for spammers and DDoSers to send their traffic,  
imposing an extra burden on the ISPs trying to protect their networks  
and their customers from unwanted traffic, and an extra burden on the  
end-users trying to protect themselves from spam and other ill  
traffic.  It is even profitable for the spammers to spend quite a lot  
of resources moving rapidly from one ISP to another, in order to be  
able to continue their business.

Hence, we can see that the current protocol machinery and the current  
communication primitives, together with the associated agreement  
structures, create a certain kind of micro-economic dynamic  
equilibrium, with very visible macro-economic consequences.

In a similar way, if we create a "Future Internet Network Design  
(FIND)" or "Next Generation Internet (NGI)", with new types of  
fundamental communication primitives and associated protocol  
machinery, we will create a new economic playing ground where new  
types of agreements will emerge.  That playing ground will eventually  
settle into a dynamic equilibrium.  And that equilibrium will  
eventually shift based on changes in other technologies and the user  
community, just as has happened to the Internet.

Saying the same in other words, people will always do whatever they  
can.  If we build a system where it is technically possible for one  
player group to gain extra profit by pushing costs to others, some  
people will do that, sooner or later.  Relatedly, but not quite as  
badly, if we create a system where it is possible, from both the  
technical and the market-situation points of view, for one interest  
group (such as the operators) to reap extra revenue from the other  
interest groups, they will do so.  Hence, the system design must not only
consider the primitives but also the kind of ISP market place the  
supporting protocol machinery creates.  From my personal point of  
view, our unfortunate social responsibility is to slowly eat our own  
jobs by designing communication systems where the operators' ability  
for oligopolistic structures or for price differentiation is limited  
by the technically "open" or "fair" nature of the system.  Only that,  
as far as I can see, will minimise, over the long term, the  
communication costs paid by the rest of the society, leading to  
reduced transaction costs and increased effectiveness of the overall  
economy.

Hence, my dream is that we would be able to build such a new  
communication platform, with primitives almost as simple as the  
current raw IP primitive, where the incentives, disincentives,  
and restrictions, as imposed by the system and protocol design, lead  
to a socially more desirable dynamic equilibrium than the current  
one.  However, in order to be able to do that, we have to
understand how the protocol primitives and infrastructure protocols  
affect the user's ability to do things (like send spam), how they  
affect the structure of the peering, transit, and end-user  
agreements, what kind and how large transaction costs (including non- 
monetary costs) are desirable and tolerable, etc.  Unfortunately I am  
almost clueless there; I don't know how to analyse those relationships.

Anyway, as argued vividly by Larry Lessig from the OS and apps point  
of view, the code out there forms a kind of regulative or restrictive  
environment.  That argument applies very much here.  The kind of  
communication primitives and the underlying cost structure, as  
reflected in the inter-ISP agreements and made possible by the  
protocols needed for backing those agreements, affect very much what  
the end-users "can" do; that is, what is economically viable for them  
to do.

As for more complex networking interfaces, I think those should be  
buildable on top of the basic primitives.  I bought the KISS argument  
a long time ago.  Keeping the base network as simple as possible  
keeps the costs down.

To clarify, I am not at all sure if "willing to offer" and "want to  
get" are the right primitives, but those are ones that currently seem  
to be gaining some mind share in the research community.

--Pekka


From hgs at cs.columbia.edu  Wed Aug  9 06:07:35 2006
From: hgs at cs.columbia.edu (Henning Schulzrinne)
Date: Wed, 9 Aug 2006 09:07:35 -0400
Subject: [e2e] About the primitives and their value (was: What if there
	were no well known numbers?)
In-Reply-To: <4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<44D8CFFC.3060004@reed.com> <44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
Message-ID: <B94AC9C9-B01A-4440-95DC-CF9716E02C81@cs.columbia.edu>

> I have seen network designs where "to receive" is an active  
> primitive:  the "senders" are "passively" offering data or services  
> in the network, and a "receiver" must actively ask for such a piece  
> of data or service in order to get it.  The point is that the  
> "receiver" can only send the request to the "network", not to the  
> "sender".  End-to-end data transfer will only take place if there  
> is both a willing sender and a willing receiver.  That is, unless  
> there is already someone willing to provide the piece of data or  
> service, the request to receive will go nowhere.  In other words,  
> such a network is _not_ designed to support the "send this datagram  
> to this recipient" primitive, but on two primitives like: "I am  
> willing to send/offer/distribute this <whatever>" and "I want to  
> receive/get/consume this <whatever>".
>
> [You can argue that in the previous paragraph I've got the roles  
> wrong, that the data or service providers are "receivers" willing  
> to receive requests and the clients are "senders" sending the  
> requests.  Sure, one can see the situation as such if one wills,  
> but it doesn't change the point:  the requests are only delivered  
> if there is someone actively willing to act upon them.]
>
> Sure, such a design would not be the Internet as we used to know it  
> nor as we know it today.

This seems remarkably similar to the model used by IP multicast and,  
at the application level, by mailing lists and presence (or any  
publish-subscribe system, for that matter), so this model is not all  
that new. (I probably need to qualify the IP multicast analogy, since  
DVMRP offers more of a first-packet-is-free model with flood-and-prune.)

If you want, you can consider the classical radio/TV model in the  
same spirit: a receiver has to explicitly tune to a station to  
receive the data and the FCC will make sure that only one station can  
use that frequency in any particular place. While the other  
frequencies reach the receiver, they have no negative impact, leaving  
issues of neighbor interference aside.

Henning

From dpreed at reed.com  Wed Aug  9 06:31:30 2006
From: dpreed at reed.com (David P. Reed)
Date: Wed, 09 Aug 2006 09:31:30 -0400
Subject: [e2e] About the primitives and their value
In-Reply-To: <4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<44D8CFFC.3060004@reed.com> <44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
Message-ID: <44D9E3B2.1070007@reed.com>

Excellent perspective!   I think you are describing "pub-sub" 
architectures and "blackboard" architectures, which are very interesting 
alternatives that have been known for a long time.   It's relevant to 
note that the fundamental "Ethernet" (on a single segment) is indeed a 
"pub-sub" architecture of sorts - everyone sees every packet, and 
patterns are set into the low-level pattern matcher to decide which 
addresses are received.   Similarly, radios do low-level pattern 
matching with matched filters (frequency or code or time-slot pattern 
matching) on a shared blackboard.

It is at least conceptually possible to scale pub-sub architectures and 
make them interoperable at the same scale as the send-receive Internet.

What worries me is that the word "attack" has been imputed to situations 
where the target of the attack is perhaps just as responsible as the 
source for the mishaps involved (the balance depends on context - I must 
use Microsoft Outlook because my company requires it, so I must use 
Windows, so I must open my life up to vulnerabilities that are shared 
because Windows is a dominant monoculture and Microsoft doesn't really 
act on an aggressive basis to take responsibility for that monoculture's 
inherent vulnerability - perhaps because its lawyers advise it that it 
need not assume such liability).

There is no meaning to the word "attack" at the packet level, 
independent of context, just as there is no difference between an 
Israeli killing with a gun and a Hezbollah member killing with a gun 
independent of context.

The same applies to network elements carrying packets.   Lacking a view 
of the context, one cannot consider the network elements (or collections 
of them) to be facilitating or harming the parties to communications.

It's all in the context.   Netheads seem to think packets are 
self-evident in their meaning, in isolation.   That's only because 
netheads are willing to assert that they know the context.  It's the old 
idea of a "security kernel" - the error of synecdoche (metaphorically 
confusing the property of a part with properties of the whole).




Pekka Nikander wrote:
> Joe,
>
>> Receivers are inherently passive. To do otherwise makes them senders,
>> subject to sender rules. To plug their inputs renders them deaf, period.
>
> A communication network is as transparent as it is made to be.  It is 
> not necessarily like "ether", open to anyone's noise.  Agreed, the 
> original Internet used to be open for anyone to send anything to 
> anyone, and that had great _value_ for the community.  However, that 
> is only _one_ example of network design.
>
> If I take your "ether"-like fully transparent network, then I must 
> agree with you.  In such a network a receiver is simply passive and 
> must receive whatever any sender sends to it.
>
> However, put the first bridge or router there, and you have to make 
> the choice of making the box fully transparent or _not_.  You can make 
> the box a "firewall", allowing the "receiver" instruct the box of what 
> information it wants to receive, by default, and what not.  Hence, 
> once you give up your fully-open network abstraction, stating that 
> "receivers are inherently passive" becomes a mere tautology.
>
> I have seen network designs where "to receive" is an active 
> primitive:  the "senders" are "passively" offering data or services in 
> the network, and a "receiver" must actively ask for such a piece of 
> data or service in order to get it.  The point is that the "receiver" 
> can only send the request to the "network", not to the "sender".  
> End-to-end data transfer will only take place if there is both a 
> willing sender and a willing receiver.  That is, unless there is 
> already someone willing to provide the piece of data or service, the 
> request to receive will go nowhere.  In other words, such a network is 
> _not_ designed to support the "send this datagram to this recipient" 
> primitive, but on two primitives like: "I am willing to 
> send/offer/distribute this <whatever>" and "I want to 
> receive/get/consume this <whatever>".
>
> [You can argue that in the previous paragraph I've got the roles 
> wrong, that the data or service providers are "receivers" willing to 
> receive requests and the clients are "senders" sending the requests.  
> Sure, one can see the situation as such if one wills, but it doesn't 
> change the point:  the requests are only delivered if there is someone 
> actively willing to act upon them.]
>
> Sure, such a design would not be the Internet as we used to know it 
> nor as we know it today.  So, the question goes to the relative 
> values.  What is the relative value of network based on the "send 
> datagram to recipient" primitive vs. other types of networks, based on 
> other primitives.  Since there _are_ active boxes in the network, we 
> do have the choice of selecting the primitives.  We should not be 
> bound to the "receivers are inherently passive" thinking; while true 
> in an ether-like shared medium, such thinking is meaningless as soon 
> as you put an active box between the "sender" and the "receiver".
>
> Hence, IMHO, what is needed is some kind of (micro)economic 
> understanding of the different systems.  There will be at least three 
> different types of parties (senders, receivers, network elements), 
> with different interests and incentives.  The communication 
> primitives, and the protocols implementing the primitives, will 
> regulate what kind of agreements the parties will be able to negotiate 
> and will affect the relative negotiating power of the parties (cf. 
> Lessig's "Code".)  It would be great if we got the 
> "regulation-through-protocol-design" right enough.
>
> The only thing that is somewhat clear to me at this time is that the 
> current primitive, "send this datagram to this recipient", independent 
> of the recipient's stated willingness or unwillingness of receiving 
> it, gets the incentives wrong, i.e., results in an undesirable dynamic 
> Nash equilibrium.
>
> --Pekka N.
>
>
>


From touch at ISI.EDU  Wed Aug  9 07:26:43 2006
From: touch at ISI.EDU (Joe Touch)
Date: Wed, 09 Aug 2006 07:26:43 -0700
Subject: [e2e] What if there were no well known numbers?
In-Reply-To: <1155069104.3406.149.camel@localhost.localdomain>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk> <44D73A04.1080003@isi.edu>	
	<44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>	
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<1155069104.3406.149.camel@localhost.localdomain>
Message-ID: <44D9F0A3.2090604@isi.edu>



Saikat Guha wrote:
> On Tue, 2006-08-08 at 10:26 -0700, Joe Touch wrote:
>> In a two-party system, "receiver open to input" always precedes "sender
>> issues message".
> 
> When talking about the present Internet, does the "two-party system"
> refer to only the sending host and the receiving host?

Yes.

> What about the
> corresponding middles -- the corporate firewall that most of us are
> behind, or the NAT that many home users are behind?

They masquerade as one of the two parties. When they succeed, they are
the receiver or sender, and the system ends up with multiple steps and
multiple directions of two-party communication.

> Agreed that the receiver must be "open to input" before the sender sends
> the message, but the receiver need not be open-to-input _for any and all
> possible senders_ -- it can be open-to-input for a trusted middle entity
> that can vet the sender's message and relay it to the receiver. (The
> middle entity here is open-to-input for all, and suitably protected.)

You have succeeded only in redefining one endpoint as the middlebox.
That middlebox still must be open to all senders or it won't be able to
figure out which ones to forward. It can't know who a message is from
until it reads it.

...
> Is it _necessary and required_ to be able to receive *from anyone and
> everyone* before someone can send to you?

When "someone" isn't locked in stone a priori, yes - by definition. The
point is that a closed system can be closed, but an open one cannot.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060809/f1b008f8/signature.bin

From touch at ISI.EDU  Wed Aug  9 07:29:56 2006
From: touch at ISI.EDU (Joe Touch)
Date: Wed, 09 Aug 2006 07:29:56 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
Message-ID: <44D9F164.1030906@isi.edu>



Pekka Nikander wrote:
> Joe,
> 
>> Receivers are inherently passive. To do otherwise makes them senders,
>> subject to sender rules. To plug their inputs renders them deaf, period.
> 
> A communication network is as transparent as it is made to be.  It is
> not necessarily like "ether", open to anyone's noise.  Agreed, the
> original Internet used to be open for anyone to send anything to anyone,
> and that had great _value_ for the community.  However, that is only
> _one_ example of network design.
> 
> If I take your "ether"-like fully transparent network, then I must agree
> with you.  In such a network a receiver is simply passive and must
> receive whatever any sender sends to it.
> 
> However, put the first bridge or router there, and you have to make the
> choice of making the box fully transparent or _not_.  You can make the
> box a "firewall", allowing the "receiver" instruct the box of what
> information it wants to receive, by default, and what not.  Hence, once
> you give up your fully-open network abstraction, stating that "receivers
> are inherently passive" becomes a mere tautology.

If you deploy a firewall, how does it know who to let in? It has to read
the messages it receives. You have moved the triage problem to the
firewall, and redefined the receiver to be it.

> I have seen network designs where "to receive" is an active primitive: 
> the "senders" are "passively" offering data or services in the network,
> and a "receiver" must actively ask for such a piece of data or service
> in order to get it.

As others have pointed out, that's publish/subscribe. The pub/sub system
then becomes the thing that bootstraps communication.

Now show us a place to publish that is NOT open to all incoming pub/sub
messages. ;-)

Again, all this does is move the problem - and the opportunity for attack.
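(A minimal in-memory sketch, purely illustrative and not any particular
pub/sub system, of that point: even a toy broker's subscribe and publish
entry points must be open to arbitrary callers before any filtering can
happen, and a publish with no willing subscriber simply goes nowhere.)

```python
# Toy in-memory pub/sub broker (illustrative sketch only).
# Note that subscribe() and publish() are open to any caller - the
# "open to all senders" problem has merely moved to the broker.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> callbacks

    def subscribe(self, topic, callback):      # open to anyone
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):         # also open to anyone
        # Delivery happens only when a willing receiver exists;
        # otherwise the message goes nowhere.
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()
inbox = []
broker.publish("news", "dropped")        # no subscriber yet: goes nowhere
broker.subscribe("news", inbox.append)   # receiver states willingness
broker.publish("news", "delivered")      # now delivery occurs
```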

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060809/50f1e8cd/signature.bin

From pekka.nikander at nomadiclab.com  Wed Aug  9 08:33:16 2006
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Wed, 9 Aug 2006 18:33:16 +0300
Subject: [e2e] About the primitives and their value
In-Reply-To: <44D9F164.1030906@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
Message-ID: <CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>

>>> Receivers are inherently passive. To do otherwise makes them senders,
>>> subject to sender rules. To plug their inputs renders them deaf, period.
>>
>> However, put the first bridge or router there, and you have to make the
>> choice of making the box fully transparent or _not_.  You can make the
>> box a "firewall", allowing the "receiver" instruct the box of what
>> information it wants to receive, by default, and what not.  Hence, once
>> you give up your fully-open network abstraction, stating that "receivers
>> are inherently passive" becomes a mere tautology.
>
> If you deploy a firewall, how does it know who to let in? It has to read
> the messages it receives. You have moved the triage problem to the
> firewall, and redefined the receiver to be it.
> ...
> Now show us a place to publish that is NOT open to all incoming pub/sub
> messages. ;-)
>
> Again, all this does is move the problem - and the opportunity for attack.

Sure, I completely agree.

The trick is to move the problem as close to the potential attacker  
as possible.

If we make the first active box owned by somebody other than the 
potential attacker the first "firewall", we have pretty much contained 
the problem, including most of the zombies.

The problem lies in how to distribute the "firewall information" within 
the network so that the firewall closest to the attack source both can 
and will filter out all, or at least most, of the unwanted traffic 
while passing all wanted traffic.  That problem, in turn, is not only a 
technical one.  It is technically quite feasible to build a scalable 
pub/sub architecture, even at Internet sizes.  The real problem lies in 
the incentives: how do we motivate the "firewall" next to the potential 
attacker to take on the burden of filtering out all traffic that does 
not have a known willing receiver?  That requires quite a lot of effort 
on the firewall's side, in order to establish the needed state.  It is 
far easier just to pass everything, as long as it doesn't fill the next 
uplink.

So, at least from my point of view, the really hard problem is to 
devise the new "routing" infrastructure protocols in such a way that 
the ISPs benefit from collaboratively knowing which traffic is wanted 
(by someone) and which is not.  Furthermore, such controlling 
capability must be balanced with the desired openness; i.e., we must 
not unnecessarily shift controlling power to the network elements, and 
we must create incentives for them to keep passing all wanted traffic 
without discriminating against some of it.

--Pekka


From touch at ISI.EDU  Wed Aug  9 08:43:34 2006
From: touch at ISI.EDU (Joe Touch)
Date: Wed, 09 Aug 2006 08:43:34 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060808152849.ac27cdc8.moore@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>	<44D8B1DF.1060604@isi.edu>
	<20060808152849.ac27cdc8.moore@cs.utk.edu>
Message-ID: <44DA02A6.3070702@isi.edu>



Keith Moore wrote:
>> Existing well-known port allocations indicate both protocol and version;
>> that means that there are multiple 'default instances' in that case
>> (e.g., NFS).
> 
> Version and instance (in the sense that I was using the word) are
> orthogonal.
...
> NFS doesn't work this way (so it's kind of a strained example), but for
> a more reasonable file sharing protocol it would make sense to have
> different sets of file systems available for mounting and access from
> different ports.  So for example, a default set of file systems to be
> exported to clients might be on default port A, with alternate sets of
> file systems exported on ports B and C.  

That's a really good example of what NOT to do with ports. Picking a
subset of mount points isn't a service; it's a decision within a
service. NFS (correctly) already provides that.

>>> A host can also use HTTP to provide things other than web servers, and a
>>> host can have web servers running other protocols such as FTP.  So we
>>> have service names, host names, services, protocols, and ports - each
>>> subtly different than those next to it.
>> A few questions:
>>
>> - how are service names different from services?
> 
> It's hard to nail down the terminology.  I was actually using service
> names in two different ways.  One can think of www.example.com as a
> service name, a name of a service that provides a set of web pages.

It's a DNS name. It could indicate a place to FTP, a place to NFS mount,
etc. That's ALL it is. There's no semantic value to "www." in the prefix.

It only means a web server when
a) you connect to it on a well-known port (HTTP the service)
b) you issue an HTTP request on that port

You can layer DNS on top of HTTP if you want, but as far as the port is
concerned, you're doing HTTP, period. If you want to do DNS on port 80,
you issue DNS requests (native, i.e.) on that port.

> The other kind of service name I was wanting to talk about was the
> notion of service independent of protocol.  "web server" captures the
> notion of a service that exists for the purpose of providing web pages
> to web browsers. 

It's not the name of a service, but a group of applications run on
different machines. If they interact, it's a distributed service. If
not, it's an aggregate service.

> That's distinct from HTTP, which is a protocol, in two
> ways.  One is that HTTP can be (and is) used for purposes other than
> providing web pages for use by web browsers.

That's completely hidden from the use of the well-known HTTP port to
indicate the HTTP protocol and web service, as per above, so that's not
relevant at this level.

If you want to talk about THAT level, you need to indicate how the
endpoint apps decide to talk, e.g., DNS over HTTP. That is NOT a
transport ID at the layer HTTP is working, and so is irrelevant to the
transport port.

> While the same HTTP
> instance can be used to provide both web pages and other kinds of data,
> to be used by web browsers and/or other kinds of clients, in practice
> it often makes sense to have different instances of HTTP do the
> different jobs.

That argues for a way to demux things inside HTTP based on "I'm doing
DNS over HTTP" indicators. However, (incorrectly), some use a-priori
knowledge of which HTTP server is running which layered service to argue
that they need multiple HTTP servers. They DO NOT.

> The other difference between service name and protocol is that you
> don't necessarily want to tie them together too closely because someday
> you might like to have a different (hopefully better) protocol provide
> the same service. 

That's what version numbers inside protocols are for - demuxing versions
of a protocol. It would be useful if the IETF and IEEE would a) require
them, and b) use them (they do not - e.g., using a different 802 type
for IPv6 was an error, motivated only by short-term desire to make
cheaper ethernet switches).

>> 	transport should indicate how to parse the next layer,
>> 	e.g., to indicate "HTTP". HTTP already provides for ways
>> 	to indicate the next layer, which is similar to what
>> 	others call 'semantics', e.g.: ftp:, http:, etc. 
> 
> That's a very confusing example.  If you send a  URL prefix in an HTTP
> request it's because you're talking to a web proxy,

See RFC 2616, sec. 5.2.1. For now, HTTP/1.1 clients send the relative
form with a Host header:

       GET /pub/WWW/TheProject.html HTTP/1.1
       Host: www.w3.org

This is just to allow backward compatibility with previous HTTP versions
until the absoluteURI form is required. They're semantically
equivalent, though.
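(As a sketch, using a hypothetical helper function that is not taken
from RFC 2616, the equivalence of the two forms can be shown by
normalizing both to the same absolute URL:)

```python
# Hypothetical helper (not from RFC 2616): normalize the two HTTP/1.1
# request forms to one absolute URL, showing they carry the same
# information.
def absolute_url(request_line, host_header=None):
    method, target, version = request_line.split()
    if target.startswith("http://"):          # absoluteURI form
        return target
    return "http://" + host_header + target   # relative form + Host

# Relative form with a Host header (what HTTP/1.1 clients send today):
relative = absolute_url("GET /pub/WWW/TheProject.html HTTP/1.1",
                        "www.w3.org")
# AbsoluteURI form (today sent only to proxies):
absolute = absolute_url(
    "GET http://www.w3.org/pub/WWW/TheProject.html HTTP/1.1")
assert relative == absolute
```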

> and in this case
> (to the web proxy) the URL prefix is really just a way of
> distinguishing one kind of resource from another and indicating what
> protocol is to be used by the proxy to access that resource.  I
> wouldn't use the word "semantics" to describe this because the
> semantics of a resource accessed by ftp are no different than the
> semantics of the same resource accessed by http.

FTP supports modes that HTTP does not, and vice versa. The semantics are
not the same, except in the trivial one-file case.

>> Ports really indicate which instance of a protocol at a host, IMO - but
>> supporting that in TCP requires redefining the 'socket pair' to be a
>> pair of triples: "host, protocol, port" (rather than the current "host,
>> port").
> 
> Why do we need to expose the protocol in TCP?  Why isn't the port
> selector sufficient? 

I discuss this in the ID I noted before (draft-touch-tcp-portnames). The
port selector is used to demux connections and attach them to processes
on a host; the protocol (portname in my version, well-known port in the
current version) indicates the protocol.
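(A toy sketch of that separation, with hypothetical names rather than
the draft's actual wire format: today the destination port carries both
"which process" and "which protocol"; separated, the port only demuxes
while the protocol travels as its own identifier.)

```python
# Toy demultiplexer sketch (hypothetical; not draft-touch-tcp-portnames'
# actual encoding). Today the well-known port implies the protocol
# (80 => HTTP); with a separate portname, any port can carry any
# protocol, and the port only selects the process/connection.

WELL_KNOWN = {80: "HTTP", 53: "DNS", 2049: "NFS"}

def classic_demux(dst_port):
    # Protocol is inferred from the port number itself.
    return WELL_KNOWN.get(dst_port, "unknown"), dst_port

def portname_demux(dst_port, portname):
    # Protocol is named explicitly; the meaning of the name is local
    # to the transport endpoints.
    return portname, dst_port

assert classic_demux(80) == ("HTTP", 80)
# With a separate protocol name, HTTP can run on an arbitrary port:
assert portname_demux(4242, "HTTP") == ("HTTP", 4242)
```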

> The destination host knows which protocol is
> being used, the source host presumably also knows (it has some reason
> for choosing that port, whether it's because it's a well-known port or
> a SRV record or configuration data or whatever), and I wonder whether
> anyone else needs to know.

That's only for well-known ports; you're saying that anything the
endpoints agree to a-priori need not be in the packet. That's true, but
limiting (see the ID for reasons).

> Exposing the protocol in TCP, making it
> explicit rather than just a convention, further encourages
> intermediaries to try to interpret it....

Not more or less than well-known ports. The string "NFS" could mean HTTP
to you and me, NFS to some other pair, or DNS to another. The meaning of
transport information is local to the transport endpoints.

>> However, although there are many who want to consider multiple instances
>> of the same protocol, it's not clear how a source would know which
>> instance to talk to. IMO, instances are basically bound to the
>> destination IP address, and if you want multiple instances, get multiple
>> addresses - because the same resolution that determines host determines
>> instance, IMO.
> 
> I don't immediately understand why the IP address is a better instance
> selector than a port, except perhaps for protocols that use multiple
> ports.

That's why.

> And it seems that for better or worse NAPTs have made this less
> feasible because they mean that A:P1 and A:P2 might not actually reach
> the same host. 

They do reach the same host - the NAPT emulates exactly one host. To the
extent that they do not reach the same host, protocols break (e.g., FTP,
H.323, etc.)

> Also (as we're seeing with IPv6), assigning multiple IP
> addresses to a host can be problematic - which ones does the host use
> to source new traffic? 

The problem there is different. When you assign multiple addresses to a
host, you're making a single host into multiple virtual hosts. When
that's not what you're doing, things break. That's not a surprise either.

>>>> The key question is "what is late bound". IMO, we could really use
>>>> something that decouples protocol identifier from instance (e.g.,
>>>> process demultiplexing) identifier.
>>> We could also use something that decouples service from protocol.  (do
>>> we really want to be stuck with HTTP forever as the only way to get web
>>> pages?  SMTP as the only way to transmit mail?)  How many layers do we
>>> want?
>> We do in HTTP. 
> 
> Strongly disagree.   And I think that's a very shortsighted view.

I've shown above that it's layered and flexible. What's shortsighted
about that?

>> We might be able to use that in other protocols, but
>> that's a decision for those protocols, not TCP, IMO.
> 
> Well, sure, the decision about whether to upgrade or replace one
> protocol must be made independently of the decisions for other
> protocols.  And I don't see offhand why TCP needs to change to
> facilitate that, except maybe to expand the port space beyond 16 bits.

You're still overloading protocol with process demuxing; IMO, it is that
which was shortsighted and we have an opportunity to correct.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060809/dbeb6344/signature.bin

From touch at ISI.EDU  Wed Aug  9 08:50:28 2006
From: touch at ISI.EDU (Joe Touch)
Date: Wed, 09 Aug 2006 08:50:28 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
Message-ID: <44DA0444.3000103@isi.edu>



Pekka Nikander wrote:
...
>> Now show us a place to publish that is NOT open to all incoming pub/sub
>> messages. ;-)
>>
>> Again, all this does is move the problem - and the opportunity for
>> attack.
> 
> Sure, I completely agree.
> 
> The trick is to move the problem as close to the potential attacker as
> possible.

There are two ways to reduce the impact of those attacks:

1) move the weakness close to the attacker, so it affects only the attacker

2) diversify the weakness and make it a strength (SOS, our own "Agile
Tunnel Protocol" system and DynaBone, etc.)

...
> The problem lies in how to distribute the "firewall information" within
> the network so that the firewall closest to the attack source can and
> will both intelligently enough filter out all or at least most of the
> unwanted traffic and pass all wanted traffic. 

That assumes trusted relationships with basically everyone EXCEPT those
who are attacking you. I don't think that's a defensible position
(either in rhetoric or in operation in the network).

> So, at least from my point of view, the really hard problem is to device
> the new "routing" infrastructure protocols in such a way that the ISPs
> benefit from collaboratively knowing which traffic is wanted (by
> someone) and which is not.

I don't think this CAN be solved by secure or protected routing. Near as
I can tell, protected routing presumes highly constrained topologies
which aren't feasible in practice. As someone recently told me, there
are too many cases where "doing the right thing" is indistinguishable
from a "routing protocol attack".

An alternate position to locking everything down (#1 above) is to
diversify routing enough that _something_ gets through (#2 above) - a
position which seems obvious, and came up in the same discussion noted
above. That's 'best effort', what the Internet was predicated on, and
IMO is a better position.

Joe


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060809/21df5627/signature.bin

From pekka.nikander at nomadiclab.com  Wed Aug  9 09:00:35 2006
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Wed, 9 Aug 2006 19:00:35 +0300
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DA0444.3000103@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
Message-ID: <D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>

>> The problem lies in how to distribute the "firewall information" within
>> the network so that the firewall closest to the attack source can and
>> will both intelligently enough filter out all or at least most of the
>> unwanted traffic and pass all wanted traffic.
>
> That assumes trusted relationships with basically everyone EXCEPT those
> who are attacking you. I don't think that's a defensible position
> (either in rhetoric or in operation in the network).

No, it doesn't.  It just assumes a design where the social balance 
lies on the side of honest players, i.e., where playing honest is 
still a strategy with a higher pay-off than any dishonest strategy.  
It requires mechanisms that contain Byzantine attacks and make Sybil 
attacks infeasible.  For some background, see e.g. Axelrod's "The 
Evolution of Co-operation".  But you probably know all that.

That's why I say that this is more a micro-economic problem than a 
network-technology one.

>> So, at least from my point of view, the really hard problem is to
>> devise the new "routing" infrastructure protocols in such a way that
>> the ISPs benefit from collaboratively knowing which traffic is wanted
>> (by someone) and which is not.
>
> I don't think this CAN be solved by secure or protected routing. Near as
> I can tell, protected routing presumes highly constrained topologies
> which aren't feasible in practice. As someone recently told me, there
> are too many cases where "doing the right thing" is indistinguishable
> from a "routing protocol attack".

As long as we try to remain within the current send-receive paradigm,  
I'm afraid you are right.  However, if we consider other fundamental  
paradigms, I wouldn't be that sure.

> An alternate position to locking everything down (#1 above) is to
> diversify routing enough that _something_ gets through (#2 above) - a
> position which seems obvious, and came up in the same discussion noted
> above. That's 'best effort', what the Internet was predicated on, and
> IMO is a better position.

Maybe.  Maybe my interest in applying collaborative technologies to 
low-level networking infrastructure, in a quest to understand 
communications based on fundamental paradigms other than send-receive, 
is futile.  But my intuition says otherwise.

--Pekka


From bmanning at vacation.karoshi.com  Wed Aug  9 10:03:01 2006
From: bmanning at vacation.karoshi.com (bmanning@vacation.karoshi.com)
Date: Wed, 9 Aug 2006 17:03:01 +0000
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DA0444.3000103@isi.edu>
References: <44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<44D8CFFC.3060004@reed.com> <44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
Message-ID: <20060809170301.GA3968@vacation.karoshi.com.>


> Joe Touch wrote:
> > Pekka Nikander wrote:
> > The trick is to move the problem as close to the potential attacker as
> > possible.
> 
> 2) diversify the weakness and make it a strength (SOS, our own "Agile
> Tunnel Protocol" system and DynaBone, etc.)
> 
> ...
> > The problem lies in how to distribute the "firewall information" within
> > the network so that the firewall closest to the attack source can and
> > will both intelligently enough filter out all or at least most of the
> > unwanted traffic and pass all wanted traffic. 
> 
> That assumes trusted relationships with basically everyone EXCEPT those
> who are attacking you. I don't think that's a defensible position
> (either in rhetoric or in operation in the network).
> 
	what was the best strategy for winning "Life" - initially
	trust everyone and, once "burned", never trust them again?

--bill


From touch at ISI.EDU  Wed Aug  9 10:09:04 2006
From: touch at ISI.EDU (Joe Touch)
Date: Wed, 09 Aug 2006 10:09:04 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <20060809170301.GA3968@vacation.karoshi.com.>
References: <44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<44D8CFFC.3060004@reed.com> <44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<20060809170301.GA3968@vacation.karoshi.com.>
Message-ID: <44DA16B0.60407@isi.edu>



bmanning at vacation.karoshi.com wrote:
>> Joe Touch wrote:
>>> Pekka Nikander wrote:
>>> The trick is to move the problem as close to the potential attacker as
>>> possible.
>> 2) diversify the weakness and make it a strength (SOS, our own "Agile
>> Tunnel Protocol" system and DynaBone, etc.)
>>
>> ...
>>> The problem lies in how to distribute the "firewall information" within
>>> the network so that the firewall closest to the attack source can and
>>> will both intelligently enough filter out all or at least most of the
>>> unwanted traffic and pass all wanted traffic. 
>> That assumes trusted relationships with basically everyone EXCEPT those
>> who are attacking you. I don't think that's a defensible position
>> (either in rhetoric or in operation in the network).
>>
> 	what was the best strategy for winning "Life" - initially
> 	trust everyone and, once "burned", never trust them again?

That's the paradox. If you never open up again, you're not running an
internetwork; you're running a closed system. But opening up again makes
attacks possible.

The only balance is to accept the fact that (as I stated in my PFIR
'bill') communication is an agreement among consenting parties, and that
- in the Internet - 'consenting' is determined by a packet exchange.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060809/a56453df/signature.bin

From tvest at pch.net  Wed Aug  9 10:23:35 2006
From: tvest at pch.net (Tom Vest)
Date: Wed, 9 Aug 2006 13:23:35 -0400
Subject: [e2e] About the primitives and their value
In-Reply-To: <20060809170301.GA3968@vacation.karoshi.com.>
References: <44D7614E.6050202@reed.com> <44D8B5D9.2070802@isi.edu>
	<44D8C055.9060903@reed.com> <44D8C958.9020809@isi.edu>
	<44D8CFFC.3060004@reed.com> <44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<20060809170301.GA3968@vacation.karoshi.com.>
Message-ID: <93CC1C57-54EA-4623-8D2E-AE3FD1C48A7F@pch.net>

trust once, then tit-for-tat.
Finite games present defection problems if the players know which is  
the last turn.

TV

On Aug 9, 2006, at 1:03 PM, bmanning at vacation.karoshi.com wrote:

>
>> Joe Touch wrote:
>>> Pekka Nikander wrote:
>>> The trick is to move the problem as close to the potential  
>>> attacker as
>>> possible.
>>
>> 2) diversify the weakness and make it a strength (SOS, our own "Agile
>> Tunnel Protocol" system and DynaBone, etc.)
>>
>> ...
>>> The problem lies in how to distribute the "firewall information"  
>>> within
>>> the network so that the firewall closest to the attack source can  
>>> and
>>> will both intelligently enough filter out all or at least most of  
>>> the
>>> unwanted traffic and pass all wanted traffic.
>>
>> That assumes trusted relationships with basically everyone EXCEPT  
>> those
>> who are attacking you. I don't think that's a defensible position
>> (either in rhetoric or in operation in the network).
>>
> 	what was the best strategy for winning "Life" - initially
> 	trust everyone, once "burned", never trust them again?
>
> --bill
>


From touch at ISI.EDU  Wed Aug  9 11:34:13 2006
From: touch at ISI.EDU (Joe Touch)
Date: Wed, 09 Aug 2006 11:34:13 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44DA1C59.1060905@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>	<44D8B1DF.1060604@isi.edu>
	<20060808152849.ac27cdc8.moore@cs.utk.edu>
	<44DA02A6.3070702@isi.edu> <44DA1C59.1060905@cs.utk.edu>
Message-ID: <44DA2AA5.20305@isi.edu>



Keith Moore wrote:
>>> NFS doesn't work this way (so it's kind of a strained example), but
>>>  for a more reasonable file sharing protocol it would make sense to
>>>  have different sets of file systems available for mounting and
>>> access from different ports.  So for example, a default set of file
>>>  systems to be exported to clients might be on default port A, with
>>>  alternate sets of file systems exported on ports B and C.
>>
>> That's a really good example of what NOT to do with ports. Picking a
>>  subset of mount points isn't a service; it's a decision within a
>> service. NFS (correctly) already provides that.
> 
> It's been awhile since I read the specifications but I do not recall
> any facility within NFS that defines sets of exported file systems
> provided by a host - in all of the NFS systems of which I'm aware file
> systems have uniquely named mount points and there is no way for a
> server to export multiple sets of file systems with overlapping mount
> point names between the sets. 

Right. NFS provides mount points. You want sets of mount points. Call
the NFS protocol designers and add that. That's not a transport demux or
protocol choice issue; that's internal to NFS. If it's not there, then
add it there.

> And I disagree with your characterization.  The nice thing about
> grouping several mount points together under one port is that it's very
> easy to change the set of mount points by changing the port.

That's called a "hack". It works, but let's not design our protocols
around hacks.

...
> Perhaps it would be better if every protocol were explicitly designed to
> support multiple named service instances on a single port, but that
> would be difficult given that different service instances sometimes
> require different processes and different server codes.

Difficult? IMO, it's appropriate and necessary.

> But the wider point is that ports don't really identify protocols -
> fundamentally they're just demultiplexing tokens. 

Not today. Today they're both; we can decouple the two in many ways
which I tried to explain in the ID.

> They can be used for
> multiple purposes, and this is a good thing.  The convention for mapping
> between ports and protocols/services is just that - a convention.

Conventions that are required on both ends of a connection are called
protocols ;-)

...
> Agreed that it's a DNS name and agreed that the prefix "www" doesn't
> imply anything semantically (though it's useful for illustrative
> purposes in an example).
> 
> But there is still a subtle distinction between DNS name and service
> name (as I was using the term here), as there are DNS labels and RRsets
> that don't correspond to any AAAA or A record.

Sure, agreed.

>> It only means a web server when a) you connect to it on a well-known
>> port (HTTP the service) b) you issue an HTTP request on that port
> 
> It's not necessarily a web server even then, because HTTP doesn't really
> imply the web.

Right - to most people, HTML does. HTTP is one way to transport HTML,
but we could use FTP for many things (except forms, e.g.).

> Nor does port 80.  There are uses of HTTP (as in IP over
> HTTP) that don't resemble anything most people would recognize as the
> web. IMHO, something becomes part of the web when it is linked to from
> the web.  (What's the root of the web?  Mu. :)

The web doesn't need a root, any more than the Internet has one (root IP
address?). The web is any place that reaches the rest of the web - it's
an application layer form of the Internet in that sense.

>> You can layer DNS on top of HTTP if you want, but as far as the port
>> is concerned, you're doing HTTP, period. If you want to do DNS on port
>> 80, you issue DNS requests (native, i.e.) on that port.
> 
> Right, but then you'd be running a different service than providing web
> pages even if you were running HTTP on port 80. 

If you want to define 'service' as HTML, then fine. But note that port
80 means HTTP - it does NOT mean HTML. You're still running HTTP when
you run DNS over HTTP. Never mind that the output wouldn't be what most
people want to see on their screens.

I think I now understand what you want - you want "application" in the
ISO sense, whereas HTTP is "application" in the Internet stack sense
(the thing that lives over TCP).

If you want an 'application location service', fine. That's not what TCP
demuxes on, nor is it what ports indicate. Ports indicate protocols on
top of TCP.

>>> That's distinct from HTTP, which is a protocol, in two ways.  One is
>>> that HTTP can be (and is) used for purposes other than providing
>>>  web pages for use by web browsers.
>>
>> That's completely hidden from the use of the well-known HTTP port to
>>  indicate the HTTP protocol and web service, as per above, so that's
>> not relevant at this level.
> 
> But port 80 does not (reliably) either indicate the HTTP protocol or the
> web service.

In 'well known ports' it does. I've already discussed that the real
meaning is an agreement that's private to the endpoints - either agreed
a-priori (well known) or indicated explicitly.

...
> Port number (host demultiplexing token), protocol, and service provided
> are largely orthogonal.  It's just that in practice, and by convention,
> we tend to associate them.

Where 'service' is ISO application, yes, they're different. It's not
practice or convention - it's part of the way well-known ports in TCP
are defined, and that's part of the protocol (pick what layer you want
to define that at - port within TCP, or service over a port -- both are
fixed by well-knowns).

>>> While the same HTTP instance can be used to provide both web pages
>>> and other kinds of data, to be used by web browsers and/or other
>>> kinds of clients, in practice it often makes sense to have different
>>> instances of HTTP do the different jobs.
>>
>> That argues for a way to demux things inside HTTP based on "I'm doing
>>  DNS over HTTP" indicators.
> 
> No it doesn't, because that would imply (for instance) that all apps
> need to define their own demuxing protocols to support different service
> instances.

YES!

> It would also imply that all application codes need to
> interface through a general purpose application-specific demuxer (rather
> than just listening to a specific port) so that multiple instances of
> the same application (using different codes) could each share a single
> port.

YES!

I see nothing wrong with either; in fact, I see both as being necessary
and appropriate, even if not currently used.

>> However, (incorrectly), some use a-priori knowledge of which HTTP
>> server is running which layered service to argue that they need
>> multiple HTTP servers. They DO NOT.
> 
> As a practical matter, that is simply incorrect.  Try getting two
> different HTTP protocol engines (either from different vendors and/or
> built to serve different purposes) to run on the same port on the same
> host.

That's an implementation problem, partly due to the lack of demuxing
info in HTTP (although some servers do support this 'internally',
calling it virtual-hosting), and partly due to inter-vendor competition.

>>> The other difference between service name and protocol is that you
>>>  don't necessarily want to tie them together too closely because
>>> someday you might like to have a different (hopefully better)
>>> protocol provide the same service.
>>
>> That's what version numbers inside protocols are for - demuxing
>> versions of a protocol.
> 
> Disagree.  An in-band protocol version number isn't very useful unless
> the two protocols are similar enough that you can feasibly negotiate
> version from the same protocol engines. 

If you put the version number in the front of the stream/packet (like
you're supposed to), that's sufficiently similar, and you can redefine
the rest.

> For minor protocol changes this
> is fine, but for major protocol changes (or to migrate to a very
> different kind of protocol) you need a way of distinguishing between one
> protocol/version and another before you've chosen a particular protocol
> engine. 

You need to parse a packet to do this. Parse NFS and figure out what
version it is *then*.

> Ports are a good way of doing this on an occasional basis.

Ports are a hack for doing this at TCP because NFS didn't do it inside NFS.

>> It would be useful if the IETF and IEEE would a) require them, and b)
>>  use them
> 
> In my experience, protocol version numbers are very rarely useful.
> Either the changes you want to make to a protocol are incremental (in
> which case other kinds of feature negotiation, such as capability lists,
> or tagged options, tend to work better) or the changes you want to make
> to a protocol are so significant that you really want to feed the new
> protocol to a completely different protocol engine.

Nobody said you need a monolithic implementation; if you want, make a
parser (at the NFS level, e.g.,) that hands off streams to NFSv3 or
NFSv4 as needed.
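
A minimal sketch of that kind of front-end triage (all names here are
illustrative, not from any real NFS codebase): read a version field at
the front of the stream and hand the rest off to a version-specific
engine.

```python
# Illustrative sketch: a parser that triages an incoming stream by an
# in-band version number and dispatches to a version-specific engine.
# The engine functions are hypothetical stand-ins.

def handle_v3(payload: bytes) -> str:
    # stand-in for a v3-specific protocol engine
    return "v3:" + payload.decode()

def handle_v4(payload: bytes) -> str:
    # stand-in for a v4-specific protocol engine
    return "v4:" + payload.decode()

ENGINES = {3: handle_v3, 4: handle_v4}

def dispatch(stream: bytes) -> str:
    # the version number sits at the front of the stream, as argued above
    if not stream:
        raise ValueError("empty stream")
    engine = ENGINES.get(stream[0])
    if engine is None:
        raise ValueError("unknown version %d" % stream[0])
    return engine(stream[1:])
```

The engines need not share any code; only the leading version field must
stay in a fixed place.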

>> (they do not - e.g., using a different 802 type for IPv6 was an
>> error, motivated only by short-term desire to make cheaper ethernet
>> switches).
> 
> Seems fairly harmless, but maybe I'm not aware of the downsides.
> 
> IPv4 and IPv6 are of course distinguishable, but it's unclear how the
> numerous existing implementations of IPv4 would have treated a version
> field of 6 (most would probably discard it, some would probably ignore
> it, a few would probably barf because that code was buggy and the path
> had never been tested).

Unclear? What are we here for? They're supposed to react quietly, by
dumping the segments. Period. See RFC 1122 Sec 3.2.1.1 and RFC 1812
Sec 5.2.2.
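
A minimal sketch of the mandated behavior (hypothetical code, not from
any real stack): an IPv4-only receiver checks the 4-bit version field in
the first octet and silently drops anything else, with no error back to
the sender.

```python
# Sketch of the RFC 1122 Sec 3.2.1.1 / RFC 1812 Sec 5.2.2 behavior
# described above: check the IP version field (high nibble of the first
# octet) and silently discard non-matching datagrams. Illustrative only.

def accept_ipv4(packet: bytes):
    if len(packet) < 20:          # shorter than a minimal IPv4 header
        return None
    version = packet[0] >> 4      # 4-bit version field
    if version != 4:
        return None               # silent discard: no error to the sender
    return packet
```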

...
> But from another angle: should the name of a protocol be embedded into,
> and tied to, a reference to a resource?  That would imply that any
> resource named by an http: URL is inherently tied to HTTP and can only
> be accessed via the HTTP protocol.  In a future day where there is a
> significantly better alternative, this would be a pessimal choice.  As
> it is we feel very constrained to try to improve HTTP only in ways that
> can easily be done over TCP on port 80 by the same protocol engines that
> parse HTTP 1.0 and 1.1, which is IMHO probably unfortunate.

So you want to push versioning, instances, and everything else -
including data subsets, etc. - to the port level.

Let's move hosts there too. We can all be IP address 10.0.0.1, and
differentiate who's who, etc. - everything - on port numbers.

--

Either that, or layering is useful (IMO, yes), and we need to decouple
things that are not REQUIRED to be coupled.

>>>> Ports really indicate which instance of a protocol at a host, IMO
>>>>  - but supporting that in TCP requires redefining the 'socket pair'
>>>> to be a pair of triples: "host, protocol, port" (rather than the
>>>> current "host, port").
>>> Why do we need to expose the protocol in TCP?  Why isn't the port
>>> selector sufficient?
>>
>> I discuss this in the ID I noted before (draft-touch-tcp-portnames).
>> The port selector is used to demux connections and attach them to
>> processes on a host; the protocol (portname in my version, well-known
>>  port in the current version) indicates the protocol.
> 
> "The protocol...indicates the protocol" 

Sorry - the protocol (TCP) indicates (via portname) the protocol (at the
next layer).

> doesn't answer the question of
> why it is needed to expose the protocol to the network, and the
> reasoning in the document seems muddy.

You don't. You need to expose it to the endpoint.

> I recognize that there is a danger of port space exhaustion, but there
> are lots of ways to solve this problem that don't require exposing the
> name of the protocol to the network.

Example, please. (I discuss alternatives in the ID; is there some aspect
that is not covered, or an example missed?)

> I also recognize that there is a demand (perhaps a naive one) by network
> operators to be able to identify and filter traffic based on various
> criteria including probably both protocol (as when specific protocol
> engines are known to be broken) and service (as when the network
> operator wants to prohibit certain kinds of traffic on its network). But
>  it's not immediately clear that explicit labels are actually useful, or
> what it would take to make them useful.  (some of the arguments against
>  definition of the "evil bit" might apply here also).  Even if explicit
> labels were useful for filtering, should they be based on protocol (e.g.
> "http") or service ("http for web pages" vs. "http for IP tunneling")?

Again, the label is truly meaningful only to the other end. It's not
there for filtering. It WILL be used for that, just like well known
ports are now. And just as (in)effectively - anyone with a brain will
circumvent that filtering by changing the meaning of the strings on the
subset of hosts they configure out-of-band.

>>> The destination host knows which protocol is being used, the source
>>>  host presumably also knows (it has some reason for choosing that
>>> port, whether it's because it's a well-known port or a SRV record or
>>> configuration data or whatever), and I wonder whether anyone else
>>> needs to know.
>>
>> That's only for well-known ports;
> 
> no, it's true regardless of how the initiator chose which destination
> port to use.

So I use the string "APPLESAUCE" - what protocol does that mean, please?

I can either scramble meaning (use HTTP for DNS) or just use nonsense
strings (as above). Only well-known ports assume global a-priori
agreement of meaning.

>>> Exposing the protocol in TCP, making it explicit rather than just a
>>>  convention, further encourages intermediaries to try to interpret
>>> it....
>>
>> Not more or less than well-known ports. The string "NFS" could mean
>> HTTP to you and me, NFS to some other pair, or DNS to another. The
>> meaning of transport information is local to the transport endpoints.
>>
> Then why make it a string and increase the chance for misinterpretation?

I like strings because they're nearly as compact, don't need a two-level
IANA registration, and are more inherently extensible (does 0123 mean
the same as 123? - not as strings). Otherwise, no difference.
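
The leading-zero point can be shown in two lines:

```python
# Numeric identifiers collapse leading zeros; string identifiers do not,
# so a string space is strictly larger and extends without a registry.
assert int("0123") == int("123")   # as numbers: the same identifier
assert "0123" != "123"             # as strings: distinct identifiers
```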

The portnames ID argues primarily for another ID for the protocol
separate from the port. The use of strings is my personal preference,
but not required.

>>>> However, although there are many who want to consider multiple
>>>> instances of the same protocol, it's not clear how a source would
>>>>  know which instance to talk to. IMO, instances are basically bound
>>>> to the destination IP address, and if you want multiple instances,
>>>> get multiple addresses - because the same resolution that determines
>>>> host determines instance, IMO.
>>> I don't immediately understand why the IP address is a better
>>> instance selector than a port, except perhaps for protocols that use
>>> multiple ports.
>>
>> That's why.
> 
> well, the notion of 'host' is a lot fuzzier than it used to be, so the
> assumption that A:P1 and A:P2 are really on the same host (in the sense,
> say, of being able to access the same private data) is a lot more
> dubious than it was in the mid-1970s.

It's not dubious; it's required by the Internet architecture as
specified by RFCs 1122, 1812, etc. NAPT users do cartwheels to restore
that correlation.

> in addition to NAPTs, we have
> large distributed memory clusters, we have big-IP boxes.  these days I'd
> regard use of multiple well-known (well-known or preassigned) ports in a
> single protocol as bad protocol design.  I'd also probably regard any
> assumption that A:P1 and A:P2 were inherently on the same host as bad
> protocol design.  That doesn't mean that a protocol can't use
> multiple ports, but that (e.g.) it should have at most one well-known
> port with the other ports (and perhaps IP addresses) negotiated in-band.

Being able to reach the same data at different hosts isn't the same as
being the same host. Forms are an example of that. If you want to have a
big-IP box, then front-end it with a single IP address, or you'll need
to do cartwheels to emulate that (e.g., send all messages from a single
source IP address to the same backend server, share state via a locking
database, etc.)

>>> And it seems that for better or worse NAPTs have made this less
>>> feasible because they mean that A:P1 and A:P2 might not actually
>>> reach the same host.
>>
>> They do reach the same host - the NAPT emulates exactly one host.
> 
> To the extent that it does so (and I think that's a stretch), it does so
> very poorly.  They're not the same host in any reasonable sense that an
> application can reliably make use of, only in an abstract sense that is
> meaningless from a practical perspective.

They are a single host in every sense that the Internet expects (as per
below).

>> To the extent that they do not reach the same host, protocols break
>> (e.g., FTP, H.323, etc.)
> 
> Well, we all know that NATs break things.  But NATs aren't the only
> reason that it's dubious to assume that A:P1 and A:P2 are on the same host.

Right. But anything that breaks the assumption that A:P1 and A:P2 are
the same host is broken by Internet standards (literally) - including
port-specific
policy routing, etc. There are reasons to use many of these things, but
as far as the Internet architecture is concerned, they've busted it.

>>> Also (as we're seeing with IPv6), assigning multiple IP addresses to
>>> a host can be problematic - which ones does the host use to source
>>> new traffic?
>>
>> The problem there is different. When you assign multiple addresses to
>>  a host, you're making a single host into multiple virtual hosts.
>> When that's not what you're doing, things break. That's not a
>> surprise either.
> 
> Tell that to the IPv6 architects.

I have ;-)

Actually, most of what they do makes the host into a router+host
combination. There's still the issue you raise - which address "is" the
host, and when you don't pick a single one, you break things. Again,
unsurprising.

What IPv6 wants is a router+host. What it gave us is multihoming without
routing, which doesn't work.

>>>>>> The key question is "what is late bound". IMO, we could really use
>>>>>> something that decouples protocol identifier from instance (e.g.,
>>>>>> process demultiplexing) identifier.
>>>>> We could also use something that decouples service from protocol. 
>>>>> (do we really want to be stuck with HTTP forever as the only way to
>>>>> get web pages?  SMTP as the only way to transmit mail?)  How many
>>>>> layers do we want?
>>>> We do in HTTP.
>>> Strongly disagree.   And I think that's a very shortsighted view.
>>
>> I've shown above that it's layered and flexible. What's shortsighted
>>  about that?
> 
> Maybe I've lost track of what you were arguing here, but you seemed to
> be saying that we should be stuck with HTTP forever, and constrained
> from now to the end of time to change HTTP only in a way that is
> compatible with existing protocol engines. 

Nope - all I'm saying is that if you want to change HTTP, use its
existing versioning. Show me something you cannot do that way (e.g.,
consider protocols that currently lack version numbers that you want to
add), and then maybe it's really a different protocol and deserves a
different 'portname'. Otherwise, it's a single protocol and demuxing
goes inside it (logically).

And please, let's not discuss poorly designed implementations further.
Protocol engines are an implementation issue; there are good ones and
bad. A good one first triages messages based on version. A good one
allows back-end version-specific engines to be hooked together.

>> You're still overloading protocol with process demuxing; IMO, it is
>> that which was shortsighted and we have an opportunity to correct.
> 
> No, I'm reiterating that the demuxing token we call a port number is not
> inherently tied to either protocol or service.

And I am reiterating that they are.

> And before we try to
> explicitly expose either protocol or service to the network, we need to
> be much clearer about what this actually means (protocol name is not at
> all adequate), how this affects operations such as protocol agility, how
> reliable this field can actually be in practice, its potential for
> misuse, its likely effect on network transparency and the resulting
> ability of the network to support new applications, and several other
> considerations.

Huh? So we can't define the next layer up until we define all the
layers? I don't agree.

The whole point of layering is that demuxing is local. TCP has two kinds
of demuxing: instance of a connection, and protocol above it; thus the
two IDs (port and portname, as I suggest). The rest (service, etc.) is
inside the data stream at that point.

> IMHO we'd also need to be clear (for the sake of backward compatibility)
> that the protocol name is not a demultiplexing token that can be used in
> addition to ports, that it doesn't change the way that ports work.

It overrides the a-priori well-known ports list deployed on hosts. It
changes behavior only at connection establishment; thereafter, both
existing TCP and TCP-portnames demux on just ports.
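
A rough sketch of that split (illustrative names only; the draft itself
specifies the actual mechanism): the portname is consulted once, at
connection setup, and every later segment demuxes on addresses and ports
alone, as today.

```python
# Illustrative sketch of the portname idea: the name selects the
# next-layer protocol only at connection establishment; established
# connections are demuxed purely on the address/port 4-tuple.

connections = {}  # (src, sport, dst, dport) -> next-layer protocol

def establish(src, sport, dst, dport, portname):
    # the portname is examined here, and only here
    connections[(src, sport, dst, dport)] = portname

def demux(src, sport, dst, dport):
    # later segments never look at the portname again
    return connections.get((src, sport, dst, dport))
```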

The problem is that connection establishment is more than just demuxing;
it's attaching to the next layer up.

> If
> we need more ports we need to define a straightforward way of extending
> port space, which is orthogonal to providing a way of exposing protocol
> and/or service names to the network.

That's what portnames are - orthogonal. Service names belong to the next
layer up, inside HTTP, NFS, DNS, etc.

Joe


From day at std.com  Wed Aug  9 13:21:19 2006
From: day at std.com (John Day)
Date: Wed, 9 Aug 2006 16:21:19 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44D8AEA2.8040306@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu> <44D8AEA2.8040306@cs.utk.edu>
Message-ID: <a06230912c0ffed91dcc2@[10.197.98.218]>

At 11:32 -0400 2006/08/08, Keith Moore wrote:
>>>Port-ids or sockets identify an instance of communication,
>>>not a  particular service.
>>
>>They currently do both for the registered numbers, at least as a
>>convention, although individual host-pairs override that protocol
>>meaning by a-priori (usually out-of-band) agreement.
>
>I think of port numbers identifying a distinguished service or 
>default instance of a service.

No, you are carrying the semantics of well-known ports to ports in 
general.  You don't want to make this association.  Look at the use 
of ports in other protocols and in operating systems.

>e.g. A host can have web servers (i.e. something that serves web 
>pages for a web browser), using the HTTP protocol, on any number of 
>ports. The web server running HTTP on port 80 is the default 
>instance, the one that people get if they don't specify anything 
>more than the name of the service (which is a DNS name but not 
>necessarily the name of a single host).
>
>A host can also use HTTP to provide things other than web servers, 
>and a host can have web servers running other protocols such as FTP. 
>So we have service names, host names, services, protocols, and ports 
>- each subtly different than those next to it.

First of all, be careful.  Again, the infatuation with names of hosts
is an artifact of the current tradition.  They are not necessary for
naming and addressing for communications.  TCP can be used for any 
application which may or may not use one of the standard protocols.

>>>Again, the well-known socket approach only
>>>works as long as there is only one instance of the protocol in the layer
>>>above and certainly only one instance of a "service." (We were lucky in
>>>the early work that this was true.)
>
>[realizing that here I'm replying to the previous message...]
>Well-known ports work well for selecting the default instance of a 
>service.  Often that is what we want to do, and we find it 
>convenient to have a distinguished instance of a service that serves 
>as a default. Well-known ports don't prevent us from offering 
>alternate instances of the same service on other ports.

They are a kludge that you have grown used to because you have never
known anything else.  That is the nice thing about software: we can
make almost anything work.  But it does require that we reserve
well-known ports on all systems whether they need them or not.  We have
been lucky that for 20 years we only had 3 applications.

>(Though I'll admit that it has become easier to do so since the URL 
>name:port convention became popular.  Before that, the port number 
>was often wired into applications, and it still is often wired into 
>some applications, such as SMTP.  But a lot of the need to be able 
>to use alternate port numbers has resulted from the introduction of 
>NAPTs. Before NAPTs we could get away with assigning multiple IP 
>addresses (and multiple DNS names) to a host if we needed to run 
>multiple instances of a service on that host.  And we still find it 
>convenient to do that for hosts not behind NAPTs, and for hosts 
>behind firewalls that restrict which ports can be used.)

URLs try to do what needs to be done.  But we really only have them for
HTTP.  They are not easy to use in general.

>>>Actually, if you do it right, no one standard value is necessary at
>>>all.  You do have to know the name of the application you want to
>>>communicate with, but you needed to know that anyway.
>
>To me this looks like overloading because you might want more than 
>one instance of the same application on the host.  I'd say that you 
>need to know the name of the instance of the service you want to 
>talk to.  Now in an alternate universe that name might be something 
>like "foo.example.com:web:http:1" - encompassing server name 
>(foo.example.com), the name of the service (web) protocol name 
>(http), and an identifier (1) to distinguish one instance of the 
>same service/protocol from another. But we might not actually need 
>that much complexity, and it would expose it to traffic analysis 
>which is good or bad depending on your point-of-view.

Indeed. The name would have to allow for both type and instance.  As 
well as applications with multiple application protocols and multiple 
instances of them.  But this was worked out years ago.
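
The hypothetical name quoted above ("foo.example.com:web:http:1")
decomposes mechanically into exactly those parts; a sketch, with the
naming scheme itself taken from the quoted message rather than any
deployed system:

```python
from typing import NamedTuple

class AppName(NamedTuple):
    host: str       # server name, e.g. foo.example.com
    service: str    # service type, e.g. web
    protocol: str   # protocol, e.g. http
    instance: str   # distinguishes instances of the same service/protocol

def parse_app_name(name: str) -> AppName:
    # split the hypothetical host:service:protocol:instance form
    parts = name.split(":")
    if len(parts) != 4:
        raise ValueError("expected host:service:protocol:instance")
    return AppName(*parts)
```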

>Right now we need to know the name of the peer we want to 
>communicate with.  That name is composed of an IP address and a port 
>number.  This is unambiguous.  Having a default port for a 
>particular service doesn't prevent alternate instances of a service 
>from being established at the same IP address, as most modern 
>applications allow other port numbers to be specified.  Each 
>application can perform that binding as late as it wants to.  The 
>only real problem (other than legacy apps that don't support late 
>bindings) is caused by NAPTs that want to make port numbers 
>realm-specific, and we already know that NATs were a bad idea.

NATs only cause trouble because we have half an architecture.  People 
have gotten used to the pain and think that it is perfectly normal to 
run a net this way.  It is a lot like old DOS users, wondering why 
UNIX guys thought it was so much better.

As for making it work, as I said, the nice thing about software is 
you can usually find a way to make almost anything work.

>
>>The key question is "what is late bound". IMO, we could really use
>>something that decouples protocol identifier from instance (e.g.,
>>process demultiplexing) identifier.
>
>We could also use something that decouples service from protocol. 
>(do we really want to be stuck with HTTP forever as the only way to 
>get web pages?  SMTP as the only way to transmit mail?)  How many 
>layers do we want?

I don't think it is a question of want, but of need.  25 years
ago we figured out how much naming and addressing we need, but we
chose to ignore the answer.

>Right now we have a convention, not a rule, that certain port 
>numbers imply certain protocols.  I actually think it's about the 
>right level of slipperiness.  Hosts can do what they want with their 
>ports.  Networks can look at the port convention in an attempt to 
>measure traffic levels, and they can block ports based on knowledge 
>or belief about what applications are going to use them, but if they 
>try to insist that particular apps (and only those apps) use 
>particular ports (and no other ports) they're going to break things 
>and there will be backlash.  In my mind this encourages networks to 
>be more transparent (thus creating an environment more favorable to 
>new apps) than they would be if we had a protocol identifier exposed 
>in the packet.
>
>In summary: Port numbers are sufficient for the endpoints, and well 
>known ports are a useful convention for a default or distinguished 
>instance of a service as long as we don't expect them to be rigidly 
>adhered to.  The question is: how much information about what 
>services/protocols are being used should be exposed to the network? 
>And if we had such a convention for exposing services/protocols to 
>the network are we in a position to demand that hosts rigidly 
>enforce that convention?

Why would you want to?  More to the point, the question is what
applications are not being built because they are either impossible or
far too complex to do in the current environment.  And BTW, these won't
be brought up at IETF meetings; they will be squelched long before they
get there.  You can put a more complete system in place and let the old
one co-exist.

>>This argues for three fields: demux ID (still needed), protocol, and
>>service name.
>>
>>At that point, we could allow people to use HTTP for DNS exchanges if
>>they _really_ wanted, rather than the DNS protocol. I'm not sure that's
>>the point of the exercise, but modularity is a good idea.
>
>Is giving people more ways to do the same thing inherently a good 
>thing?  Seems to me that at some point it degrades interoperability 
>without adding much in the way of new functionality.

I think it is a good thing to give people ways to do the things they 
can't do.  Or ways that are much simpler, so they can concentrate on 
doing their thing rather than our thing.

Take care,
John

From touch at ISI.EDU  Wed Aug  9 15:31:46 2006
From: touch at ISI.EDU (Joe Touch)
Date: Wed, 09 Aug 2006 15:31:46 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060809171944.662ad1d6.moore@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>	<44D8B1DF.1060604@isi.edu>	<20060808152849.ac27cdc8.moore@cs.utk.edu>	<44DA02A6.3070702@isi.edu>	<44DA1C59.1060905@cs.utk.edu>	<44DA2AA5.20305@isi.edu>
	<20060809171944.662ad1d6.moore@cs.utk.edu>
Message-ID: <44DA6252.8020704@isi.edu>

We may be converging, so I'll focus on the port issues per se:

Keith Moore wrote:
...
>>>>> The destination host knows which protocol is being used, the source
>>>>>  host presumably also knows (it has some reason for choosing that
>>>>> port, whether it's because it's a well-known port or a SRV record or
>>>>> configuration data or whatever), and I wonder whether anyone else
>>>>> needs to know.
>>>> That's only for well-known ports;
>>> no, it's true regardless of how the initiator chose which destination
>>> port to use.
>> So I use the string "APPLESAUCE" - what protocol does that mean, please?
> 
> I think we're sort of in agreement here.  It doesn't matter why or how
> a port is chosen - basically it shouldn't mean anything to anyone
> except the destination.   You're arguing that the same would be true of
> a selector string in a TCP option.   I can't say that you're wrong in
> how they could be used  - but I'm concerned by your wanting to define
> the selector string as a protocol name when there are (a) good reasons
> to not expose this to the network (even for legitimate apps that have
> no reasons to call themselves applesauce) and (b) good reasons to run
> multiple instances of the same protocol (presumably with the same
> protocol name) on the same host.  

Numbers and strings are equally exposed. If you want privacy at that
layer, encrypt.

As to running multiple instances of the same protocol on the same host,
that's harder - the key question is "how do you know which one you want
to talk to".

If the answer is "it depends on the protocol" (e.g., server instance in
HTTP, NFS subset, etc.), that ID belongs in the HTTP/NFS/etc stream, not
in the TCP header - and not demuxed there.

Is there an ID you can think of that is necessary at the TCP layer and
that differentiates instances of a protocol?

>> The portnames ID argues primarily for another ID for the protocol
>> separate from the port. The use of strings is my personal preference,
>> but not required.
> 
> I guess I think it's more straightforward to just extend the size of
> the port number space.

My ID shows how to do this in a fairly backward-compatible way.
Increasing the port-space size has benefits, but isn't as viable. It
also continues to mix port and protocol in the same number space.
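Joe's complaint is that today one number space carries both meanings: a port is simultaneously a connection selector and, by convention, a protocol hint. A minimal sketch of the alternative he argues for might look like the following. All names and tables here are invented for illustration; this is not the encoding in the portnames draft.

```python
# Hypothetical sketch: contrast demultiplexing on a single port number
# (where the value doubles as protocol hint and selector) with
# demultiplexing on a separate (protocol name, demux id) pair.

# Today: one number space carries both meanings.
listeners_by_port = {80: "httpd", 25: "smtpd"}

# Separated: protocol identity and demux id are independent fields,
# so multiple instances of one protocol coexist without convention.
listeners_by_name = {("http", 1): "httpd-instance-1",
                     ("http", 2): "httpd-instance-2",
                     ("smtp", 1): "smtpd"}

def demux_port(port):
    """Classic lookup: the port number is the only selector."""
    return listeners_by_port.get(port)

def demux_named(protocol, demux_id):
    """Split lookup: the protocol string names the syntax, the id
    picks the instance."""
    return listeners_by_name.get((protocol, demux_id))

print(demux_port(80))
print(demux_named("http", 2))
```

The point of the split is visible in the second table: two HTTP listeners are distinguished by the demux id alone, with no second "well-known" number needed.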

Joe


From day at std.com  Wed Aug  9 17:26:24 2006
From: day at std.com (John Day)
Date: Wed, 9 Aug 2006 20:26:24 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060809175005.6255fec9.moore@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
Message-ID: <a06230917c10028b893f6@[10.197.98.218]>

At 17:50 -0400 2006/08/09, Keith Moore wrote:
>  > >I think of port numbers identifying a distinguished service or
>>  >default instance of a service.
>>
>>  No, you are carrying the semantics of well-known ports to ports in
>>  general.  You don't want to make this association.  Look at the use
>>  of ports in other protocols and in operating systems.
>
>Agreed, I was being sloppy.  I should have said "I think of well-known
>port numbers as identifying a distinguished service or default instance
>of a service" (as opposed to identifying, say, a protocol)
>
>>  >e.g. A host can have web servers (i.e. something that serves web
>>  >pages for a web browser), using the HTTP protocol, on any number of
>>  >ports. The web server running HTTP on port 80 is the default
>>  >instance, the one that people get if they don't specify anything
>>  >more than the name of the service (which is a DNS name but not
>>  >necessarily the name of a single host).
>>  >
>>  >A host can also use HTTP to provide things other than web servers,
>>  >and a host can have web servers running other protocols such as FTP.
>>  >So we have service names, host names, services, protocols, and ports
>>  >- each subtly different than those next to it.
>>
>>  First of all, be careful.  Again, the infatuation with names of hosts
>>  are an artifact of the current tradition.  They are not necessary for
>>  naming and addressing for communications.  TCP can be used for any
>>  application which may or may not use one of the standard protocols.
>
>I'm not sure what point you are trying to make here.  Applications care
>about names of hosts, services, things in DNS, etc. I don't think
>that's infatuation, but recognition of a need, as apps have to
>interface with humans.  TCP doesn't, and IMHO shouldn't, care about
>such things.

No.  Applications care about names of other applications.  TCPs care 
about names of other TCPs.  Host names came in very early and were a 
bit of sloppiness that we got very attached to.  I used "infatuation" 
because I have seen the look of abject terror on some people's faces 
when I suggested it was irrelevant to the naming and addressing 
problem.

The only place I have found a host name useful is in network 
management, when you want to know about things you are managing that 
are in the same system.  But for communication it is pretty much 
irrelevant.  Yes, there are special cases where you want to know that 
an application is on a particular system, but they are just that: 
special cases.

>  > >>>Again, the well-known socket approach only
>>  >>>works as long as there is only one instance of the protocol in the layer
>>  >>>above and certainly only one instance of a "service." (We were lucky in
>>  >>>the early work that this was true.)
>>  >
>>  >[realizing that here I'm replying to the previous message...]
>>  >Well-known ports work well for selecting the default instance of a
>>  >service.  Often that is what we want to do, and we find it
>>  >convenient to have a distinguished instance of a service that serves
>>  >as a default. Well-known ports don't prevent us from offering
>>  >alternate instances of the same service on other ports.
>>
>>  They are a kludge that you have grown used to because you have never
>>  known anything else.  That is the nice thing about software: we can
>>  make almost anything work.  But it does require that we reserve
>>  well-known ports on all systems whether they need them or not.  We
>>  have been lucky that for 20 years we only had 3 applications.
>
>There are lots of different ways to solve a problem.  TCP could have
>been designed to specify the protocol instead of a port.  Then we would
>need some sort of kludge to allow multiple instances of a protocol on a
>host.  Or it could have been designed to specify both a protocol and an
>instance, and applications designed to run on top of TCP would have
>needed to specify protocol instance when connecting (much as they
>specify port #s now).

Actually it shouldn't have been at all.  This is really none of TCP's 
business.  TCP implements mechanisms to create a reliable channel, 
and the pair of port-ids is there to be a connection identifier, 
i.e., to identify an instance.  Binding that channel to a pair of 
applications is a separate problem.  It was done in NCP as a 
shortcut, partly because we didn't know any better and partly because 
we had bigger problems to solve.  TCP just did what NCP did.
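John's distinction can be made concrete with a toy model (every name here is illustrative, not any real API): the pair of port-ids identifies an *instance* of communication, while the binding of that channel to an application is kept in a separate, independent mapping.

```python
# Toy model: the (local port, remote port) pair is purely a
# connection identifier; which application uses the channel is a
# separate binding that the transport need not know about.

connections = {}    # (local_port, remote_port) -> channel state
app_bindings = {}   # channel key -> application name

def open_channel(local_port, remote_port):
    """Create a reliable channel identified only by its port-id pair."""
    key = (local_port, remote_port)
    connections[key] = {"state": "ESTABLISHED"}
    return key

def bind_application(channel, app_name):
    """Separately bind the channel to an application, after the fact."""
    app_bindings[channel] = app_name

ch = open_channel(5001, 5002)
bind_application(ch, "file-transfer")
print(connections[ch]["state"], app_bindings[ch])
```

Nothing in `open_channel` mentions an application: in this model the well-known-port shortcut is exactly the collapsing of these two tables into one.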

>I don't know of many things that break if a system puts some other
>engine or service on a port besides that indicated by the well-known
>port assignment for that port.  Traffic sniffers and interception
>proxies, maybe, but I'm not sure that it's a good thing architecturally
>for networks to be able to agressively monitor and alter traffic.

I remember someone putting a system dump on what others thought was 
the socket for new Telnet. It came as a bit of a surprise.  ;-)

>  > >(Though I'll admit that it has become easier to do so since the URL
>>  >name:port convention became popular.  Before that, the port number
>>  >was often wired into applications, and it still is often wired into
>>  >some applications, such as SMTP.  But a lot of the need to be able
>>  >to use alternate port numbers has resulted from the introduction of
>>  >NAPTs. Before NAPTs we could get away with assigning multiple IP
>>  >addresses (and multiple DNS names) to a host if we needed to run
>>  >multiple instances of a service on that host.  And we still find it
>>  >convenient to do that for hosts not behind NAPTs, and for hosts
>>  >behind firewalls that restrict which ports can be used.)
>>
>>  URLs try to do what needs to be done.  But we really only have them
>>  for HTTP.  It is not easy to use them in general.
>
>URLs are becoming more and more popular.  They're just more visible in
>HTTP than in other apps.  and even apps that don't use URLs are often
>now able to specify ports.  Every MUA I know of lets you specify ports
>for mail submission, POP, and IMAP.  (I think it would be even better
>if they let people type in smtp:, pop:, and imap: URls)
>

As I said before, the great thing about software is that you can heap 
band-aid upon band-aid and call it a system.

>  > >>>Actually, if you do it right, no one standard value is necessary at
>>  >>>all.  You do have to know the name of the application you want to
>>  >>>communicate with, but you needed to know that anyway.
>>  >
>>  >To me this looks like overloading because you might want more than
>>  >one instance of the same application on the host.  I'd say that you
>>  >need to know the name of the instance of the service you want to
>>  >talk to.  Now in an alternate universe that name might be something
>>  >like "foo.example.com:web:http:1" - encompassing server name
>>  >(foo.example.com), the name of the service (web) protocol name
>>  >(http), and an identifier (1) to distinguish one instance of the
>>  >same service/protocol from another. But we might not actually need
>>  >that much complexity, and it would expose it to traffic analysis
>>  >which is good or bad depending on your point-of-view.
>>
>>  Indeed. The name would have to allow for both type and instance.  As
>>  well as applications with multiple application protocols and multiple
>>  instances of them.  But this was worked out year ago.
>
>I won't claim that it can't be done, but is it really needed?  or worth
>it?

Only for those that need it.  Remember there are lots of people out 
there developing applications for the net that will never see the 
IETF or the "common" protocols.  These people are struggling to solve 
their problems because they are being forced to use the network 
equivalent of DOS, because the "new bell-heads" see no need to have a 
"Unix".  Our job is to provide the complete tool set in such a way 
that if they don't need it, it doesn't get in their way, and if they 
do, they have it.  We aren't even debating wrenches vs. sockets; we 
are debating whether nails can be used for everything.

>  > >Right now we need to know the name of the peer we want to
>>  >communicate with.  That name is composed of an IP address and a port
>  > >number.  This is unambiguous.  Having a default port for a
>>  >particular service doesn't prevent alternate instances of a service
>>  >from being established at the same IP address, as most modern
>>  >applications allow other port numbers to be specified.  Each
>>  >application can perform that binding as late as it wants to.  The
>>  >only real problem (other than legacy apps that don't support late
>>  >bindings) is caused by NAPTs that want to make port numbers
>>  >realm-specific, and we already know that NATs were a bad idea.
>>
>>  NATs only cause trouble because we have half an architecture. 
>
>Any architecture is incomplete, and the Internet architecture is no
>exception.  I'm not sure what benefit there is to berating it now,

That is a common misconception.  No, this architecture is incomplete 
on an absolute measure.  It is an unfinished demo, and the demo was 
held in '72.  It is very possible to have a complete architecture. 
Oddly enough, it would yield simpler solutions than what we have.

>though there are certainly things to be learned by examining its
>limitations.  Nor do I really see a better way (even after all this
>time) of doing either routing or referral than to have a large address
>space with distributed assignment of addresses and each address being
>unique within the network.   So I'm inclined to think that even if we
>had a whole architecture, NATs as we know them would still be a
>hindrance.   Of course, given a somewhat different architecture, we
>would have had different kinds of NATs - or maybe, not had the
>need for NATs.

NATs would not be a problem.  They would either integrate cleanly or 
not exist depending on your point of view.

>  > People
>>  have gotten used to the pain and think that it is perfectly normal to
>>  run a net this way. 
>
>People have become accustomed to heart disease too.  That doesn't
>mean that trans fat is good for you.  And yet people are slowly
>learning to not put this stuff in food.
>
>>  As for the making work, as I said, the nice thing about software is
>>  you can usually find a way to make almost anything work.
>
>Until you drown in complexity.  When that happens, you blame anything
>but your own efforts to make the system more complex.  (Or you let some
>other disruptive factor - such as y2k or ipv6 - be the excuse for
>scrapping things and starting over with a cleaner slate)
>
>>  >>The key question is "what is late bound". IMO, we could really use
>>  >>something that decouples protocol identifier from instance (e.g.,
>>  >>process demultiplexing) identifier.
>>  >
>>  >We could also use something that decouples service from protocol.
>>  >(do we really want to be stuck with HTTP forever as the only way to
>>  >get web pages?  SMTP as the only way to transmit mail?)  How many
>>  >layers do we want?
>>
>>  Don't think it is a question of want; it's a question of need.  25
>>  years ago we figured out how much naming and addressing we need but
>>  we chose to ignore the answer.
>
>Care to supply a pointer?

RFC 1498

>
>>  >In summary: Port numbers are sufficient for the endpoints, and well
>>  >known ports are a useful convention for a default or distinguished
>>  >instance of a service as long as we don't expect them to be rigidly
>>  >adhered to.  The question is: how much information about what
>>  >services/protocols are being used should be exposed to the network?
>>  >And if we had such a convention for exposing services/protocols to
>>  >the network are we in a position to demand that hosts rigidly
>>  >enforce that convention?
>>
>>  Why would you want to? 
>
>I'm not sure we do.  But I'm also not sure why network-visible service
>or protocol identifiers are useful if the network can't take them as a
>reliable indicator of content.  So absent some means to police them, I
>doubt that they are useful at all.

Ahhh, then we are in agreement?

Take care,
John

From davide+e2e at cs.cmu.edu  Wed Aug  9 23:55:23 2006
From: davide+e2e at cs.cmu.edu (Dave Eckhardt)
Date: Thu, 10 Aug 2006 02:55:23 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230917c10028b893f6@[10.197.98.218]>
Message-ID: <24679.1155192923@piper.nectar.cs.cmu.edu>

>>> 25 years ago we figured out how much naming and
>>> addressing we need but we chose to ignore the
>>> answer.

>> Care to supply a pointer?

> RFC 1498

I've meant to give that a careful read, and this time I
did--with the result that I'm really unsure what an
"attachment point" is.

Saltzer himself points out that this is a troublesome
concept for DIX Ethernet (the old broadcast-along-coax
kind), with MAC addresses serving roughly but not exactly
the role.

It's possible to describe an attachment point in a switched
Ethernet network:  port X on switch aa:bb:cc:dd:ee:ff--but
the only people who worry about such an attachment point
are your network operations staff, and only when the machine
turns into a monster.  A remote endpoint does not need to
resolve a service onto a node and then the node onto such
an attachment point for communication to work.

There's more trouble when one considers wireless networks.
I don't think an "attachment point" can be a (latitude,
longitude, altitude) tuple, but I don't think it can be
anything else either.  My "attachment point" could be the
base station I'm registered with, and the architecture of
cellular phone systems (where phones are seriously not the
same things as base stations) somewhat argues that way.
There are complexities here, though:  in a soft-handoff
CDMA system, a handset routinely communicates with multiple
base stations simultaneously--is it x% attached to one
tower and (1-x)% attached to another?  Here a remote peer
is firmly not involved in knowing my attachment point--it
doesn't figure out where I'm attached and then consider
paths to me.

Things get odder in ad-hoc networks or sensor networks,
where power control and variable coding mean that each
node is connected to every other node at some cost (really
a space of (power,delay) values), which may be varying and
may be hard to measure.

If "attachment point" is challenged by shared-medium
Ethernet, switched Ethernet, "wireless Ethernet", CDMA
cellular phone systems, ad-hoc networks, and sensor
networks, I'm having trouble seeing the crispness of the
concept--which was apparently clear to Saltzer.  If I've
missed something obvious, please clue me in...

Dave Eckhardt

From day at std.com  Thu Aug 10 04:41:18 2006
From: day at std.com (John Day)
Date: Thu, 10 Aug 2006 07:41:18 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <24679.1155192923@piper.nectar.cs.cmu.edu>
References: <24679.1155192923@piper.nectar.cs.cmu.edu>
Message-ID: <a06230919c100c254d76d@[10.197.98.218]>

At 2:55 -0400 2006/08/10, Dave Eckhardt wrote:
>  >>> 25 years ago we figured out how much naming and
>>>>  addressing we need but we chose to ignore the
>>>>  answer.
>
>>>  Care to supply a pointer?
>
>>  RFC 1498
>
>I've meant to give that a careful read, and this time I
>did--with the result that I'm really unsure what an
>"attachment point" is.

Try "interface."  Saltzer is drawing an analogy with operating 
systems, where one has physical addresses, logical addresses, and 
application names.  Point-of-attachment addresses are the first of 
these.  IP addresses are point-of-attachment addresses because they 
name the interface.

I should have pointed out that Saltzer misses one point, primarily 
because it had not yet arisen when he wrote.  Routing is a two-step 
process (or should be).  Routes are sequences of node addresses.  From 
the forwarding table you get the "next hop."  Then you must also keep 
the mappings of node address to point of attachment for all your 
nearest neighbors to choose the specific *path* to the next hop.
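The two steps above can be sketched as two separate tables. All addresses and table contents here are invented for illustration; this is only a toy model of the structure John describes, not any real router's data plane.

```python
# Toy two-step forwarding: routes are sequences of *node* addresses;
# a second map from neighbor node address to its point-of-attachment
# addresses then selects the specific path to that next hop.

forwarding = {"nodeD": "nodeB"}              # destination -> next-hop node
attachments = {"nodeB": ["poa-1", "poa-2"]}  # neighbor node -> its PoAs

def next_hop(dest):
    """Step 1: the forwarding table yields the next-hop node address."""
    return forwarding[dest]

def pick_path(neighbor):
    """Step 2: map the node address to one of its points of attachment
    (trivially the first here; a real policy could weigh load or cost)."""
    return attachments[neighbor][0]

hop = next_hop("nodeD")
print(hop, pick_path(hop))
```

The cleaner model John mentions falls out of the second table: multiple paths to the same adjacent node are just multiple entries in its point-of-attachment list, invisible to the route itself.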

Multiple paths between adjacent nodes were rare or non-existent when 
Saltzer was writing.  You will notice that this actually makes a 
cleaner model.

Naming an interface between layer N and N-1 is the same as naming the 
protocol machine at layer N-1.  MAC addresses and IP addresses name 
the same thing.  This has been true since before there were IP 
addresses and was a requirement for IPv6.

>Saltzer himself points out that this is a troublesome
>concept for DIX Ethernet (the old broadcast-along-coax
>kind), with MAC addresses serving roughly but not exactly
>the role.

There is nothing "troublesome" about it. The word Saltzer uses is 
"elusive."  And that is because looking at an Ethernet segment as a 
stand-alone network represents a degenerate case of the structure. 
It isn't a problem.  It is just a degenerate case.  An Ethernet is 
point-to-point and the nature of the beast means that you don't need 
to explicitly keep the "path" mapping.  The structure simply 
collapses as it would for a "network" of two machines connected by a 
wire.

>It's possible to describe an attachment point in a switched
>Ethernet network:  port X on switch aa:bb:cc:dd:ee:ff--but
>the only people who worry about such an attachment point
>are your network operations staff, and only when the machine
>turns into a monster.  A remote endpoint does not need to
>resolve a service onto a node and then the node onto such
>an attachment point for communication to work.

Clearly not, since the Internet limps along without half of what 
Saltzer says is needed.  It is just that some things are difficult, 
impossible, or far more complex than they need to be without it.  You 
can make a computer work without application names and virtual 
addresses too.  That doesn't mean I would want to.

>There's more trouble when one considers wireless networks.
>I don't think an "attachment point" can be a (latitude,
>longitude, altitude) tuple, but I don't think it can be
>anything else either.  My "attachment point" could be the

I don't see why it has to be anything like that.  Points of 
attachment need only be unambiguous within the scope of that layer. 
They aren't global, and they only have to be understood by the other 
elements in that layer.  There is absolutely no requirement for the 
addresses at this level to be "geographic" or even "topological". 
Quite often the scope is sufficiently small that enumeration works.

>base station I'm registered with, and the architecture of
>cellular phone systems (where phones are seriously not the
>same things as base stations) somewhat argues that way.

It does indeed.

>There are complexities here, though:  in a soft-handoff
>CDMA system, a handset routinely communicates with multiple
>base stations simultaneously--is it x% attached to one
>tower and (1-x)% attached to another?  Here a remote peer
>is firmly not involved in knowing my attachment point--it
>doesn't figure out where I'm attached and then consider
>paths to me.

Nor should it ever be.  This is not a problem.  It is an advantage.

>Things get odder in ad-hoc networks or sensor networks,
>where power control and variable coding mean that each
>node is connected to every other node at some cost (really
>a space of (power,delay) values), which may be varying and
>may be hard to measure.

Not sure I see why this is relevant to the problem.

>If "attachment point" is challenged by shared-medium
>Ethernet, switched Ethernet, "wireless Ethernet", CDMA

Which it isn't.

>cellular phone systems, ad-hoc networks, and sensor
>networks, I'm having trouble seeing the crispness of the
>concept--which was apparently clear to Saltzer.  If I've
>missed something obvious, please clue me in...

Take care,
John

From jnc at mercury.lcs.mit.edu  Thu Aug 10 06:36:56 2006
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 10 Aug 2006 09:36:56 -0400 (EDT)
Subject: [e2e] Port numbers, SRV records or...?
Message-ID: <20060810133656.D626A86AE4@mercury.lcs.mit.edu>

    > From: Dave Eckhardt <davide+e2e at cs.cmu.edu>

    > I'm really unsure what an "attachment point" is.

The high-level abstractions in computer science are picked for reasons that
in the end have a very strong streak of practicality (because they make it
easy for us to do things, and organize systems well, etc.).

When one looks at "attachment point" this way, it too has a very
practical definition: "the thing/entity, in the entire internet, which
the system of routers delivers packets to".

To be a little more specific, it consists of a tuple: i) the identity of that
small group of routers which can deliver the packets directly to the
destination, without going through another router - the packet has to be sent
to one of these first; ii) the identity of the thing/entity to which that
router then delivers that packet. This is fundamentally a practical
definition: you get the packet to one of those routers, and then they can get
it to the ultimate destination.

To expand on i) a little (I've described it in this way, precisely because of
some of your later comments) the usual situation in actual communication
hardware is that we have things we call "physical subnets", wherein there's
an N^2 connectivity, and where packets can be delivered by lower-level (i.e.
below the internetwork layer) means from any connected station to any other.
So we identify that set of routers by identifying the physical subnet they
are attached to.


    > Saltzer himself points out that this is a troublesome concept for DIX
    > Ethernet ... with MAC addresses serving roughly but not exactly the role.
    > .. an attachment point in a switched Ethernet network: port X on switch
    > aa:bb:cc:dd:ee:ff ..
    > A remote endpoint does not need to resolve a service onto a node and
    > then the node onto such an attachment point for communication to work.

True. But it does need enough info to get it there eventually.

An example is ARP (which exists only because IPv4 addresses were set at 32
bits before anyone conceived of 48-bit hardware addresses :-). ARP allows us
to identify Ethernet interfaces (the ii) from above) with a scope-local
identifier of less than 48 bits, and ARP then turns that into the needed 48
bits - just as the hardware/etc in bridges turns those 48-bit identifiers
into "port X on switch aa:bb:cc:dd:ee:ff".

Think of it as a series of bindings, which the network has to resolve.
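That series of bindings can be rendered as a chain of lookups. Every table below is made up for illustration (the addresses are from documentation ranges); the point is only the shape of the chain Noel describes: name to IP address, IP address to 48-bit MAC via ARP, MAC to a physical switch port via the bridges' learning tables.

```python
# Toy binding chain: each layer resolves only its own mapping, and
# the composition gets the packet to the attachment point.

dns_table = {"host.example.com": "192.0.2.7"}            # name -> IP
arp_table = {"192.0.2.7": "aa:bb:cc:dd:ee:01"}           # IP -> MAC
bridge_table = {"aa:bb:cc:dd:ee:01": ("switch-ff", 3)}   # MAC -> (switch, port)

def resolve(name):
    """Walk the chain of bindings down to a physical switch port."""
    ip = dns_table[name]       # application-level name binding
    mac = arp_table[ip]        # ARP: scope-local id -> 48-bit address
    return bridge_table[mac]   # bridge learning: MAC -> port on a switch

print(resolve("host.example.com"))
```

No single party holds the whole chain: the sender stops at the MAC, and the bridges finish the job, which is exactly why the remote endpoint never needs to know the final "port X on switch Y" binding.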


    > There's more trouble when one considers wireless networks. I don't
    > think an "attachment point" can be a (latitude, longitude, altitude)
    > tuple, but I don't think it can be anything else either. My "attachment
    > point" could be the base station I'm registered with

Let's try analyzing this case with my i)/ii) model above, and see how that
goes...

    > There are complexities here, though: in a soft-handoff CDMA system, a
    > handset routinely communicates with multiple base stations
    > simultaneously

To me, this suggests that, *from the point of view of the internetwork
system*, that handset has *multiple* "attachment points" - because there are
multiple "places in the entire internet, which the system of routers deliver
packets to".

One has to consider the set of handsets with which a base station X can
talk to be a "physical subnet", because the internet architecture's
model of a "physical subnet" requires N^2 connectivity between all
stations on that PS. One cannot consider a group of base stations, and
the handsets they talk to, to be a PS, because some handsets may not be
able to reach some base stations.

(Yes, yes, I know I'm ignoring the direct handset-handset communication case.
It has been studied, in packet radio networks going back to the 70's. I leave
it as an exercise for the reader.)

This all assumes, of course, that the base stations are internetwork routers,
and so the device has (from the point of view of the set of internet routers)
multiple attachment points.

If not, then you have a "local network" which is made up in turn of packet
switches which have to emulate, through their internal mechanisms, an N^2
network - just like the Internet treated the old ARPANET. That model works
too (and I will resist the temptation to get into a long digression of
whether that's a good design or not :-).

    > Here a remote peer is firmly not involved in knowing my attachment
    > point--it doesn't figure out where I'm attached and then consider paths
    > to me.

The internetwork routers do have to know (at some level of knowledge) which
base stations can be used to get to you (or, if they are not internetwork
routers, which "radio network" they have to deliver the packets to in order
to get them to you).


    > Things get odder in ad-hoc networks or sensor networks, where power
    > control and variable coding mean that each node is connected to every
    > other node at some cost (really a space of (power,delay) values), which
    > may be varying and may be hard to measure.

This is a path-selection problem. The question then becomes "do we want to
tackle that problem within the internet-level path-selection, or do we want
to do it at a lower level, and present an N^2 abstraction to the internetwork
layer".


    > If "attachment point" is challenged by shared-medium Ethernet, switched
    > Ethernet, "wireless Ethernet", CDMA cellular phone systems, ad-hoc
    > networks, and sensor networks, I'm having trouble seeing the crispness
    > of the concept

As I said, "attachment point" is a concept that's useful to the internetwork
layer.

To me, the interesting question really is: "is the N^2 connectivity
assumption that the internetwork-level path-selection currently assumes for a
physical subnetwork really the way to go" (because it means that physical
transmission systems which don't provide N^2 connectivity have to have a
layer in there which provides that), or should we try and expand the
architectural model to support that kind of thing directly?


Note that we do make other physical transmission systems step up to the plate
and have local mechanisms to supply a required service model when their
native service model isn't rich enough - an example is reliability. TCP just
doesn't work well with very high BER, so high BER networks have to use FEC or
something, to bring the service model provided *at the interface to the next
layer up* (i.e. the internetworking layer) to that required by the TCP/IP
internetwork.

Is that the right call? Is the N^2 requirement the right call? Interesting
questions...

	Noel

From chvogt at tm.uka.de  Thu Aug 10 08:57:58 2006
From: chvogt at tm.uka.de (Christian Vogt)
Date: Thu, 10 Aug 2006 17:57:58 +0200
Subject: [e2e] Internet Packet Loss Measurements?
Message-ID: <44DB5786.50103@tm.uka.de>

Hello e2e folks,

does anyone know about recent measurements on end-to-end packet loss in
the Internet?

After some more or less unsuccessful searching, I'm wondering whether
there is anything more current than, e.g., Vern Paxson's Ph.D. thesis.
This thesis covers Internet packet loss quite extensively, but it dates
back to 1997 (the measurements are actually from 1994/1995) and the
Internet has evolved since then.  More recent work is kind of sparse...

If someone could provide a pointer, I'd really appreciate that.

Thanks a lot,
- Christian

-- 
Christian Vogt, Institute of Telematics, Universitaet Karlsruhe (TH)
www.tm.uka.de/~chvogt/pubkey/



From day at std.com  Thu Aug 10 08:35:26 2006
From: day at std.com (John Day)
Date: Thu, 10 Aug 2006 11:35:26 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44D892E9.9060403@isi.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu>
Message-ID: <a0623091fc10101539b50@[10.197.98.218]>


Running a bit behind.

Snip
>
>  > Be careful here.  This is not really true.  The protocol-id field in IP
>>  for example identifies the *syntax* of the protocol in the layer above.
>>  (It can't be the protocol because there could be more than one instance
>>  of the same protocol in the same system.  Doesn't happen often but it
>>  can happen.) Port-ids or sockets identify an instance of communication,
>>  not a  particular service.
>
>They currently do both for the registered numbers, at least as a
>convention, although individual host-pairs override that protocol
>meaning by a-priori (usually out-of-band) agreement.

Yes, there are lots of ad-hoc a-priori things going on.  This is how 
demos work, not production systems.  We are spending far too much 
effort on the networking, and it raises the cost of doing real work. 
This is just like the argument a few years ago: I take a PC out of 
the box and 8 hours later I have it configured and can use it, sort 
of.  I take a Mac out of the box and plug it in; it boots and I am 
ready to work.

>
>>  Again, the well-known socket approach only
>>  works as long as there is only one instance of the protocol in the layer
>>  above and certainly only one instance of a "service." (We were lucky in
>>  the early work that this was true.)
>
>It's possible for that instance to hand-off other instances, as is done
>with FTP.

Yes, this creates a bottleneck, a single point of failure and 
security problems.

>  >> That field always has at least one standard, fixed value.  Whether it
>>>  has more
>>>  than one is the interesting question, and depends on how the standard
>>>  value(s)
>>>  get used.  (If there is only one, then it will be used as a dynamic
>>>  redirector.)
>>
>>  Actually, if you do it right, no one standard value is necessary at
>>  all.  You do have to know the name of the application you want to
>>  communicate with, but you needed to know that anyway.
>
>That value must be 'standard' between the two endpoints, but as others
>have noted, need not have meaning to anyone else along the path.

Not really.  No requirement that it is standard.  It might be nice, 
but there is no requirement.

>...
>>>  What *does* matter is how to know what values to use. This, in turn,
>>>  creates a
>>>  bootstrapping/startup task.  I believe the deciding factor in solving
>>>  that task
>>>  is when the binding is done to particular values.  Later binding gives
>>>  more
>>>  flexibility -- and possibly better scaling and easier administration
>>>  -- but at
>>>  the cost of additional mechanism and -- probably always -- complexity,
>>>  extra
>>>  round-trips and/or reliability.
>>
>>  Not really. But again, you have to do it the right way.  There are a lot
>>  of ways to do it that do require all sorts of extra stuff.
>
>The key question is "what is late bound". IMO, we could really use
>something that decouples protocol identifier from instance (e.g.,
>process demultiplexing) identifier.
>
>>>  In discussing the differences between email and instant messaging, I
>>>  came to
>>>  believe that we need to make a distinction between "protocol" and
>>>  "service".
>>>  The same protocol can be used for very different services, according
>>>  to how it
>
>This argues for three fields: demux ID (still needed), protocol, and
>service name.

Well, that is part of it.

>At that point, we could allow people to use HTTP for DNS exchanges if
>they _really_ wanted, rather than the DNS protocol. I'm not sure that's
>the point of the exercise, but modularity is a good idea.

Band-aids on band-aids.
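Purely as an illustration of the three-field split quoted above (demux ID, protocol, and service name), and not anything proposed in the thread, the identifiers could be modeled like this; the field names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChannelId:
    demux_id: int   # instance of communication (the role port-ids play today)
    protocol: str   # syntax of the protocol in the layer above
    service: str    # service being provided, independent of carrying protocol

# The same protocol carrying two different services, as separate instances:
web = ChannelId(demux_id=1, protocol="http", service="web")
dns = ChannelId(demux_id=2, protocol="http", service="dns")  # "HTTP for DNS"
```

With the three roles decoupled, nothing forces one overloaded well-known number to stand in for protocol, service, and instance at once.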

Take care,
John

From jasleen at cs.unc.edu  Thu Aug 10 09:47:36 2006
From: jasleen at cs.unc.edu (Jasleen Kaur)
Date: Thu, 10 Aug 2006 12:47:36 -0400
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DB5786.50103@tm.uka.de>
References: <44DB5786.50103@tm.uka.de>
Message-ID: <44DB6328.1080302@cs.unc.edu>


Christian,

We have recently developed a passive-analysis tool for estimating TCP 
losses reliably. We've used this tool to study the traces of millions of 
recent TCP connections. Some of our observations have been published in 
the July issue of CCR. You can access the paper at:

http://www.cs.unc.edu/~jasleen/papers/ccr06.pdf

This paper can provide loss-rates only within "aggregates" of 
connections. We have much more data than is presented here. Please let 
us know if you have a specific need.
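As a sketch of the kind of per-aggregate loss-rate computation such a passive tool might perform, pooling counts across connections rather than averaging per-connection rates (the counter format and numbers here are hypothetical, not taken from the CCR paper):

```python
def aggregate_loss_rate(connections):
    # Pool segment counts across the whole aggregate of connections,
    # so that many short flows don't dominate the estimate the way a
    # mean of per-connection loss rates would.
    lost = sum(lost for lost, sent in connections)
    sent = sum(sent for lost, sent in connections)
    return lost / sent if sent else 0.0

# Three hypothetical connections as (segments lost, segments sent) pairs:
flows = [(2, 1000), (0, 50), (30, 9000)]
rate = aggregate_loss_rate(flows)   # 32 lost out of 10050 sent
```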

Thanks,
Jasleen




Christian Vogt wrote:

>Hello e2e folks,
>
>does anyone know about recent measurements on end-to-end packet loss in
>the Internet?
>
>After some more or less unsuccessful searching, I'm wondering whether
>there is anything more current than, e.g., Vern Paxson's Ph.D. thesis.
>This thesis covers Internet packet loss quite extensively, but it dates
>back to 1997 (the measurements are actually from 1994/1995) and the
>Internet has evolved since then.  More recent work is kind of sparse...
>
>If someone could provide a pointer, I'd really appreciate that.
>
>Thanks a lot,
>- Christian
>
>  
>


From pganti at gmail.com  Thu Aug 10 10:47:08 2006
From: pganti at gmail.com (Paddy Ganti)
Date: Thu, 10 Aug 2006 10:47:08 -0700
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DB5786.50103@tm.uka.de>
References: <44DB5786.50103@tm.uka.de>
Message-ID: <2ff1f08a0608101047u1a44eb06y9297e2f0575f4103@mail.gmail.com>

Christian,

I am not sure if this helps, but here's a table of packet loss across the
world using PingER. I find it very useful for deriving parameters when
setting up experiments:

http://www-iepm.slac.stanford.edu/cgi-wrap/table.pl

-Paddy
On 8/10/06, Christian Vogt <chvogt at tm.uka.de> wrote:
>
> Hello e2e folks,
>
> does anyone know about recent measurements on end-to-end packet loss in
> the Internet?
>
> After some more or less unsuccessful searching, I'm wondering whether
> there is anything more current than, e.g., Vern Paxson's Ph.D. thesis.
> This thesis covers Internet packet loss quite extensively, but it dates
> back to 1997 (the measurements are actually from 1994/1995) and the
> Internet has evolved since then.  More recent work is kind of sparse...
>
> If someone could provide a pointer, I'd really appreciate that.
>
> Thanks a lot,
> - Christian
>
> --
> Christian Vogt, Institute of Telematics, Universitaet Karlsruhe (TH)
> www.tm.uka.de/~chvogt/pubkey/
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060810/5cea9712/attachment-0001.html

From chvogt at tm.uka.de  Thu Aug 10 11:48:03 2006
From: chvogt at tm.uka.de (Christian Vogt)
Date: Thu, 10 Aug 2006 20:48:03 +0200
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DB6328.1080302@cs.unc.edu>
References: <44DB5786.50103@tm.uka.de> <44DB6328.1080302@cs.unc.edu>
Message-ID: <44DB7F63.60006@tm.uka.de>

Thanks a lot, Jasleen and Paddy,

these are excellent pointers.

Best regards,
- Christian

-- 
Christian Vogt, Institute of Telematics, Universitaet Karlsruhe (TH)
www.tm.uka.de/~chvogt/pubkey/


Jasleen Kaur wrote:
> Christian,
> 
> We have recently developed a passive-analysis tool for estimating TCP
>  losses reliably. We've used this tool to study the traces of
> millions of recent TCP connections. Some of our observations have
> been published in the July issue of CCR. You can access the paper at:
> 
> 
> http://www.cs.unc.edu/~jasleen/papers/ccr06.pdf
> 
> This paper can provide loss-rates only within "aggregates" of 
> connections. We have much more data than is presented here. Please
> let us know if you have a specific need.
> 
> Thanks, Jasleen


Paddy Ganti wrote:
> Christian,
> 
> I am not sure if this helps but here's a table of packet loss across
> the world using PingER. I find very useful to derive parameters when
> setting up experiments
> 
> http://www-iepm.slac.stanford.edu/cgi-wrap/table.pl 
> <http://www-iepm.slac.stanford.edu/cgi-wrap/table.pl>
> 
> -Paddy


From touch at ISI.EDU  Thu Aug 10 13:19:59 2006
From: touch at ISI.EDU (Joe Touch)
Date: Thu, 10 Aug 2006 13:19:59 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
Message-ID: <44DB94EF.5020300@isi.edu>



Pekka Nikander wrote:
...
>>> So, at least from my point of view, the really hard problem is to devise
>>> the new "routing" infrastructure protocols in such a way that the ISPs
>>> benefit from collaboratively knowing which traffic is wanted (by
>>> someone) and which is not.
>>
>> I don't think this CAN be solved by secure or protected routing. Near as
>> I can tell, protected routing presumes highly constrained topologies
>> which aren't feasible in practice. As someone recently told me, there
>> are too many cases where "doing the right thing" is indistinguishable
>> from a "routing protocol attack".
> 
> As long as we try to remain within the current send-receive paradigm,
> I'm afraid you are right.  However, if we consider other fundamental
> paradigms, I wouldn't be that sure.

If we saw a paradigm that didn't relocate the problem (e.g., as
publish/subscribe does), sure. I haven't seen one yet. From an
information theoretic view, I'm not sure it's possible, either, but I'd
be glad to see suggestions.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060810/8a56525c/signature.bin

From dpreed at reed.com  Thu Aug 10 13:42:52 2006
From: dpreed at reed.com (David P. Reed)
Date: Thu, 10 Aug 2006 16:42:52 -0400
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DB94EF.5020300@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu>
Message-ID: <44DB9A4C.9010601@reed.com>

Joe Touch wrote:
> If we saw a paradigm that didn't relocate the problem (e.g., as
> publish/subscribe does), sure. I haven't seen one yet. From an
> information theoretic view, I'm not sure it's possible, either, but I'd
> be glad to see suggestions.
>
>   
May I suggest that information theory is not the relevant way to think 
of this?  Information theory is good for lots of things, but it doesn't 
capture intentionality at all.

All you can do with information theory is quantify conditional entropy 
before and after changes that are defined by a channel model that you 
import from some non-information theoretic source.

It's like saying Hoare's formal annotation of sequential programs tells 
you what programs are.  In fact the annotation tells you nothing about 
the programs, all it lets you do is deduce what any particular program 
may do based on the mapping from its syntax to its predicate-calculus 
model.  Hoare's logical formalism doesn't tell you that all programming 
languages must be sequences of statements executed one after another - 
that comes from the programming language designer's choice.

Most *applied* information theory results talk loosely about "sending" 
and "receiving", but in fact the notion of sending and receiving are 
arbitrary elements to which information theory's tools are applied.

In the case of radio, physics tells you what happens in the field 
between antennas.  In a network, the particular hardware connections and 
manufacturing and code does.  All information theory does is give you a 
language to describe and reason about such systems.  It cannot tell you 
what kind of systems you can design or build.


From touch at ISI.EDU  Thu Aug 10 13:47:41 2006
From: touch at ISI.EDU (Joe Touch)
Date: Thu, 10 Aug 2006 13:47:41 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DB9A4C.9010601@reed.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu> <44DB9A4C.9010601@reed.com>
Message-ID: <44DB9B6D.6070505@isi.edu>



David P. Reed wrote:
> Joe Touch wrote:
>> If we saw a paradigm that didn't relocate the problem (e.g., as
>> publish/subscribe does), sure. I haven't seen one yet. From an
>> information theoretic view, I'm not sure it's possible, either, but I'd
>> be glad to see suggestions.
>>   
> May I suggest that information theory is not the relevant way to think
> of this?  Information theory is good for lots of things, but it doesn't
> capture intentionality at all.
...
> Most *applied* information theory results talk loosely about "sending"
> and "receiving", but in fact the notion of sending and receiving are
> arbitrary elements to which information theory's tools are applied.

Let's just say we differ in our opinions on the impact of info theory
(and entropy) to this area.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060810/2a26fd1f/signature.bin

From dpreed at reed.com  Thu Aug 10 15:39:28 2006
From: dpreed at reed.com (David P. Reed)
Date: Thu, 10 Aug 2006 18:39:28 -0400
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DB9B6D.6070505@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>	<44D8D308.5020601@isi.edu>	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>	<44D9F164.1030906@isi.edu>	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>	<44DA0444.3000103@isi.edu>	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>	<44DB94EF.5020300@isi.edu>
	<44DB9A4C.9010601@reed.com> <44DB9B6D.6070505@isi.edu>
Message-ID: <44DBB5A0.7030007@reed.com>

Joe Touch wrote:
> David P. Reed wrote:
>   
>> Joe Touch wrote:
>>     
>>> If we saw a paradigm that didn't relocate the problem (e.g., as
>>> publish/subscribe does), sure. I haven't seen one yet. From an
>>> information theoretic view, I'm not sure it's possible, either, but I'd
>>> be glad to see suggestions.
>>>   
>>>       
>> May I suggest that information theory is not the relevant way to think
>> of this?  Information theory is good for lots of things, but it doesn't
>> capture intentionality at all.
>>     
> ...
>   
>> Most *applied* information theory results talk loosely about "sending"
>> and "receiving", but in fact the notion of sending and receiving are
>> arbitrary elements to which information theory's tools are applied.
>>     
>
> Let's just say we differ in our opinions on the impact of info theory
> (and entropy) to this area.
>
>   
You can use category theory or information theory or any damn theory you 
like, if you first map your problem into the terms of that theory, and 
map the various assumptions you are making into givens in that theory.   
But the mapping you are proposing is not obvious.   And the word 
"paradigm" you used earlier is not precise enough to describe a 
relationship between a real system and its formal model.

My "opinion" is, I think, the way most mathematicians understand the 
relation between theory and real systems.


From day at std.com  Thu Aug 10 16:42:30 2006
From: day at std.com (John Day)
Date: Thu, 10 Aug 2006 19:42:30 -0400
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DB9B6D.6070505@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu> <44DB9A4C.9010601@reed.com>
	<44DB9B6D.6070505@isi.edu>
Message-ID: <a06230900c1017412172e@[10.0.1.15]>

At 13:47 -0700 2006/08/10, Joe Touch wrote:
>David P. Reed wrote:
>>  Joe Touch wrote:
>>>  If we saw a paradigm that didn't relocate the problem (e.g., as
>>>  publish/subscribe does), sure. I haven't seen one yet. From an
>>>  information theoretic view, I'm not sure it's possible, either, but I'd
>>>  be glad to see suggestions.
>>>  
>>  May I suggest that information theory is not the relevant way to think
>>  of this?  Information theory is good for lots of things, but it doesn't
>>  capture intentionality at all.
>...
>>  Most *applied* information theory results talk loosely about "sending"
>>  and "receiving", but in fact the notion of sending and receiving are
>>  arbitrary elements to which information theory's tools are applied.
>
>Let's just say we differ in our opinions on the impact of info theory
>(and entropy) to this area.

I am afraid David is right.  There is nothing to differ on.

David's point is a corollary to the point made in the Tractatus that 
all statements in mathematics are tautologies and therefore say 
nothing.  We provide the semantics and they are only as good as the 
veracity of our model.

Take care,
John

From touch at ISI.EDU  Thu Aug 10 17:04:11 2006
From: touch at ISI.EDU (Joe Touch)
Date: Thu, 10 Aug 2006 17:04:11 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DBB5A0.7030007@reed.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>	<44D8D308.5020601@isi.edu>	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>	<44D9F164.1030906@isi.edu>	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>	<44DA0444.3000103@isi.edu>	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>	<44DB94EF.5020300@isi.edu>	<44DB9A4C.9010601@reed.com>
	<44DB9B6D.6070505@isi.edu> <44DBB5A0.7030007@reed.com>
Message-ID: <44DBC97B.8080306@isi.edu>



David P. Reed wrote:
> Joe Touch wrote:
>> David P. Reed wrote:
>>  
>>> Joe Touch wrote:
>>>    
>>>> If we saw a paradigm that didn't relocate the problem (e.g., as
>>>> publish/subscribe does), sure. I haven't seen one yet. From an
>>>> information theoretic view, I'm not sure it's possible, either, but I'd
>>>> be glad to see suggestions.
>>>>         
>>> May I suggest that information theory is not the relevant way to think
>>> of this?  Information theory is good for lots of things, but it doesn't
>>> capture intentionality at all.
>>>     
>> ...
>>  
>>> Most *applied* information theory results talk loosely about "sending"
>>> and "receiving", but in fact the notion of sending and receiving are
>>> arbitrary elements to which information theory's tools are applied.
>>
>> Let's just say we differ in our opinions on the impact of info theory
>> (and entropy) to this area.
>>
>>   
> You can use category theory or information theory or any damn theory you
> like, if you first map your problem into the term of that theory, and
> map the various assumptions you are making into givens in that theory.  
> But the mapping you are proposing is not obvious.   And the word
> "paradigm" you used earlier is not precise enough to describe a
> relationship between a real system and its formal model.

Sending and receiving are well-defined in info-thy, notably in the way
in which sending increases entropy and receiving decreases it. But maybe
that's either too applied or not enough to match what you're referring to.

> My "opinion" is, I think, the way most mathematicians understand the
> relation between theory and real systems.

Sure - and I agree with that, but I haven't seen the mapping.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060810/32502105/signature.bin


From day at std.com  Fri Aug 11 07:15:17 2006
From: day at std.com (John Day)
Date: Fri, 11 Aug 2006 10:15:17 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44DACC0E.608@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]> <44DACC0E.608@cs.utk.edu>
Message-ID: <a06230908c10238761716@[10.0.1.24]>

At 2:02 -0400 2006/08/10, Keith Moore wrote:
>>>>  First of all, be careful.  Again, the infatuation with names of hosts
>>>>  are an artifact of the current tradition.  They are not necessary for
>>>>  naming and addressing for communications.  TCP can be used for any
>>>>  application which may or may not use one of the standard protocols.
>>>
>>>I'm not sure what point you are trying to make here.  Applications care
>>>about names of hosts, services, things in DNS, etc. I don't think
>>>that's infatuation, but recognition of a need, as apps have to
>>>interface with humans.  TCP doesn't, and IMHO shouldn't, care about
>>>such things.
>>
>>No Applications care about names of other applications.  TCPs care 
>>about names of other TCPs.  Host names were something that came in 
>>very early and were a bit of sloppiness that we got very attached 
>>to. I used "infatuation" because I have seen the look of abject 
>>terror on some people's faces when I suggested it was irrelevant to 
>>the naming and addressing problem.
>
>Applications care about the names of lots of different things, if 
>for no other reason than they interface with humans that care about 
>names of lots of different things.  Humans care about things that 
>they call computers, and which more-or-less correspond to things 
>that network geeks call hosts, which tend to be boxes with wires 
>attached to them that have lots of components inside them that 
>manipulate or transmit or store data and permit it to be retrieved 
>again.  Applications that help humans deal with computers need to 
>know about names of those computers and how to deal with those 
>names.  They might not need to know much about the structure of 
>those names, they may just treat those names as opaque or they might 
>need to use those names as query strings, but they do need to deal 
>with them on some level.
>
>Now the abstraction we call a host today is somewhat fuzzier than 
>the abstraction we called a host in the mid 1970s.   When a host had 
>clear boundaries, when there were resources like disk drives that 
>were clearly owned by a host (i.e. were private to that host, could 
>only be accessed from that particular host), when hosts were fairly 
>modest in terms of resources and tended to have a single network 
>interface, IP address, and identity; one set of users, one set of 
>files with ownership tied to those users - a lot of uses for "host 
>names" made more sense than they do now.  It made sense to telnet to 
>a host, rather than an instance of a telnet server that happens to 
>sit at an IP address.  It made sense to ftp to a host and be able to 
>access the set of files on that host.  It made sense to treat the 
>users of that host as a group of users for email, such that every 
>user on that host could be reached by sending mail to that host's 
>mail server.
>
>These days, as I said, boundaries are much fuzzier, and the host 
>abstraction makes less sense.  Hosts tend to be single-user in 
>practice, though we also have DNS names (think aol.com, yahoo.com, 
>gmail.com) that correspond to huge user communities.  We have 
>virtual hosts that sit on one box (granted, OS/VM existed a long 
>time ago, but now such things are more commonplace).  We have 
>"hosts" that are really computing clusters of individual processing 
>elements, where one "host" abstraction corresponds to the whole 
>cluster and separate "host" abstractions correspond to each of the 
>nodes, so that we have hosts contained within hosts.  We have 
>multiple hosts that all share resources (disks, sometimes memory) 
>such that any of them is more-or-less equivalent to another, and yet 
>there may or may not be effective IPC or memory sharing between 
>different processes on that host if they reside on different PEs.  A 
>user community is no longer likely to be defined by virtue of having 
>logins and sharing access to resources on a single host.  These days 
>the notion of telnet'ing to a host, FTPing to a host's file system, 
>talking about a host's users, or in general treating the host as an 
>abstraction that has a network interface and an IP address and a 
>collection of services that allow that host's resources to be 
>accessed and the host's users communicated with - these things are 
>still useful and the concept of "host" is still useful to us, but we 
>know of enough exceptions that we no longer treat it as the general 
>case.
>
>So we no longer expect that what used to be called "host names" are 
>inherently names of hosts - we recognize that a name might be the 
>name of something we used to think of as a host, or it might just be 
>some other name that is used to access some resource or collection 
>of resources, and we can't really tell the difference by looking at 
>either the name or its DNS RRset.  The mapping between names and 
>hosts (and for that matter addresses) is essentially arbitrary as 
>the set of services intended to be associated with a DNS name can 
>exist on zero, one, or any number of hosts/addresses/network 
>interfaces; and the set of hosts (etc.) supporting the services 
>associated with DNS name X can be identical to, disjoint from, or 
>intersect with the set of hosts supporting the services associated 
>with DNS name Y.   And with MX and SRV records the set of 
>hosts/addresses at which one service resides can be different than 
>the set of hosts/addresses at which another service resides, even if 
>both services share a common (DNS) name.
>
>So while it's a bit of a stretch to say that host names aren't 
>useful anymore, these days it's pretty difficult to make any 
>reliable statement about the characteristics of a host name.

Methinks he dost protest too much.  ;-)  That was quite a dissertation.

As many have noted, getting the concepts and the terminology right 
is half the battle.  Being sloppy in our terminology causes us 
to make mistakes, and leads others not as closely involved to 
think we mean what we say.

Leaving aside the band-aids and years of commentary and 
interpretation (this really does begin to sound more like 
commentaries on the I-Ching than science), if one carefully works 
through an abstract model of a network to see who has to talk to 
whom about what, one never refers to a host.  One is concerned with 
the names of various protocol machines and the bindings between them, 
but the name of the container on which they reside never comes up.

The fact that talking about host names is a convenient crutch for 
humans to use when thinking about this is nice.  But it is not 
relevant to the science.


>>The only place I have found a host-name useful is for network 
>>management when you want to know about things you are managing that 
>>are in the same system.  But for communication it is pretty much 
>>irrelevant.  Yes, there are special cases where you want to know 
>>that an application is on a particular system but they are just 
>>that: special cases.
>
>Yes, they're useful but they're special cases.  If you have an 
>application that manages the hardware on a host, you still need a 
>way to specifically name that hardware.
>
>>>There are lots of different ways to solve a problem.  TCP could have
>>>been designed to specify the protocol instead of a port.  Then we would
>>>need some sort of kludge to allow multiple instances of a protocol on a
>>>host.  Or it could have been designed to specify both a protocol and an
>>>instance, and applications designed to run on top of TCP would have
>>>needed to specify protocol instance when connecting (much as they
>>>specify port #s now).
>>
>>Actually it shouldn't have been at all.  This is really none of 
>>TCP's business.  TCP implements mechanisms to create a reliable 
>>channel and the pair of port-ids are there to be a 
>>connection-identifier, i. e. identify an instance.  Binding that 
>>channel to a pair of applications is a separate problem.  It was 
>>done in NCP as a short cut, partly because we didn't know any 
>>better and partly because we had bigger problems to solve.  TCP 
>>just did what NCP did.
>
>Well, the OS needs to deliver the data somewhere, and it needs to 
>mediate access to that channel so that (for instance) not just any 
>random process can write to it, not just any random process can read 
>from it, if multiple processes can write to it there are ways of 
>preventing their writes from getting mixed up with one another, etc. 
>Binding the channel to a process isn't the only possible way of 
>doing this, but it's a fairly obvious one.  (Note that UNIX network 
>sockets don't bind a channel to a process - the same socket can be 
>accessed by multiple processes if the socket is explicitly passed, 
>or if the original process forks and its child inherits the socket 
>file descriptor.  But the default arrangement is for a single 
>process to have access to the socket, and therefore the channel.)
>
>IMHO if we really expected multiple processes to share TCP channels 
>then we'd need some sort of structure beyond an octet stream for TCP 
>channels, so we'd know (at the very least) message boundaries within 
>TCP and a stack wouldn't deliver half of a message to one process 
>and half to another.
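The socket-sharing behavior described above is easy to demonstrate (a minimal POSIX-only sketch in Python, purely illustrative):

```python
import os
import socket

# A UNIX socket is not bound to a single process: after fork() the
# child inherits the socket file descriptor, so either process can use
# the same channel.  (POSIX only; a sketch, not a recommended pattern.)
parent_end, child_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child process: uses the inherited socket directly.
    parent_end.close()
    child_end.sendall(b"hello from child")
    child_end.close()
    os._exit(0)

# Parent process: reads what the child wrote on the shared channel.
child_end.close()
data = parent_end.recv(64)
os.waitpid(pid, 0)
```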

Again, if you analyze what goes on in a single system, you will find 
that the mechanism for finding the destination application and 
determining whether the requesting application can access it is 
distinct and should be distinct from the problem of providing a 
channel with specific performance properties. I completely agree with 
everything you said above, but TCP is not and should not be the whole 
IPC. It just provides the reliable channel.
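To caricature the separation being argued for here (a toy sketch with invented names and port numbers; no real stack works this way literally):

```python
# Toy sketch: TCP-style connection identification versus application
# naming, kept as two distinct mechanisms.

# 1. Connection identification: a pair of port-ids names an instance of
#    a channel.  This is all the transport protocol itself needs.
connections = {}                # (local_port, remote_port) -> state

def open_channel(local_port, remote_port):
    key = (local_port, remote_port)
    connections[key] = {"state": "ESTABLISHED"}
    return key

# 2. Application naming: a separate directory binds application names to
#    whichever port an instance happens to listen on.  Nothing requires
#    the binding to be a fixed, well-known number.
directory = {}                  # application name -> listening port

def register(app_name, port):
    directory[app_name] = port

def resolve(app_name):
    return directory[app_name]

register("mail-transfer", 4711)            # arbitrary port, not 25
chan = open_channel(50123, resolve("mail-transfer"))
```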

Harkening back to the historical discussion, this was all clear 35 
years ago but it was a lot of work to build a network on machines 
with much less compute power than your cell phone and we didn't have 
time for all of the niceties.  We had to show that this was useful 
for something.  So we (like all engineers) took a few short cuts 
knowing they weren't the final answer.  It is just unfortunate that 
the people who came later have not put it right.



>
>>>  > >(Though I'll admit that it has become easier to do so since the URL
>>>>  >name:port convention became popular.  Before that, the port number
>>>>  >was often wired into applications, and it still is often wired into
>>>>  >some applications, such as SMTP.  But a lot of the need to be able
>>>>  >to use alternate port numbers has resulted from the introduction of
>>>>  >NAPTs. Before NAPTs we could get away with assigning multiple IP
>>>>  >addresses (and multiple DNS names) to a host if we needed to run
>>>>  >multiple instances of a service on that host.  And we still find it
>>>>  >convenient to do that for hosts not behind NAPTs, and for hosts
>>>>  >behind firewalls that restrict which ports can be used.)
>>>>
>>>>  URLs try to do what needs to be done.  But we really only have them for
>>>>  HTTP.  It is not easy to use in general.
>>>
>>>URLs are becoming more and more popular.  They're just more visible in
>>>HTTP than in other apps.  and even apps that don't use URLs are often
>>>now able to specify ports.  Every MUA I know of lets you specify ports
>>>for mail submission, POP, and IMAP.  (I think it would be even better
>>>if they let people type in smtp:, pop:, and imap: URLs)
>>>
>>
>>As I said before the great thing about software is that you can 
>>heap band-aid upon band-aid and call it a system.
>
>Which is a good thing, since we generally do not have the luxury of 
>either doing things right the first time (anticipating all future 
>needs) or scrapping an old architecture and starting again from 
>scratch. Software will always be working around architectural 
>deficiencies. Similarly we have to keep compatibility and transition 
>issues in mind when considering architectural changes.

Be careful.  We do and we don't.  I have known many companies that 
over time have step by step made wholesale *replacement* of major 
parts of their products as they transition.  Sometimes maintaining 
backward compatibility, sometimes not.  But new releases come out 
with completely new designs for parts of the system.  You are arguing 
that nothing is ever replaced and all change is by modifying what is 
there.  That is how evolution works.  And 99% of its cases end as 
dead-ends in extinction.  With evolution, that doesn't matter: there 
are hundreds of billions of cases.  But when there is one case, the odds 
aren't too good.  (And don't tell me not to worry because the actions 
of the IETF are not random mutations.  There are those that would 
dispute that! ;-))

>But when I look at ports I think "hey, it's a good thing that they 
>didn't nail down the meaning of hosts or ports too much back in the 
>1970s, because we need them to be a bit more flexible today than 
>they needed to be back then."  We don't need significant 
>architectural changes or any protocol or API changes for apps to be 
>able to specify ports, and that gives us useful flexibility today. 
>If ports had been defined differently - say they had been defined as 
>protocols and there were deep assumptions in hosts that (say) port 
>80 would always be HTTP and only port 80 could be HTTP - we'd be 
>having to kludge around it in other, probably more cumbersome, ways.

I am not arguing to nail down ports more.  I am arguing to nail them 
down less.  Suppose there had been no well-known ports at all.  I 
have never known an IPC mechanism in an OS to require anything like 
that.  Why should the 'Net?

>
>I suppose one measure of an architecture is how complex the kludges 
>have to be in order to make things work.  Allowing apps to specify 
>port #s seems like a fairly minor kludge, compared to say having 
>apps build their own p2p networks to work around NAT and firewall 
>limitations.

An architecture that requires kludges has bugs that should be fixed.

>
>>>  > >>>Actually, if you do it right, no one standard value is necessary at
>>>>  >>>all.  You do have to know the name of the application you want to
>>>>  >>>communicate with, but you needed to know that anyway.
>>>>  >
>>>>  >To me this looks like overloading because you might want more than
>>>>  >one instance of the same application on the host.  I'd say that you
>>>>  >need to know the name of the instance of the service you want to
>>>>  >talk to.  Now in an alternate universe that name might be something
>>>>  >like "foo.example.com:web:http:1" - encompassing server name
>>>>  >(foo.example.com), the name of the service (web) protocol name
>>>>  >(http), and an identifier (1) to distinguish one instance of the
>>>>  >same service/protocol from another. But we might not actually need
>>>>  >that much complexity, and it would expose it to traffic analysis
>>>>  >which is good or bad depending on your point-of-view.
>>>>
>>>>  Indeed. The name would have to allow for both type and instance.  As
>>>>  well as applications with multiple application protocols and multiple
>>>>  instances of them.  But this was worked out years ago.
>>>
>>>I won't claim that it can't be done, but is it really needed?  or worth
>>>it?
>>
>>Only for those that need it.  Remember there are lots of people out 
>>there developing applications for the net that will never see the 
>>IETF or the "common" protocols.  These people are struggling to 
>>solve their problems because they are being forced to use the 
>>network equivalent of DOS, because the "new bell-heads" see no need 
>>to have a "Unix".  Our job is to provide the complete tool set in 
>>such a way that if they don't need it doesn't get in their way and 
>>if they do they have it.  We aren't even debating wrenches vs 
>>sockets, we are debating whether nails can be used for everything.
>
>Okay fine.  But when I try to understand what a good set of tools 
>for these applications developers looks like, the limitations of 
>ports (or even well known ports) seem like fairly minor problems 
>compared to the limitations of NATs, scoped addresses, IPv6 address 
>selection, firewalls that block traffic in arbitrary ways, and 
>interception proxies that alter traffic.  DNS naming looks fairly 
>ugly

They are all part and parcel of the same problem:  The 'Net only has 
half an addressing architecture.

>(especially if we also consider ad hoc and disconnected networks) 
>but not as bad as the above.    Security is really difficult and the 
>tools we have to implement it are the equivalent of stone knives and 
>bearskins.  Network management and host/network configuration seem 
>downright abysmal.  As architectural deficiencies go, ports seem way 
>down on the list.

Ports are a symptom of the disease.  Curing the disease will fix 
ports as well as much of what you listed above.

>>>>  Don't think it is a question of want; it's a question of need.  25 years
>>>>  ago we figured out how much naming and addressing we need but we
>>>>  chose to ignore the answer.
>>>
>>>Care to supply a pointer?
>>
>>RFC 1498
>
>Oh, that.  Rereading it, I think its concept of nodes is a bit 
>dated. But otherwise it's still prescient, it's still useful, and 
>nothing we've built in the Internet really gets this.  It's 
>saddening to read this and realize that we're still conflating 
>concepts that need to be kept separate (like nodes and attachment 
>points, and occasionally nodes and services).
>
>Of course, RFC 1498 does not describe an architecture.  It makes 
>good arguments for what kinds of naming we need in a network 
>protocol suite (applications would need still more kinds of naming, 
>because users do), but it doesn't explain how to implement those 
>bindings and make them robust, scalable, secure.  It's all well and 
>good to say that a node needs to be able to keep its identity when 
>it changes attachment points but that doesn't explain how to 
>efficiently route traffic to that node across changes in attachment 
>points.  etc.

Gee, you want Jerry to do *all* the work for you! ;-)  Given that we 
haven't done it, maybe that is the problem:  No one in the IETF knows 
how to do it.

>Also, there are valid reasons why we sometimes (occasionally) need 
>to conflate those concepts.  Sometimes we need to send traffic to a 
>service or service instance, sometimes we need to send traffic to a 
>node, sometimes we need to send traffic to an attachment point (the 
>usual reasons involve network or hardware management).

I don't believe it.  If you think we need to conflate these concepts 
then you haven't thought it through carefully enough.  We always send 
traffic to an application.  Now there may be some things you weren't 
thinking of as applications, but they are.   It is true, as I said 
earlier, that any application naming must allow for both type and 
instance.  But that is an implementation nit.

>It's interesting to reflect on how a new port extension mechanism, 
>or replacement for ports as a demux token, would work if it kept RFC 
>1498 in mind.  I think it would be a service (instance) selector 
>(not the same thing as a protocol selector) rather than a port 
>number relative to some IP address.  The service selectors would 
>need to be globally unique so that a service could migrate

You still need ports.  But you need to not conflate ports and 
application naming.

>from one node or attachment point to another.  There would need to 
>be some way of doing distributed assignment of service selectors 
>with a reasonable expectation of global uniqueness,

service selectors?  No.  Application-names, yes.

>without collisions.  That wouldn't solve the problem of making 
>services mobile, but it would make certain things feasible. Service 
>instances could register themselves with the equivalent of NAPTs 
>(maybe advertise them periodically using multicast) and those NAPTs 
>would know how to route traffic to them.  This wouldn't begin to 
>solve the problems associated with scoped addresses  - an app would 
>still have to know which address to use from which realm in order to 
>reach the desired service instance - but it would solve the problem 
>we now have with port collisions in NAPTs by apps that insist on 
>using well known ports.

Unnecessary.

>>>>  >In summary: Port numbers are sufficient for the endpoints, and well
>>>>  >known ports are a useful convention for a default or distinguished
>>>>  >instance of a service as long as we don't expect them to be rigidly
>>>>  >adhered to.  The question is: how much information about what
>>>>  >services/protocols are being used should be exposed to the network?
>>>>  >And if we had such a convention for exposing services/protocols to
>>>>  >the network are we in a position to demand that hosts rigidly
>>>>  >enforce that convention?
>>>>
>>>>  Why would you want to?
>>>
>>>I'm not sure we do.  But I'm also not sure why network-visible service
>>>or protocol identifiers are useful if the network can't take them as a
>>>reliable indicator of content.  So absent some means to police them, I
>>>doubt that they are useful at all.
>>
>>Ahhh, then we are in agreement?
>
>on this point at least, I guess so.

Sorry to be so tardy in responding.  Have to find the time.

Take care,
John

From rchertov at purdue.edu  Fri Aug 11 09:05:52 2006
From: rchertov at purdue.edu (Roman Chertov)
Date: Fri, 11 Aug 2006 12:05:52 -0400
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DB5786.50103@tm.uka.de>
References: <44DB5786.50103@tm.uka.de>
Message-ID: <44DCAAE0.40709@purdue.edu>

Christian Vogt wrote:
> Hello e2e folks,
> 
> does anyone know about recent measurements on end-to-end packet loss in
> the Internet?

I came across this paper the other day which you might find useful.

M. Allman, W. Eddy, S. Ostermann. Estimating Loss Rates With TCP. ACM 
Performance Evaluation Review, 31(3), December 2003.
This paper introduces mechanisms for estimating the real loss rate of 
packets at the TCP sender with TCP Reno and with SACK option. Simply 
counting the number of retransmissions often gives wrong results because 
TCP senders do spurious retransmissions for various reasons. The paper 
identifies some of the reasons for unnecessary retransmissions.
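The effect is easy to see with back-of-the-envelope numbers (illustrative arithmetic only, not the estimator from the paper):

```python
# Why counting retransmissions at a TCP sender can overstate the true
# loss rate.  All numbers below are invented for illustration.

data_segments   = 10_000   # original segments sent
true_losses     = 100      # segments actually dropped by the network
spurious_rexmit = 50       # retransmissions of segments that were NOT
                           # lost (e.g. after a spurious timeout)

retransmissions = true_losses + spurious_rexmit
total_sent = data_segments + retransmissions

naive_estimate = retransmissions / total_sent   # what naive counting sees
true_rate      = true_losses / total_sent       # what actually happened

# naive_estimate is about 0.0148 vs true_rate about 0.0099: a 50%
# overestimate with these numbers.
```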

Roman

> 
> After some more or less unsuccessful searching, I'm wondering whether
> there is anything more current than, e.g., Vern Paxson's Ph.D. thesis.
> This thesis covers Internet packet loss quite extensively, but it dates
> back to 1997 (the measurements are actually from 1994/1995) and the
> Internet has evolved since then.  More recent work is kind of sparse...
> 
> If someone could provide a pointer, I'd really appreciate that.
> 
> Thanks a lot,
> - Christian
> 


From adilraja at gmail.com  Fri Aug 11 09:26:34 2006
From: adilraja at gmail.com (Adil Raja)
Date: Fri, 11 Aug 2006 17:26:34 +0100
Subject: [e2e] Distribution of mean end-to-end delay!
Message-ID: <b2dc5c1d0608110926k22930c08l77ab9221772cbe4f@mail.gmail.com>

Hi all,
  I am new to this list and have a few queries. I shall be thankful if they
are addressed:

1) What statistical distribution(s) are normally used to simulate end-to-end
Internet delays? For instance, how do various packet generators simulate
this phenomenon?
2) How can one model the distribution of mean one-way (or RTT) end-to-end
delay, where each mean comes from a relatively large number of samples
(300-1000 packets) and the number of means is also fairly large (around
2000, let's say)?
3) How does mean end-to-end delay correlate with end-to-end packet loss (if
at all)? Once again, the mean is of a sample size ranging between 200-1000
packets.
4) Can someone point me to a scholarly article that addresses these issues?

I shall be grateful for any responses.

Regards,
Adil Raja
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060811/9b706092/attachment.html

From jasleen at cs.unc.edu  Fri Aug 11 10:07:58 2006
From: jasleen at cs.unc.edu (Jasleen Kaur)
Date: Fri, 11 Aug 2006 13:07:58 -0400
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DCAAE0.40709@purdue.edu>
References: <44DB5786.50103@tm.uka.de> <44DCAAE0.40709@purdue.edu>
Message-ID: <44DCB96E.8080304@cs.unc.edu>


Roman and Christian,

The tool I mentioned earlier (full citation provided below) is similar 
in objective to the paper you mention below. It, however, incorporates 
substantially more details --- it partially emulates the sender-side TCP 
state-machine for 5 prominent TCP stacks (and we show that it is crucial 
to incorporate the details and stack diversity). The level-of-detail of 
our analysis enables us to identify and classify segment retransmissions 
with more accuracy and granularity than past work.

S. Rewaskar, J. Kaur, and F.D. Smith, "A Passive State-Machine Approach 
for Accurate Analysis of TCP Out-of-Sequence Segments", in ACM Computer 
Communications Review, July 2006.
 http://www.cs.unc.edu/~jasleen/papers/ccr06.pdf
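To give a flavor of the classification problem (a toy caricature, far simpler than the per-stack sender state machines the paper actually emulates):

```python
# Toy classifier for out-of-sequence segments in a packet trace.
# Uses segment indices rather than real TCP byte sequence numbers,
# and ignores timers, SACK, and per-stack behavior entirely.

def classify(trace):
    """trace: list of sequence numbers in capture order."""
    seen, highest, labels = set(), -1, []
    for seq in trace:
        if seq in seen:
            # Same segment observed before: a retransmission.
            labels.append("retransmission")
        elif seq < highest:
            # Below the highest point seen but never observed before:
            # network reordering, not a retransmission.
            labels.append("reordered")
        else:
            labels.append("in-order")
        seen.add(seq)
        highest = max(highest, seq)
    return labels

labels = classify([0, 1, 3, 2, 1])
```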

Hope you find the tool useful. We would appreciate any feedback.

Thanks,
Jasleen




Roman Chertov wrote:

> Christian Vogt wrote:
>
>> Hello e2e folks,
>>
>> does anyone know about recent measurements on end-to-end packet loss in
>> the Internet?
>
>
> I came across this paper the other day which you might find useful.
>
> M. Allman, W. Eddy, S. Ostermann. Estimating Loss Rates With TCP. ACM 
> Performance Evaluation Review, 31(3), December 2003.
> This paper introduces mechanisms for estimating the real loss rate of 
> packets at the TCP sender with TCP Reno and with SACK option. Simply 
> counting the number of retransmissions often gives wrong results 
> because TCP senders do spurious retransmissions for various reasons. 
> The paper identifies some of the reasons for unnecessary retransmissions.
>
> Roman
>
>>
>> After some more or less unsuccessful searching, I'm wondering whether
>> there is anything more current than, e.g., Vern Paxson's Ph.D. thesis.
>> This thesis covers Internet packet loss quite extensively, but it dates
>> back to 1997 (the measurements are actually from 1994/1995) and the
>> Internet has evolved since then.  More recent work is kind of sparse...
>>
>> If someone could provide a pointer, I'd really appreciate that.
>>
>> Thanks a lot,
>> - Christian
>>
>


From day at std.com  Fri Aug 11 11:07:02 2006
From: day at std.com (John Day)
Date: Fri, 11 Aug 2006 14:07:02 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060811114619.8b24f93e.moore@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]>	<44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>
	<20060811114619.8b24f93e.moore@cs.utk.edu>
Message-ID: <a06230913c102707c3871@[10.0.1.24]>

At 11:46 -0400 2006/08/11, Keith Moore wrote:
>I'll try to keep this terse...

Me too.

>
>>  As many have noted, getting the concepts right and the terminology
>>  used is half the battle.  Being sloppy in our terminology causes us
>>  to make mistakes and leads others not as closely involved to think
>>  we mean what we say.
>
>true.  but there's a downside to too much precision, and/or too much
>abstraction, which is that ordinary mortals can't understand it.  the
>result is either that it's inaccessible to them, or they make some
>semantic associations that weren't intended that end up becoming
>entrenched almost as if they were part of the architecture.

I know what you mean!

>
>>  Leaving aside the band-aids and years of commentary and
>>  interpretation (this really does begin to sound more like
>>  commentaries on the I-Ching than science.), if one carefully works
>>  through an abstract model of a network to see who has to talk to who
>>  about what, one never refers to a host.  One is concerned with the
>>  names of the various protocol machines and the bindings between them,
>>  but the name of the container on which they reside never comes up.
>
>I'd believe that if it weren't for security.  for instance, I don't
>know of any way to let a process securely make visible changes to a
>binding between a name that is associated with it and some attribute of
>that process without giving that process secret information with which
>it can authenticate itself to other visible processes that maintain
>those bindings, and I don't know of any way to protect a process's
>secrets from the host that it lives on.
>
>sure you can talk about a set of abstractions and bindings that don't
>involve hosts, but I don't think you can build a system that actually
>implements them in a useful way without having some notion of the
>container and its boundaries.

My point was that it was not required for naming and addressing.  I 
totally agree with you about security.

>  > Again, if you analyze what goes on in a single system, you will find
>>  that the mechanism for finding the destination application and
>>  determining whether the requesting application can access it is
>>  distinct and should be distinct from the problem of providing a
>>  channel with specific performance properties. I completely agree with
>>  everything you said above, but TCP is not and should not be the whole
>>  IPC. It just provides the reliable channel.
>
>and  TCP doesn't provide the mechanism for finding the destination
>application (which I would not call an application, as an application
>can consist of multiple processes on multiple hosts) and determine
>whether the requesting application can access it.  it's just that the
>layer we often use for finding the destination application (WKP at
>an IP address) is a very thin sliver on top of TCP.  sure that layer
>has limitations, but they're not TCP's problems.

Okay. I was using application for process (see your comment above 
;-))  I agree it is not TCP's problem and shouldn't be.

>  > Harkening back to the historical discussion, this was all clear 35
>>  years ago but it was a lot of work to build a network on machines
>>  with much less compute power than your cell phone and we didn't have
>>  time for all of the niceties.  We had to show that this was useful
>>  for something.  So we (like all engineers) took a few short cuts
>>  knowing they weren't the final answer.  It is just unfortunate that
>>  the people who came later have not put it right.
>
>it would be an interesting exercise to illustrate how to put it right -
>not just how to define the entities and bindings but how to implement
>them in a secure, reliable, and scalable fashion.  having more compute
>power helps, but I'm not sure that it is sufficient to solve the
>problem.

I agree.  And along those lines, I think it is important to work out 
what the "right" answer is even if we know we can never deploy it in 
the Internet.  At least then we know what the goal is and when we 
don't go there we will know precisely what we are giving up.  (The 
current approach sometimes seems like being stranded in the woods and 
trying to find your way out when you have no idea which way North is! 
The advice is always stay put and let the searchers look for you! 
Trouble of course is no one is looking for us! ;-))

>
>>  >Which is a good thing, since we generally do not have the luxury of
>>  >either doing things right the first time (anticipating all future
>>  >needs) or scrapping an old architecture and starting again from
>>  >scratch. Software will always be working around architectural
>>  >deficiencies. Similarly we have to keep compatibility and transition
>>  >issues in mind when considering architectural changes.
>>
>>  Be careful.  We do and we don't.  I have known many companies that
>>  over time have step by step made wholesale *replacement* of major
>>  parts of their products as they transition.  Sometimes maintaining
>>  backward compatibility, sometimes not.  But new releases come out
>>  with completely new designs for parts of the system.  You are arguing
>>  that nothing is ever replaced and all change is by modifying what is
>>  there.
>
>if the Internet had been controlled by a single company, that company
>might have had the luxury of making wholesale replacement of major
>parts.  of course there are also downsides to single points of control.
>
>but there's a separate issue.  even when individual parts are replaced,
>the relationships and interfaces between those parts are harder to
>change.  take a look at the maps of a city hundreds of years ago and
>today and you'll discover that most of the streets are in the same
>places. individual buildings are replaced over time, one or two at a
>time, but rarely are large areas razed and replaced with new buildings
>in such a way that  the street patterns (and property lines) could
>change significantly.

Right on both counts.  I know what you mean.  Although Paris did it 
in the middle of the 19th century, and there have been some recent examples. 
It doesn't happen often, but it does happen.

>  > >But when I look at ports I think "hey, it's a good thing that they
>>  >didn't nail down the meaning of hosts or ports too much back in the
>>  >1970s, because we need them to be a bit more flexible today than
>>  >they needed to be back then."  We don't need significant
>>  >architectural changes or any protocol or API changes for apps to be
>>  >able to specify ports, and that gives us useful flexibility today.
>>  >If ports had been defined differently - say they had been defined as
>>  >protocols and there were deep assumptions in hosts that (say) port
>>  >80 would always be HTTP and only port 80 could be HTTP - we'd be
>>  >having to kludge around it in other, probably more cumbersome, ways.
>>
>>  I am not arguing to nail down ports more.  I am arguing to nail them
>>  down less.  Suppose there had been no well-known ports at all.  I
>>  have never known an IPC mechanism in an OS to require anything like
>>  that.  Why should the 'Net?
>
>you have to have some mechanism for allowing an application to find
>the particular thing it wants to talk to.    well-known ports seem
>limited today for lots of reasons, but to actually get a useful
>increase in functionality beyond WKPs presumes several things, e.g. a
>system to maintain bindings from globally-unique names of things you
>might want to talk to (which I tend to call services or service
>instances), and the locations of those things, which further allows
>those bindings to be securely updated when the locations change, and
>which also (for privacy reasons) avoids exposing too much data about
>application activity to the net at large.  in the days of HOSTS.TXT
>files it's hard to imagine this being practical, and even to try to use
>today's DNS for this would be quite a stretch.   (now if you want to
>argue that DNS is hopelessly flawed you'll get enthusiastic agreement
>from me)

For all intents and purposes we are using DNS for this or trying to. 
We have to find a way to make it scale.
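For concreteness, the SRV model being discussed (RFC 2782) maps a service name to prioritized, weighted (port, target) records rather than to a well-known port. A minimal stand-in for the DNS lookup, with invented names:

```python
# Stand-in for a DNS SRV lookup.  The dictionary replaces DNS; the
# record layout (priority, weight, port, target) follows RFC 2782.

srv_records = {
    "_imap._tcp.example.com": [
        # (priority, weight, port, target); lower priority is preferred
        (10, 60, 10143, "mail1.example.com"),
        (10, 40, 10143, "mail2.example.com"),
        (20, 0,   143,  "backup.example.com"),
    ],
}

def resolve_service(name):
    """Return targets ordered by priority, then by descending weight
    within a priority class.  (Real clients pick weighted-randomly
    within a class; we sort for determinism.)"""
    return sorted(srv_records[name], key=lambda r: (r[0], -r[1]))

best = resolve_service("_imap._tcp.example.com")[0]
host, port = best[3], best[2]
```

Note that nothing in the client needs a well-known port: the port travels with the record.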

>  > >I suppose one measure of an architecture is how complex the kludges
>>  >have to be in order to make things work.  Allowing apps to specify
>>  >port #s seems like a fairly minor kludge, compared to say having
>>  >apps build their own p2p networks to work around NAT and firewall
>>  >limitations.
>>
>>  An architecture that requires kludges has bugs that should be fixed.
>
>sometimes the fixes are more complex and/or less reliable than the
>kludges, and sometimes they don't provide much more in the way of
>practical functionality.  or maybe they do provide a benefit in the
>long-term, but little benefit in the near-term so the incentives aren't
>in the right places to get them implemented.

Well, what I meant by that is that something is fundamentally wrong. If 
you have kludges it is either because you are doing something you 
shouldn't (like passing IP addresses in application protocols) or 
there is something about how the architecture should be structured 
that you don't understand yet.

(Sorry I am an optimist.  I don't believe that the right answer has 
to be kludgey.)

>
>>  >Okay fine.  But when I try to understand what a good set of tools
>>  >for these applications developers looks like, the limitations of
>>  >ports (or even well known ports) seem like fairly minor problems
>>  >compared to the limitations of NATs, scoped addresses, IPv6 address
>>  >selection, firewalls that block traffic in arbitrary ways, and
>>  >interception proxies that alter traffic.  DNS naming looks fairly
>>  >ugly
>>
>>  They are all part and parcel of the same problem:  The 'Net only has
>>  half an addressing architecture.
>
>is there an implemented example of a net with a whole addressing
>architecture that could scale as well as the Internet?

Not sure. Actually you bring up a good point.  I was noticing 
some time ago that with operating systems we probably had tried 20 or 
more different designs before we started to settle on the 
Multics/Unix line.  But with networks we only had two or three 
examples before we fixed on one and one could argue that we haven't 
fundamentally changed since 1972.  We haven't explored the solution 
space very much at all.  Who knows there could be 100 different 
approaches that are better than this one.  Actually, the Internet 
hasn't scaled very well at all.  Moore's Law has but without it I 
don't think you would find that the Internet was doing all that well. 
Suppose that instead of turning the Moore's Law crank 15 times in the 
last 35 years, we had only turned it 3, but were still trying to do 
as much with the 'Net.

>
>>  >>RFC 1498
>>  >
>>  >Oh, that.  Rereading it, I think its concept of nodes is a bit
>>  >dated. But otherwise it's still prescient, it's still useful, and
>>  >nothing we've built in the Internet really gets this.  It's
>>  >saddening to read this and realize that we're still conflating
>>  >concepts that need to be kept separate (like nodes and attachment
>>  >points, and occasionally nodes and services).
>>  >
>>  >Of course, RFC 1498 does not describe an architecture.  It makes
>>  >good arguments for what kinds of naming we need in a network
>>  >protocol suite (applications would need still more kinds of naming,
>>  >because users do), but it doesn't explain how to implement those
>>  >bindings and make them robust, scalable, secure.  It's all well and
>>  >good to say that a node needs to be able to keep its identity when
>>  >it changes attachment points but that doesn't explain how to
>>  >efficiently route traffic to that node across changes in attachment
>>  >points.  etc.
>>
>>  Gee, you want Jerry to do *all* the work for you! ;-)  Given that we
>>  haven't done it, maybe that is the problem:  No one in the IETF knows
>>  how to do it.
>
>or alternately - it's easy to design the perfect theoretical system if
>you make the hard problems out of scope.  then you (or your admirers)
>can claim that because nobody manages to actually implement it, that
>nobody is smart enough to appreciate your design's elegance :)
>
>but maybe that's not really the case here.   I do think it's a useful
>exercise to try to describe how Jerry's design - or something close to
>it - could be practically, reliably, scalably implemented by adapting
>the existing internet protocols and architecture.

I completely agree.  To your first point:  It is almost traditional 
in science for one person to throw out a new idea and then let others 
propose ways to realize it or confirm it.  Notice Einstein didn't 
prove relativity, others did; etc.

It would have been nice if he had saved us the trouble, but a little 
hard work never hurt!  He was clearly sorting out the problems as 
well.  And as I indicated in another email on this thread, he missed 
multiple paths between next hops, which I think really cements his 
model.

>
>>
>>  >Also, there are valid reasons why we sometimes (occasionally) need
>>  >to conflate those concepts.  Sometimes we need to send traffic to a
>>  >service or service instance, sometimes we need to send traffic to a
>>  >node, sometimes we need to send traffic to an attachment point (the
>>  >usual reasons involve network or hardware management).
>>
>>  I don't believe it.  If you think we need to conflate these concepts
>>  then you haven't thought it through carefully enough.  We always send
>>  traffic to an application.  Now there may be some things you weren't
>>  thinking of as applications, but they are.
>
>if you were a developer who routinely built applications that
>consisted of large numbers of processes on different hosts that talk to
>each other, you'd probably want to call those things something besides
>applications.  :)

Yea, in fact I have a much more refined terminology I prefer, but as 
I said above, I was using "application" as shorthand for 
application-process.

>  > >It's interesting to reflect on how a new port extension mechanism,
>>  >or replacement for ports as a demux token, would work if it took RFC
>>  >1498 in mind.  I think it would be a service (instance) selector
>>  >(not the same thing as a protocol selector) rather than a port
>>  >number relative to some IP address.  The service selectors would
>>  >need to be globally unique so that a service could migrate
>>
>>  You still need ports.  But you need to not conflate ports and
>>  application naming.
>>
>>  >from one node or attachment point to another.  There would need to
>>  >be some way of doing distributed assignment of service selectors
>>  >with a reasonable expectation of global uniqueness,
>>
>>  service selectors?  No.  Application-names, yes.
>
>I think we're just arguing about what to call them.  I don't want to
>call them application names partially because it's too easy to think of
>an application name as something like "ftp" when that's the wrong level
>of precision, and partially because to me an application is larger than
>any single process that implements it.

Well, I don't like selectors because the term too strongly implies some 
kind of central authority.  I don't want someone who wants to deploy a 
distributed application with his own protocol to have to see anyone 
to do it.  As I said earlier, selectors and WKPs are like "hard-wiring 
low core."

Take care,
john

From rhee at ncsu.edu  Fri Aug 11 13:44:58 2006
From: rhee at ncsu.edu (rhee@ncsu.edu)
Date: Fri, 11 Aug 2006 16:44:58 -0400 (EDT)
Subject: [e2e] Distribution of mean end-to-end delay!
In-Reply-To: <b2dc5c1d0608110926k22930c08l77ab9221772cbe4f@mail.gmail.com>
References: <b2dc5c1d0608110926k22930c08l77ab9221772cbe4f@mail.gmail.com>
Message-ID: <1997.211.48.230.94.1155329098.squirrel@webmail.ncsu.edu>

The following paper has some discussion on the measured distribution of
network delays in the Internet.

Jay Aikat, Jasleen Kaur, F. Donelson Smith, and Kevin Jeffay, "Variability
in TCP Round-trip Times", ACM IMC 2003.

In our simulation work, we typically use a lognormal distribution within a
range of [1ms, 350ms].
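
For what it's worth, that kind of model is easy to sketch.  The range is
the one quoted above; the lognormal parameters below are purely
illustrative assumptions:

```python
import random

def sample_delay_ms(mu=3.0, sigma=1.0, lo=1.0, hi=350.0):
    """Draw one end-to-end delay (ms) from a lognormal distribution,
    truncated by rejection to [lo, hi].  mu/sigma parameterize the
    underlying normal and are illustrative, not measured values."""
    while True:
        d = random.lognormvariate(mu, sigma)
        if lo <= d <= hi:
            return d

# A batch of simulated per-packet delays.
delays = [sample_delay_ms() for _ in range(1000)]
```

Rejection sampling keeps every draw inside the quoted [1ms, 350ms] range;
a truncated inverse-CDF would do just as well.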

> Hi all,
>   I am new to this list and have a few queries. I shall be thankful if
> they
> are addressed:
>
> 1) What statistical distribution(s) are normally used to simulate
> end-to-end
> Internet delays. For instance, how do various packet generators simulate
> this phenomenon?
> 2) How can one model the distribution of mean one-way (or RTT) end2end
> delay
> where the mean comes from a relatively large number of samples (300-1000
> packets) And the number of means is fair enough too (around 2000 lets say)
> as well.
> 3) How does mean end2end delay correlate with the end2end packet loss (if
> at
> all). Once again the mean is of a sample size ranging between 200-1000
> packets.
> 4) Can someone provide information about a scholarly article that
> addresses
> these issues.
>
> I shall be grateful for any responses.
>
> Regards,
> Adil Raja
>



From mellia at tlc.polito.it  Fri Aug 11 14:29:37 2006
From: mellia at tlc.polito.it (Mellia Marco)
Date: Fri, 11 Aug 2006 23:29:37 +0200
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DB5786.50103@tm.uka.de>
References: <44DB5786.50103@tm.uka.de>
Message-ID: <44DCF6C1.8090302@tlc.polito.it>


We have developed a tool to passively monitor traffic, called Tstat - 
TCP Statistic and Analysis Tool.
Among other measurement indices, Tstat monitors TCP anomalies, e.g., 
packet retransmission by RTO, FR, unnecessary retransmission, network 
reordering, duplicates, etc.
We presented a summary of our work at ICC and TiWDC, and are preparing an 
extended version of the paper for a journal.
You can find details at
http://tstat.tlc.polito.it
which includes a set of measurements we are continuously updating from 
different probe points in the network. Just select the main picture in 
the middle, then select a trace, and under the Stats:TCP menu, select
"Total number of anomalies".

 From the web page you also have access to all the papers we have published, 
including the two mentioned above. I can send you the extended version 
of the paper if you want.

Hope this helps,
Marco

> Hello e2e folks,
> 
> does anyone know about recent measurements on end-to-end packet loss in
> the Internet?
> 
> After some more or less unsuccessful searching, I'm wondering whether
> there is anything more current than, e.g., Vern Paxson's Ph.D. thesis.
> This thesis covers Internet packet loss quite extensively, but it dates
> back to 1997 (the measurements are actually from 1994/1995) and the
> Internet has evolved since then.  More recent work is kind of sparse...
> 
> If someone could provide a pointer, I'd really appreciate that.
> 
> Thanks a lot,
> - Christian
> 


From pganti at gmail.com  Fri Aug 11 17:07:37 2006
From: pganti at gmail.com (Paddy Ganti)
Date: Fri, 11 Aug 2006 17:07:37 -0700
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DCF6C1.8090302@tlc.polito.it>
References: <44DB5786.50103@tm.uka.de> <44DCF6C1.8090302@tlc.polito.it>
Message-ID: <2ff1f08a0608111707l6f10feci389ef3795b71c49b@mail.gmail.com>

For an accurate path-based packet loss analysis, refer to the methods laid
out in the paper

http://www.cs.ucsd.edu/~savage/papers/Usits99.pdf

which has some very interesting ideas about measuring packet loss.

-Paddy

ps: tstat looks promising as well.

On 8/11/06, Mellia Marco <mellia at tlc.polito.it> wrote:

>
> We have developed a tool to passively monitor traffic, called Tstat -
> TCP Statistic and Analysis Tool.
> Among other measurement indices, Tstat monitors TCP anomalies, e.g.,
> packet retransmission by RTO, FR, unnecessary retransmission, network
> reordering, duplicates, etc.
> We presented a summary of our work at ICC and TiWDC, and are preparing an
> extended version of the paper for a journal.
> You can find details at
> http://tstat.tlc.polito.it
> which includes a set of measurements we are continuously updating from
> different probe points in the network. Just select the main picture in
> the middle, then select a trace, and under the Stats:TCP menu, select
> "Total number of anomalies".
>
> From the web page you also have access to all the papers we have published,
> including the two mentioned above. I can send you the extended version
> of the paper if you want.
>
> Hope this helps,
> Marco
>
> > Hello e2e folks,
> >
> > does anyone know about recent measurements on end-to-end packet loss in
> > the Internet?
> >
> > After some more or less unsuccessful searching, I'm wondering whether
> > there is anything more current than, e.g., Vern Paxson's Ph.D. thesis.
> > This thesis covers Internet packet loss quite extensively, but it dates
> > back to 1997 (the measurements are actually from 1994/1995) and the
> > Internet has evolved since then.  More recent work is kind of sparse...
> >
> > If someone could provide a pointer, I'd really appreciate that.
> >
> > Thanks a lot,
> > - Christian
> >
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060811/0655da82/attachment.html

From day at std.com  Sat Aug 12 07:42:39 2006
From: day at std.com (John Day)
Date: Sat, 12 Aug 2006 10:42:39 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060811152044.18a57c03.moore@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]>	<44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>
	<20060811114619.8b24f93e.moore@cs.utk.edu>
	<a06230913c102707c3871@[10.0.1.24]>
	<20060811152044.18a57c03.moore@cs.utk.edu>
Message-ID: <a0623092dc10395323ea5@[10.0.1.24]>

At 15:20 -0400 2006/08/11, Keith Moore wrote:
>  > >sometimes the fixes are more complex and/or less reliable than the
>>  >kludges, and sometimes they don't provide much more in the way of
>>  >practical functionality.  or maybe they do provide a benefit in the
>>  >long-term, but little benefit in the near-term so the incentives aren't
>>  >in the right places to get them implemented.
>>
>>  Well, what I meant by that is there something fundamentally wrong. If
>>  you have kludges it is either because you are doing something you
>>  shouldn't (like passing IP addresses in application protocols) or
>>  there is something about how the architecture should be structured
>>  that you don't understand yet.
>
>It is very hard to avoid passing IP addresses (or names of attachment
>points) in applications protocols.  Sure you can pass some other kind of
>identifier, but that requires a fast, up-to-date, accurate, robust,
>scalable indirection service.  It's really difficult to provide that
>service in such a way that's suitable for a wide range of

Yes, but there seems to be a contradiction.  Are we saying we have to 
use IP addresses because it is impossible to build a system that 
would provide the fast, up-to-date, accurate, robust, scalable 
indirection service?  Sort of begs the question as to why IP 
addresses work then.  If anything, they should change more frequently 
than some form of "application-identifier".

Passing IP addresses in applications is like passing physical memory 
pointers in Java.
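
To make the analogy concrete, here is a minimal sketch (the hostname and
port are illustrative) of the late-binding alternative: a referral carries
a name and is resolved only at use time, so the binding to an attachment
point is never frozen into the application protocol:

```python
import socket

def resolve_referral(name, port):
    """Resolve a referral (a name, not an address) at use time.
    Each call re-queries the resolver, so the peer may have moved
    to a different attachment point since the referral was issued."""
    infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    return [info[4][0] for info in infos]  # current attachment points

# The referral stores only the name; addresses are looked up when needed.
referral = ("localhost", 8080)
addresses = resolve_referral(*referral)
```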


>applications.  (DNS, for example, falls way short of being able to do
>this).  If you can't provide such a service then you need to let
>applications build their own ways of finding the locations of their
>intended peers, and that requires exposing attachment points to
>applications.  From this point of view DNS is just another application,
>and just one of many ways for an application to learn IP addresses of
>intended peers.  Also, even if ordinary apps don't need to deal with
>names of attachment points, some apps (such as network management apps)
>do.  Insisting on rigid layer separation makes the system unnecessarily
>complex and less adaptable to unanticipated needs.

I don't see a problem.  What network management works with is not 
addresses.  It may use addresses or names to establish communication, 
but what it uses to do management is completely different.  Don't 
confuse the two.  Contrary to what 802 does, addresses are not serial 
numbers.

>
>>  (Sorry I am an optimist.  I don't believe that the right answer has
>>  to be kludgey.)
>
>Whether something is a kludge is a matter of judgement.  Notions of
>purity do not always agree with cost-vs-benefit evaluation.

Yea, I know.  It sometimes seems that way, but in my experience it 
usually turns out that it was a case of, shall we say, misplaced 
purity.

>  > Suppose that instead of turning the Moore's Law crank 15 times in the
>>  last 35 years, we had only turned it 3, but were still trying to do
>>  as much with the 'Net.
>
>Sort of a meaningless question, since without the low cost in cpu
>cycles there's no way we'd be sending live video over the net, probably
>not much high quality audio either because effective compression would
>still be fairly expensive.  and bit rates would be lower.  we might
>still have the web (with mostly text content) but we wouldn't have
>really effective large-scale search engines.    it would be really
>difficult to make web sites scalable for large populations. we'd see
>less dynamic content, more replication of resources to geographically
>diverse sites.  if we somehow managed to have as many hosts as we have
>in our current network, we would be doing routing very differently -
>relying much more on hierarchy because that's the only way we could
>compute forwarding tables.

My point was that if Moore's Law hadn't been turning quite so fast, we 
would have had to solve some of the scaling and other problems 
long before now.  Moore's Law has allowed some to argue 
that we don't have to solve these hard problems; just throw hardware 
at them.  Or perhaps more correctly, the research guys aren't solving 
the problem, so the guys building product have to do something, so 
they throw hardware at it and for the time being the problem is 
"solved."

>  > >  > >It's interesting to reflect on how a new port extension mechanism,
>>  >>  >or replacement for ports as a demux token, would work if it took RFC
>>  >>  >1498 in mind.  I think it would be a service (instance) selector
>>  >>  >(not the same thing as a protocol selector) rather than a port
>>  >>  >number relative to some IP address.  The service selectors would
>>  >>  >need to be globally unique so that a service could migrate
>>  >>
>>  >>  You still need ports.  But you need to not conflate ports and
>>  >>  application naming.
>>  >>
>>  >>  >from one node or attachment point to another.  There would need to
>>  >>  >be some way of doing distributed assignment of service selectors
>>  >>  >with a reasonable expectation of global uniqueness,
>>  >>
>>  >>  service selectors?  No.  Application-names, yes.
>>  >
>>  >I think we're just arguing about what to call them.  I don't want to
>>  >call them application names partially because it's too easy to think of
>>  >an application name as something like "ftp" when that's the wrong level
>>  >of precision, and partially because to me an application is larger than
>>  >any single process that implements it.
>>
>>  Well, I don't like selectors because it too strongly implies some
>>  kind of central authority.  I don't want some one who wants to put a
>>  distributed application with his own protocol to have to see anyone
>>  to do it.
>
>ah, not my intent at all.  by selector I really meant just another kind
>of demux token, but one that is globally unique rather than relative to
>the concept of 'host'.  (though it occurs to me that it's difficult to
>assure global uniqueness of name-to-process bindings for processes
>that can migrate from one host to another without causing some fairly
>serious problems)

It is an interesting problem.

Take care,
John

From adilraja at gmail.com  Sun Aug 13 13:12:31 2006
From: adilraja at gmail.com (Adil Raja)
Date: Sun, 13 Aug 2006 21:12:31 +0100
Subject: [e2e] Correlation between Packet loss rate and self similarity!
Message-ID: <b2dc5c1d0608131312w76e0b771w8f0faad3bb2d6bb8@mail.gmail.com>

Hi all,
   Could someone suggest any articles that quantify the
correlation between self-similarity (Hurst parameter) and packet loss rate?

Thanks,
Adil Raja
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060813/0407727f/attachment.html

From matta at cs.bu.edu  Sun Aug 13 15:50:46 2006
From: matta at cs.bu.edu (Abraham Matta)
Date: Sun, 13 Aug 2006 18:50:46 -0400
Subject: [e2e] Correlation between Packet loss rate and self similarity!
In-Reply-To: <b2dc5c1d0608131312w76e0b771w8f0faad3bb2d6bb8@mail.gmail.com>
Message-ID: <0511C607B17F804EBE96FFECD1FD9859E210A8@cs-exs2.cs-nt.bu.edu>

 
Dear Adil,

You may look at the following article, which relates packet loss rate to
the "high variability" of TCP traffic (note: NOT "self-similarity" due
to limits on RTO etc.):

D. R. Figueiredo, B. Liu, V. Misra, and D. Towsley. On the
Autocorrelation Structure of TCP Traffic. Technical Report
UMass-CMPSC-00-55, University of Massachusetts,
Amherst, Computer Science Department, November 2000.

http://www.cs.uml.edu/~bliu/pub/COMNET-2002.pdf


ibrahim

________________________________

From: end2end-interest-bounces at postel.org
[mailto:end2end-interest-bounces at postel.org] On Behalf Of Adil Raja
Sent: Sunday, August 13, 2006 4:13 PM
To: end2end-interest at postel.org
Subject: [e2e] Correlation between Packet loss rate and self similarity!


Hi all,
   Could someone suggest any articles that quantify the
correlation between self-similarity (Hurst parameter) and packet loss
rate?

Thanks,
Adil Raja


From mellia at tlc.polito.it  Mon Aug 14 05:27:24 2006
From: mellia at tlc.polito.it (Mellia Marco)
Date: Mon, 14 Aug 2006 14:27:24 +0200
Subject: [e2e] Correlation between Packet loss rate and self similarity!
In-Reply-To: <b2dc5c1d0608131312w76e0b771w8f0faad3bb2d6bb8@mail.gmail.com>
References: <b2dc5c1d0608131312w76e0b771w8f0faad3bb2d6bb8@mail.gmail.com>
Message-ID: <44E06C2C.70100@tlc.polito.it>

You may find some results in

M.Mellia, M.Meo, L.Muscariello, D.Rossi,
"Passive Identification and Analysis of TCP anomalies"
IEEE ICC 06, Istanbul, June 2006
http://tstat.tlc.polito.it/~mellia/icc06_passive.pdf


> Hi all,
>    Could someone suggest any articles that quantify the
> correlation between self-similarity (Hurst parameter) and packet loss rate?
> 
> Thanks,
> Adil Raja


From day at std.com  Mon Aug 14 07:02:06 2006
From: day at std.com (John Day)
Date: Mon, 14 Aug 2006 10:02:06 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44DDFDD9.9060205@cs.utk.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>	<a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]>	<44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>
	<20060811114619.8b24f93e.moore@cs.utk.edu>
	<a06230913c102707c3871@[10.0.1.24]>
	<20060811152044.18a57c03.moore@cs.utk.edu>
	<a0623092dc10395323ea5@[10.0.1.24]> <44DDFDD9.9060205@cs.utk.edu>
Message-ID: <a06230944c1062e88c01b@[10.0.1.24]>

At 12:12 -0400 2006/08/12, Keith Moore wrote:
>>>It is very hard to avoid passing IP addresses (or names of
>>>attachment points) in applications protocols.  Sure you can pass
>>>some other kind of identifier, but that requires a fast,
>>>up-to-date, accurate, robust, scalable indirection service.  It's
>>>really difficult to provide that service in such a way that's
>>>suitable for a wide range of
>>
>>Yes, but there seems to be a contradiction.  Are we saying we have to
>>  use IP addresses because it is impossible to build a system that
>>would provide the fast, up-to-date, accurate, robust, scalable
>>indirection service?
>
>perhaps not impossible, but we don't have a worked example.
>
>>Sort of begs the question as to why IP addresses work then.   If
>>anything, they should change more frequently than some form of 
>>"application-identifier"
>
>not really, because the current internet doesn't support mobile 
>processes.  IP addresses work well enough for referrals in an 
>internet where hosts generally have fixed locations, processes 
>cannot be mobile, there is only a single global addressing realm, 
>and the network itself is small and/or relatively static (thus not 
>requiring renumbering).  if they didn't work "well enough" under 
>those conditions, we wouldn't still be able to get away with using 
>DNS.
>
>today an application identifier (not that I would call them that) 
>would be analogous to IP address + port (not necessarily well known 
>port). but in a network that supported mobile processes, you would 
>not want to tie such an identifier to an IP address, nor would you 
>want to force them to be as ephemeral as port assignments. 
>basically you end up with something like a UUID except that you need 
>some way to facilitate lookup at a trusted lookup server.  so maybe 
>a combination of a UUID and a DNS name.

Agreed.  Makes for an interesting problem, doesn't it?  Thinking this 
one through, taking into account the nature of its use, will tell us 
much about networks that we had not previously understood.  One of 
the problems I see in all of this is that we concentrate too much on 
solving specific problems and not enough on deepening our understanding 
of what we are doing.  Pursuing the latter might actually lead to much 
better solutions than the more immediate approach.  In other words, more 
science and less engineering.
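
A toy sketch of the UUID-plus-lookup-name scheme described above; the
`LookupService` class, its methods, and all the addresses are invented
purely for illustration:

```python
import uuid

class LookupService:
    """Toy stand-in for the trusted lookup server mentioned above:
    maps a stable application identifier to its current location."""
    def __init__(self):
        self._table = {}

    def register(self, app_id, address, port):
        self._table[app_id] = (address, port)   # process (re)announces itself

    def locate(self, app_id):
        return self._table[app_id]              # current attachment point

# The identifier is stable; the attachment point is not.
app_id = (uuid.uuid4(), "example.org")          # UUID + DNS name of the lookup server
svc = LookupService()
svc.register(app_id, "10.0.1.24", 40001)
svc.register(app_id, "10.0.2.7", 40002)         # process migrated; identifier unchanged
addr = svc.locate(app_id)
```

Referrals would carry `app_id`, never `addr`, so migration does not break them.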

>
>>Passing IP addresses in applications is like passing physical memory
>>  pointers in Java.
>
>it's like passing physical memory pointers in an environment where 
>all objects are heap allocated and the system garbage collector is 
>free to move objects at any time.  (which might or might not be the 
>case for a Java implementation)

Managing virtual memory has been understood for some time.  I can't 
imagine that a Java implementation could not handle such a thing.

>>>Also, even if ordinary apps don't
>>>need to deal with names of attachment points, some apps (such as
>>>network management apps) do.  Insisting on rigid layer separation
>>>makes the system unnecessarily complex and less adaptable to
>>>unanticipated needs.
>>
>>I don't see a problem.  What network management works with are not 
>>addresses.  It may use addresses or names to establish 
>>communication,
>>  but what it uses to do management are completely different.  Don't 
>>confuse the two.  Contrary to what 802 does, addresses are not 
>>serial
>>  numbers.
>
>network management does work with addresses, because one of the 
>things network management cares about are _where_ things are 
>attached to the network.  it also cares about _what_ is attached to 
>the network, but the most reliable way of determining what is 
>attached at any point on the network is to talk to that attachment 
>point.

I think we are speaking of different things.  You seem to be looking 
at it from the point of view of things today, and I am looking at it 
from the point of view of how it should be (a hopelessly idealistic 
streak I have ;-)).  Management does care about how things are attached 
and should be assigning addresses accordingly.  But I think you have 
the causality backwards.  Addresses are assigned based on where 
something is attached, not the other way around.  The address does not 
determine where it is attached.  Management must know where it is 
attached before the address can be assigned.  It is a quirk of history 
that we tended to do it the other way.

>>My point was that if Moore's Law hadn't been turning quite so fast we
>>  would have had to have solved some of the scaling and other problems
>>  long before now.
>
>yes, but we would have "solved" many of those problems by 
>introducing other constraints.  e.g. we might have insisted that IP 
>addresses be stable identifiers.  then we would have outlawed NATs 
>and forced static routing on the net but also said that we don't 
>need multihoming or mobility support at the internet level because 
>they're just too expensive.

Huh?  You mean like so-called MAC addresses, which are really serial 
numbers and not addresses at all?

Actually, I would contend that addresses from any 
*address space* are always static.  They always identify the *same* 
place in the *topology* of the network (not the graph).  The only 
question is whether there is an entity at that "place" to be 
identified.  ;-)

I was referring more to various scaling problems in routing, such as 
table size.  Moore's Law allowed IP addresses to remain a flat 
address space far longer than was healthy.  We never had to come 
to grips with the scaling problems inherent in what we were creating. 
If we had had to solve some of these problems, it would have cleared 
up others along the way.
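
The table-size point can be illustrated with a small sketch (the
addresses are made up): addresses assigned by place in the topology
aggregate into a single routing entry, while the same number of flat,
serial-number-style addresses each need their own entry:

```python
import ipaddress

# 256 hosts numbered consecutively, as topological address assignment yields.
topological = [ipaddress.ip_network("10.1.2.%d/32" % i) for i in range(256)]

# 256 hosts scattered across the space, as a flat (serial-number) assignment yields.
flat = [ipaddress.ip_network("10.%d.%d.1/32" % (i // 16, i % 16)) for i in range(256)]

topo_routes = list(ipaddress.collapse_addresses(topological))  # aggregates to one /24
flat_routes = list(ipaddress.collapse_addresses(flat))         # one entry per host
```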

Take care,
John


From pekka.nikander at nomadiclab.com  Mon Aug 14 12:21:05 2006
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Mon, 14 Aug 2006 22:21:05 +0300
Subject: [e2e] About the primitives and their value
In-Reply-To: <44DB94EF.5020300@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu>
Message-ID: <3000A686-0FE4-4E70-A7B7-0B008C6E3AAB@nomadiclab.com>

Joe,

> If we saw a paradigm that didn't relocate the problem (e.g., as
> publish/subscribe does), sure. I haven't seen one yet. From an
> information theoretic view, I'm not sure it's possible, either, but  
> I'd
> be glad to see suggestions.

I'm afraid you may be underestimating the value relocating a problem  
might have.

Having a tussle locally, where there may be some jurisdiction that  
applies to both parties, may be easier to solve than having a tussle  
that must be fought over the network, with many interest holders and  
jurisdictions over the way.

Hence, while pub/sub may "just" relocate "the problem"  
compared to send/receive, that relocation may have real-life effects  
on the scale of macro-economic problems (spam) caused by "the  
problem" (the cost of sending being lower than the cost of receiving).

In other words, IMHO, it is important for us to understand that macro  
behaviour results as an emergent property from individual  
micro-behaviours and strategies, which in turn are affected by the  
local tussle grounds created by the limitations posed by the technical  
system AND the local jurisdictions (and other factors, such as  
culture in the large).  If a technical system manages to limit some  
problem so that it can be localised to apply between two  
mutually-strongly-identifiable parties, preferably under a single  
jurisdiction, that changes the tussle grounds by creating larger  
possibilities for local retaliation, acting as a disincentive for  
misbehaviour.  That in turn may have sizeable visible effects on the  
macro behaviour, since it has the potential of changing "good  
behaviour" from a bad strategy into a sustainable one.

A partial reading list for me to still wade through: Schelling,  
Micromotives and Macrobehavior; Young, Individual Strategy and  
Social Structure; and Dörner, The Logic of Failure.  Plus of course  
Axelrod, which I already referred to.

--Pekka


From touch at ISI.EDU  Mon Aug 14 16:46:08 2006
From: touch at ISI.EDU (Joe Touch)
Date: Mon, 14 Aug 2006 16:46:08 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <3000A686-0FE4-4E70-A7B7-0B008C6E3AAB@nomadiclab.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu>
	<3000A686-0FE4-4E70-A7B7-0B008C6E3AAB@nomadiclab.com>
Message-ID: <44E10B40.1000207@isi.edu>



Pekka Nikander wrote:
> Joe,
> 
>> If we saw a paradigm that didn't relocate the problem (e.g., as
>> publish/subscribe does), sure. I haven't seen one yet. From an
>> information theoretic view, I'm not sure it's possible, either, but I'd
>> be glad to see suggestions.
> 
> I'm afraid you may be underestimating the value relocating a problem
> might have.

Relocation can have a benefit if the issue is whether the receiver
can keep pace with unsolicited input.  It does not remove or reduce the
amount of unsolicited input that must be handled.  I.e., affecting scale
helps only where scale is the perceived problem.  Simply shuffling the
unsolicited requests around doesn't reduce their number.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 250 bytes
Desc: OpenPGP digital signature
Url : http://mailman.postel.org/pipermail/end2end-interest/attachments/20060814/8dc85e50/signature.bin

From pekka.nikander at nomadiclab.com  Mon Aug 14 22:04:50 2006
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Tue, 15 Aug 2006 08:04:50 +0300
Subject: [e2e] About the primitives and their value
In-Reply-To: <44E10B40.1000207@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu>
	<3000A686-0FE4-4E70-A7B7-0B008C6E3AAB@nomadiclab.com>
	<44E10B40.1000207@isi.edu>
Message-ID: <A02C2A9A-F833-49BD-9BCF-7AA95B23131A@nomadiclab.com>

>> I'm afraid you may be underestimating the value relocating a problem
>> might have.
>
> Simply shuffling the unsolicited requests around doesn't reduce the  
> number of them.

I read this as you failing to understand, or failing to want to  
understand, my point, for one reason or another.  If I am mistaken, my  
apologies.

Technically, you are right, of course.  Changing primitives doesn't  
remove any traffic that someone sends.

What I am trying to say is that moving the problem around or changing  
the technical primitives may change the situation so that some people  
no longer want to send bad traffic, due to reduced potential  
benefit or increased risks.  Furthermore, I am also trying to say  
that if enough people change their minds in this way, the large-scale  
behaviour as observed may change drastically.

--Pekka


From gds at best.com  Tue Aug 15 00:11:58 2006
From: gds at best.com (Greg Skinner)
Date: Tue, 15 Aug 2006 07:11:58 +0000
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230908c10238761716@[10.0.1.24]>;
	from day@std.com on Fri, Aug 11, 2006 at 10:15:17AM -0400
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>
	<a06230917c0fc6ced8fdc@[10.0.1.12]> <44D892E9.9060403@isi.edu>
	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]>
	<44DACC0E.608@cs.utk.edu> <a06230908c10238761716@[10.0.1.24]>
Message-ID: <20060815071158.A85157@gds.best.vwh.net>

On Fri, Aug 11, 2006 at 10:15:17AM -0400, John Day wrote:
> Again, if you analyze what goes on in a single system, you will find 
> that the mechanism for finding the destination application and 
> determining whether the requesting application can access it is 
> distinct and should be distinct from the problem of providing a 
> channel with specific performance properties. I completely agree with 
> everything you said above, but TCP is not and should not be the whole 
> IPC. It just provides the reliable channel.
> 
> Harkening back to the historical discussion, this was all clear 35 
> years ago but it was a lot of work to build a network on machines 
> with much less compute power than your cell phone and we didn't have 
> time for all of the niceties.  We had to show that this was useful 
> for something.  So we (like all engineers) took a few short cuts 
> knowing they weren't the final answer.  It is just unfortunate that 
> the people who came later have not put it right.

I'm somewhat confused at where this general line of reasoning is
going.  You're making an argument for some generalized notion of
well-known services.  I wonder if the effort is really worth it (when
weighed against other things that need to be done).  There's been
talk of spam, for example, and how a different notion of well-known
services than what we have now would reduce the spam problem.  I'm not
so sure.  The problem with spam seems to stem from (1) email is free,
and (2) it's easy to write applications that compromise machines that
take advantage of (1).

If a more generalized notion of well-known service had been developed,
would other things have been able to progress as well as they did?
Other things needed to be built in order for the ARPAnet and Internet
to be useful (especially to the people paying for the research) such
as scalable naming and congestion avoidance.  There is only so much
time (money) available to do work.  Are we really that much worse off
because we have well-known ports?

> At 2:02 -0400 2006/08/10, Keith Moore wrote:
> >Which is a good thing, since we generally do not have the luxury of 
> >either doing things right the first time (anticipating all future 
> >needs) or scrapping an old architecture and starting again from 
> >scratch. Software will always be working around architectural 
> >deficiencies. Similarly we have to keep compatibility and transition 
> >issues in mind when considering architectural changes.
> Be careful.  We do and we don't.  I have known many companies that 
> over time have step by step made wholesale *replacement* of major 
> parts of their products as they transition.  Sometimes maintaining 
> backward compatibility, sometimes not.  But new releases come out 
> with completely new designs for parts of the system.  You are arguing 
> that nothing is ever replaced and all change is by modifying what is 
> there.  This is how evolution works.  And 99% of its cases end as 
> dead-ends in extinction.  With evolution, it doesn't matter there are 
> 100s of billions of cases.  But when there is one case, the odds 
> aren't too good.  (And don't tell me not to worry because the actions 
> of the IETF are not random mutations.  There are those that would 
> dispute that! ;-))

But after a certain point, the Internet could not be completely
replaced.  There was too much "infrastructure" that people were
dependent on, so new work continued around the architectural
deficiencies.  It would be nice to apply Fred Brooks' "plan to
throw one away," but that's not always practical, even in research.

--gregbo

From day at std.com  Tue Aug 15 07:14:33 2006
From: day at std.com (John Day)
Date: Tue, 15 Aug 2006 10:14:33 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060815071158.A85157@gds.best.vwh.net>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net> <a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu> <44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]> <44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>
	<20060815071158.A85157@gds.best.vwh.net>
Message-ID: <a06230900c1078120632d@[10.0.1.24]>

At 7:11 +0000 2006/08/15, Greg Skinner wrote:
>On Fri, Aug 11, 2006 at 10:15:17AM -0400, John Day wrote:
>>  Again, if you analyze what goes on in a single system, you will find
>>  that the mechanism for finding the destination application and
>>  determining whether the requesting application can access it is
>>  distinct and should be distinct from the problem of providing a
>>  channel with specific performance properties. I completely agree with
>>  everything you said above, but TCP is not and should not be the whole
>>  IPC. It just provides the reliable channel.
>>
>>  Harkening back to the historical discussion, this was all clear 35
>>  years ago but it was a lot of work to build a network on machines
>>  with much less compute power than your cell phone and we didn't have
>>  time for all of the niceties.  We had to show that this was useful
>>  for something.  So we (like all engineers) took a few short cuts
>>  knowing they weren't the final answer.  It is just unfortunate that
>>  the people who came later have not put it right.
>
>I'm somewhat confused at where this general line of reasoning is
>going.  You're making an argument for some generalized notion of
>well-known services.  I wonder if the effort is really worth it (when
>weighed against other things that need to be done).  There's been
>talk of spam, for example, and how a different notion of well-known
>services than what we have now would reduce the spam problem.  I'm not
>so sure.  The problem with spam seems to stem from (1) email is free,
>and (2) it's easy to write applications that compromise machines that
>take advantage of (1).
>
>If a more generalized notion of well-known service had been developed,
>would other things have been able to progress as well as they did?
>Other things needed to be built in order for the ARPAnet and Internet
>to be useful (especially to the people paying for the research) such
>as scalable naming and congestion avoidance.  There is only so much
>time (money) available to do work.  Are we really that much worse off
>because we have well-known ports?

I am arguing for a complete addressing architecture, not the half of 
an architecture we have.  What we have is an unfinished demo.  If it 
were an OS, it would make DOS look good.

While I understand that your appraisal of how the funding was being 
allocated is completely rational, it bears little resemblance to 
reality.

What I find really remarkable is the inability of current researchers 
to see beyond what is there.  It is interesting that they are so 
focused on current developments that they are unable to see beyond 
them.

Are we really worse off for having half an architecture?  Yes.

>
>>  At 2:02 -0400 2006/08/10, Keith Moore wrote:
>>  >Which is a good thing, since we generally do not have the luxury of
>>  >either doing things right the first time (anticipating all future
>>  >needs) or scrapping an old architecture and starting again from
>>  >scratch. Software will always be working around architectural
>>  >deficiencies. Similarly we have to keep compatibility and transition
>>  >issues in mind when considering architectural changes.
>>  Be careful.  We do and we don't.  I have known many companies that
>>  over time have step by step made wholesale *replacement* of major
>>  parts of their products as they transition.  Sometimes maintaining
>>  backward compatibility, sometimes not.  But new releases come out
>>  with completely new designs for parts of the system.  You are arguing
>>  that nothing is ever replaced and all change is by modifying what is
>>  there.  This is how evolution works.  And 99% of its cases end as
>>  dead-ends in extinction.  With evolution, it doesn't matter there are
>>  100s of billions of cases.  But when there is one case, the odds
>>  aren't too good.  (And don't tell me not to worry because the actions
>>  of the IETF are not random mutations.  There are those that would
>>  dispute that! ;-))
>
>But after a certain point, the Internet could not be completely
>replaced.  There was too much "infrastructure" that people were

A common myth intended to protect vested interests.

>dependent on, so new work continued around the architectural
>deficiencies.  It would be nice to apply Fred Brooks' "plan to
>throw one away," but that's not always practical, even in research.

;-)  This is the argument I always find the most entertaining.  We 
have been hearing it since the early 1980s.  It assumes that the Net 
is nearing the end of its growth; that we have done just about 
everything with it we can; that Telnet and FTP are all we need (a 
view expressed to me in 1975).  Hence there is no reason to go on 
improving.  Frankly, I believe that the Net is still in the earliest 
stages of its growth.

I don't expect to see a wholesale replacement overnight.  But I do 
think it is possible to move to a much better Net over time.  Those 
who rely on the argument that there is just too much infrastructure 
in place, etc., for change generally are the ones who either lack 
imagination or are merely protecting their vested interests (I hope 
more the latter than the former).  It would seem that the Internet 
has become even more blinded to the next step than the phone companies 
of the 70s were when we first started this.  When I look back over the 
last 25 years, it saddens me to see a field that had such vibrance in 
its early days fall so quickly into acting like stodgy old men.  
Maybe they just ran out of ideas.  I don't really know.

Take care,
John


From touch at ISI.EDU  Tue Aug 15 07:57:47 2006
From: touch at ISI.EDU (Joe Touch)
Date: Tue, 15 Aug 2006 07:57:47 -0700
Subject: [e2e] About the primitives and their value
In-Reply-To: <A02C2A9A-F833-49BD-9BCF-7AA95B23131A@nomadiclab.com>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu>
	<3000A686-0FE4-4E70-A7B7-0B008C6E3AAB@nomadiclab.com>
	<44E10B40.1000207@isi.edu>
	<A02C2A9A-F833-49BD-9BCF-7AA95B23131A@nomadiclab.com>
Message-ID: <44E1E0EB.7030302@isi.edu>



Pekka Nikander wrote:
>>> I'm afraid you may be underestimating the value relocating a problem
>>> might have.
>>
>> Simply shuffling the unsolicited requests around doesn't reduce the
>> number of them.
> 
> I read this as you failing to understand or failing to want to
> understand my point, for a reason or another.  If I am mistaken, my
> apologies.
> 
> Technically, you are right, of course.  Changing primitives doesn't
> remove any traffic that someone sends.
> 
> What I am trying to say is that moving around the problem or changing
> the technical primitives may change the situation so that some people
> do not want to send bad traffic any more, due to reduced potential
> benefit or increased risks.  Furthermore, I am also trying to say that
> if enough people change their minds in this way, the large-scale
> behaviour as observed may change drastically.

The key is what the problem is:
	1- shedding unsolicited load per se
	2- shedding load at underpowered places
	3- reducing the incentive to attack

Addressing 1 also addresses 2 and 3.

Addressing 2 does not address either 1 or 3. Attackers may still want to
take down Akamai, Google, or Microsoft (and have), even though they're
resource-rich.

IMO, we have to live with unsolicited load everywhere; the key question
is how to incrementally invest in establishing mutual communication in a
way that doesn't create a DOS opportunity. Doing that means attackers
must invest more - as much as a legitimate endpoint - to accomplish an
attack. That presents its own disincentive, plus the longer, more
detailed exchange presents opportunity for tracing.
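One concrete way to force that symmetric investment (a client puzzle in the Hashcash style; this sketch and its names, `issue_puzzle`, `solve_puzzle`, `verify`, and the SHA-256/difficulty choices, are purely illustrative, not anything proposed in this thread) is to have the responder hand out a puzzle that is expensive to solve but nearly free to check, before committing any per-connection state:

```python
import hashlib
import itertools
import os

def issue_puzzle(bits=12):
    """Responder side: a fresh random nonce plus a difficulty level.
    No per-client state needs to be kept yet."""
    return os.urandom(8), bits

def solve_puzzle(nonce, bits):
    """Initiator side: brute-force a suffix until SHA-256(nonce || suffix)
    has `bits` leading zero bits; expected cost is about 2**bits hashes."""
    target = 1 << (256 - bits)
    for i in itertools.count():
        suffix = i.to_bytes(8, "big")
        digest = hashlib.sha256(nonce + suffix).digest()
        if int.from_bytes(digest, "big") < target:
            return suffix

def verify(nonce, bits, suffix):
    """Responder side: a single hash, so checking stays far cheaper
    than solving."""
    digest = hashlib.sha256(nonce + suffix).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```

Raising `bits` doubles the initiator's expected work while verification stays at one hash; that asymmetry is exactly the "attacker must invest as much as a legitimate endpoint" property, though it does nothing about distributed attackers with stolen CPU.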

Joe


From dpreed at reed.com  Tue Aug 15 12:36:37 2006
From: dpreed at reed.com (David P. Reed)
Date: Tue, 15 Aug 2006 15:36:37 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230900c1078120632d@[10.0.1.24]>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>	<44D4C94F.8020801@dcrocker.net>
	<a06230917c0fc6ced8fdc@[10.0.1.12]>	<44D892E9.9060403@isi.edu>
	<44D8AEA2.8040306@cs.utk.edu>	<a06230912c0ffed91dcc2@[10.197.98.218]>	<20060809175005.6255fec9.moore@cs.utk.edu>	<a06230917c10028b893f6@[10.197.98.218]>
	<44DACC0E.608@cs.utk.edu>	<a06230908c10238761716@[10.0.1.24]>	<20060815071158.A85157@gds.best.vwh.net>
	<a06230900c1078120632d@[10.0.1.24]>
Message-ID: <44E22245.5040203@reed.com>

John Day wrote:
>  When I look back over the last 25 years, it saddens me to see a field 
> that had such vibrance in its early days fall so quickly into acting 
> like stodgy old men. Maybe they just ran out of ideas. I don't really 
> know.
I sympathize.  However, when you think about it, the old ones like you 
and I are not particularly stodgy about chucking the first draft of an 
idea and trying again a different way.  It's the youngsters who seem to 
me to be acting stodgy here.  I suspect it's that any design that was built 
before they were in high school seems to be part of the pre-ordained 
natural world, rather than just a science project that turned out to be 
pretty cool.

The youngsters that aren't stodgy are the ones who are hacking synthetic 
biology, making organisms and so on.   It's wide open because it's never 
been done before.   And I'm hoping "mobility" is wide open still, for 
me, but even more so for youngsters.  There's no mobile interoperability 
- and don't give me MobileIP or IMS as the answer...


From pekka.nikander at nomadiclab.com  Tue Aug 15 21:30:33 2006
From: pekka.nikander at nomadiclab.com (Pekka Nikander)
Date: Wed, 16 Aug 2006 07:30:33 +0300
Subject: [e2e] About the primitives and their value
In-Reply-To: <44E1E0EB.7030302@isi.edu>
References: <E1G9zsL-0005Hp-00@mta1.cl.cam.ac.uk>
	<44D73A04.1080003@isi.edu>	<44D7614E.6050202@reed.com>
	<44D8B5D9.2070802@isi.edu>	<44D8C055.9060903@reed.com>
	<44D8C958.9020809@isi.edu>	<44D8CFFC.3060004@reed.com>
	<44D8D308.5020601@isi.edu>
	<4E186EF2-063A-4025-A182-AE712795CE30@nomadiclab.com>
	<44D9F164.1030906@isi.edu>
	<CF0BE656-698F-4D13-B04C-4E2526031AD7@nomadiclab.com>
	<44DA0444.3000103@isi.edu>
	<D21B2505-FEBE-4155-9031-C9E90969D5BF@nomadiclab.com>
	<44DB94EF.5020300@isi.edu>
	<3000A686-0FE4-4E70-A7B7-0B008C6E3AAB@nomadiclab.com>
	<44E10B40.1000207@isi.edu>
	<A02C2A9A-F833-49BD-9BCF-7AA95B23131A@nomadiclab.com>
	<44E1E0EB.7030302@isi.edu>
Message-ID: <63B7A4C9-16EC-4568-A369-ADEFBF1AF864@nomadiclab.com>

> The key is what the problem is:
> 	1- shedding unsolicited load per se
> 	2- shedding load at underpowered places
> 	3- reducing the incentive to attack

Ok, I finally got your point of view, thanks.  Sorry for my  
stubbornness.

> Addressing 1 also addresses 2 and 3.

I think the relationship between 1 and 3 is much more complex,  
working in both directions.

Your focus seems to be in 1, with the intention of thereby working  
out 3.  My focus (at least for the purposes of this discussion) has  
been in looking at the fundamental architectural primitives, with the  
intention of changing the economic game that affects 3, which in turn  
would reduce the need for 1.

To me, your approach seems to be very appropriate for the 
short-to-medium term, solving the practical problems.  OTOH, as you 
stated yourself, it has its limitations; I'm afraid that given the 
game created by the current Internet primitives, the situation is 
unlikely to improve dramatically.  But we can probably live with the 
resulting arms race between the bad guys and the good guys, just as 
we have been doing for the past 10-15 years.

My proposed approach, if doable in the first place, might help in the 
longer run.  However, I can't yet tell whether it would work at all; 
it's too early.  Hence, for the time being it seems like a very 
interesting research direction, gaining some attention among people 
planning to work under the auspices of NSF FIND, GENI, and CEC FP7.

--Pekka


From gds at best.com  Wed Aug 16 14:47:28 2006
From: gds at best.com (Greg Skinner)
Date: Wed, 16 Aug 2006 21:47:28 +0000
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230900c1078120632d@[10.0.1.24]>;
	from day@std.com on Tue, Aug 15, 2006 at 10:14:33AM -0400
References: <a06230917c0fc6ced8fdc@[10.0.1.12]> <44D892E9.9060403@isi.edu>
	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]>
	<44DACC0E.608@cs.utk.edu> <a06230908c10238761716@[10.0.1.24]>
	<20060815071158.A85157@gds.best.vwh.net>
	<a06230900c1078120632d@[10.0.1.24]>
Message-ID: <20060816214728.A37623@gds.best.vwh.net>

On Tue, Aug 15, 2006 at 10:14:33AM -0400, John Day wrote:
> I am arguing for a complete addressing architecture, not the half of 
> an architecture we have.  What we have is an unfinished demo.  If it 
> were an OS, it would make DOS look good.
> 
> While I understand that your appraisal of how the funding was being 
> allocated is completely rational, it bears little resemblance to 
> reality.

If I understand you correctly, there was research money available
(earmarked?) for more addressing architecture research during the
ARPAnet and Internet's research phase?  What happened to that money?
Was it never claimed, or used for something else?

> What I find really remarkable is the inability of current researchers 
> to see beyond what is there.  It is interesting that they are so 
> focused on current developments that they are unable to see beyond 
> them.

Hmmm.  What do you think of the results of the NewArch group?  In your
opinion, do they seem to be headed in the right direction?

> Greg Skinner wrote:
> >But after a certain point, the Internet could not be completely
> >replaced.  There was too much "infrastructure" that people were
> A common myth intended to protect vested interests.

If working prototypes were developed, but didn't get folded into the
worldwide Internet, would you still consider the research a success?
Arguably, the vested interests (at least ISPs and vendors) have a lot
to lose if, for some reason, the better addressing architecture can't
be deployed successfully, or costs customers too much money.  I
suppose the same can be said for researchers who are funded from
commercial vested interests.

> I don't expect to see a wholesale replacement overnight.  But I do 
> think it is possible to move to a much better Net over time.  Those 
> who rely on the argument that there is just too much infrastructure 
> in place, etc., for change generally are the ones who either lack 
> imagination or are merely protecting their vested interests (I hope 
> more the latter than the former).  It would seem that the Internet 
> has become even more blinded to the next step than the phone companies 
> of the 70s were when we first started this.  When I look back over the 
> last 25 years, it saddens me to see a field that had such vibrance in 
> its early days fall so quickly into acting like stodgy old men. 
> Maybe they just ran out of ideas.  I don't really know.

Sometimes I think that computer networking lacks the luster of other
areas of computer science.  It has its hardcore contributors (such as
people who participate on this list), but the vast majority of people
gravitate towards other areas, either out of necessity (money), or
because they find the work more
interesting/challenging/rewarding. You'll find no end of people who
want to do information retrieval, or data mining, because they want to
work for Google or some other search engine.  Are these the types of
people the computer networking industry and research community need to
create the next generation Internet?  What would compel them to do so?
I don't know either.

--gregbo

From dpreed at reed.com  Wed Aug 16 20:20:52 2006
From: dpreed at reed.com (David P. Reed)
Date: Wed, 16 Aug 2006 23:20:52 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <20060816214728.A37623@gds.best.vwh.net>
References: <a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu>	<44D8AEA2.8040306@cs.utk.edu>	<a06230912c0ffed91dcc2@[10.197.98.218]>	<20060809175005.6255fec9.moore@cs.utk.edu>	<a06230917c10028b893f6@[10.197.98.218]>	<44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>	<20060815071158.A85157@gds.best.vwh.net>	<a06230900c1078120632d@[10.0.1.24]>
	<20060816214728.A37623@gds.best.vwh.net>
Message-ID: <44E3E094.4090101@reed.com>

Greg Skinner wrote:
> Sometimes I think that computer networking lacks the luster of other
> areas of computer science.  It has its hardcore contributors (such as
> people who participate on this list), but the vast majority of people
> gravitate towards other areas, either out of necessity (money), or
> because they find the work more
> interesting/challenging/rewarding. You'll find no end of people who
> want to do information retrieval, or data mining, because they want to
> work for Google or some other search engine.  Are these the types of
> people the computer networking industry and research community need to
> create the next generation Internet?  What would compel them to do so?
> I don't know either.
>   
I don't get this.   You honestly think those of us who got into 
networking in the 1960's and 1970's did it for the luster and money?   
There was no money in it, none of us had dreams of being rich that I 
know of.   Most of the folks a couple of years older than me were in 
grad school because they liked playing with computers and it gave them 
a draft deferment on top of that.  And my generation didn't get draft 
deferments, but we still loved playing with computers (which we couldn't 
afford to do because there were no microprocessors and no PCs).  I could 
go on - but networking has never been about making lots of money for its 
inventors (though some were lucky to get hired into companies like 
Worldcom and Cisco, almost by accident).

Besides that, most of the people I knew in networking and PCs were 
primarily motivated by making people's lives better through better 
communications and better man-machine-symbiosis.   That was what you 
heard from Licklider, Roberts, Taylor, Kahn, Cerf, Kirstein, Pouzin, 
Farber, Kleinrock, ... and many, many others in the network arena, and 
also for all of the AI, PC, CHI, etc folks.   If you knew Jon Postel, 
you know it was about mankind for him.

That motivation is still there for anyone who finds it inspiring, and 
many still do.

But why *should* anyone care about a "next Internet" defined as just 
another packet network architecture that does the same thing in a new 
way?   The grand vision of the Internet was written down by Licklider 
and Taylor in 1967  or so in what became a Scientific American article 
that all should read.   The challenge is to create as profound a vision 
(one that will guide people for 30+ years), and if you read that 
article, it was not about "computer networking" it was about human 
potential.   That level of vision will be "the next Internet" in a 
purely metaphorical sense - but it won't be about the same aspects of 
people and machines, I'm sure.

The problem arises because some in academia want "computer networking" 
to be a discipline, just as some want "computer science" to be a 
discipline.   Disciplines are for the Divinity School to study.

There's no special discipline for *computer* networking, or even for 
networking.  There are just systems that get designed according to 
principles that are hard won and oft debated by folks who care to make 
things work and to make them work better and apply them to more and more 
important and meaningful things.

The idea that "data mining" could be thought to be in the same category 
as creating communications systems in that grand and interdisciplinary 
sense just suggests to me the poverty of thinking that pervades what 
used to be a great country like America.

Don't worry, this poverty of thinking has diminished DARPA and most of 
the rest of the government research establishment today.  (there are 
petty "crimes" in the way DARPA is run, but the biggest "crime" is that 
it has substituted for conceptual vision a notion that you can get 
creativity by running contests against a set of rules and 
requirements).  Those who "dance" listlessly to that tune also seem to 
think that being quoted in Wired magazine defines their technical 
worthiness, or that thoughts by blogging pundits written in an orgy of 
self-absorption represent profundity.

Follow a path with a heart - one that you and others resonate with.  
Stop looking to find external validation and "incentives" that will win 
you praise or make you rich.   Do what you can honestly say is worth the 
effort.




From day at std.com  Wed Aug 16 21:18:07 2006
From: day at std.com (John Day)
Date: Thu, 17 Aug 2006 00:18:07 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44E3E094.4090101@reed.com>
References: <a06230917c0fc6ced8fdc@[10.0.1.12]> <44D892E9.9060403@isi.edu>
	<44D8AEA2.8040306@cs.utk.edu>	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]>	<44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>
	<20060815071158.A85157@gds.best.vwh.net>
	<a06230900c1078120632d@[10.0.1.24]>
	<20060816214728.A37623@gds.best.vwh.net> <44E3E094.4090101@reed.com>
Message-ID: <a06230925c1099966b9a2@[10.0.1.7]>

At 23:20 -0400 2006/08/16, David P. Reed wrote:
>Greg Skinner wrote:
>>Sometimes I think that computer networking lacks the luster of other
>>areas of computer science.  It has its hardcore contributors (such as
>>people who participate on this list), but the vast majority of people
>>gravitate towards other areas, either out of necessity (money), or
>>because they find the work more
>>interesting/challenging/rewarding. You'll find no end of people who
>>want to do information retrieval, or data mining, because they want to
>>work for Google or some other search engine.  Are these the types of
>>people the computer networking industry and research community need to
>>create the next generation Internet?  What would compel them to do so?
>>I don't know either.
>>
>I don't get this.   You honestly think those of us who got into 
>networking in the 1960's and 1970's did it for the luster and money?

I don't think he meant it that way at all.  I took Greg to mean the 
luster of the intellectual challenge.  Not really the potential for 
money, although that does seem to drive a lot of people today.

My experience is that you only find that sort of success when you 
don't go looking for it.

>There was no money in it, none of us had dreams of being rich that I 
>know of.   Most of the folks a couple of years older than me were in 
>grad school because they liked playing with computers and it gave them 
>a draft deferment on top of that.  And my generation didn't get 
>draft deferments, but we still loved playing with computers (which 
>we couldn't afford to do because there were no microprocessors and 
>no PCs). I could go on - but networking has never been about making 
>lots of money for its inventors (though some were lucky to get hired 
>into companies like Worldcom and Cisco, almost by accident).
>
>Besides that, most of the people I knew in networking and PCs were 
>primarily motivated by making people's lives better through better 
>communications and better man-machine-symbiosis.   That was what you 
>heard from Licklider, Roberts, Taylor, Kahn, Cerf, Kirstein, Pouzin, 
>Farber, Kleinrock, ... and many, many others in the network arena, 
>and also for all of the AI, PC, CHI, etc folks.   If you knew Jon 
>Postel, you know it was about mankind for him.

Well, it was all of that and a helluva lot of fun as well. ;-)  I 
remember Louis and Danthine referring to the "Network Traveling 
Circus" around 75.  The intellectual challenge, that we were working 
out the deeper structure of the problem and the idea that we were 
doing something that would change the world. When people ask me if we 
expected the 'Net to turn out the way it has, I give them a very 
emphatic YES. Then I tell them I/we didn't know what form it would 
precisely take or how it was going to change the world but we always 
knew it would.

>That motivation is still there for anyone who finds it inspiring, 
>and many still do.
>
>But why *should* anyone care about a "next Internet" defined as just 
>another packet network architecture that does the same thing in a 
>new way?   The grand vision of the Internet was written down by 
>Licklider and Taylor in 1967  or so in what became a Scientific 
>American article that all should read.   The challenge is to create 
>as profound a vision (one that will guide people for 30+ years), and 
>if you read that article, it was not about "computer networking" it 
>was about human potential.   That level of vision will be "the next 
>Internet" in a purely metaphorical sense - but it won't be about the 
>same aspects of people and machines, I'm sure.
>
>The problem arises because some in academia want "computer 
>networking" to be a discipline, just as some want "computer science" 
>to be a discipline.   Disciplines are for the Divinity School to 
>study.
>
>There's no special discipline for *computer* networking, or even for 
>networking.  There are just systems that get designed according to 
>principles that are hard won and oft debated by folks who care to 
>make things work and to make them work better and apply them to more 
>and more important and meaningful things.

I don't know about disciplines.  Disciplines are for technicians. 
Networking should be a science and it isn't.  Not even close.  A 
scientist is only a success when he proves a theory wrong.  We should 
be constantly pushing at the edges, trying to prove everything we 
ever believed wrong or at least gaining a better understanding of 
what it is.  Networking has not been doing that for 25 years.

Why are the academics intent on creating technicians and not 
scientists?  I have yet to find a single university-level networking 
textbook.  They are all vocational ed.  They regurgitate what is 
popular or common now.  (Has anyone pricked the P2P bubble?  No, they 
just gush.)  No one teaches the principles from which one could 
derive a network.  This isn't how I was taught EE.  We weren't 
taught how the common amplifiers or radios were built by the major 
companies.  We were taught the principles by which we could do it 
ourselves.

>The idea that "data mining" could be thought to be in the same 
>category as creating communications systems in that grand and 
>interdisciplinary sense just suggests to me the poverty of thinking 
>that pervades what used to be a great country like America.
>
>Don't worry, this poverty of thinking has diminished DARPA and most 
>of the rest of the government research establishment today.  (there 
>are petty "crimes" in the way DARPA is run, but the biggest "crime" 
>is that it has substituted for conceptual vision a notion that you 
>can get creativity by running contests against a set of rules and 
>requirements).  Those who "dance" listlessly to that tune also seem 
>to think that being quoted in Wired magazine defines their technical 
>worthiness, or that thoughts by blogging pundits written in an orgy 
>of self-absorption represent profundity.

Totally agree.  I wrote an essay a few years back when I noticed that 
this behavior of funding agencies looking for quick ROI was 
generating lots of technique and not much depth of knowledge.  This 
is the same phenomenon (for different reasons) that led to the 
stagnation of Chinese science by the 16th century, and it is having 
the same effect today.

>Follow a path with a heart - one that you and others resonate with. 
>Stop looking to find external validation and "incentives" that will 
>win you praise or make you rich.   Do what you can honestly say is 
>worth the effort.

Totally agree.

Take care,
John

From touch at ISI.EDU  Thu Aug 17 06:49:55 2006
From: touch at ISI.EDU (Joe Touch)
Date: Thu, 17 Aug 2006 06:49:55 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230900c1078120632d@[10.0.1.24]>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>	<44D4C94F.8020801@dcrocker.net>
	<a06230917c0fc6ced8fdc@[10.0.1.12]>	<44D892E9.9060403@isi.edu>
	<44D8AEA2.8040306@cs.utk.edu>	<a06230912c0ffed91dcc2@[10.197.98.218]>	<20060809175005.6255fec9.moore@cs.utk.edu>	<a06230917c10028b893f6@[10.197.98.218]>
	<44DACC0E.608@cs.utk.edu>	<a06230908c10238761716@[10.0.1.24]>	<20060815071158.A85157@gds.best.vwh.net>
	<a06230900c1078120632d@[10.0.1.24]>
Message-ID: <44E47403.8070609@isi.edu>



John Day wrote:
...
> What I find really remarkable is the inability of current researchers to
> see beyond what is there.  It is interesting that they are so focused on
> current developments that they are unable to see beyond them.

Yeah, so far all they've come up with is:

- the web (addressing at the app layer)
- DHTs (hash-based addressing)
- overlays (arbitrary addressing using the Internet as a link layer)

It's sad that they haven't gotten beyond the Internet's original vision
of email and remote login. Oh well, back to the drawing board ;-)

As to whether we are scientists or technicians, that depends on your
definition. The last time I checked, scientists created theories about
reality and validated them via observation and iteration. There are
plenty of those out there; in a sense, the Internet is just a theory
about how to network, and the iterations are about resolving the theory
with new uses and ideas - including indirection, virtualization,
separating process (function) from location from communication
association - which is how this discussion originated.  It's in the
abstraction of these ideas that there is science.

Joe


From day at std.com  Thu Aug 17 07:26:44 2006
From: day at std.com (John Day)
Date: Thu, 17 Aug 2006 10:26:44 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44E47403.8070609@isi.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net> <a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu> <44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]> <44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>
	<20060815071158.A85157@gds.best.vwh.net>
	<a06230900c1078120632d@[10.0.1.24]> <44E47403.8070609@isi.edu>
Message-ID: <a06230931c10a27da2e33@[10.0.1.7]>

At 6:49 -0700 2006/08/17, Joe Touch wrote:
>John Day wrote:
>...
>>  What I find really remarkable is the inability of current researchers to
>>  see beyond what is there.  It is interesting that they are so focused on
>>  current developments that they are unable to see beyond them.
>
>Yeah, so far all they've come up with is:
>
>- the web (addressing at the app layer)
>- DHTs (hash-based addressing)
>- overlays (arbitrary addressing using the Internet as a link layer)

The web on the one hand is just a souped-up version of Engelbart's 
NLS.  Addressing within an application doesn't count.

DHTs -- how to turn one flat address space into another flat address 
space.  I see you haven't seen through this one yet.

Overlays - an interesting thought but for now really just trying to 
paper over the real problems.

>It's sad that they haven't gotten beyond the Internet's original vision
>of email and remote login. Oh well, back to the drawing board ;-)
>
>As to whether we are scientists or technicians, that depends on your
>definition. The last time I checked, scientists created theories about
>reality and validated them via observation and iteration. There are

That is only part of it.  Remember Newton's Regulae Philosophandi 
(guidelines): (in part) Theory should be the fewest number of 
concepts to cover the space.

This is why I said engineers are infatuated with creating 
differences, while scientists are infatuated with finding 
similarities.  I don't see much simplification in the Internet over 
the last 35 years.  In fact, what I see are complexities heaped on 
complexity.

>plenty of those out there; in a sense, the Internet is just a theory
>about how to network, and the iterations are about resolving the theory

Ahhh, now I see, this is the root of the problem.  The Internet is 
not a theory.  It is a very specific engineering example.

>with new uses and ideas - including indirection, virtualization,
>separating process (function) from location from communication
>association - which is how this discussion originated.  It's in the
>abstraction of these ideas that there is science.

You are getting closer.

Take care,
John

From touch at ISI.EDU  Thu Aug 17 07:42:07 2006
From: touch at ISI.EDU (Joe Touch)
Date: Thu, 17 Aug 2006 07:42:07 -0700
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <a06230931c10a27da2e33@[10.0.1.7]>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net>
	<a06230917c0fc6ced8fdc@[10.0.1.12]> <44D892E9.9060403@isi.edu>
	<44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]>
	<44DACC0E.608@cs.utk.edu> <a06230908c10238761716@[10.0.1.24]>
	<20060815071158.A85157@gds.best.vwh.net>
	<a06230900c1078120632d@[10.0.1.24]> <44E47403.8070609@isi.edu>
	<a06230931c10a27da2e33@[10.0.1.7]>
Message-ID: <44E4803F.1030905@isi.edu>



John Day wrote:
> At 6:49 -0700 2006/08/17, Joe Touch wrote:
>> John Day wrote:
>> ...
>>>  What I find really remarkable is the inability of current
>>> researchers to
>>>  see beyond what is there.  It is interesting that they are so
>>> focused on
>>>  current developments that they are unable to see beyond them.
>>
>> Yeah, so far all they've come up with is:
>>
>> - the web (addressing at the app layer)
>> - DHTs (hash-based addressing)
>> - overlays (arbitrary addressing using the Internet as a link layer)
> 
> The web on the one hand is just a souped-up version of Engelbart's NLS. 
> Addressing within an application doesn't count.
>
> DHTs- How to turn one flat address space into another flat address
> space.  I see you haven't seen through this one yet.
> 
> Overlays - an interesting thought but for now really just trying to
> paper over the real problems.

So basically anything that doesn't look like the conventional Internet
addresses doesn't count? Maybe it's *that* kind of metric that exhibits
the 'fascination with the past' that we're accusing the new generation
of... ;-)

>> It's sad that they haven't gotten beyond the Internet's original vision
>> of email and remote login. Oh well, back to the drawing board ;-)
>>
>> As to whether we are scientists or technicians, that depends on your
>> definition. The last time I checked, scientists created theories about
>> reality and validated them via observation and iteration. There are
> 
> That is only part of it.  Remember Newton's Regulae Philosophandi
> (guidelines): (in part) Theory should be the fewest number of concepts
> to cover the space.

A variant of Occam's Razor, of course. IMO, better to recall Maslow,
"when the only tool you have is a hammer, everything looks like a nail".
Just because you can't map these new ideas into Internet concepts
doesn't mean they're not useful, or that they're 'complexities' as per
below.

> This is why I said engineers are infatuated with creating differences,
> while scientists are infatuated with finding similarities.  I don't see
> much simplification in the Internet over the last 35 years.  In fact,
> what I see are complexities heaped on complexity.
> 
>> plenty of those out there; in a sense, the Internet is just a theory
>> about how to network, and the iterations are about resolving the theory
> 
> Ahhh, now I see, this is the root of the problem.  The Internet is not a
> theory.  It is a very specific engineering example.

The concepts of the Internet can be abstracted, even if they originated
as an engineering example.

>> with new uses and ideas - including indirection, virtualization,
>> separating process (function) from location from communication
>> association - which is how this discussion originated.  It's in the
>> abstraction of these ideas that there is science.
> 
> You are getting closer.

Some of us have been here, working in abstractions for a long time,
having a hard time explaining it in terms of the old Internet just to
make it accessible and convincing.

Joe


From day at std.com  Thu Aug 17 08:11:45 2006
From: day at std.com (John Day)
Date: Thu, 17 Aug 2006 11:11:45 -0400
Subject: [e2e] Port numbers, SRV records or...?
In-Reply-To: <44E4803F.1030905@isi.edu>
References: <20060802221706.85F07136C82@aharp.ittns.northwestern.edu>
	<44D4C94F.8020801@dcrocker.net> <a06230917c0fc6ced8fdc@[10.0.1.12]>
	<44D892E9.9060403@isi.edu> <44D8AEA2.8040306@cs.utk.edu>
	<a06230912c0ffed91dcc2@[10.197.98.218]>
	<20060809175005.6255fec9.moore@cs.utk.edu>
	<a06230917c10028b893f6@[10.197.98.218]> <44DACC0E.608@cs.utk.edu>
	<a06230908c10238761716@[10.0.1.24]>
	<20060815071158.A85157@gds.best.vwh.net>
	<a06230900c1078120632d@[10.0.1.24]> <44E47403.8070609@isi.edu>
	<a06230931c10a27da2e33@[10.0.1.7]> <44E4803F.1030905@isi.edu>
Message-ID: <a06230937c10a35e9799b@[10.0.1.7]>

At 7:42 -0700 2006/08/17, Joe Touch wrote:
>John Day wrote:
>>  At 6:49 -0700 2006/08/17, Joe Touch wrote:
>>>  John Day wrote:
>>>  ...
>>>>   What I find really remarkable is the inability of current
>>>>  researchers to
>>>>   see beyond what is there.  It is interesting that they are so
>>>>  focused on
>>>>   current developments that they are unable to see beyond them.
>>>
>>>  Yeah, so far all they've come up with is:
>>>
>>>  - the web (addressing at the app layer)
>>>  - DHTs (hash-based addressing)
>>>  - overlays (arbitrary addressing using the Internet as a link layer)
>>
>>  The web on the one hand is just a souped-up version of Engelbart's NLS.
>>  Addressing within an application doesn't count.
>>
>>  DHTs- How to turn one flat address space into another flat address
>>  space.  I see you haven't seen through this one yet.
>>
>>  Overlays - an interesting thought but for now really just trying to
>>  paper over the real problems.
>
>So basically anything that doesn't look like the conventional Internet
>addresses doesn't count? Maybe it's *that* kind of metric that exhibits
>the 'fascination with the past' that we're accusing the new generation
>of... ;-)

No, that is not what I said at all.  I said, your example of the web 
was specific to a single application.  It does not address the issue 
of application naming in general.

If you boil DHTs down to their elements, you will find that they are 
nothing more than a somewhat inefficient routing scheme.

And as for overlays, they paper over the real problem which is 
understanding the deeper structure of what is going on below the 
application.

>  >> It's sad that they haven't gotten beyond the Internet's original vision
>>>  of email and remote login. Oh well, back to the drawing board ;-)
>>>
>>>  As to whether we are scientists or technicians, that depends on your
>>>  definition. The last time I checked, scientists created theories about
>>>  reality and validated them via observation and iteration. There are
>>
>>  That is only part of it.  Remember Newton's Regulae Philosophandi
>>  (guidelines): (in part) Theory should be the fewest number of concepts
>>  to cover the space.
>
>A variant of Occam's Razor, of course. IMO, better to recall Maslow,
>"when the only tool you have is a hammer, everything looks like a nail".
>Just because you can't map these new ideas into Internet concepts
>doesn't mean they're not useful, or that they're 'complexities' as per
>below.

Precisely the disease I see plaguing the Internet community 
today.  They have a hammer and only see nails.

>
>>  This is why I said engineers are infatuated with creating differences,
>>  while scientists are infatuated with finding similarities.  I don't see
>>  much simplification in the Internet over the last 35 years.  In fact,
>>  what I see are complexities heaped on complexity.
>>
>>>  plenty of those out there; in a sense, the Internet is just a theory
>>>  about how to network, and the iterations are about resolving the theory
>>
>>  Ahhh, now I see, this is the root of the problem.  The Internet is not a
>>  theory.  It is a very specific engineering example.
>
>The concepts of the Internet can be abstracted, even if they originated
>as an engineering example.

Yes, as I often say, the number one principle of computer science is 
that we build what we measure.  No other science has that luxury or 
that curse.  Consequently we must often engineer before we can do 
science.  This makes it hard to separate principle from artifact.

>  >> with new uses and ideas - including indirection, virtualization,
>>>  separating process (function) from location from communication
>>>  association - which is how this discussion originated.  It's in the
>>>  abstraction of these ideas that there is science.
>>
>>  You are getting closer.
>
>Some of us have been here, working in abstractions for a long time,
>having a hard time explaining it in terms of the old Internet just to
>make it accessible and convincing.

And some of us here have been working on the abstractions even longer 
and in far more detail and over a much wider range of data than 
simply the Internet.

Take care,
John

From chvogt at tm.uka.de  Thu Aug 17 09:20:09 2006
From: chvogt at tm.uka.de (Christian Vogt)
Date: Thu, 17 Aug 2006 18:20:09 +0200
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DCB96E.8080304@cs.unc.edu>
References: <44DB5786.50103@tm.uka.de> <44DCAAE0.40709@purdue.edu>
	<44DCB96E.8080304@cs.unc.edu>
Message-ID: <44E49739.7060001@tm.uka.de>

Hi all,

again, thanks a lot for the valuable pointers regarding packet loss
statistics.

The papers you cited were certainly an interesting read.  The PingER 
tool proved extremely helpful for our purpose, which was to set up a 
testbed as realistically as possible.  Access to the available data, 
ranging from 1995 until today, is a real benefit.

One more reference:  CAIDA and NLANR also provide some useful resources.

Best,
- Christian

-- 
Christian Vogt, Institute of Telematics, Universitaet Karlsruhe (TH)
www.tm.uka.de/~chvogt/pubkey/



From gorinsky at arl.wustl.edu  Sat Aug 19 13:53:14 2006
From: gorinsky at arl.wustl.edu (Sergey Gorinsky)
Date: Sat, 19 Aug 2006 15:53:14 -0500 (CDT)
Subject: [e2e] more theory on benefits and handicaps of additive increase
Message-ID: <Pine.LNX.4.44.0608191535570.28548-100000@dom.arl.wustl.edu>


  Dear colleagues,

  Six years have already passed since I announced on this list the 
report "Additive Increase Appears Inferior", which stirred some interest 
at the time.  Now I would like to draw your attention to another report, 
"A Theory of Load Adjustments and its Implications for Congestion 
Control", written in collaboration with my students:

     http://www.arl.wustl.edu/~gorinsky/pdf/WUCSE-TR-2006-40.pdf

  The full abstract is at the bottom of this posting, but in a nutshell 
the paper extends even further the classical Chiu-Jain analysis of MAIMD 
algorithms (which generalize AIMD via the optional inclusion of MI) and 
uncovers some surprises.  One particularly counterintuitive result is 
that an MAIMD can provide a faster asymptotic speed of convergence to 
equal individual loads than a less smooth AIMD (the classical metric of 
smoothness captures the maximal oscillations of the total load after it 
reaches the target).  The report also derives other theoretical results 
(e.g., convergence to a unique canonical cycle of oscillations and 
various scalability properties of MAIMD algorithms) and briefly 
discusses practical implications of the theory.
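  As rough intuition for the dynamics analyzed in the report, here is a 
toy Chiu-Jain-style simulation of two sources sharing a target load. 
The constants and the particular MAIMD increase rule below are invented 
for illustration and are not taken from the report:

```python
# Toy Chiu-Jain-style simulation: two sources adjust their loads each
# round.  AIMD adds a constant while the total load is below the target
# and multiplies by beta < 1 once it overshoots.  The MAIMD variant
# (illustrative only) adds a Multiplicative Increase component to the
# increase phase.  All constants here are invented for illustration.

TARGET = 100.0  # target total load (e.g., link capacity)

def step(loads, alpha=1.0, beta=0.5, mi=0.0):
    """One adjustment round; mi > 0 adds an MI component on increase."""
    if sum(loads) < TARGET:
        return [x * (1.0 + mi) + alpha for x in loads]   # (M)AI phase
    return [x * beta for x in loads]                     # MD phase

def fairness(loads):
    """Jain's fairness index: 1.0 means equal individual loads."""
    n = len(loads)
    return sum(loads) ** 2 / (n * sum(x * x for x in loads))

loads_aimd = [80.0, 5.0]    # deliberately unequal starting loads
loads_maimd = [80.0, 5.0]
for _ in range(200):
    loads_aimd = step(loads_aimd)
    loads_maimd = step(loads_maimd, mi=0.1)

# Both runs converge toward equal loads: the fairness index nears 1.
print(round(fairness(loads_aimd), 4), round(fairness(loads_maimd), 4))
```

Both variants converge because the additive component shrinks the 
sources' relative difference while each multiplicative decrease shrinks 
their absolute difference; the paper's results concern how fast this 
happens and at what cost in smoothness.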

  All your comments are certainly welcome.
  
  Thank you,

    Sergey Gorinsky

    Applied Research Laboratory
    Department of Computer Science and Engineering
    Washington University in St. Louis
________________________________________________________________________
S. Gorinsky, M. Georg, M. Podlesny, and C. Jechlitschek, "A Theory of 
Load Adjustments and its Implications for Congestion Control", Technical 
Report WUCSE-2006-40, Department of Computer Science and Engineering, 
Washington University in St. Louis, August 2006

Abstract:   Multiplicative Increase (MI), Additive Increase (AI), and 
Multiplicative Decrease (MD) are linear adjustments used extensively in 
networking. However, their properties are not fully understood. We 
analyze responsiveness (time for the total load to reach the target 
load), smoothness (maximal size of the total load oscillations after 
reaching the target load), fairing speed (speed of convergence to equal 
individual loads) and scalabilities of MAIMD algorithms, which generalize  
AIMD algorithms via optional inclusion of MI. We prove that an MAIMD can  
provide faster asymptotic fairing than a less smooth AIMD. Furthermore,  
we discover that loads under a specific MAIMD converge from any initial  
state to the same periodic pattern, called a canonical cycle. While  
imperfectly correlated with smoothness, the canonical cycle reliably 
predicts the asymptotic fairing speed. We also show that AIMD algorithms 
offer the best trade-off between smoothness and responsiveness. Then, we 
introduce smoothness-responsiveness diagrams to investigate MAIMD 
scalabilities. Finally, we discuss implications of the theory for the 
practice of congestion control.
________________________________________________________________________


From alexander_carot at gmx.net  Sun Aug 20 03:20:09 2006
From: alexander_carot at gmx.net (=?iso-8859-1?Q?=22Alexander_Car=F4t=22?=)
Date: Sun, 20 Aug 2006 12:20:09 +0200
Subject: [e2e] ethernet jitter
In-Reply-To: <1997.211.48.230.94.1155329098.squirrel@webmail.ncsu.edu>
References: <b2dc5c1d0608110926k22930c08l77ab9221772cbe4f@mail.gmail.com>
	<1997.211.48.230.94.1155329098.squirrel@webmail.ncsu.edu>
Message-ID: <20060820102009.121100@gmx.net>

Hello to all,

after a period of silence from my side I have a question coming up.

I could tell you the whole story, which would be pretty long, so for now I will just give you the conclusion: 

I observed that in Ethernet networks jitter starts even when only 30% of the bandwidth capacity is used, and it increases with the used bandwidth. At 30% it appears every 2-3 seconds with a variation of 1-2 ms; at 90% used bandwidth it rises to a constant 15-25 ms. In all cases there was enough bandwidth for my existing UDP audio stream, with which I also measured the jitter. The additional traffic was either TCP traffic or additional UDP traffic -- the behavior was the same in all cases.

In order to have the same preconditions I used a 10 Mbit/s line between two buildings on campus as the reference. Interestingly, the same behavior was observed with just two machines connected by a crossover cable, with the network cards configured for 10 Mbit/s.

Configured for 100 Mbit/s the behavior was the same, but of course it required 10 times more traffic to reach the 30% load.

Can anyone confirm these measurements, or do you know of papers concerning this subject? 

I'd appreciate any hint from you.

Thanks a lot in advance,
regards 

-- A l e x
-- 
Dipl.-Ing. Alexander Carôt
Email : Alexander at Carot.de
Tel.: +49 (0)177 5719797

phd-candidate at www.isnm.de

Der GMX SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen!
Ideal für Modem und ISDN: http://www.gmx.net/de/go/smartsurfer

From jms at central.cis.upenn.edu  Sun Aug 20 13:54:56 2006
From: jms at central.cis.upenn.edu (Jonathan M. Smith)
Date: Sun, 20 Aug 2006 16:54:56 -0400 (EDT)
Subject: [e2e] ethernet jitter
In-Reply-To: <20060820102009.121100@gmx.net>
References: <b2dc5c1d0608110926k22930c08l77ab9221772cbe4f@mail.gmail.com>
	<1997.211.48.230.94.1155329098.squirrel@webmail.ncsu.edu>
	<20060820102009.121100@gmx.net>
Message-ID: <Pine.GSO.4.58.0608201650520.17171@central.cis.upenn.edu>


The definitive reference is "A Distributed Experimental Communications
System", by DeTreville and Sincoskie, IEEE JSAC, Dec. 1983.

-JMS


-------------------------------------------------------------------------
Jonathan M. Smith
Olga and Alberico Pompa Professor of Engineering and Applied Science
Professor of Computer and Information Science, University of Pennsylvania
Levine Hall, 3330 Walnut Street, Philadelphia, PA 19104

On Sun, 20 Aug 2006, Alexander Carôt wrote:

> Hello to all,
>
> after a period of silence from my side I have a question coming up.
>
> I could tell you the whole story which would be pretty long so for now I just give you the conclusion :
>
> I observed that in ethernet networks, that jitter starts even when 30% of the bandwidth capacity is used and increases with the used bandwidth. At 30% it appears every 2-3 seconds with a variation of 1-2 ms which goes up to 15-25ms constantly at 90% used bandwidth. In all the cases there was enough bandwidth for my existing UDP-audio stream which I also measured the jitter with. The additional traffic was either TCP-traffic or additional UDP-traffic - all the same behavior.
>
> In order to have the same preconditions I used a 10MBit/s line between two buildings on campus as the reference. Interesing is that the same behavior was observed with just two machines connected with a crossed cable and network cards configured with 10MBit/s.
>
> Configured with 100MBit/s it was the same behavior but of course it required 10 times more traffic to reach the 30% load.
>
> Can anyone confirm these measurements or do you know papers concerning this subject ?
>
> I'd appreciate any hint from you.
>
> Thanks a lot in advance,
> regards
>
> -- A l e x
> --
> Dipl.-Ing. Alexander Carôt
> Email : Alexander at Carot.de
> Tel.: +49 (0)177 5719797
>
> phd-candidate at www.isnm.de
>
> Der GMX SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen!
> Ideal für Modem und ISDN: http://www.gmx.net/de/go/smartsurfer
>

From mallman at icir.org  Wed Aug 23 12:39:52 2006
From: mallman at icir.org (Mark Allman)
Date: Wed, 23 Aug 2006 15:39:52 -0400
Subject: [e2e] Internet Packet Loss Measurements?
In-Reply-To: <44DB5786.50103@tm.uka.de> 
Message-ID: <20060823193952.160481BAC3C@guns.icir.org>

An embedded and charset-unspecified text was scrubbed...
Name: not available
Url: http://mailman.postel.org/pipermail/end2end-interest/attachments/20060823/4b09aeba/attachment.ksh

From bartek at fastsoft.com  Sun Aug 27 10:43:59 2006
From: bartek at fastsoft.com (bartek@fastsoft.com)
Date: Sun, 27 Aug 2006 12:43:59 -0500
Subject: [e2e] Introducing MaxNet Explicit Congestion Control Protocol
Message-ID: <1156700639.44f1d9df07c7a@email.ixwebhosting.com>

Dear E2E,

I would like to introduce to you the first implementation of the MaxNet 
Congestion Control Protocol.  MaxNet is an explicit-signal congestion 
control protocol, based on a theoretically provable design. 

The implementation and papers are available at http://netlab.caltech.edu/maxnet/

The protocol is a culmination of many people's input over the years, and was 
developed at the University of Melbourne (Australia) and Caltech (USA). 
It is open to further input and improvement from you. 
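For readers new to MaxNet: the papers describe a scheme in which each 
packet carries a single congestion-price field that every router along 
the path overwrites with the maximum of the field and its own price, so 
a source reacts only to its most congested link. A toy sketch of just 
that signalling step follows; the price values and the demand function 
are invented for illustration, not taken from the protocol:

```python
# Toy sketch of MaxNet's max-price signalling idea: each hop replaces
# the packet's congestion field with max(field, its own price), so the
# source learns only the price of the single most congested link.  The
# per-router prices and the demand curve are invented for illustration.

def stamp_path(router_prices, field=0.0):
    """Forward a packet's congestion field through a list of routers."""
    for p in router_prices:
        field = max(field, p)   # each hop stamps the max seen so far
    return field

def source_rate(max_price, demand_scale=10.0):
    """Invented demand curve: rate falls as the bottleneck price rises."""
    return demand_scale / (1.0 + max_price)

path = [0.2, 1.5, 0.4]       # per-router congestion prices
price = stamp_path(path)     # source sees only the bottleneck price
print(price, source_rate(price))
```

Signalling only the maximum, rather than a sum of link prices, is what 
lets the source pace itself to the bottleneck alone; the papers at the 
URL above give the actual control law and its stability proof.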

Regards,
Bartek Wydrowski
www.fastsoft.com