[e2e] Port numbers, SRV records or...?

John Day day at std.com
Fri Aug 11 11:07:02 PDT 2006

At 11:46 -0400 2006/08/11, Keith Moore wrote:
>I'll try to keep this terse...

Me too.

>>  As many have noted, getting the concepts right and the terminology
>>  used is half the battle.  Being sloppy in our terminology causes us
>>  to make mistakes and others not as closely involved think we mean
>>  what we say.
>true.  but there's a downside to too much precision, and/or too much
>abstraction, which is that ordinary mortals can't understand it.  the
>result is either that it's inaccessible to them, or they make some
>semantic associations that weren't intended that end up becoming
>entrenched almost as if they were part of the architecture.

I know what you mean!

>>  Leaving aside the band-aids and years of commentary and
>>  interpretation (this really does begin to sound more like
>>  commentaries on the I-Ching than science.), if one carefully works
>>  through an abstract model of a network to see who has to talk to who
>>  about what, one never refers to a host.  One is concerned with the
>>  names of a various protocol machines and the bindings between them,
>>  but the name of the container on which they reside never comes up.
>I'd believe that if it weren't for security.  for instance, I don't
>know of any way to let a process securely make visible changes to a
>binding between a name that is associated with it and some attribute of
>that process without giving that process secret information with which
>it can authenticate itself to other visible processes that maintain
>those bindings, and I don't know of any way to protect a process's
>secrets from the host that it lives on.
>sure you can talk about a set of abstractions and bindings that don't
>involve hosts, but I don't think you can build a system that actually
>implements them in a useful way without having some notion of the
>container and its boundaries.

My point was that it was not required for naming and addressing.  I 
totally agree with you about security.

>  > Again, if you analyze what goes on in a single system, you will find
>>  that the mechanism for finding the destination application and
>>  determining whether the requesting application can access it is
>>  distinct and should be distinct from the problem of providing a
>>  channel with specific performance properties. I completely agree with
>>  everything you said above, but TCP is not and should not be the whole
>>  IPC. It just provides the reliable channel.
>and  TCP doesn't provide the mechanism for finding the destination
>application (which I would not call an application, as an application
>can consist of multiple processes on multiple hosts) and determine
>whether the requesting application can access it.  it's just that the
>layer we often use for finding the destination application (WKP at
>an IP address) is a very thin sliver on top of TCP.  sure that layer
>has limitations, but they're not TCP's problems.

Okay. I was using "application" for "process" (see your comment above 
;-)).  I agree it is not TCP's problem and shouldn't be.
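The point that the application-lookup layer is "a very thin sliver on top of TCP" can be sketched in a few lines of Python. This is an illustrative toy, not anything from the thread: the "service" and the port choice are invented, and an ephemeral port stands in for a real well-known port assignment. The client's entire lookup mechanism is just knowing the pair (address, port) in advance.

```python
# Toy sketch: the only "application finding" machinery in the WKP model
# is that the client knows (IP address, well-known port) ahead of time.
import socket
import threading

def toy_service(listener):
    # Accept one connection, identify ourselves, and exit.
    conn, _ = listener.accept()
    conn.sendall(b"toy-service ready\n")
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # 0 = ephemeral; a real WKP is fixed
listener.listen(1)
wkp = listener.getsockname()[1]      # pretend this number is "well known"

t = threading.Thread(target=toy_service, args=(listener,))
t.start()

# The whole "thin sliver": connect to the agreed (address, port) pair.
client = socket.create_connection(("127.0.0.1", wkp))
banner = client.recv(64)
client.close()
t.join()
listener.close()
print(banner.decode().strip())       # -> toy-service ready
```

Everything beyond this (naming, access control, location of the service) has to live in some other layer, which is exactly the limitation under discussion.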

>  > Harkening back to the historical discussion, this was all clear 35
>>  years ago but it was a lot of work to build a network on machines
>>  with much less compute power than your cell phone and we didn't have
>>  time for all of the niceties.  We had to show that this was useful
>>  for something.  So we (like all engineers) took a few short cuts
>>  knowing they weren't the final answer.  It is just unfortunate that
>>  the people who came later have not put it right.
>it would be an interesting exercise to illustrate how to put it right -
>not just how to define the entities and bindings but how to implement
>them in a secure, reliable, and scalable fashion.  having more compute
>power helps, but I'm not sure that it is sufficient to solve the

I agree.  And along those lines, I think it is important to work out 
what the "right" answer is even if we know we can never deploy it in 
the Internet.  At least then we know what the goal is, and when we 
don't go there, we will know precisely what we are giving up.  (The 
current approach sometimes seems like being stranded in the woods and 
trying to find your way out when you have no idea which way North is! 
The advice is always to stay put and let the searchers look for you! 
The trouble, of course, is that no one is looking for us! ;-))

>>  >Which is a good thing, since we generally do not have the luxury of
>>  >either doing things right the first time (anticipating all future
>>  >needs) or scrapping an old architecture and starting again from
>>  >scratch. Software will always be working around architectural
>>  >deficiencies. Similarly we have to keep compatibility and transition
>>  >issues in mind when considering architectural changes.
>>  Be careful.  We do and we don't.  I have known many companies that
>>  over time have step by step made wholesale *replacement* of major
>>  parts of their products as they transition.  Sometimes maintaining
>>  backward compatibility, sometimes not.  But new releases come out
>>  with completely new designs for parts of the system.  You are arguing
>>  that nothing is ever replaced and all changes is by modifying what is
>>  there.
>if the Internet had been controlled by a single company, that company
>might have had the luxury of making wholesale replacement of major
>parts.  of course there are also downsides to single points of control.
>but there's a separate issue.  even when individual parts are replaced,
>the relationships and interfaces between those parts are harder to
>change.  take a look at the maps of a city hundreds of years ago and
>today and you'll discover that most of the streets are in the same
>places. individual buildings are replaced over time, one or two at a
>time, but rarely are large areas razed and replaced with new buildings
>in such a way that  the street patterns (and property lines) could
>change significantly.

Right on both counts.  I know what you mean.  Although Paris did it 
in the middle of the 19th century, and there have been some recent 
examples.  It doesn't happen often, but it does happen.

>  > >But when I look at ports I think "hey, it's a good thing that they
>>  >didn't nail down the meaning of hosts or ports too much back in the
>>  >1970s, because we need them to be a bit more flexible today than
>>  >they needed to be back then."  We don't need significant
>>  >architectural changes or any protocol or API changes for apps to be
>>  >able to specify ports, and that gives us useful flexibility today.
>>  >If ports had been defined differently - say they had been defined as
>>  >protocols and there were deep assumptions in hosts that (say) port
>>  >80 would always be HTTP and only port 80 could be HTTP - we'd be
>>  >having to kludge around it in other, probably more cumbersome, ways.
>>  I am not arguing to nail down ports more.  I am arguing to nail them
>>  down less.  Suppose there had been no well-known ports at all.  I
>>  have never known an IPC mechanism in an OS to require anything like
>>  that.  Why should the 'Net?
>you have to have some mechanism for allowing an application to find
>the particular thing it wants to talk to.    well-known ports seem
>limited today for lots of reasons, but to actually get a useful
>increase in functionality beyond WKPs presumes several things, e.g. a
>system to maintain bindings from globally-unique names of things you
>might want to talk to (which I tend to call services or service
>instances), and the locations of those things, which further allows
>those bindings to be securely updated when the locations change, and
>which also (for privacy reasons) avoids exposing too much data about
>application activity to the net at large.  in the days of HOSTS.TXT
>files it's hard to imagine this being practical, and even to try to use
>today's DNS for this would be quite a stretch.   (now if you want to
>argue that DNS is hopelessly flawed you'll get enthusiastic agreement
>from me)

For all intents and purposes we are using DNS for this, or trying to. 
We have to find a way to make it scale.
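The subject line's alternative to well-known ports is the DNS SRV record, where a lookup returns (priority, weight, target host, port) tuples and the client selects among them. The sketch below is a simplified version of the RFC 2782 selection rule (lowest priority first, weighted random choice within a priority), with invented example records; a real client would get these from a DNS query.

```python
# Simplified RFC 2782-style SRV selection over invented records.
import random

def pick_srv(records, rng=random):
    """records: list of (priority, weight, target, port) tuples."""
    best = min(r[0] for r in records)              # lowest priority wins
    candidates = [r for r in records if r[0] == best]
    total = sum(r[1] for r in candidates)
    if total == 0:
        return rng.choice(candidates)
    point = rng.uniform(0, total)                  # weighted random pick
    running = 0
    for rec in candidates:
        running += rec[1]
        if point <= running:
            return rec
    return candidates[-1]

records = [
    (10, 60, "big.example.com.",    5060),
    (10, 20, "small.example.com.",  5060),
    (20,  0, "backup.example.com.", 5060),
]
prio, weight, target, port = pick_srv(records)
print(target, port)   # a priority-10 target; backup only if both fail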

>  > >I suppose one measure of an architecture is how complex the kludges
>>  >have to be in order to make things work.  Allowing apps to specify
>>  >port #s seems like a fairly minor kludge, compared to say having
>>  >apps build their own p2p networks to work around NAT and firewall
>>  >limitations.
>>  An architecture that requires kludges has bugs that should be fixed.
>sometimes the fixes are more complex and/or less reliable than the
>kludges, and sometimes they don't provide much more in the way of
>practical functionality.  or maybe they do provide a benefit in the
>long-term, but little benefit in the near-term so the incentives aren't
>in the right places to get them implemented.

Well, what I meant by that is that there is something fundamentally 
wrong.  If you have kludges, it is either because you are doing 
something you shouldn't (like passing IP addresses in application 
protocols) or because there is something about how the architecture 
should be structured that you don't understand yet.

(Sorry I am an optimist.  I don't believe that the right answer has 
to be kludgey.)

>>  >Okay fine.  But when I try to understand what a good set of tools
>>  >for these applications developers looks like, the limitations of
>>  >ports (or even well known ports) seem like fairly minor problems
>>  >compared to the limitations of NATs, scoped addresses, IPv6 address
>>  >selection, firewalls that block traffic in arbitrary ways, and
>>  >interception proxies that alter traffic.  DNS naming looks fairly
>>  >ugly
>>  They are all part and parcel of the same problem:  The 'Net only has
>>  half an addressing architecture.
>is there an implemented example of a net with a whole addressing
>architecture that could scale as well as the Internet?

Not sure.  Actually, you bring up a good point.  I was noticing some 
time ago that with operating systems we probably had tried 20 or 
more different designs before we started to settle on the 
Multics/Unix line.  But with networks we only had two or three 
examples before we fixed on one, and one could argue that we haven't 
fundamentally changed since 1972.  We haven't explored the solution 
space very much at all.  Who knows, there could be 100 different 
approaches that are better than this one.  Actually, the Internet 
hasn't scaled very well at all.  Moore's Law has, but without it I 
don't think you would find that the Internet was doing all that well. 
Suppose that instead of turning the Moore's Law crank 15 times in the 
last 35 years, we had only turned it 3, but were still trying to do 
as much with the 'Net.
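To put rough numbers on that thought experiment (taking each "turn of the crank" as a doubling, which is the usual reading of Moore's Law):

```python
# Each crank turn doubles capacity: 15 turns vs. 3 turns over the
# same 35 years is ~32768x vs. 8x the original hardware.
fifteen_turns = 2 ** 15
three_turns = 2 ** 3
print(fifteen_turns, three_turns, fifteen_turns // three_turns)
# -> 32768 8 4096
```

A factor of 4096 less raw capacity would make any architectural slack much harder to paper over.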

>>  >>RFC 1498
>>  >
>>  >Oh, that.  Rereading it, I think its concept of nodes is a bit
>>  >dated. But otherwise it's still prescient, it's still useful, and
>>  >nothing we've built in the Internet really gets this.  It's
>>  >saddening to read this and realize that we're still conflating
>>  >concepts that need to be kept separate (like nodes and attachment
>>  >points, and occasionally nodes and services).
>>  >
>>  >Of course, RFC 1498 does not describe an architecture.  It makes
>>  >good arguments for what kinds of naming we need in a network
>>  >protocol suite (applications would need still more kinds of naming,
>>  >because users do), but it doesn't explain how to implement those
>>  >bindings and make them robust, scalable, secure.  It's all well and
>>  >good to say that a node needs to be able to keep its identity when
>>  >it changes attachment points but that doesn't explain how to
>>  >efficiently route traffic to that node across changes in attachment
>>  >points.  etc.
>>  Gee, you want Jerry to do *all* the work for you! ;-)  Given that we
>>  haven't done it, maybe that is the problem:  No one in the IETF knows
>>  how to do it.
>or alternately - it's easy to design the perfect theoretical system if
>you make the hard problems out of scope.  then you (or your admirers)
>can claim that because nobody manages to actually implement it, that
>nobody is smart enough to appreciate your design's elegance :)
>but maybe that's not really the case here.   I do think it's a useful
>exercise to try to describe how Jerry's design - or something close to
>it - could be practically, reliably, scalably implemented by adapting
>the existing internet protocols and architecture.

I completely agree.  To your first point:  It is almost traditional 
in science for one person to throw out a new idea and then let others 
propose ways to realize it or confirm it.  Notice that Einstein 
didn't prove relativity; others did, and so on.

It would have been nice if he had saved us the trouble, but a little 
hard work never hurt!  He was clearly sorting out the problems as 
well.  And as I indicated in another email on this thread, he missed 
multiple paths between next hops which I think really cements his 

>>  >Also, there are valid reasons why we sometimes (occasionally) need
>>  >to conflate those concepts.  Sometimes we need to send traffic to a
>>  >service or service instance, sometimes we need to send traffic to a
>>  >node, sometimes we need to send traffic to an attachment point (the
>>  >usual reasons involve network or hardware management).
>>  I don't believe it.  If you think we need to conflate these concepts
>>  then you haven't thought it through carefully enough.  We always send
>>  traffic to an application.  Now there maybe some things you weren't
>>  thinking of as applications, but they are.
>if you were a developer who routinely built applications that
>consisted of large numbers of processes on different hosts that talk to
>each other, you'd probably want to call those things something besides
>applications.  :)

Yeah, in fact I have a much more refined terminology that I prefer, 
but as I said above, I was using "application" as shorthand for 
>  > >It's interesting to reflect on how a new port extension mechanism,
>>  >or replacement for ports as a demux token, would work if it took RFC
>>  >1498 in mind.  I think it would be a service (instance) selector
>>  >(not the same thing as a protocol selector) rather than a port
>>  >number relative to some IP address.  The service selectors would
>>  >need to be globally unique so that a service could migrate
>>  You still need ports.  But you need to not conflate ports and
>>  application naming.
>>  >from one node or attachment point to another.  There would need to
>>  >be some way of doing distributed assignment of service selectors
>>  >with a reasonable expectation of global uniqueness,
>>  service selectors?  No.  Application-names, yes.
>I think we're just arguing about what to call them.  I don't want to
>call them application names partially because it's too easy to think of
>an application name as something like "ftp" when that's the wrong level
>of precision, and partially because to me an application is larger than
>any single process that implements it.

Well, I don't like "selectors" because it too strongly implies some 
kind of central authority.  I don't want someone who wants to put up 
a distributed application with his own protocol to have to see anyone 
to do it.  As I said earlier, selectors and WKPs are like "hard-wiring 
low core."
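One way to get application names with no central authority, sketched below purely as an illustration (the key material, labels, and name format are all invented, not anything proposed in the thread): derive the name from a hash of the owner's public key plus a locally chosen service label, so anyone can mint a globally unique name without registering with anyone.

```python
# Registry-free naming sketch: hash(owner key + label) gives names that
# are globally unique with overwhelming probability.  Random bytes stand
# in for a real public key here.
import hashlib
import os

def application_name(owner_pubkey: bytes, label: str) -> str:
    digest = hashlib.sha256(owner_pubkey + label.encode()).hexdigest()
    return f"{label}.{digest[:16]}"   # short prefix for readability

key_a = os.urandom(32)                # stand-in for one owner's key
key_b = os.urandom(32)                # stand-in for another owner's key

# Two owners can pick the same label without colliding.
name_a = application_name(key_a, "chat")
name_b = application_name(key_b, "chat")
print(name_a != name_b)               # -> True
```

The name is stable and reproducible for its owner, yet no registry ever had to assign it, which is the property WKPs and centrally assigned selectors lack.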

Take care,
