[e2e] Port numbers, SRV records or...?

John Day day at std.com
Tue Aug 15 07:14:33 PDT 2006


At 7:11 +0000 2006/08/15, Greg Skinner wrote:
>On Fri, Aug 11, 2006 at 10:15:17AM -0400, John Day wrote:
>>  Again, if you analyze what goes on in a single system, you will find
>>  that the mechanism for finding the destination application and
>>  determining whether the requesting application can access it is
>>  distinct and should be distinct from the problem of providing a
>>  channel with specific performance properties. I completely agree with
>>  everything you said above, but TCP is not and should not be the whole
>>  IPC. It just provides the reliable channel.
>>
>>  Harkening back to the historical discussion, this was all clear 35
>>  years ago but it was a lot of work to build a network on machines
>>  with much less compute power than your cell phone and we didn't have
>>  time for all of the niceties.  We had to show that this was useful
>>  for something.  So we (like all engineers) took a few short cuts
>>  knowing they weren't the final answer.  It is just unfortunate that
>>  the people who came later have not put it right.
>
>I'm somewhat confused about where this general line of reasoning is
>going.  You're making an argument for some generalized notion of
>well-known services.  I wonder if the effort is really worth it (when
>weighed against other things that need to be done).  There's been
>talk of spam, for example, and how a different notion of well-known
>services than what we have now would reduce the spam problem.  I'm not
>so sure.  The problem with spam seems to stem from the facts that (1)
>email is free, and (2) it's easy to write applications that
>compromise machines to take advantage of (1).
>
>If a more generalized notion of well-known service had been developed,
>would other things have been able to progress as well as they did?
>Other things needed to be built in order for the ARPAnet and Internet
>to be useful (especially to the people paying for the research) such
>as scalable naming and congestion avoidance.  There is only so much
>time (money) available to do work.  Are we really that much worse off
>because we have well-known ports?

I am arguing for a complete addressing architecture, not the half of 
an architecture we have.  What we have is an unfinished demo.  If it 
were an OS, it would make DOS look good.
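
To make the distinction concrete: with well-known ports, locating the
destination application is welded to the transport layer, while DNS
SRV records at least move the "where is the service?" question out of
TCP.  A rough sketch of the contrast in Python (assuming the
third-party dnspython package; example.com and its SRV records are
hypothetical):

    import socket
    import dns.resolver  # third-party dnspython package (assumed)

    # Well-known port: the service name maps to a fixed transport
    # port from /etc/services; finding the application and opening
    # the channel are entangled in one number.
    port = socket.getservbyname('imap', 'tcp')  # always 143
    print('well-known imap/tcp port:', port)

    # SRV record: the service name is resolved, outside of TCP, to
    # whatever (host, port) the operator chose to run it on.
    for rr in dns.resolver.resolve('_imap._tcp.example.com', 'SRV'):
        print('connect to', rr.target, 'port', rr.port,
              '(priority', rr.priority, ', weight', rr.weight, ')')

Even the SRV case only names a host and port; it says nothing about
whether the requester may access the application, which is the other
half of what a complete IPC facility has to provide.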

While I understand that your appraisal of how the funding was being 
allocated is completely rational, it bears little resemblance to 
reality.

What I find really remarkable is the inability of current researchers 
to see beyond what is there.  They are so focused on current 
developments that they cannot see past them.

Are we really worse off for having half an architecture?  Yes.

>
>>  At 2:02 -0400 2006/08/10, Keith Moore wrote:
>>  >Which is a good thing, since we generally do not have the luxury of
>>  >either doing things right the first time (anticipating all future
>>  >needs) or scrapping an old architecture and starting again from
>>  >scratch. Software will always be working around architectural
>>  >deficiencies. Similarly we have to keep compatibility and transition
>>  >issues in mind when considering architectural changes.
>>  Be careful.  We do and we don't.  I have known many companies that
>>  over time have step by step made wholesale *replacement* of major
>>  parts of their products as they transition.  Sometimes maintaining
>>  backward compatibility, sometimes not.  But new releases come out
>>  with completely new designs for parts of the system.  You are arguing
>>  that nothing is ever replaced and all change is by modifying what is
>>  there.  This is how evolution works.  And 99% of its cases end as
>>  dead-ends in extinction.  With evolution, that doesn't matter; there
>>  are 100s of billions of cases.  But when there is one case, the odds
>>  aren't too good.  (And don't tell me not to worry because the actions
>>  of the IETF are not random mutations.  There are those that would
>>  dispute that! ;-))
>
>But after a certain point, the Internet could not be completely
>replaced.  There was too much "infrastructure" that people were

A common myth intended to protect vested interests.

>dependent on, so new work continued around the architectural
>deficiencies.  It would be nice to apply Frederick Brooks' "plan to
>throw one away," but that's not always practical, even in research.

;-)  This is the argument I always find the most entertaining.  We 
have been hearing it since the early 1980s.  It assumes that the Net 
is nearing the end of its growth.  That we have done just about 
everything with it we can.  That Telnet and FTP are all we need (a 
view expressed to me in 1975).  Hence there is no reason to go on 
improving.  Frankly, I believe that the Net is still in the earliest 
stages of its growth.

I don't expect to see a wholesale replacement overnight.  But I do 
think it is possible to move to a much better Net over time.  Those 
who rely on the argument that there is just too much infrastructure 
in place for change are generally the ones who either lack 
imagination or are merely protecting their vested interests (I hope 
more the latter than the former).  It would seem that the Internet 
community has become even more blinded to the next step than the 
phone companies of the 70s were when we first started this.  When I 
look back over the last 25 years, it saddens me to see a field that 
had such vibrancy in its early days fall so quickly into acting like 
stodgy old men.  Maybe they just ran out of ideas.  I don't really 
know.

Take care,
John


