[e2e] Port numbers, SRV records or...?

Greg Skinner gds at best.com
Tue Aug 15 00:11:58 PDT 2006

On Fri, Aug 11, 2006 at 10:15:17AM -0400, John Day wrote:
> Again, if you analyze what goes on in a single system, you will find 
> that the mechanism for finding the destination application and 
> determining whether the requesting application can access it is 
> distinct and should be distinct from the problem of providing a 
> channel with specific performance properties. I completely agree with 
> everything you said above, but TCP is not and should not be the whole 
> IPC. It just provides the reliable channel.
> Harkening back to the historical discussion, this was all clear 35 
> years ago but it was a lot of work to build a network on machines 
> with much less compute power than your cell phone and we didn't have 
> time for all of the niceties.  We had to show that this was useful 
> for something.  So we (like all engineers) took a few short cuts 
> knowing they weren't the final answer.  It is just unfortunate that 
> the people who came later have not put it right.

I'm somewhat confused about where this general line of reasoning is
going.  You're making an argument for some generalized notion of
well-known services.  I wonder if the effort is really worth it (when
weighed against other things that need to be done).  There's been
talk of spam, for example, and how a different notion of well-known
services than what we have now would reduce the spam problem.  I'm not
so sure.  The problem with spam seems to stem from the facts that
(1) email is free, and (2) it's easy to write applications that
compromise machines to take advantage of (1).

If a more generalized notion of well-known service had been developed,
would other things have been able to progress as well as they did?
Other things needed to be built in order for the ARPAnet and Internet
to be useful (especially to the people paying for the research), such
as scalable naming and congestion avoidance.  There is only so much
time (money) available to do work.  Are we really that much worse off
because we have well-known ports?
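(For contrast with well-known ports, here is a minimal sketch of how the
SRV-record alternative from the subject line works.  The record set is
hypothetical, and the selection logic below is just an illustration of
the RFC 2782 rules: prefer the lowest priority value, then load-balance
by weight within that priority class.)

```python
import random

# Hypothetical SRV record set for some service; fields follow RFC 2782:
# (priority, weight, port, target).  With well-known ports, none of this
# indirection exists -- the port is fixed by global agreement.
records = [
    (10, 60, 5060, "big.example.com"),
    (10, 40, 5060, "small.example.com"),
    (20,  0, 5060, "backup.example.com"),  # only used if priority-10 hosts fail
]

def pick_srv(records):
    """Select one SRV record: lowest priority first, then a
    weighted-random choice among records of that priority."""
    lowest = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == lowest]
    total = sum(r[1] for r in candidates)
    roll = random.uniform(0, total)
    for rec in candidates:
        roll -= rec[1]
        if roll <= 0:
            return rec
    return candidates[-1]

priority, weight, port, target = pick_srv(records)
print(f"connect to {target}:{port}")
```

The point of the contrast: the service operator controls the port and
the spread of load, at the cost of a DNS lookup before every connection.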

> At 2:02 -0400 2006/08/10, Keith Moore wrote:
> >Which is a good thing, since we generally do not have the luxury of 
> >either doing things right the first time (anticipating all future 
> >needs) or scrapping an old architecture and starting again from 
> >scratch. Software will always be working around architectural 
> >deficiencies. Similarly we have to keep compatibility and transition 
> >issues in mind when considering architectural changes.
> Be careful.  We do and we don't.  I have known many companies that 
> over time have step by step made wholesale *replacement* of major 
> parts of their products as they transition.  Sometimes maintaining 
> backward compatibility, sometimes not.  But new releases come out 
> with completely new designs for parts of the system.  You are arguing 
> that nothing is ever replaced and all change is by modifying what is 
> there.  This is how evolution works.  And 99% of its cases end as 
> dead-ends in extinction.  With evolution, that doesn't matter: there 
> are 100s of billions of cases.  But when there is only one case, the 
> odds aren't too good.  (And don't tell me not to worry because the actions 
> of the IETF are not random mutations.  There are those that would 
> dispute that! ;-))

But after a certain point, the Internet could not be completely
replaced.  There was too much "infrastructure" that people were
dependent on, so new work continued around the architectural
deficiencies.  It would be nice to apply Frederick Brooks' "plan to
throw one away," but that's not always practical, even in research.
