[e2e] Interoperable Convergence of Storage, Networking and Computing
mbeck at utk.edu
Fri Aug 18 10:53:17 PDT 2017
I would like to draw your attention to a paper that I think people on this list might be interested in.
Interoperable Convergence of Storage, Networking and Computation
Micah Beck, Terry Moore, Piotr Luszczek
The central concept of the paper is that a platform to support distributed systems with deployment scalability (the ability to grow across a great variety of domain boundaries, such as geographic and administrative ones) should have as its spanning layer (the common service and basis of interoperability) a generalization of the Internet’s Layer 2 that models storage and processing resources as well as data transfer. This would enable heterogeneity in the implementation of distributed services in a generalized Layer 3.
All comments and feedback are encouraged!
Assoc. Professor, EECS
University of Tennessee, Knoxville
Some additional words about why you might find this paper interesting:
Many (15+) years ago some colleagues and I addressed the question of whether “end-to-end arguments” could help in the design of a communication platform that exhibits “deployment scalability” (the ability to grow across a great variety of domain boundaries, such as geographic and administrative ones) while incorporating storage and processing/computing resources as well as data transfer. From the E2E interest list I got a variety of feedback, ranging from “yes” to “NO!”
We were led to an analogy between storage and networking [“Memory locations are just wires turned sideways in time”, Dan Hillis, “Why Computer Science Is No Good”, IJTP 1982] in the design of Logistical Networking ["An End-to-End Approach to Globally Scalable Network Storage” Beck, Moore & Plank, SIGCOMM 2002]. We were so taken with this idea that we wrote another paper making a close analogy of both storage and networking to processing [“An End-to-End Approach to Globally Scalable Programmable Networking”, Beck, Moore & Plank, FDNA 2003]. This gives rise to the somewhat less elegant observation that “Processes are just wiggly paths through a high-dimensional vector space in which each bit of the process state is modeled as a discrete dimension with just two values, zero and one.” [see http://web.eecs.utk.edu/~mbeck/MultidimensionalNetworking.pdf for a picture from my forthcoming paper “35 Years Later Computer Science Is Still No Good”].
2. What This Paper Proposes
This paper is based on the observation that Storage, Networking and Computation can all be expressed in terms of operations on memory/storage buffers. It argues further that such buffer operations and services can be described using a common model of the buffer, enabling them to interoperate at a very low level, and that the development of generalized and programmable networking and distributed systems that aspire to “deployment scalability” requires a platform that enables such interoperation. The claim is that successful research prototypes have not achieved such deployment scalability (“becoming the next Internet”) because they are based on a stateless model of networking (i.e. the Internet) that hides topology and intermediate buffers. The central concept of the paper is that a platform to support distributed systems with deployment scalability should have as its spanning layer (the common service and basis of interoperability) a generalization of the Internet’s Layer 2 that can model substantial storage and processing resources as well as data transfer. This would enable heterogeneity in the implementation of distributed services in a generalized Layer 3.
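To make the observation concrete, here is a minimal sketch of the idea that storage, networking and computation can all be phrased as operations on exposed buffers. The names (Buffer, allocate, store, load, copy, apply) are hypothetical illustrations of my own, not an API from the paper:

```python
class Buffer:
    """A fixed-size, explicitly allocated storage buffer."""
    def __init__(self, size):
        self.size = size
        self.data = bytearray(size)

def allocate(size):
    """Provisioning: obtain an exposed buffer of a given size."""
    return Buffer(size)

def store(buf, offset, payload):
    """Storage: write bytes into a buffer (a 'wire turned sideways in time')."""
    buf.data[offset:offset + len(payload)] = payload

def load(buf, offset, length):
    """Storage: read bytes back out of a buffer."""
    return bytes(buf.data[offset:offset + length])

def copy(src, dst, length):
    """Networking: data transfer as a copy between (possibly remote) buffers."""
    dst.data[:length] = src.data[:length]

def apply(op, src, dst):
    """Computation: an operation that reads one buffer and writes another."""
    dst.data[:] = op(bytes(src.data))

# A toy 'service' composed from the three kinds of primitive:
a = allocate(5)
b = allocate(5)
c = allocate(5)
store(a, 0, b"hello")             # storage
copy(a, b, 5)                     # networking
apply(lambda d: d.upper(), b, c)  # computation
print(load(c, 0, 5))              # b'HELLO'
```

The point of the sketch is only that all three services share one common object, the buffer, so a spanning layer that exposes buffers can in principle support heterogeneous implementations of all three above it.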
The claim is that the desire for the creation of such a platform has driven much of the last 30 years’ worth of research and development in generalized and programmable networking, as well as wide area distributed system platforms and standardized application environments. Some have sought to converge at the application layer, others have sought to add features to the Network or Transport Layers, and some (notably in the community that gave rise to GENI) have suggested a more general “thin waist” at Layer 2.
The service proposed in this paper is motivated by a number of design principles, primary among them one that I have named “The Deployment Scalability Tradeoff”. It may be partially explained by an end-to-end argument ["End-to-End Arguments in System Design”, Saltzer, Reed & Clark, ACM TOCS, 1984], but it is primarily an account of the Hourglass Model of layered service interfaces. The paper’s thesis is that this principle accounts at least partially for the success of the Internet and of the Unix kernel as service specifications, and that Exposed Buffer Processing is the (necessary? prudent? plausible?) analog in the design of a generalized Layer 2 to be the basis of the next stage of ICT infrastructure development.