The vision of the MOC is to create a cloud architecture that allows providers of computing and storage resources to compete for tenant services at multiple levels, all the way down to the bare metal. Networking, however, is traditionally viewed as a commodity, much like power or cooling: necessary for the very existence of the cloud. This view implicitly excludes networking as a resource that can be offered by multiple providers and negotiated in a market.
Our vision is to go beyond this traditional view of networking in the cloud datacenter, by creating and deploying a network architecture where multiple network providers can offer connectivity services — even at the physical layer — to tenants, and where these services can be negotiated in a marketplace.
The goal of the MOC Network Working Group is to define this vision of networking as a first-class market resource, and an architecture that implements the vision in the context of the OCX. While it is still an open question whether a market will develop for networking resources within a datacenter, our architecture should not preclude this from happening.
But why do this? Isn't a network with full bisection bandwidth among all machines in the datacenter enough? Our view is that, although such a network is sufficient for many applications, it hinders innovation in the network itself. If, instead, multiple providers can compete to offer value-added network services, including at the physical layer, the architecture becomes a testing ground for innovative services and technologies. Technologies that require deployment across entire paths, such as pFabric or DCTCP, or that are too expensive to deploy across the entire datacenter, such as (currently) 100Gbps Ethernet or InfiniBand, have no simple path to deployment today. Our goal is to allow incremental deployment of these and other technologies where they make sense from both a functionality and an economic point of view.
The Internet offers a powerful analogy. Since its early prototypes in the 1970s, the Internet has evolved from a single-provider environment into a thriving ecosystem in which multiple network providers compete in a marketplace. An organization today can physically connect to an IXP and, from there, choose services from different transit providers that compete on capacity, reliability, connectivity, and cost. This open architecture allows the coexistence of more than 18 low-latency providers between the New York and Chicago exchanges, and of multiple undersea cables that share similar routes.
As of May 2016, we have a preliminary design for the architecture, centered on a network exchange component, NetEx. We have published a paper describing an initial take on the architecture, along with a prototype in Mininet. We are now working on incremental design steps to deploy a simplified version of NetEx in the MOC, enabling, among other things, multiple network service providers and the provisioning of network resources for a given project across pods.
- (Lead) Rodrigo Fonseca, Brown University
- Da Yu, Brown University
- Luo Mai, Imperial College London
- Piyanai Saowarattitada, Boston University
- Orran Krieger, Boston University
- Somaya Arianfar, Cisco Systems
- David Oran, Cisco Systems
- Peter Desnoyers, Northeastern University
- Jason Hennessey, Boston University
- Sahil Tikale, Boston University
- Guy Fedorkow, Juniper Networks
- Tom Nadeau, Brocade
- Larry Rudolph, MIT and Two Sigma
In the OCX architecture, multiple pools of physical resources — compute and storage — are provisioned and managed by different providers. These pools are called pods, and a pod can be, for example, a rack-scale computer, a physical container, or a storage pod. Each pod is responsible for its internal networking, and pods are interconnected by a commodity network.
We extend this architecture by allowing multiple network providers to bridge these pods and compete by offering services in an open marketplace. To realize this, we design a network marketplace called Network Exchange (NetEx). To register in NetEx, network providers physically connect to a set of programmable Edge-of-Pod (EoP) switches. Similar to Internet Exchange Points (IXPs), these switches split the datacenter network into an inter-pod network, whose alternative physical infrastructures are exchanged in NetEx, and closed, fast intra-pod networks, e.g., a FatTree network that provides full-bisection bandwidth within each pod.
Users interact with NetEx by submitting high-level requests for connectivity. The marketplace forwards these requests to eligible providers (i.e., those connecting the relevant pods), which, in turn, return offers consisting of priced path segments, along with their characteristics. NetEx then facilitates the transfer of payment and the provisioning of the path segments selected by users. This high-level interface allows users to obtain service without being aware of the complexities of the different underlays, and allows providers to expose only the minimum information required for the market to operate. Providers are free to implement paths however they see fit, and to use a wide spectrum of valuation and business strategies.
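The request/offer/selection flow above can be sketched in code. This is a minimal illustration, not NetEx's actual interface: the class names, fields, and the "cheapest offer" tenant policy are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical data model for illustration; NetEx's real
# interfaces and offer attributes may differ.

@dataclass(frozen=True)
class PathSegment:
    provider: str
    src_pod: str
    dst_pod: str
    bandwidth_gbps: float   # advertised capacity
    latency_us: float       # advertised latency
    price: float            # price per unit time (arbitrary units)

@dataclass(frozen=True)
class Request:
    pods: frozenset         # pods the tenant wants connected
    min_bandwidth_gbps: float

@dataclass
class Provider:
    name: str
    pods_served: frozenset
    segments: list

    def offer(self, request):
        # Offer only segments whose endpoints are both requested.
        return [s for s in self.segments
                if {s.src_pod, s.dst_pod} <= request.pods]

def eligible(providers, request):
    """A provider is eligible if it connects all requested pods."""
    return [p for p in providers if request.pods <= p.pods_served]

def collect_offers(providers, request):
    """Forward the request to eligible providers and keep offers
    that meet the requested bandwidth."""
    offers = []
    for p in eligible(providers, request):
        offers.extend(p.offer(request))
    return [o for o in offers
            if o.bandwidth_gbps >= request.min_bandwidth_gbps]

def select_cheapest(offers):
    """One possible tenant policy: pick the lowest-priced offer."""
    return min(offers, key=lambda o: o.price) if offers else None
```

Note that the selection policy lives entirely on the tenant side: a tenant could just as easily optimize for latency or reliability, which is the point of exposing priced, characterized segments rather than a single opaque service.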
We are, as far as we know, the first to propose a network marketplace extending to the physical layer for a cloud datacenter, but our design borrows liberally from the datapath mechanisms of Pathlet routing and Segment routing, and from some of the market aspects of the ChoiceNet architecture.
Integrating with the MOC
The first step uses a slightly less general definition of which traffic is mapped onto each path segment allocated from network service providers. Currently, MOC projects can define different VLANs as their networking abstraction. NetEx will initially be used to select path segments between two or more pods that serve as the connection among a customer's VLANs provisioned in each pod.
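The mapping described above, from per-pod project VLANs onto a selected inter-pod path segment, can be sketched as rules installed at the Edge-of-Pod switches. The rule format below is hypothetical, invented for illustration; the actual EoP switch configuration mechanism is not specified here.

```python
def vlan_bridge_rules(project_vlans, segment_id):
    """Build EoP switch rules (in a made-up dict format) that steer a
    project's VLAN traffic in each pod onto the selected inter-pod
    path segment, and deliver arriving segment traffic back to the
    local VLAN.

    project_vlans: mapping of pod name -> the project's VLAN ID there.
    segment_id: identifier of the path segment chosen via NetEx.
    """
    rules = []
    for pod, vlan in project_vlans.items():
        # Egress: traffic on the project VLAN is tagged onto the
        # purchased path segment.
        rules.append({"pod": pod, "match_vlan": vlan,
                      "action": "push_segment:%s" % segment_id})
        # Ingress: traffic arriving on the segment is untagged back
        # into the pod-local project VLAN.
        rules.append({"pod": pod, "match_segment": segment_id,
                      "action": "pop_to_vlan:%d" % vlan})
    return rules
```

Because the VLAN IDs are pod-local, the same project may hold different VLAN IDs in different pods; the segment identifier is what ties the two ends together across the inter-pod network.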
Initial prototype on GitHub
Send email to Rodrigo Fonseca.