The NSF “Open Cloud Testbed” (OCT) project will build and support a testbed for research and experimentation into new cloud platforms – the underlying software which provides cloud services to applications. Testbeds such as OCT are critical for enabling research into new cloud technologies – research that requires experiments which potentially change the operation of the cloud itself.
The Center for Systems Innovation at Scale (i-Scale) is a partnership between academia and industry to explore the unique challenges of computing systems at large scale, across software, networking, and hardware layers. i-Scale researchers pursue pre-competitive research problems relevant to our industrial partners, often collaborating directly through open-source partnerships.
This project designs and develops D3N, a novel multi-layer cooperative caching architecture that mitigates network imbalances by caching data on the access side of each layer of a hierarchical network topology.
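To illustrate the idea of access-side caching across layers, here is a minimal sketch in Python. The class and function names are hypothetical and the eviction policy is deliberately simplistic; it only shows the lookup pattern of checking caches from the client side inward and populating closer layers on a hit, not D3N's actual design.

```python
class CacheLayer:
    """A toy fixed-capacity cache for one layer of the topology."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # insertion-ordered; evicts oldest entry when full

    def get(self, key):
        return self.store.get(key)

    def put(self, key, value):
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))  # evict oldest insertion
        self.store[key] = value


def read(key, layers, backend):
    """Look up `key` layer by layer from the access side inward;
    on a hit (or backend fetch), populate the layers closer to the client."""
    for i, layer in enumerate(layers):
        value = layer.get(key)
        if value is not None:
            for closer in layers[:i]:
                closer.put(key, value)
            return value
    value = backend[key]  # fall through to the backing object store
    for layer in layers:
        layer.put(key, value)
    return value
```

A second read of the same object is then served from the layer nearest the client instead of crossing the oversubscribed upper links of the topology.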
Future data centers are moving towards a more fluid model, with computation and communication no longer localized to commodity CPUs and routers. Next generation “data-centric” data centers will “compute everywhere,” whether data is stationary (in memory) or on the move (in network). Reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), is transforming ordinary clouds into massive supercomputers.
ESI encompasses work in several areas to design, build and evaluate secure bare-metal elastic infrastructure for data centers. Additional research focuses on market-based models for resource allocation.
Unikernels allow applications to be deployed in a highly optimized manner with numerous use cases for the cloud, such as function-as-a-service. An application running in a unikernel does not incur the overhead of context switches, the software stack is smaller, and deployment is easier than with conventional kernels. However, unikernels have never been widely adopted due to a number of roadblocks.
Secure Multiparty Computation (MPC) is a cryptographic primitive that allows several parties to jointly and privately compute desired functions over secret data. Building and deploying practical MPC applications faces several obstacles, including performance overhead, complicated deployment and setup procedures, and adoption of MPC protocols into modern software stacks.
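As a concrete taste of the primitive, the sketch below shows additive secret sharing, one classic building block of MPC protocols (not necessarily the protocols this project targets). Each party splits its input into random shares that sum to the secret modulo a prime; parties can add their shares locally, and only the sum of the inputs is ever reconstructed.

```python
import random

P = 2**31 - 1  # a prime modulus; all arithmetic is done mod P

def share(secret, n):
    """Split `secret` into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares; any n-1 of them reveal nothing about the secret."""
    return sum(shares) % P

# Two parties secret-share their private inputs among three servers.
a_shares = share(25, 3)
b_shares = share(17, 3)

# Each server adds the shares it holds, locally and independently.
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]

# Reconstructing yields the sum (42) without exposing either input.
assert reconstruct(sum_shares) == 42
```

Real deployments layer far more on top of this (malicious security, multiplication triples, networking), which is exactly where the performance and deployment obstacles mentioned above arise.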
This project extends the virtualization capabilities of QEMU and KVM by adding partitioning hypervisor functionality. With this implementation, hardware resources can be exclusively assigned to specific tasks and VMs.
Memory bandwidth is increasingly the bottleneck in modern systems and a resource that, until now, we could not schedule. As a result, performance can be highly unpredictable depending on what else is running on a server, impacting the 99th-percentile tail latency that is increasingly important in modern distributed systems. Moreover, the increasing importance of high-performance computing applications, such as machine learning and real-time systems, demands more deterministic performance, even in shared environments.
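One existing mechanism in this space, mentioned here only as background and not necessarily the project's approach, is Memory Bandwidth Allocation in Intel RDT, which Linux exposes through the resctrl filesystem. The sketch below (function names are hypothetical) shows how a bandwidth cap on a cache domain is expressed as a `schemata` line; actually applying it requires root and supported hardware.

```python
from pathlib import Path

# Standard mount point for the Linux resctrl interface (Intel RDT / AMD QoS).
RESCTRL = Path("/sys/fs/resctrl")

def mba_schemata(domain, percent):
    """Build a schemata line capping memory bandwidth on one cache
    domain to roughly `percent` of peak (Memory Bandwidth Allocation)."""
    return f"MB:{domain}={percent}"

def throttle(group, domain, percent):
    """Create a resctrl control group and write the bandwidth cap to it.
    Needs root privileges and a kernel/CPU with resctrl MBA support."""
    g = RESCTRL / group
    g.mkdir(exist_ok=True)
    (g / "schemata").write_text(mba_schemata(domain, percent) + "\n")
```

Tasks assigned to such a group then share a throttled slice of bandwidth, which is one coarse way to make tail latency more predictable on shared servers.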
To optimize performance, Automatically Scalable Computation (ASC), a Harvard/BU collaboration, attempts to auto-parallelize single-threaded workloads, reducing the effort programmers must expend to achieve wall-clock speedup. SEUSS takes a different approach, splicing a custom operating system into the backend of a high-throughput distributed serverless platform, Apache OpenWhisk.
Tracing and Monitoring
Description yet to come!
Six years after the MOC’s 2014 launch, the work done by the MOC team, its partners, and numerous collaborators has led to a much larger constellation of deeply connected initiatives in 2020 – driving the broader and deeper discussion at the 2020 Open Cloud Workshop. We are excited to be in a place where there is more opportunity than ever before to get involved.