Cloud Research

Tracing and Monitoring

Description yet to come!


D3N: A multi-layer cache for data centers

This project designs and develops D3N, a novel multi-layer cooperative caching architecture that mitigates network imbalances by caching data on the access side of each layer of a hierarchical network topology.
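The layered lookup-and-fill behavior can be sketched as follows. This is an illustrative toy, not D3N's actual implementation: the layer names, FIFO eviction, and fill-on-hit policy are assumptions made for brevity.

```python
# Toy sketch of multi-layer cooperative caching: each network layer (e.g.
# rack, cluster) holds a cache on its access side. Reads are served from the
# closest layer that has the block; misses fall through to the backing store
# and fill the caches on the way back.

class LayerCache:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.store = {}  # block_id -> data (FIFO eviction, for brevity)

    def get(self, block_id):
        return self.store.get(block_id)

    def put(self, block_id, data):
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))  # evict oldest entry
        self.store[block_id] = data

def read_block(block_id, layers, backend):
    """Try caches nearest-first; on a hit, fill the layers closer to the client."""
    for i, layer in enumerate(layers):
        data = layer.get(block_id)
        if data is not None:
            for closer in layers[:i]:
                closer.put(block_id, data)
            return data, layer.name
    data = backend[block_id]  # miss everywhere: fetch from backing store
    for layer in layers:
        layer.put(block_id, data)
    return data, "backend"
```

A first read is served by the backend and fills every layer; repeated reads are then served from the nearest (rack-side) cache, keeping traffic off the congested upper layers.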



Open Cloud Testbed

The NSF “Open Cloud Testbed” (OCT) project will build and support a testbed for research and experimentation into new cloud platforms – the underlying software which provides cloud services to applications. Testbeds such as OCT are critical for enabling research into new cloud technologies – research that requires experiments which potentially change the operation of the cloud itself.

FPGAs in Large-scale Computer Systems

Future data centers are moving towards a more fluid model, with computation and communication no longer localized to commodity CPUs and routers. Next generation “data-centric” data centers will “compute everywhere,” whether data is stationary (in memory) or on the move (in network). Reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), is transforming ordinary clouds into massive supercomputers.

Elastic Secure Infrastructure

The Elastic Secure Infrastructure (ESI) project encompasses work in several areas to design, build, and evaluate secure bare-metal elastic infrastructure for data centers. Additional research focuses on market-based models for resource allocation.


Linux Unikernels

Unikernels allow applications to be deployed in a highly optimized manner, with numerous use cases for the cloud such as function-as-a-service. An application running in a unikernel does not incur the overhead of context switches, the entire software stack has a smaller footprint, and deployment is simpler than with conventional kernels. However, unikernels have never been fully adopted due to a number of roadblocks.

Implementing Secure Multi-Party Computation

Secure Multiparty Computation (MPC) is a cryptographic primitive that allows several parties to jointly and privately compute desired functions over secret data. Building and deploying practical MPC applications faces several obstacles, including performance overhead, complicated deployment and setup procedures, and integration of MPC protocols into modern software stacks.
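One classic building block behind such systems is additive secret sharing, sketched below. This is a textbook toy, not the project's implementation: the field modulus and party count are arbitrary assumptions, and real MPC protocols add much more (multiplication, comparison, malicious-security machinery) on top.

```python
# Additive secret sharing over a prime field: each party holds one
# random-looking share, and no single share reveals anything about the
# secret. Sums can be computed "under the sharing": each party adds its
# shares locally, and only the final result is reconstructed.
import secrets

P = 2**61 - 1  # field modulus (a Mersenne prime; choice is illustrative)

def share(secret, n_parties):
    """Split `secret` into n_parties additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares; requires all parties' shares."""
    return sum(shares) % P

def shared_add(shares_a, shares_b):
    """Each party adds its two shares locally -- no communication needed."""
    return [(a + b) % P for a, b in zip(shares_a, shares_b)]
```

Even this toy hints at the deployment obstacles mentioned above: every input must be split and distributed before any computation, and all parties must participate to recover a result.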



Outfitting QEMU/KVM with Partitioning Hypervisor Functionality

This project extends the virtualization capabilities of QEMU and KVM by adding partitioning hypervisor functionality. With this implementation, hardware resources can be exclusively assigned to specific tasks and VMs.
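One mechanism such exclusive assignment builds on is CPU affinity: pinning a thread to a dedicated physical core so a VM's vCPU never shares it with other work. The sketch below (an assumption for illustration, not the project's code) shows the Linux affinity call that QEMU vCPU threads can likewise be pinned with.

```python
# Illustrative sketch: restrict a process (or thread) to a fixed set of CPU
# cores on Linux via sched_setaffinity. A partitioning hypervisor combines
# this kind of pinning with keeping *other* tasks off those cores (e.g. via
# isolcpus= or cpusets), which is outside this sketch.
import os

def pin_to_cores(pid, cores):
    """Pin `pid` (0 = the calling process) to the given CPU cores and
    return the resulting affinity set."""
    os.sched_setaffinity(pid, set(cores))
    return os.sched_getaffinity(pid)

# Pin ourselves to core 0; a hypervisor would do this per vCPU thread.
affinity = pin_to_cores(0, {0})
```

Note that pinning alone gives dedication from one side only; true partitioning also requires isolating the core from the general scheduler.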

Removing Memory as a Noise Factor

Memory bandwidth is increasingly the bottleneck in modern systems and a resource that, until now, we could not schedule. This means that, depending on what else is running on a server, performance may be highly unpredictable, inflating 99th-percentile tail latency, which is increasingly important in modern distributed systems. Moreover, the increasing importance of high-performance computing applications, such as machine learning and real-time systems, demands more deterministic performance, even in shared environments.
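The contention at issue is easy to observe with a toy microbenchmark (an illustration of the problem, not the project's tooling): time large buffer copies to estimate achieved bandwidth, then note how the number drops when another memory-hungry process runs on the same machine.

```python
# Toy memory-bandwidth estimate: repeatedly copy a large buffer and divide
# bytes moved by elapsed time. Run two instances concurrently and each will
# typically report well below its solo figure -- the unscheduled contention
# this project targets. Buffer size and round count are arbitrary.
import time

def copy_bandwidth_gbps(size_mib=256, rounds=8):
    src = bytearray(size_mib * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(rounds):
        dst = bytes(src)  # forces a full read + write pass over the buffer
    elapsed = time.perf_counter() - start
    moved = 2 * rounds * len(src)  # bytes read plus bytes written
    return moved / elapsed / 1e9
```

Because no commodity scheduler arbitrates this resource, the reported figure varies with co-located load rather than with anything the application controls.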

An Optimizing Operating System: Accelerating Execution with Speculation

To optimize performance, Automatically Scalable Computation (ASC), a Harvard/BU collaboration, attempts to auto-parallelize single-threaded workloads, reducing the effort programmers must invest to achieve wall-clock speedup. SEUSS takes a different approach by splicing a custom operating system into the backend of a high-throughput distributed serverless platform, Apache OpenWhisk.

Want to get Involved?

Six years after the MOC’s 2014 launch, the work done by the MOC team, its partners, and numerous collaborators has led to a much larger constellation of deeply connected initiatives in 2020 – driving the broader and deeper discussion at the 2020 Open Cloud Workshop. We are excited to be in a place where there is more opportunity than ever before to get involved.