February 7, 2020

Announcing the winners of the Systems for ML research awards

By: Meta Research

In September 2019 at the annual AI Systems Faculty Summit, Facebook launched the Systems for Machine Learning request for proposals with the goal of funding impactful solutions in the areas of developer toolkits, compilers/code generation, system architecture, memory technologies, and ML accelerator support. The winners have now been chosen and are listed below.

“Due to the great successes of collaborations that have come from our previous RFPs in systems and machine learning, we’re very excited to continue with another round of investments in academic research in this important domain,” says Kim Hazelwood, Senior Engineering Manager at Facebook.

Previous RFPs have included the 2017 Hardware and Software Systems RFP and the 2019 AI System Hardware/Software Co-Design RFP. This year, we were particularly interested in proposals that fell into the following categories:

  • Scalable, elastic, and reliable distributed machine learning and inference
  • System and architecture support for personalized recommendation systems
  • Programming languages and compilers for platform-agnostic machine learning
  • Resource provisioning for efficient inference/training in heterogeneous data centers
  • On-device training and inference
  • System and architecture support for privacy-preserving machine learning
  • System support for multi-party computation and private/secure inference
  • Emerging technologies, such as near-memory processing and in-memory computing systems applied to machine learning
  • Novel machine learning systems beyond neural networks
  • Emerging technologies for efficient machine learning

We received 167 proposals from more than 100 universities in 26 countries. Thank you to all the researchers who took the time to submit a proposal, and congratulations to the award recipients. Winners will be invited to the next AI Systems Faculty Summit in Fall 2020.

For more details about the Systems for ML RFP, including background information, eligibility, and proposal requirements, visit its application page.

Research award winners

Principal investigators are listed first unless otherwise noted.

A Near-Memory Processing Architecture for Training Recommendation Systems
Minsoo Rhu (KAIST)

Accelerating and Deploying Natural Language Processing Systems
Lisa Wu Wills (Duke University)

Bounded Non-Determinism for Real-Time Accelerators
Kunle Olukotun, Alexander Rucker, and Muhammad Shahbaz (Stanford University)

Concerto: Generating Embeddings with Systems-level Objectives
Asaf Cidon, Junfeng Yang, and Suman Jana (Columbia University)

Efficient and Private Deep Learning using 3-party Secure Computation
Prateek Mittal and Sameer Wagh (Princeton University)

Efficient On-Device Distributed Deep Learning via Importance Sampling
Mi Zhang and Ming Yan (Michigan State University)

Improving the Performance & Efficiency of Deep Learning Recommender Systems
Christopher Batten (Cornell University) and Michael Taylor (University of Washington)

Learning To Compress and Compile Neural Networks
Michael Carbin and Saman Amarasinghe (MIT)

Massively Parallel Graph Sampling on GPUs
Marco Serafini and Arjun Guha (University of Massachusetts Amherst)

Scheduling Jobs with Complex Delay Costs in Data Processing Platforms
Milan Vojnovic (London School of Economics and Political Science)

Finalists

Accelerating sparse operations in DNN using PIM and demonstration with FPGA
Hyesoon Kim (Georgia Tech)

Decoupling Deep Learning Models from Underlying Hardware
Michael J. Freedman (Princeton University)

DeepBuild: Building Safe Machine Learning Systems
Sarfraz Khurshid and Corina Pasareanu (The University of Texas at Austin)

Densely Multiplexed and Highly Predictable DNN Serving
Jonathan Mace and Antoine Kaufmann (Max Planck Institute)

Edge-Centric Distributed Deep Learning
Abhishek Chandra (University of Minnesota)

Intelligent Photonic Platforms for Network-edge In-Memory Computing
Mario Miscuglio and Volker J. Sorger (George Washington University)

Model-less Inference Serving for Ease-of-use and Cost Efficiency
Christos Kozyrakis and Neeraja Yadwadkar (Stanford University)

On-Device Federated Deep Learning via Hardware-Neural Network Co-Design
Diana Marculescu (The University of Texas at Austin)

OS Techniques for Large-Scale Personalization and Recommendation Models
Abhishek Bhattacharjee (Yale University)

Prefetching and Near-Data-Acceleration for Large Recommendation Systems
Mattan Erez and Michael Orshansky (The University of Texas at Austin)

Privacy-preserving Models in Federated Learning
Giuseppe Ateniese (Stevens Institute of Technology)

RELIEF: RL-based Wireless Network Management System for QoE Optimization
Arpit Gupta, Elizabeth Belding, and Yu-Xiang Wang (University of California, Santa Barbara)

Resilient and low cost machine learning systems via coding-theoretic tools
Rashmi Vinayak (Carnegie Mellon University)

Scalable Privacy-preserving Deep Learning on Encrypted Data with GPUs
Daniel Takabi (Georgia State University)

Scalable Resource Provisioning for ML in Heterogeneous Systems
David Bader and Chase W. Wu (New Jersey Institute of Technology)

Secure Multi-Party Computation for Privacy Preserving Machine Learning
Vipul Goyal (Carnegie Mellon University)

To view our currently open research awards and to subscribe to our email list, visit our Research Awards page.