In May, Meta launched the 2022 AI System Hardware/Software Codesign request for proposals (RFP). Today, we’re announcing the recipients of these research awards.
Deep learning is particularly well suited to codesign, the simultaneous design and optimization of multiple aspects of a system, including hardware and software, to meet a target for a given system metric such as throughput, latency, power, or size, or a combination of these.
Through this RFP, we hope to support academics looking to further explore codesign opportunities across a number of new dimensions.
The RFP attracted 62 proposals from 47 universities and institutions around the world. Thank you to everyone who took the time to submit a proposal, and congratulations to the winners.
Principal investigators are listed first unless otherwise noted.
Accelerating communication in DLRM via frequency-aware lossy compression
Dingwen Tao (Indiana University Bloomington), Tong Geng (University of Rochester)
Efficient GDR-based communication schemes for distributed DLRM training
Xiaoyi Lu (University of California, Merced)
Hardware-NN architecture co-design for efficient transformer inference
Kurt Keutzer, Amir Gholami, Hasan Genc, Sehoon Kim, Sophia Shao, Thanakul Wattanawong (University of California, Berkeley)
Hardware/software co-design for sparse neural networks
Fredrik Berg Kjoelstad (Stanford University)
Mixed-precision tensor-train methods for neural network training
Zheng Zhang (University of California, Santa Barbara)
OS and hardware support for multi-tenant inference on heterogeneous computers
Dimitrios Skarlatos (Carnegie Mellon University)
Scaling nearest neighbor language models
Danqi Chen, Zexuan Zhong (Princeton University)
Serverless and scalable GNN training with disaggregated compute and storage
Yue Cheng (University of Virginia), Liang Zhao (George Mason University)
ViT acceleration via dedicated algorithm and accelerator co-design
Yingyan Lin (Georgia Institute of Technology)