In April 2021, Facebook launched the 2021 Statistics for Improving Insights, Models, and Decisions request for proposals (RFP) live at The Web Conference. Today, we’re announcing the recipients of these research awards.
At Facebook, our research teams strive to improve decision-making for a business that touches the lives of billions of people across the globe. Making advances in data science methodologies helps us make the best decisions for our community, products, and infrastructure.
This RFP continues the 2019 and 2020 RFPs in applied statistics. Through this series of RFPs, the Facebook Core Data Science team, Infrastructure Data Science team, and Statistics and Privacy team aim to foster further innovation and deepen their collaboration with academia across a range of areas in applied statistics.
The team reviewed 134 high-quality proposals and is pleased to announce the 10 winning proposals below, as well as the 15 finalists. Thank you to everyone who took the time to submit a proposal, and congratulations to the winners.

Winners
Breaking the accuracy-privacy-communication trilemma in federated analytics
Ayfer Ozgur (Stanford University)
Certifiably private, robust, and explainable federated learning
Bo Li, Han Zhao (University of Illinois Urbana-Champaign)
Experimental design in market equilibrium
Stefan Wager, Evan Munro, Kuang Xu (Stanford University)
Learning to trust graph neural networks
Claire Donnat (University of Chicago)
Negative-unlabeled learning for online datacenter straggler prediction
Michael Carbin, Henry Hoffmann, Yi Ding (Massachusetts Institute of Technology)
Non-parametric methods for calibrated hierarchical time-series forecasting
B. Aditya Prakash, Chao Zhang (Georgia Institute of Technology)
Privacy in personalized federated learning and analytics
Suhas Diggavi (University of California Los Angeles)
Reducing simulation-to-reality gap as remedy to learning under uncertainty
Mahsa Baktashmotlagh (University of Queensland)
Reducing the theory-practice gap in private and distributed learning
Ambuj Tewari (University of Michigan)
Robust wait-for graph inference for performance diagnosis
Ryan Huang (Johns Hopkins University)
Finalists

An integrated framework for learning and optimization over networks
Eric Balkanski, Adam Elmachtoub (Columbia University)
Auditing bias in large-scale language models
Soroush Vosoughi (Dartmouth College)
Cross-functional experiment prioritization with decision maker in-the-loop
Emma McCoy, Bryan Liu (Imperial College London)
Data acquisition and social network intervention codesign: Privacy and equity
Amin Rahimian (University of Pittsburgh)
Efficient and practical A/B testing for multiple nonstationary experiments
Nicolò Cesa-Bianchi, Nicola Gatti (Università degli Studi di Milano)
Empirical Bayes deep neural networks for predictive uncertainty
Xiao Wang, Yijia Liu (Purdue University)
Global forecasting framework for large scale hierarchical time series
Rob Hyndman, Christoph Bergmeir, Kasun Bandara, Shanika Wickramasuriya (Monash University)
High-dimensional treatments in causal inference
Kosuke Imai (Harvard University)
Nowcasting time series aggregates: Textual machine learning analysis
Eric Ghysels (University of North Carolina at Chapel Hill)
Online sparse deep learning for large-scale dynamic systems
Faming Liang, Dennis KJ Lin, Qifan Song (Purdue University)
Optimal use of data for reliable off-policy policy evaluation
Hongseok Namkoong (Columbia University)
Principled uncertainty quantification for deep neural networks
Tengyu Ma, Ananya Kumar, Jeff Haochen (Stanford University)
Reliable causal inference with continual learning
Sheng Li (University of Georgia)
Training individual-level machine learning models on noisy aggregated data
Martine De Cock, Steven Golob (University of Washington Tacoma)
Understanding instance-dependent label noise: Learnability and solutions
Yang Liu (University of California Santa Cruz)