In January 2020, Facebook launched a Probability and Programming request for proposals (RFP) designed to support research that addressed fundamental problems at the intersection of machine learning, programming languages, statistics, and software engineering. Today, we’re announcing the recipients of these research awards.
As a continuation of investments made in this area in 2019, this year’s Probability and Programming RFP broadened its areas of interest for research proposals. We were especially interested in proposals that advanced the foundations or practice of the aforementioned research areas in probability and programming.
“This was the second incarnation of the Probability and Programming RFP, and certainly not the last,” says Erik Meijer, Engineering Director at Facebook. “Both the volume and quality of the submissions were yet again excellent, making it a really tough choice to select the 19 winners from this batch. It is great to see the programming language community embracing the machine learning space and vice versa.”
“While it was expected that the proposals would be excellent scientific directions in their own right, it is great they also are on topics of current interest to us from a more applied point of view,” says Satish Chandra, Software Engineer at Facebook. “We look forward to having rich technical exchanges with each of the project teams.”
We received 70 proposals from 51 universities in 16 countries. Thank you to all the researchers who took the time to submit a proposal, and congratulations to the award recipients.
Accelerated and robust probabilistic programming for autonomous systems
Sasa Misailovic (University of Illinois at Urbana-Champaign)
Adversarial machine learning by morphological abstract interpretation
Francesco Ranzato, Roberto Giacobazzi (University of Padova)
Attribution-based approaches to learning from code and non-code artifacts
Thomas Reps, Jordan Henkel (University of Wisconsin–Madison)
Ensuring semantic robustness of machine learning models
Isil Dillig (University of Texas at Austin)
Formalizing and verifying fair data use in collaborative machine learning
Olya Ohrimenko, Ben Rubinstein, Toby Murray (University of Melbourne)
Generalizable AI through neuro-symbolic systems
Animashree Anandkumar (California Institute of Technology)
Grisette: Plug-and-play components for autodiff, inference
Rastislav Bodik (University of Washington)
Improving Bayesian optimization with probabilistic programs
Brooks Paige (University College London)
Machine learning algorithms for imprecise and bad data
Edoardo Patelli (University of Strathclyde), Ander Gray, Marco de Angelis, Scott Ferson (University of Liverpool)
Manas: Big code assisted neural architecture search
Hridesh Rajan (Iowa State University)
Neural code review
Eran Yahav, Uri Alon (Technion)
Precise and scalable inference for discrete probabilistic programs
Guy Van den Broeck, Todd Millstein (University of California, Los Angeles)
Programmatic robustness in models over discrete data
Aws Albarghouthi, Loris D’Antoni (University of Wisconsin–Madison)
Provable polytope patching of deep neural networks
Aditya Thakur (University of California, Davis)
Proving robustness to data poisoning
Loris D’Antoni, Aws Albarghouthi (University of Wisconsin–Madison)
Quantifying, testing, and controlling the uncertainty of deep learning
Lin Tan (Purdue University)
RAPID: Reasoning about probabilistic independence and dependence
Justin Hsu (University of Wisconsin–Madison)
Static analysis of probabilistic programs
Jeannette M. Wing, Andrew Gelman (Columbia University)
UI2code: Automatically generating code for user interfaces
Chunyang Chen (Monash University)