Ensuring that data-driven systems reliably align with privacy, security, safety, fairness, and robustness expectations is of foundational importance. The potential for AI to benefit society, through personalized experiences, better science, assistive technology, and expanded opportunity, is a captivating promise. But without a clear, reliable ability to understand and detect issues such as privacy risks and fairness harms, trust in data-driven systems and AI-powered tools is likely to remain elusive. The goal of this RFP is to help academics build trusted tools that monitor systems more effectively and spot concerns in areas like fairness, privacy, and safety.
If privacy researchers, advocates, and other stakeholders can confidently monitor AI systems along important dimensions, researchers can in turn more freely pursue AI advances that benefit society. Facebook aims to invest in such tooling for fairness and privacy to support positive societal change.
We further believe private sector organizations should actively collaborate with academics, advocates, and regulators on solutions that improve consumer privacy and uphold societal values. To do this, we aim to invest in efforts that increase transparency by empowering privacy stakeholders to evaluate systems for their performance in important domains like privacy, safety, interpretability, fairness, and robustness. The areas of interest below identify where we believe enhanced transparency and accountability would be most valuable.
To foster further innovation in this area, and to deepen our collaboration with academia, Facebook is pleased to invite faculty to respond to this call for research proposals pertaining to these topics. We anticipate awarding up to six awards of up to $100,000 each. Payment will be made to the proposer’s host university as an unrestricted gift. The award recipients are:
Carnegie Mellon University
University of Wisconsin–Madison
University of Toronto
University Carlos III de Madrid
National University of Singapore
University of Massachusetts Amherst
Applications Are Currently Closed
Areas of interest include, but are not limited to, the following:
1. Privacy Leakage Detection
Violations of user privacy should be detectable through monitoring for information leakage. One exemplar approach is poisoning analysis, in which deliberately faulty user data is fed into the system and the analyst checks whether it re-emerges, in reconstructable form, from the system’s predictions. We are interested in supporting tools that automate the detection of privacy risks, particularly in black-box systems.
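As one illustration of a black-box leakage probe related to the ideas above, the sketch below runs a confidence-gap membership-inference test: it deliberately overfits a classifier and then measures how much more confident the model is on training members than on held-out non-members, a gap that signals memorization and therefore leakage risk. This is a minimal sketch under illustrative assumptions (synthetic data, a generic scikit-learn model), not Facebook tooling or a prescribed method.

```python
# Minimal sketch: confidence-gap membership-inference probe against a
# black-box classifier. A large gap between member and non-member
# confidence indicates memorization, a common privacy-leakage signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
# Noisy labels make memorization (rather than generalization) visible.
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Deliberately overfit so the leak is easy to see in this toy setting.
model = RandomForestClassifier(
    n_estimators=50, min_samples_leaf=1, random_state=0
).fit(X_train, y_train)

def mean_confidence(m, data):
    """Average probability assigned to the model's top class per record."""
    return m.predict_proba(data).max(axis=1).mean()

gap = mean_confidence(model, X_train) - mean_confidence(model, X_test)
print(f"member/non-member confidence gap: {gap:.3f} (near 0 is healthier)")
```

In a real monitoring tool, the same probe would be run against the deployed system's prediction API rather than a locally trained model, with only query access assumed.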
2. Safety
AI systems should have safeguards and processes in place to actively prevent harms. Safeguards themselves need to be trusted and transparent. We are interested in novel approaches to linking monitoring to automated safety actions, as well as tools for the monitoring of safety measures themselves.
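As a sketch of what linking monitoring to an automated safety action might look like, the snippet below implements a simple circuit breaker: a rolling anomaly-rate monitor that, once tripped, replaces model output with a conservative fallback. The class name, window size, and threshold are hypothetical illustrations, not a mechanism specified in this RFP.

```python
# Minimal sketch: a monitor coupled to an automated safety action.
# When the rolling anomaly rate crosses a threshold, the breaker trips
# and the system serves a safe default instead of the model output.
from collections import deque

class SafetyBreaker:
    def __init__(self, window=100, max_anomaly_rate=0.05):
        self.events = deque(maxlen=window)  # rolling window of 0/1 flags
        self.max_rate = max_anomaly_rate
        self.tripped = False                # breaker state is inspectable

    def record(self, is_anomalous: bool):
        self.events.append(int(is_anomalous))
        rate = sum(self.events) / len(self.events)
        if rate > self.max_rate:
            self.tripped = True             # automated safety action

    def guard(self, model_output, safe_default):
        """Serve the model output only while the breaker is healthy."""
        return safe_default if self.tripped else model_output

breaker = SafetyBreaker()
for score in [0.1, 0.2, 0.95, 0.97, 0.99, 0.96]:
    breaker.record(is_anomalous=score > 0.9)  # monitor flags outliers
    print(breaker.guard(model_output=score, safe_default=None))
```

Keeping the breaker's state and threshold externally inspectable is one way the safeguard itself can be monitored, in the spirit of the transparency goal above.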
3. Fairness Issue Detection
Competing fairness objectives and frameworks can involve difficult tradeoffs or outright incompatibilities, and there is no consensus measure of fairness. At the same time, surfacing fairness issues in an actionable, clear, and timely fashion would facilitate more collaborative discussion of appropriate remedies, motivate speedy corrections, reduce potential aggregate harm, and provide greater accountability. We invite proposals that actively monitor for potential fairness issues, but also welcome work that specifies proposed fairness goals, measures, and tradeoffs, as in the sketch below.
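To make the measurement question concrete, the sketch below computes two common, and sometimes mutually incompatible, group-fairness monitors: the demographic-parity gap and the equal-opportunity gap for a binary protected attribute. The data, metric choices, and function names are illustrative assumptions, not metrics prescribed by the RFP.

```python
# Minimal sketch: two group-fairness monitors over binary predictions.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Synthetic example in which group 1 receives a slight scoring boost.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = (0.6 * y_true + rng.random(1000) + 0.15 * group > 1.0).astype(int)

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"equal opportunity gap:  {equal_opportunity_gap(y_true, y_pred, group):.3f}")
```

A monitoring tool would track such gaps over time and across subgroups; which metric to alert on, and at what threshold, is exactly the kind of tradeoff proposals might address.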
4. Interpretability and Explainability
The opacity of ML systems often deepens distrust in how they operate and can obscure otherwise unrecognized harms. Automated monitoring that can uncover and describe the relative interpretability and defensibility of an ML system’s learned patterns would allow explainable systems to be recognized and promoted, and would encourage opaque systems to be simplified or rebuilt to enhance trust. We welcome proposals for tools that provide interpretability and understandable explanations of ML decisions.
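One simple, model-agnostic explanation probe in this spirit is permutation importance: score each input feature by how much shuffling it degrades held-out accuracy, treating the model itself as a black box. The sketch below uses scikit-learn's implementation on synthetic data; it is purely illustrative.

```python
# Minimal sketch: model-agnostic permutation importance as an
# interpretability monitor for a black-box classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 6))
# Only features 0 and 1 actually drive the label.
y = (2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=800) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:+.3f}")
```

If a deployed model's importance profile leans on features that cannot be defended (for example, a proxy for a protected attribute), that is exactly the kind of finding such monitoring should surface.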
5. Stability
Systems that fail for users whose data is uncommon may create fairness risks for those individuals, and privacy leakage when the failures are observable. Fuzzing, whereby a continuously generated random stream of new inputs and edge cases is fed into a system to provoke errors, has proven to be one of the most powerful tools for building resilient operating systems, browsers, and cloud environments, yet the same approach has made fewer inroads into large-scale deployed ML systems. We are interested in proposals that can increase the stability of ML systems through continuous testing, such as fuzzing or other forms of automated analysis.
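To illustrate what fuzz-style continuous testing of an ML endpoint could look like, the sketch below throws randomized and pathological inputs (extreme scales, NaN, infinities) at a prediction function and checks basic invariants: no crash, and probabilities that are valid and sum to one. The `predict` wrapper is a hypothetical stand-in for a deployed serving entry point, not a real API.

```python
# Minimal sketch: fuzzing an ML prediction function with random and
# edge-case inputs, asserting output invariants on every call.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
model = LogisticRegression().fit(rng.normal(size=(200, 4)),
                                 rng.integers(0, 2, 200))

def predict(x):
    """Hypothetical serving entry point under test."""
    return model.predict_proba(np.asarray(x, dtype=float).reshape(1, -1))[0]

def fuzz_once():
    """One fuzz iteration: random scale, plus injected pathological values."""
    x = rng.normal(scale=10.0 ** rng.integers(-3, 6), size=4)
    if rng.random() < 0.3:
        x[rng.integers(4)] = rng.choice([np.nan, np.inf, -np.inf, 1e300])
    try:
        p = predict(x)
        assert np.all((p >= 0) & (p <= 1)) and np.isclose(p.sum(), 1.0)
        return "ok"
    except Exception as e:
        return f"FAIL on {x}: {type(e).__name__}"

failures = [r for r in (fuzz_once() for _ in range(1000)) if r != "ok"]
print(f"{len(failures)} failures out of 1000 fuzz cases")
```

In practice each failure case would be minimized and logged as a regression test, so the fuzzer steadily hardens the system rather than only flagging one-off crashes.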
6. Robustness
Reliability, accuracy, replicability, and auditability of performance, constrained to a delimited purpose, are all hallmarks of a robust system. We are interested in tools that can monitor these dimensions, especially in cross-cutting ways that track the overarching integrity of performance.
RFP awards are paid as unrestricted gifts, so salary/headcount may be included in the proposed budget. Because the award is paid to the university, the university allocates the funds to the winning project and has the freedom to use them as needed. Facebook teams differ in their expectations concerning deliverables, timing, etc. In short: yes, money for salary/headcount can be included; it is up to the reviewing team to judge whether the proposed spending is reasonable and how that bears on whether the project is selected.
We are flexible, but ideally proposals are single-spaced, in Times New Roman, 12 pt font.
Yes, award funds can be used to cover a researcher’s salary.
Budgets can vary by institution and geography, but overall, research funds should ideally cover the following: graduate or post-graduate students’ employment/tuition; other research costs (e.g., equipment, laptops, incidental costs); travel associated with the research (conferences, workshops, summits, etc.); and overhead, which for research gifts is limited to 5%.
One person will need to be the primary PI (i.e., the submitter who will receive all email notifications); however, you’ll be given the opportunity to list collaborators/co-PIs in the submission form. Please note in your budget breakdown how the funds should be disbursed among the PIs.
Facebook’s decisions will be final in all matters relating to Facebook RFP solicitations, including whether or not to grant an award and the interpretation of Facebook RFP Terms and Conditions. By submitting a proposal, applicants affirm that they have read and agree to these Terms and Conditions.