Applications closed

2022 Privacy Enhancing Technologies request for proposals

About

Over the past few years, we’ve established research award opportunities to support privacy-focused projects in academia. Most recently, our 2021 Privacy Enhancing Technologies request for proposals was met with great interest, and we were pleased to fund ten excellent projects. For 2022, we are continuing that momentum and refining our topics of interest in the Privacy Enhancing Technologies (PETs) area.

By integrating Privacy Enhancing Technologies into our products, we are building trustworthy experiences that billions of people use worldwide. Our primary goal is to help design and deploy new privacy enhancing solutions that minimize the data we collect, process, and share externally, while supporting the business and delighting consumers across the Meta family of products. As we continue making strides in Privacy Enhancing Technologies at Meta, one key element is learning from outside experts.

To foster further innovation in this area, and to deepen our collaboration with academia, Meta is pleased to invite faculty to respond to this call for research proposals pertaining to the areas of interest highlighted below. We anticipate awarding eight to ten awards, each in the $80,000–100,000 range. Payment will be made to the proposer's host university as an unrestricted gift.

Application Timeline

Launch Date

March 16, 2022

Deadline

April 20, 2022, at 5:00 PM AoE (Anywhere on Earth)

Winners Announced

June 2022

Areas of Interest

1. Privacy preserving analytics

  • Differential privacy for anonymization: Combining differential privacy with other secure computation and privacy-improving techniques is a natural way to improve the privacy profile of any solution. Research is needed to formalize how techniques such as shuffling, k-anonymity, and aggregation combine with differential privacy to provide optimal privacy protections (a minimal sketch of one such combination follows this list).
  • Differential privacy for database management systems: Applying differential privacy to real-world use cases that involve a large number of tables, streaming data, or complex queries is still challenging. More research is needed to improve the privacy bounds for realistic query workloads on large analytics systems. In addition, understanding best practices for performing longitudinal analyses of privacy over time, supporting differential privacy for complex data types, and efficiently incorporating differential privacy into modern database management systems could advance its adoption.
  • Re-identification measurement: Measuring the risk of privacy loss or re-identification is critical to evaluating the privacy profile of any system. There is an opportunity to build systems that measure the real-world impact of correlation and other attacks.
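
To make the first bullet concrete, here is a minimal sketch of one such combination: k-anonymity-style suppression of small groups followed by epsilon-differentially-private Laplace noise on the released counts. All names are illustrative, and the sketch glosses over real deployment concerns (for example, data-dependent suppression itself needs privacy accounting, and naive floating-point Laplace sampling is known to leak).

```python
import math
import random
from collections import Counter

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse CDF.
    Illustrative only: naive floating-point sampling is not
    safe for production differential privacy."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_group_counts(records, group_key, epsilon: float, k: int) -> dict:
    """Release per-group counts: suppress groups smaller than k
    (a k-anonymity-style guard), then add Laplace(1/epsilon) noise,
    since a count query has L1 sensitivity 1."""
    counts = Counter(group_key(r) for r in records)
    scale = 1.0 / epsilon
    return {g: c + laplace_noise(scale)
            for g, c in counts.items() if c >= k}

# Hypothetical usage: noisy counts of records per city.
# noisy = dp_group_counts(rows, lambda r: r["city"], epsilon=1.0, k=20)
```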

2. Private record linkage and aggregation

  • Asymmetric MPC: The most efficient MPC protocols generally have a cost that is symmetric for both parties. Current techniques (e.g., Functional Encryption, Oblivious RAM, or FHE) that shift the bulk of the cost to one party either limit the functions that can be computed or significantly increase total cost. Are there protocols with more desirable asymmetric scaling properties?
  • Multi-key private record linkage: Research in private set intersection has focused on the case where each record has a single identifier. More work is needed on the extended problem where records carry multiple identifiers and flexible matching logic is used to join the sets, while maintaining privacy requirements. See this paper for a more complete problem statement.
  • Complex aggregations in MPC: Secure aggregation systems such as PRIO offer scalable solutions for many distributed aggregation problems, but further research is needed to unlock efficient solutions to more complex aggregations, such as a summation conditioned on a set intersection of sparse elements (a minimal secret-sharing sketch follows this list).
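
As a point of reference for the aggregation bullet above, the sketch below shows the additive secret sharing at the core of PRIO-style systems: each client splits its value into random shares, one per non-colluding server, and only the combined server totals reveal the sum. Real PRIO additionally attaches zero-knowledge proofs that each share encodes a valid value, which this sketch omits; all names here are illustrative.

```python
import secrets

P = 2**61 - 1  # prime modulus for the additive-sharing field

def share(value: int, n_servers: int = 2) -> list:
    """Split value into n_servers additive shares mod P.
    Any n_servers - 1 shares are uniformly random and reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(shares_by_server) -> int:
    """Each server sums the shares it holds; combining the per-server
    totals yields the sum over all clients, with no single server
    ever seeing an individual client's value."""
    totals = [sum(col) % P for col in shares_by_server]
    return sum(totals) % P

# Hypothetical usage: three clients report 5, 7, and 11 to two servers.
client_shares = [share(v) for v in (5, 7, 11)]
shares_by_server = list(zip(*client_shares))  # regroup shares per server
assert aggregate(shares_by_server) == 23
```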

3. Privacy preserving machine learning

  • Model capacity: A key challenge in privacy preserving machine learning remains the limited communication and computation capacity of techniques for private model training, such as on-device federated learning or training with secure computation. How do we improve model capacity to make models as accurate as possible while still preserving privacy?
  • Trust models: Stronger trust and security models often mean higher computation and communication costs, making large-scale ML infrastructure less feasible. What are the right frameworks for designing and validating privacy attacks on ML models? How can we improve the trust and security assumptions without sacrificing the feasibility of practical model training?
  • Differential privacy in PPML: Existing differential privacy mechanisms are limited in their applicability to real-world, large-scale ML tasks, especially given the large number of compositions involved in end-to-end processing of private data (model training, feature engineering, model evaluation, and so on). How do we improve the privacy bounds and budget allocation to make DP practical for large-scale models and development cycles? (A DP-SGD-style sketch follows this list.)
  • Mixed training data sets: Data sets used for ML tasks can contain both private and non-private partitions (a special case being when only the labels in a training set are private while the features are public). The notion of privacy can also vary by actor (data owner vs. processor; local vs. shuffler vs. global privacy). How do we design mechanisms that maximize utility with these mixed training data sets in mind?
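
For the differential-privacy bullet above, the following sketch shows the clip-then-noise aggregation at the heart of DP-SGD-style training: per-example gradients are clipped to bound sensitivity before Gaussian noise is added. The hard part the bullet points at, accounting for the composition of many such steps across a full development cycle, is deliberately left out; all names are illustrative.

```python
import math
import random

def l2_clip(grad, max_norm: float):
    """Scale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    factor = min(1.0, max_norm / (norm + 1e-12))
    return [g * factor for g in grad]

def dp_sgd_step(per_example_grads, max_norm: float, noise_multiplier: float):
    """One DP-SGD-style aggregation step: clip each example's gradient
    (bounding sensitivity to max_norm), sum, add Gaussian noise with
    sigma = noise_multiplier * max_norm, then average. Privacy accounting
    over many such steps is where the open budget-allocation problems live."""
    clipped = [l2_clip(g, max_norm) for g in per_example_grads]
    dim = len(clipped[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * max_norm
    noisy = [s + random.gauss(0.0, sigma) for s in summed]
    n = len(per_example_grads)
    return [x / n for x in noisy]
```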

4. Privacy of messaging

  • Improving the transparency, security, integrity, and reliability of end-to-end encryption technology remains a high priority for Meta. Where do current cryptographic techniques fall short? How should we handle transient errors, and how can we build safe retry logic? How can users know that their peers’ encryption keys are correct (see the fingerprint sketch below)? How can we apply end-to-end encryption to backups and ensure people can recover their data? How does the threat model of end-to-end encryption apply to websites and web messaging?
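
One of the questions above, how users can check that their peers’ encryption keys are correct, is usually addressed with out-of-band fingerprint comparison. The sketch below is loosely modeled on published descriptions of Signal-style safety numbers; the iteration count, digit length, and inputs are illustrative rather than any product’s actual algorithm.

```python
import hashlib

def key_fingerprint(identity_key: bytes, user_id: bytes,
                    iterations: int = 5200, digits: int = 30) -> str:
    """Derive a short numeric fingerprint of a public identity key.
    Both peers display their numbers and compare them out of band;
    iterated hashing makes brute-forcing a colliding key costly.
    Parameters are illustrative, not any product's real scheme."""
    digest = identity_key + user_id
    for _ in range(iterations):
        digest = hashlib.sha512(digest + identity_key).digest()
    number = int.from_bytes(digest[:16], "big") % (10 ** digits)
    return str(number).zfill(digits)
```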

5. Anonymous credentials

  • Being able to authenticate to Meta’s services with enhanced privacy guarantees for the client is an active area of research at Meta. Can practical instantiations of anonymous credentials scale to Meta’s user base? How do we measure privacy loss when anonymous credentials are reused in practice? Can anonymous credentials be combined with two-factor authentication? (A toy blind-signature sketch follows this list.)
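
To illustrate the flavor of unlinkability involved, here is a toy blind-signature token in the style of early anonymous-credential constructions: the issuer signs a blinded message, so the credential it later sees redeemed cannot be linked back to its issuance. The RSA parameters are deliberately tiny and the scheme is purely pedagogical; practical systems use vetted constructions (for example, VOPRF- or Privacy Pass-style protocols) rather than bare RSA blind signatures.

```python
import math
import secrets
from hashlib import sha256

# Toy RSA key -- pedagogical only; far too small for real use.
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def blind(msg: bytes):
    """Client: hash the message and blind it with a random factor r."""
    m = int.from_bytes(sha256(msg).digest(), "big") % n
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return (m * pow(r, e, n)) % n, r, m

def sign_blinded(blinded: int) -> int:
    """Issuer: signs without ever seeing the underlying message."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Client: remove r to obtain an ordinary signature on m."""
    return (blind_sig * pow(r, -1, n)) % n

blinded, r, m = blind(b"one-show token")
sig = unblind(sign_blinded(blinded), r)
assert pow(sig, e, n) == m  # valid, yet unlinkable to the issuance
```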

6. Privacy preserving techniques in Data for Good

  • Companies hold a lot of valuable data that could serve the social good if shared more widely while still preserving the privacy of users. Open questions include: How can PETs enable research on private data? How can research be conducted across data held by several companies? How can PETs enable research on very sensitive data without collecting it at all? Could we build tools to empirically measure the privacy risk of released research data, or to automate potential attacks on it (a simple measurement sketch follows this list)? We are particularly interested in how PETs such as MPC, differential privacy, and zero-knowledge proofs can empower research on data types ranging from numeric attributes to text, images, videos, and high-dimensional data.
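
As one concrete example of the empirical measurement tooling mentioned above, the sketch below computes the fraction of records that are unique on a chosen set of quasi-identifiers, a crude but common first proxy for re-identification risk in a released research data set. The column names in the usage comment are illustrative.

```python
from collections import Counter

def uniqueness_risk(records, quasi_identifiers) -> float:
    """Fraction of records unique on the quasi-identifier columns.
    A unique combination is a candidate for linkage with outside
    data, so higher values suggest higher re-identification risk."""
    key = lambda r: tuple(r[c] for c in quasi_identifiers)
    counts = Counter(key(r) for r in records)
    unique = sum(1 for r in records if counts[key(r)] == 1)
    return unique / len(records)

# Hypothetical usage on a released table:
# risk = uniqueness_risk(rows, ["zip_code", "birth_year", "gender"])
```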

7. Privacy in the Metaverse

  • Virtual reality and augmented reality are emerging forms of computing that introduce unique privacy challenges. Privacy enhancing technology can play a role in ensuring that people can have an immersive experience while data collection and use are minimized. Can PETs safeguard user privacy while interoperating across different networks? Can PETs keep eye tracking, avatars, and models of users’ data private so that people can be their authentic selves? Can PETs keep people’s location private while they interact with virtual objects in the real world? We are particularly interested in how PETs such as MPC, client secure enclaves, zero-knowledge proofs, and differential privacy can be used in the metaverse (a small local-DP sketch follows this list).
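
To give one small example of the direction, the sketch below applies local differential privacy to a bounded gaze coordinate: Laplace noise calibrated to the coordinate’s range is added on-device before anything leaves the headset. It is a sketch under strong assumptions (a single bounded scalar with a per-release epsilon) rather than a workable eye-tracking pipeline, and the clamping step introduces bias near the range edges.

```python
import math
import random

def ldp_release(x: float, epsilon: float,
                lo: float = 0.0, hi: float = 1.0) -> float:
    """Locally differentially private release of one bounded scalar:
    Laplace noise with scale (hi - lo) / epsilon is added on-device,
    so the raw value never needs to be collected."""
    u = random.random() - 0.5
    scale = (hi - lo) / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return min(hi, max(lo, x + noise))

# Hypothetical usage: noisy_x = ldp_release(gaze_x, epsilon=0.5)
```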

Requirements

Proposals should include

  • A summary of the project (one to two pages), in English, explaining the area of focus (including the keyword), describing the techniques and any relevant prior work, and giving a timeline with milestones and expected outcomes
  • A draft budget description (one page) including an approximate cost of the award and explanation of how funds would be spent
  • A curriculum vitae for each project participant
  • Organization details; this will include tax information and administrative contact details

Eligibility

  • The proposal must comply with applicable U.S. and international laws, regulations, and policies.
  • Applicants must be current full-time faculty at an accredited academic institution that awards research degrees to PhD students.
  • Applicants must be the Principal Investigator on any resulting award.
  • Meta cannot consider proposals submitted, prepared, or to be carried out by individuals residing in or affiliated with an academic institution located in a country or territory subject to comprehensive U.S. trade sanctions.
  • Government officials (excluding faculty and staff of public universities, to the extent they may be considered government officials), political figures, and politically affiliated businesses (all as determined by Meta in its sole discretion) are not eligible.

Terms & Conditions

Meta’s decisions will be final in all matters relating to Meta RFP solicitations, including whether or not to grant an award and the interpretation of Meta RFP Terms and Conditions. By submitting a proposal, applicants affirm that they have read and agree to these Terms and Conditions.

  • Meta is authorized to evaluate proposals submitted under its RFPs, to consult with outside experts, as needed, in evaluating proposals, and to grant or deny awards using criteria determined by Meta to be appropriate and at Meta’s sole discretion. Meta’s decisions will be final in all matters relating to its RFPs, and applicants agree not to challenge any such decisions.
  • Meta will not be required to treat any part of a proposal as confidential or protected by copyright, and may use, edit, modify, copy, reproduce and distribute all or a portion of the proposal in any manner for the sole purposes of administering the Meta RFP website and evaluating the contents of the proposal.
  • Personal data submitted with a proposal, including name, mailing address, phone number, and email address of the applicant and other named researchers in the proposal may be collected, processed, stored and otherwise used by Meta for the purposes of administering Meta’s RFP website, evaluating the contents of the proposal, and as otherwise provided under Meta’s Privacy Policy.
  • Neither Meta nor the applicant is obligated to enter into a business transaction as a result of the proposal submission. Meta is under no obligation to review or consider the proposal.
  • Feedback provided in a proposal regarding Meta products or services will not be treated as confidential or protected by copyright, and Meta is free to use such feedback on an unrestricted basis with no compensation to the applicant. The submission of a proposal will not result in the transfer of ownership of any IP rights.
  • Applicants represent and warrant that they have authority to submit a proposal in connection with a Meta RFP and to grant the rights set forth herein on behalf of their organization. All awards provided by Meta in connection with this RFP shall be used only in accordance with applicable laws and shall not be used in any way, directly or indirectly, to facilitate any act that would constitute bribery or an illegal kickback, an illegal campaign contribution, or would otherwise violate any applicable anti-corruption or political activities law.
  • Funding for winning RFP proposals will be provided to the academic institution with which the principal investigator/applicant is affiliated pursuant to a gift or other funding model as specified in the RFP call. Applicants understand and acknowledge that their affiliated academic institution will need to agree to the terms and conditions of such gift or other agreement to receive funding.
  • Applicants acknowledge and agree that by submitting an application they are consenting to their name, university / organization’s name and proposal title being made public on Meta’s blog on the research.facebook.com website if they are chosen as an RFP winner or finalist. If an applicant is selected as a winner or finalist, they will then have the opportunity to provide written notification that they do not consent to the research.facebook.com blog inclusion.