The exploration of realistic avatars is growing in both academia and industry. With these technical developments come ethical questions about data management and privacy, diversity and inclusion, agency and identity, and societal impact, among other topics. These questions require careful consideration early and often throughout the research and development process.
As part of its work to build the future of connection in augmented reality (AR) and virtual reality (VR), Reality Labs Research is developing Codec Avatars — highly realistic virtual representations of real people that could one day enable people to interact in VR as naturally as they do in the real world. Existing social VR and digital ethics research provides numerous thoughtful frameworks and guidelines for thinking through potential implications of such a space. However, there are very few concrete, published examples or case studies of teams applying social VR, values-based design, or digital ethics frameworks to emergent avatar technologies (in this case, highly realistic VR avatars) — that is, a shared body of knowledge of how teams have taken guidelines and translated them into their day-to-day work and decision-making.
In keeping with Meta’s Responsible Innovation principles (“Never Surprise People,” “Provide Controls That Matter,” “Put People First,” and “Consider Everyone”), the Reality Labs research team in Pittsburgh is seeking case studies and examples related to the following topics: (1) how teams build controls that preserve future user agency over their data, such as the realistic avatar itself and information from headset cameras and other input used to pilot the avatar; (2) how teams explain to future users when the social signal (e.g., gestures, facial expressions, etc.) the VR system is conveying might not be accurate; and (3) how to improve the VR community’s access to tactical guidance about how to responsibly build in these spaces using existing frameworks and theoretical research.
To foster further innovation in this area, and to deepen our collaboration with academia, Reality Labs Research is pleased to invite faculty to respond to this call for research proposals pertaining to the aforementioned topics. We anticipate issuing up to a total of eight awards, with a maximum value of $75,000 each. Payment will be made to the proposer's host university as an unrestricted gift.
Macquarie University
University of Auckland
University of Newcastle
University of Messina
Carnegie Mellon University
University of Liverpool
University of Waterloo
University of Maryland
Applications Are Currently Closed
Areas of interest include, but are not limited to, the following:
1. Providing controls that matter for highly realistic VR avatars
How to explain what avatar data is being collected and why: Our principle “Provide Controls That Matter” states, “People should always have enough information to make informed choices about whether and how to use our products.” We are interested in case studies that show clear and precise ways to explain what avatar data collection is happening inside a VR app, as well as other potentially relevant information, such as where the data will be shared, for what purpose, and how long it will be stored.
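As a rough illustration only (the AvatarDataDisclosure type and its fields are invented here, not an existing Meta API), one artifact a case study might describe is a machine-readable disclosure record per data stream that an app could render as plain-language notices:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AvatarDataDisclosure:
    """One avatar data stream, summarized in plain language (hypothetical record)."""
    data_stream: str                                       # e.g., "head and body pose"
    purpose: str                                           # why the stream is needed
    shared_with: List[str] = field(default_factory=list)   # recipients, if any
    retention: str = "not stored after the session ends"   # plain-language retention

# Records an app could render as short notices before enabling a realistic avatar.
disclosures = [
    AvatarDataDisclosure(
        data_stream="head and body pose",
        purpose="animate ('pilot') your avatar in real time",
        shared_with=["other participants in this call"],
    ),
    AvatarDataDisclosure(
        data_stream="avatar appearance model",
        purpose="render your likeness to people you approve",
        retention="stored until you delete it",
    ),
]

for d in disclosures:
    recipients = ", ".join(d.shared_with) or "no one"
    print(f"{d.data_stream}: {d.purpose}; shared with {recipients}; {d.retention}.")
```

A case study could pair such records with the user-facing copy they generate and any evidence about whether people actually understood it.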
How to explain what other information could be inferred from this avatar data: Meta’s “Never Surprise People” responsible innovation principle states, “We are transparent about how our products work and the data they collect. We communicate clearly and candidly so people can understand the tradeoffs we considered, and make informed decisions about whether and how to use our products.” O’Brolcháin et al. (2016) called this data a “digital footprint” and asserted that it will be important to alert people to “what sort of footprint they are leaving, and who will be able to see it.”
As several authors have suggested, how the data itself is used after collection is just as important as what is being collected in the first place. It is more possible than ever to make inferences by combining data with other data sources, even seemingly unrelated ones, and data ethicists have pointed out that it can be exceedingly hard to explain this comprehensively to end users. For example, head and body pose information is likely required to “pilot” a realistic avatar. A user may make a different decision about whether to opt in to using this type of data to pilot an avatar if it is linked with other information, such as an audio recording, employee ID number, or time of call. But that level of explanation, particularly in VR, is extremely dense. What is the optimal way to signal to future users what information could be inferred from avatar data? A helpful analogy is Creative Commons license agreements. What is the VR data collection equivalent?
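Purely as a sketch of the Creative Commons analogy (the label names below are invented and do not correspond to any existing standard), composable usage labels could make “what else this data may be used for” checkable rather than buried in prose:

```python
from enum import Flag, auto

class AvatarDataUse(Flag):
    """Hypothetical, composable usage labels for avatar data,
    loosely analogous to Creative Commons license clauses."""
    PILOT_AVATAR = auto()       # drive the avatar during a live session
    STORE_FOR_REPLAY = auto()   # retain motion or audio for later playback
    LINK_TO_IDENTITY = auto()   # associate with an account or employee ID
    IMPROVE_MODELS = auto()     # use to train or tune tracking models

# A person might consent to live piloting only, with no linkage or training.
consented = AvatarDataUse.PILOT_AVATAR

def is_permitted(requested: AvatarDataUse) -> bool:
    """True only if every requested use is covered by the consent label."""
    return not (requested & ~consented)

print(is_permitted(AvatarDataUse.PILOT_AVATAR))                                   # True
print(is_permitted(AvatarDataUse.PILOT_AVATAR | AvatarDataUse.LINK_TO_IDENTITY))  # False
```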
How to explain how long the avatar data will be stored: The third component is conveying where data will be stored, and for how long. For example, if a person makes multiple avatars with different representations of their appearance (e.g., hair styled differently for work versus friends), what are clear ways to show where those multiple appearances are being stored, for how long they’ll be stored, and how to delete them?
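As a minimal sketch, assuming the app keeps a simple per-appearance record (the AvatarVariant type and its fields are illustrative, not a real storage schema), a storage view and delete control for multiple appearances might look like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AvatarVariant:
    """One stored appearance of a person's avatar (hypothetical record)."""
    name: str            # e.g., "work", "friends"
    stored_on: str       # plain-language storage location
    retain_until: date   # when it is deleted automatically

# Hypothetical store of a person's avatar appearances.
variants = {
    "work": AvatarVariant("work", "your device and avatar servers", date(2025, 1, 1)),
    "friends": AvatarVariant("friends", "your device only", date(2024, 6, 1)),
}

def list_variants() -> None:
    """Show where each appearance lives and how long it is kept."""
    for v in variants.values():
        print(f"'{v.name}' is stored on {v.stored_on} until {v.retain_until:%Y-%m-%d}")

def delete_variant(name: str) -> bool:
    """Remove a stored appearance; returns True if something was deleted."""
    return variants.pop(name, None) is not None

list_variants()
delete_variant("friends")
list_variants()
```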
How might we provide a rubric such that future VR users can evaluate this and other experiences for data usage and their personal comfort level?
How to explain this succinctly in VR: As mentioned above, explaining this in VR is uniquely challenging because VR induces a higher cognitive load than reading on a flat 2D screen. Game designers have commented that, generally speaking, interactions in VR must be simpler than what would be asked of an adult on a PC, and that what is intuitive in VR often has less to do with 2D screen-based interactions than with real-life object affordances and immersive theater. For example, the successful VR game “I Expect You To Die” bases its interactions and hints on what playtesters chose to do with objects in a full-scale cardboard-and-tin-foil replica of the levels. With this in mind, what are effective ways to convey information in a high-realism virtual environment in a “just in time,” digestible way?
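One “just in time” pattern, sketched here under the assumption that the app can hook capability activation (the capability names and notice function below are illustrative, not part of any VR SDK), is to surface a one-line notice the first time a data-collecting feature turns on in a session, rather than a single dense consent screen up front:

```python
# Illustrative "just in time" disclosure: show a short notice the first time a
# data-collecting capability is activated in a session, rather than one long
# up-front consent screen. Capability names and functions are hypothetical.

_notified = set()

NOTICES = {
    "face_tracking": "Face tracking is on so your avatar can mirror your expressions.",
    "hand_tracking": "Hand tracking is on so your avatar's hands can move with yours.",
}

def show_in_headset_notice(text: str) -> None:
    """Stand-in for whatever short, in-world prompt the app would render."""
    print(f"[notice] {text}")

def activate_capability(name: str) -> None:
    """Enable a tracking capability, showing its notice only once per session."""
    if name not in _notified:
        show_in_headset_notice(NOTICES.get(name, f"{name} is now active."))
        _notified.add(name)
    # ...actual capability start-up would happen here...

activate_capability("face_tracking")
activate_capability("face_tracking")  # second activation: no repeated notice
```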
2. Considering everyone: Future user-to-user information flow, missed signals, and marginalized groups at work
How do we explain to future users when the social signal the VR system is conveying from an interaction between realistic avatars is not completely accurate? This is especially important in situations where the VR system may be incorrect about what facial expression or body language it is conveying, or must simulate some of this information (because, for example, an arm is behind someone’s back).
This is even more important when considering groups of people who have been historically marginalized and/or subject to bias at work, such as women, minorities, and people with disabilities, because the consequences of a mistaken social signal may fall more heavily on these groups. How might we communicate potential errors or missing signals to both parties in a work conversation inside of VR?
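One concrete direction, sketched below under the assumption that the avatar pipeline can report a per-cue tracking status (the SignalReport shape and threshold are invented for illustration), is to annotate each conveyed cue as tracked, inferred, or simulated so that either party’s client can flag uncertain signals:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class SignalSource(Enum):
    TRACKED = "tracked"       # directly observed by sensors
    INFERRED = "inferred"     # estimated from partial observations
    SIMULATED = "simulated"   # filled in when no observation was possible

@dataclass
class SignalReport:
    """Hypothetical per-cue annotation attached to an avatar's social signals."""
    cue: str                  # e.g., "smile", "right-arm gesture"
    source: SignalSource
    confidence: float         # 0.0 to 1.0

def cues_to_flag(reports: List[SignalReport], threshold: float = 0.6) -> List[str]:
    """Cues that either party's client might want to mark as uncertain."""
    return [
        r.cue for r in reports
        if r.source is not SignalSource.TRACKED or r.confidence < threshold
    ]

frame = [
    SignalReport("smile", SignalSource.TRACKED, 0.92),
    SignalReport("right-arm gesture", SignalSource.SIMULATED, 0.30),  # arm occluded
    SignalReport("eyebrow raise", SignalSource.INFERRED, 0.55),
]
print(cues_to_flag(frame))  # ['right-arm gesture', 'eyebrow raise']
```

A case study might pair annotations like these with user research on how flagging uncertain cues changes conversational outcomes for the groups discussed above.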
3. Tactical guidance to enact existing research
Meta’s Responsible Innovation Principles work to translate the theoretical into the actionable. We believe that all teams in the social VR space would benefit from as many examples as possible of different digital ethics or values-based design frameworks being put into practice, but existing research in this space is limited.
We are seeking case studies of concrete, flexible methods that can help teams translate theoretical approaches into practical action, especially in emerging technology where part or all of an experience may not exist yet.
Value Sensitive Design (VSD) is an excellent related framework, but it suffers from what Winkler and Spiekermann (2021) recently called “a lack of methodological guidance”; they also note that over VSD’s 20-year lifespan, only four case studies have reported iterations that “promise[d] enhanced design.” Any tactical guidance or examples of VSD being put into practice are especially welcome.
Morley et al. (2021) point out the need for further research on “how to evaluate translational tools”: if a team finds methods for applying such guidelines in its work, how can it decide whether a given method is effective?
If you have additional questions about this RFP, please contact Eric Baldwin at ericbaldwin@fb.com.
Most of the RFP awards are unrestricted gifts, so salary/headcount can be included in the budget presented for the RFP. Since the award/gift is paid to the university, the university can allocate the funds to the winning project and use them as needed. All Facebook teams are different and have different expectations concerning deliverables, timing, etc. In short: yes, money for salary/headcount can be included. It is up to the reviewing team to determine whether the proposed percentage of spend is reasonable and how that factors into the decision about whether the project is selected.
We are flexible, but ideally proposals are single-spaced in 12 pt Times New Roman.
Research awards are given year-round and funding years/duration can vary by proposal.
Yes, award funds can be used to cover a researcher’s salary.
One person will need to be the primary PI (i.e., the submitter who will receive all email notifications); however, you’ll be given the opportunity to list collaborators/co-PIs in the submission form. Please note in your budget breakdown how the funds should be disbursed amongst the PIs.
Please read these terms carefully before proceeding.
Meta’s decisions will be final in all matters relating to Meta RFP solicitations, including whether or not to grant an award and the interpretation of Meta RFP Terms and Conditions. By submitting a proposal, applicants affirm that they have read and agree to these terms and conditions.