Areas of interest include, but are not limited to, the following:
1. Providing controls that matter for highly realistic VR avatars
How to explain what avatar data is being collected and why: Our principle “Provide Controls That Matter” states, “People should always have enough information to make informed choices about whether and how to use our products.” We are interested in case studies that show clear and precise ways to explain what avatar data collection is happening inside a VR app, as well as other potentially relevant information — where it will be shared and for what purpose, for example, or how long it will be stored.
How to explain what other information could be inferred from this avatar data: Meta’s “Never Surprise People” responsible innovation principle states, “We are transparent about how our products work and the data they collect. We communicate clearly and candidly so people can understand the tradeoffs we considered, and make informed decisions about whether and how to use our products.” O’Brolcháin et al. (2016) called this data a “digital footprint” and asserted that it will be important to alert people to “what sort of footprint they are leaving, and who will be able to see it.”
As several authors have suggested, how the data itself is used after collection is just as important as what is being collected in the first place. It is increasingly possible to make inferences by combining data with other data sources, even seemingly unrelated ones, and data ethicists have pointed out that it can be exceedingly hard to explain this comprehensively to end users. For example, head and body pose information is likely required to “pilot” a realistic avatar. A user may make a different decision about whether to opt in to this use of their data if the pose stream is linked with other information, such as an audio recording, an employee ID number, or the time of a call. But that level of explanation, particularly in VR, is extremely dense. What is the optimal way to signal to future users what information could be inferred from avatar data? A helpful analogy is the Creative Commons licenses, which summarize complex legal terms in a few standardized, human-readable statements. What is the VR data collection equivalent?
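To make the Creative Commons analogy concrete, a disclosure might be backed by a machine-readable “data label” per data stream that a UI can render as a short, standardized summary. The sketch below is purely illustrative: the `AvatarDataLabel` type, its field names, and the category values are assumptions invented for this example, not an existing standard or API.

```python
from dataclasses import dataclass, field

# Hypothetical machine-readable "data label" for one avatar data stream,
# loosely analogous to a Creative Commons human-readable license summary.
# All names and categories here are illustrative assumptions.

@dataclass
class AvatarDataLabel:
    stream: str                       # e.g., "head_pose", "hand_tracking"
    purpose: str                      # why the data is collected
    shared_with: list = field(default_factory=list)          # recipients, if any
    retention_days: int = 0           # how long raw data is stored
    possible_inferences: list = field(default_factory=list)  # what could be derived

    def summary(self) -> str:
        """One human-readable line suitable for a just-in-time VR prompt."""
        shared = ", ".join(self.shared_with) or "no one"
        return (f"{self.stream}: used for {self.purpose}; "
                f"shared with {shared}; kept {self.retention_days} days")

head_pose = AvatarDataLabel(
    stream="head_pose",
    purpose="animating your avatar",
    shared_with=["call participants"],
    retention_days=0,
    possible_inferences=["attention direction", "height"],
)
print(head_pose.summary())
```

The point of the structure is that the same label could feed both a terse in-headset summary and a fuller 2D settings page, including the `possible_inferences` list that a one-line prompt has no room for.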
How to explain how long the avatar data will be stored: The third component is conveying where data will be stored, and for how long. For example, if a person makes multiple avatars with different representations of their appearance (e.g., hair styled differently for work versus friends), what are clear ways to show where those multiple appearances are being stored, for how long they’ll be stored, and how to delete them?
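One way to answer the multiple-appearances question is to keep explicit retention metadata per appearance, so a settings screen can show where each one lives, when it expires, and offer deletion. The sketch below is an assumption-laden illustration: `AvatarRegistry`, its storage-location strings, and the retention policy are invented for this example.

```python
from datetime import date, timedelta

# Hypothetical per-user registry of avatar appearances with retention
# metadata. Names and locations are illustrative, not a real Meta API.

class AvatarRegistry:
    def __init__(self, retention_days=365):
        self.retention = timedelta(days=retention_days)
        self._avatars = {}  # name -> (storage_location, created_on)

    def add(self, name, location, created_on):
        self._avatars[name] = (location, created_on)

    def listing(self, today):
        """Rows a settings UI could render: name, location, days until expiry."""
        rows = []
        for name, (location, created) in self._avatars.items():
            days_left = (created + self.retention - today).days
            rows.append((name, location, max(days_left, 0)))
        return rows

    def delete(self, name):
        self._avatars.pop(name, None)

reg = AvatarRegistry(retention_days=30)
reg.add("work", "cloud:us-east", date(2024, 1, 1))
reg.add("friends", "device-only", date(2024, 1, 10))
print(reg.listing(date(2024, 1, 15)))
reg.delete("work")
```

Surfacing the same rows inside VR is the open design question the RFP poses; the registry only makes the underlying information available in a form a UI could consume.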
How might we provide a rubric such that future VR users can evaluate this and other experiences for data usage and their personal comfort level?
How to explain this succinctly in VR: As mentioned above, explaining this in VR is uniquely challenging because VR induces a higher cognitive load than reading on a flat 2D screen. Game designers have commented that, generally speaking, interactions in VR must be simpler than what would be asked of an adult on a PC, and often what is intuitive in VR has very little to do with 2D screen-based interactions and more in common with real-life object affordances and immersive theater. The successful VR game “I Expect You To Die,” for instance, bases its interactions and hints on what playtesters chose to do with objects in a full-scale cardboard-and-tin-foil replica of the levels. With this in mind, what are effective ways to convey information in a high-realism virtual environment in a “just in time,” digestible way?
2. Considering everyone: Future user-to-user information flow, missed signals, and marginalized groups at work
How do we explain to future users when the social signal the VR system is conveying from an interaction between realistic avatars is not completely accurate? This is especially important in situations where the VR system may be incorrect about what facial expression or body language it is conveying, or must simulate some of this information (because, for example, an arm is behind someone’s back).
This is even more important when considering groups of people who have been historically marginalized and/or subject to bias at work, such as women, minorities, and people with disabilities, because the consequences for a mistaken social signal may impact these groups more. How might we communicate potential errors or missing signals to both parties in a work conversation inside of VR?
3. Tactical guidance to enact existing research
Meta’s Responsible Innovation Principles work to translate the theoretical into the actionable. We believe that all teams in the social VR space would benefit from as many examples as possible of different digital ethics or values-based design frameworks being put into practice, but existing research in this space is limited.
We are seeking case studies of concrete, flexible methods that can help teams ground theoretical approaches to practical action, especially in emerging technology where part or all of an experience may not exist yet.
Value Sensitive Design (VSD) is an excellent related framework, but Winkler and Spiekermann (2021) recently identified “a lack of methodological guidance,” noting that over VSD’s 20-year lifespan, only four case studies have reported iterations that “promise[d] enhanced design.” Any tactical guidance or examples of VSD being put into practice are especially welcome.
Morley et al. (2021) point out the need for further research on “how to evaluate translational tools” — that is, if a team finds a method for applying such guidelines in their work, how can they decide whether the method is an effective one?
If you have additional questions about this RFP, please contact Eric Baldwin at firstname.lastname@example.org.