This week we are presenting our paper, A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab, at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2018) in Montreal, Canada.
To build communities and bring the world closer together, we have to eliminate the barriers that prevent communities from participating in social interactions, online or offline. One community we work closely with is people with visual impairments.
We previously learned from Facebook users who are visually impaired that the amount of inaccessible visual content on social media is a key issue preventing them from fully participating on social platforms like Facebook. These insights led to the creation of automatic alt-text, an AI-powered feature that automatically describes the content of photos on Facebook in real time, across 29 different languages.
More recently, when we spoke with this user community, we learned that recognizing other people is another major challenge, one that prevents them from fully engaging in social activities and undermines their sense of privacy and physical security. After understanding people’s needs for facial information, we designed Accessibility Bot, a research prototype that provides low-cost, mobile, real-time support for those needs. Accessibility Bot is a Facebook Messenger bot that helps people with visual impairments recognize their Facebook friends and identify their friends’ facial expressions (e.g., smiling vs. neutral).
Before building Accessibility Bot, we conducted an interview study with 8 visually impaired participants to learn about their needs and requirements. The interviews focused on the real-world challenges they face when finding, recognizing, and interacting with people, and on the types of information they need to navigate social activities.
All of our participants told us that recognizing people is an important task (and challenge) in their daily lives. Many of them could identify familiar people by voice, but this strategy works less well in noisy environments or when people are not talking. In those situations, people with visual impairments often have to find others by calling out names, or through phone calls or text messages.
“My [blind] husband and I were going to have dinner and decided to meet inside the front door of a mall. I went in one side and he went through the other. Finally, some woman came up and said, ‘Are you meeting a blind gentleman? He is standing about ten feet away.’”
- P5, female, blind
When asked what information our technology could offer them about other people, participants told us that they want to know who is nearby, their relative locations, and their physical and facial attributes (e.g., “smiling,” “happy”).
“People have their own style and it says things about their personality. I could also ask where they bought [their outfits] and get something similar.”
- P3, female, ultra low vision
Informed by these needs, we built Accessibility Bot on top of Facebook’s face recognition technology. Through screen reader software, the Bot gives users with visual impairments facial information about their Facebook friends who have the face recognition setting turned on, including names, face locations, and facial expressions. The interaction flow is illustrated below:
Figure 1. Accessibility Bot workflow: (a) the Bot appears as a contact in Facebook Messenger; (b) it automatically replies to the user and tells her to use the camera; (c) it detects faces in real time and verbally reports only the number of faces; (d) when the user double-taps the screen, it lists the recognition results, reporting people’s names and relative locations; (e) the user can navigate the list and selectively listen to a specific person’s detailed facial information. Note: the text at the bottom is for demonstration purposes only and does not appear in the app.
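To make the flow in Figure 1 concrete, below is a minimal sketch of the interaction loop in Python. Every name here (detect_faces, recognize_friend, speak, and the rest) is a hypothetical stand-in for the real face detector, Facebook’s face recognition service, and the device screen reader; this illustrates the reported behavior, not the Bot’s actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Face:
    box: tuple                  # (x, y, width, height) in frame coordinates
    name: Optional[str] = None  # filled in once recognition runs
    expression: str = "neutral" # e.g., "smiling" or "neutral"

def detect_faces(frame) -> List[Face]:
    """Hypothetical stand-in for the real-time face detector."""
    raise NotImplementedError

def recognize_friend(face: Face, frame) -> Optional[str]:
    """Hypothetical stand-in for matching a face against the user's
    Facebook friends who have the face recognition setting turned on."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Hypothetical stand-in for screen reader output (e.g., VoiceOver)."""
    print(text)

def relative_location(face: Face, frame_width: int) -> str:
    # Map the face's horizontal center to a coarse spoken position.
    x, _, width, _ = face.box
    center = x + width / 2
    if center < frame_width / 3:
        return "on your left"
    if center > 2 * frame_width / 3:
        return "on your right"
    return "straight ahead"

def on_new_frame(frame) -> List[Face]:
    # Step (c): while scanning, report only the number of faces.
    faces = detect_faces(frame)
    speak(f"{len(faces)} face(s) detected")
    return faces

def on_double_tap(faces: List[Face], frame, frame_width: int) -> None:
    # Step (d): on double-tap, recognize each face and read out
    # names and relative locations.
    for face in faces:
        face.name = recognize_friend(face, frame) or "someone not recognized"
        speak(f"{face.name}, {relative_location(face, frame_width)}")

def on_select(face: Face) -> None:
    # Step (e): read one person's detailed facial information on request.
    speak(f"{face.name} looks {face.expression}")
```

The two-stage pattern (announce only a count while scanning, then give names, locations, and expressions on demand) reflects the design in Figure 1: ambient feedback stays brief, and detailed output remains under the user’s control.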
We tested our prototype through a one-week diary study, during which 7 visually impaired participants used it in their daily activities. The weeklong diary study allowed us to examine the use of Accessibility Bot in real-world situations, and it surfaced nuanced findings that lab studies of research prototypes typically do not uncover.
Our participants found Accessibility Bot most useful during activities with many people and a lot of noise, as well as during activities with other visually impaired people. They used it in a range of daily situations, including gatherings with family or close friends, work-related events, loud parties, and activities with many other people with visual impairments. Some participants even used Accessibility Bot to take well-framed selfies.
“It’s helpful, especially if it’s based on your friends on Facebook. Because I know I’m definitely Facebook friends with way more people than I know their voices just by hearing.”
- DP2, female, ultra low vision
Almost all study participants found the facial expression information provided by Accessibility Bot very helpful, although the prototype was most helpful for people with some functional vision. People with no functional vision experienced more challenges using Accessibility Bot because of the greater difficulty of aiming the camera to capture faces with the typical composition and lighting that our AI technology expects.
In summary, our goal is to reduce technology barriers for people with visual impairments and, in doing so, bring people closer together. This research project is just one of the many steps we are taking toward that goal as we continue our commitment to building inclusive technologies.
“You’re sighted, you can see and tell who I am, then why can’t I? I’m not taking any information, but just want to see who you are.”
- DP3, male, ultra low vision
The research summarized in this post, published as the CHI 2018 paper A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab, was conducted by lead author Yuhang Zhao during a research internship with the Core Data Science team at Facebook, in collaboration with Facebook researchers Shaomei Wu and Lindsay Reynolds.
Facebook research at CHI 2018:

A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab
Lindsay Reynolds, Shaomei Wu, Shiri Azenkot, Yuhang Zhao
Communication Behavior in Embodied Virtual Reality
Harrison Jesse Smith, Michael Neff
Examining the Demand for Spam: Who Clicks?
Brian Waismeyer, Elissa M. Redmiles, Neha Chachra
Feeling Speech on the Arm
Jennifer Chen, Pablo Castillo, Robert Turcott, Ali Israr, Frances Lau
Social Influence and Reciprocity in Online Gift Giving
René F. Kizilcec, Eytan Bakshy, Dean Eckles, Moira Burke
Speech Communication through the Skin: Design of Learning Protocols and Initial Findings
Jaehong Jung, Yang Jiao, Frederico M. Severgnini, Hong Z. Tan, Charlotte M. Reed, Ali Israr, Frances Lau, Freddy Abnousi
Towards Pleasant Touch: Vibrotactile Grids for Social Touch Interactions
Ali Israr, Freddy Abnousi