February 26, 2020

Enforcing our Community Standards: How we track and measure progress

By: Meta Research

One of Facebook’s priorities is to keep people safe across the Facebook services. Since the early days of Facebook, we’ve maintained a set of Community Standards — rules for what content stays up and what comes down on Facebook. Over the past several years, we have made multiple investments to help us more effectively measure, detect, and remove content that goes against our Community Standards. From developing AI that can detect potentially violating content to partnering with the academic research community, a lot of work goes into keeping people safe on our platform.

To learn more about all the work that goes into the enforcement of our Community Standards, we sat down with Irina Somin, Technical Program Manager within Facebook’s Integrity organization.

The goal of Facebook Integrity is to give people a voice while preventing real-world harm and ensuring that people feel safe in our community. Somin’s team focuses on building transparency and accountability for enforcement of Facebook’s Community Standards. They manage the core systems and infrastructure that detect and remove violating content on the platform, as well as track and measure Facebook’s progress in these efforts over time.

In this Q&A, Somin discusses her role within Community Integrity at Facebook, how Facebook tracks and measures progress in this space, how her work interacts with research, and what motivated her to pursue a career in this field.

Q: What are you responsible for in your current role?

Irina Somin: One of my key areas of focus is the measurement platform that tracks Facebook’s progress in enforcing our Community Standards. Facebook has been detecting and removing violating content for over a decade, but until recently there was no consistent way to measure these efforts. Our goal is to bring visibility, accountability, and transparency into the process in a quantitative way. We have developed the measurement platform and created a common language and taxonomies of our policies based on Facebook’s Community Standards.

Q: How do you measure Facebook’s progress in detecting and removing harmful content?

IS: We measure and track several metrics. The standard ones capture how much content we take down in a specific problem area, such as hate speech or terrorist content.

We also track how much of this content was proactively identified versus reported by a user. We always hope to identify and deal with a problem before it is reported. We call that proactive detection. For example, there are very sophisticated people trying to set up fake accounts to cause harm on the platform through monetization, spam, or political agendas. Our goal is to stop them at registration, before they can get on the platform and cause harm.
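As a rough illustration of the proactive rate described above (not Facebook’s actual pipeline), it can be thought of as the share of actioned content that was surfaced by automated detection before any user report. The record structure and field names below are hypothetical.

```python
# Hypothetical sketch of a proactive-rate calculation.
# "actioned" is assumed to be a list of enforcement records, each noting
# whether the content was surfaced by automated detection or a user report.

def proactive_rate(actioned):
    """Share of actioned content found before anyone reported it."""
    if not actioned:
        return 0.0
    proactive = sum(1 for item in actioned if item["source"] == "automated")
    return proactive / len(actioned)

# Example: 4 of 5 pieces of actioned content were caught proactively -> 0.8
sample = [
    {"id": 1, "source": "automated"},
    {"id": 2, "source": "automated"},
    {"id": 3, "source": "user_report"},
    {"id": 4, "source": "automated"},
    {"id": 5, "source": "automated"},
]
print(proactive_rate(sample))  # 0.8
```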

We also report how many of our actions are appealed and how many of our decisions we overturn. We will overturn a decision if we made a mistake.

The last metric, which is the most important, is called prevalence. The more standard metrics show the volume of harmful posts taken down, but they don’t always indicate overall improvement. After we detect and remove harmful content, prevalence shows how much is still left on the platform. To do this, we sample views on Facebook in real time, label the underlying content, and construct a metric to estimate how much harm is currently on the platform. We track how that prevalence is moving over time. We believe that is the most meaningful way to measure progress of our efforts.
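To make the prevalence idea concrete, here is a minimal sketch of a view-based estimate, assuming a simple random sample of views that reviewers have labeled as violating or not. The actual methodology involves more sophisticated sampling and labeling, so treat this only as an illustration.

```python
import math

def estimate_prevalence(labeled_views):
    """Estimate the fraction of sampled views that landed on violating content.

    labeled_views: list of booleans, True if the viewed content was labeled
    as violating. Returns a point estimate and a rough 95% confidence interval
    under a simple-random-sampling assumption (normal approximation).
    """
    n = len(labeled_views)
    if n == 0:
        raise ValueError("need at least one labeled view")
    p = sum(labeled_views) / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Example: 12 violating views in a 10,000-view sample -> ~0.12% prevalence
views = [True] * 12 + [False] * 9988
point, (low, high) = estimate_prevalence(views)
print(f"prevalence ~ {point:.4%} (95% CI {low:.4%} to {high:.4%})")
```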

We update our progress every six months in our Community Standards Enforcement Report, with a supporting commentary blog on Facebook Newsroom highlighting notable changes.

Q: How is your work involved with research at Facebook?

IS: There are a couple of ways that I work with researchers. One area is meeting with internal and external researchers to understand the problems they are working on and figuring out how Facebook can support them. We discuss the specific challenges that each geographical region has, whether it’s a lack of digital literacy, online harassment of women in politics, the role of social media in shaping public opinion, or the impact of social media on youth. We offer a number of ways to support this research — through research awards, the Facebook Fellowship program, and research collaborations.

We also identify what historical data can be provided to power external research while preserving the privacy of the people who use our apps and services. For example, we are collaborating on elections research with Social Science One, providing access to privacy-protected data sets for research to understand the effects of social media on democracy and elections.

Another area of focus is evolving and improving our measurement methodologies. Over the last year, my team engaged with a panel of external researchers (Data Transparency Advisory Group) to provide visibility into our content moderation processes and measurement methodologies and to seek their independent feedback. The group published their assessment and recommendations on the current state of these methodologies and how they could be improved. We also published a blog post that captures key highlights from the report.

Q: You recently participated in a Content Policy Research Initiative (CPRI) workshop. Tell us about that experience.

IS: Facebook recently hosted a series of CPRI workshops in DC and Paris, Latin America, Sydney and Auckland, and Tanzania and Italy. The goal of the workshops was to engage leading researchers around the world on how to design more effective content policies and how to improve our enforcement processes — both in partnership with Facebook and independently.

To help inform their research, we explained how Facebook’s Product, Operations, and Content Policy teams work together to improve and enforce policies governing what is and isn’t allowed on our platforms. We also outlined our data transparency efforts surrounding how Facebook tracks progress on our enforcement of Community Standards across the range of violating content and experiences. We presented some of our internal research on hate speech and dangerous organizations and discussed opportunities for future research collaborations.

During the workshops, researchers, journalists, and community leaders gave us their perspectives on the problems and concerns they have. By sharing more information about how our systems work and the challenges we have in content moderation at scale, we were able to have informed and constructive discussions on where we may have blind spots, and what success would look like in their respective regions, countries, and local communities.

To me, this is a major step forward in informing and supporting academic research, helping the global research community achieve the mission of making the platform safe while connecting the world and giving people a voice.

Q: What interests you about working in Community Integrity?

IS: Over the last several years, as my kids have been growing up, I have embraced the fact that they will live in a very different world, one where social platforms are woven into all parts of life. I can sit on the sidelines and lament the shortcomings of social networks, or I can invest my time, energy, and skill in making the Facebook platform a safe and open space for everyone, including my kids.

To learn more about the types of content that we detect, reduce, and remove, visit the Facebook Community Standards page.