November 4, 2019

Summaries of the Content Policy Research Initiative workshops in Latin America

By: Meta Research

The Content Policy Research Initiative was launched early this year to engage with the research community around how we develop and enforce our Community Standards and the specific policies that help us apply those overarching standards to content on our platforms. We are committed to funding independent research on this topic (see more about our first- and second-phase awardees) and to sharing information about these policies and processes through a global series of workshops. The first two workshops were held in April 2019 in Washington, DC, and Paris. Summaries of these workshops can be found here.

In June 2019, we continued this effort with workshops in Mexico City (June 11) and São Paulo (June 14). We focused on information sharing around our policies and the policy development process, internal research findings and methodologies, and potential areas for research collaborations.

Presentations from Facebook at both workshops included talks about content policy and operations, our data transparency efforts (for example, the Community Standards Enforcement Report), internal research on hate speech in the context of elections, and research design to better understand offline harm, specifically urban violence. We also discussed participants’ own areas of research and brainstormed how we might work together to advance relevant scholarship.

To give a more detailed account of the two workshops, we have compiled the key questions and concerns raised by participants, as well as the research collaboration ideas that were generated.

CPRI workshop in Mexico City

At the CPRI workshop in the Mexico City office, Facebook hosted 10 external researchers. The goals for this session were (a) to share more about our processes and research with this expert community to inform their work, and (b) to identify opportunities for future research collaborations.

Key questions and concerns from participants

An important focus of these discussions is to better inform the broader research community about Facebook’s approach to content policies, and to hear from external researchers what additional information about our policies and processes would help them understand how these issues play out across social media.

Participants in the Mexico City workshop asked the following questions:

  • Can Facebook create a dedicated communication channel to field questions from external researchers? Participants in this workshop had encountered difficulties working out how to conduct research while complying with our Terms of Service. They suggested creating a forum where researchers can ask questions and resolve such issues.
  • When content does not directly correlate with offline harm but could still be harmful, how does Facebook decide what to do? It’s impossible to have no bias, so why doesn’t Facebook own its (positive) biases and actively support democracy and counter hate? These two related questions echo discussions we have had consistently at previous workshops. They reflect a desire for Facebook to better articulate our values around various trade-offs, such as balancing the right to free speech for all with the need to protect vulnerable groups.
  • How does Facebook (as a global platform) work with governments without reinforcing existing power dynamics? There is a great deal of interest in the research community about how we handle different types of threats that might be articulated on the platform, and how integrated we are or should be with law enforcement regarding potentially harmful content.

Research collaboration ideas

One of our goals for the CPRI is to identify opportunities for research collaborations in key areas. During this workshop, we discussed how Facebook can best support the work of external researchers in this field, projects that would be of mutual interest, and information-sharing opportunities.

Specifically, participants suggested the following topics and approaches:

  • Connect a global community of researchers interested in data privacy. Such a community could have the goal of intelligently shaping future regulations with evidence-based recommendations. Participants believe there is a connectivity gap in this subset of the expert community and hope to leverage Facebook’s convening power.
  • Intercultural understanding and communication could be a useful topic for research around content policies. It would also be a ripe area for collaboration with universities and students, who could gain experience coding content and, if globally dispersed, could lend the cultural and linguistic context that Facebook most needs in its analysis.
  • Mapping lexicons around various Community Standards could be accomplished entirely independently. However, Facebook might consider supporting such linguistic work in order to inform our automated detection systems.
  • Facebook and external researchers might work together to identify interventions (on or off social media) that would improve civic behaviors. In Mexico City, for example, it might make sense to start with “easier” topics, such as driving norms and litter, before tackling more contentious civic behaviors, but such initial work could contribute to modeling that would inform an approach to higher-stakes issues like harassment or voting.
  • What makes false news spread? There was a lot of interest in looking into how various factors play into misinformation — including whether there are signals we can share that predict virality, and whether it is possible to see how users’ emotions affect the likelihood that they will share false news.
  • Historical analysis of Facebook policy changes related to the public sphere. With the advent of the updates tracker on our Community Standards page, researchers see a lot of potential in eventually mapping these policy changes to broader societal behaviors.

This was an extremely useful discussion that reinforced some of the same questions we have heard from the research community elsewhere, and it specifically showed us where we have an opportunity to more proactively share our findings and priorities.

CPRI workshop in São Paulo

At the CPRI workshop in the São Paulo office, Facebook hosted nine external researchers from around Brazil. This workshop focused specifically on issues related to hate speech and preventing offline harm from dangerous organizations and individuals. Like the previous workshops, it was designed (a) to share more about Facebook’s processes and research with this expert community to inform their work, and (b) to identify opportunities for future research collaborations.

Key questions and concerns from participants

In São Paulo, participants asked the following questions:

  • How does Facebook enforce the Community Standards and reduce or remove violating content from the platform? How does Facebook measure that enforcement? There was a lot of interest in better understanding the processes, efforts, and methodology underlying the Community Standards Enforcement Report, as well as the scale of our operations and the challenges that scale creates or amplifies.
  • How does adversarial behavior affect Facebook’s enforcement approach and transparency efforts? Attendees recognize that there are significant trade-offs in our transparency efforts, including more opportunities to game the system as we increase openness around our processes. They believe it would be beneficial to share more with researchers about how bad actors try to exploit our platforms. Specific questions and thoughts included the following:
    • Do users who have internet access only through (one of) our apps have access to fact-checking content that links offsite? If not, is there better tooling we can create to empower this category of users?
    • Facebook should articulate our values in this space. The tools at our disposal — including downranking and removal of content — may well be sufficient if we can share how we prioritize and why.

Research collaboration ideas and challenges

In São Paulo, we discussed the following topics:

  • Researcher-specific resources. While there are significant challenges involved in data sharing, participants would be interested in access to more background information and/or data from the Community Standards Enforcement Report, the potential to give specific feedback on our content policies, an API for political and elections data, and an update to research.fb.com that is friendlier to social scientists and humanities researchers.
  • Research challenges in Brazil. We spent the majority of our brainstorming time hearing feedback on the challenges researchers face in Brazil. Funding opportunities from Facebook are appealing and useful, but the group noted that the culture around corporate research funding for independent work is not yet fully developed. They also asked that we do more to publicize existing resources and opportunities.

Overall, we learned a lot about the specific challenges that researchers in these markets face, as well as how our policies and processes resonate across different cultural and linguistic audiences. This is a fundamental step in building connections with the global research community, and it will be integral as researchers design improvements and solutions to content policy issues, both through their independent work and in collaboration with industry partners, including Facebook. We will continue this series of workshops later this year and hope to build on these interdisciplinary connections and foster more information sharing.

