May 23, 2019

Exploring feedback from data and governance experts: A research-based response to the Data Transparency Advisory Group report

By: Radha Iyengar Plumb

Facebook has undertaken a number of efforts to increase transparency and facilitate accountability and oversight by providing insight into our process for creating metrics that are meaningful and relevant both internally to our teams and externally to the broader community of people who use Facebook. To inform this process, we have a set of principles that govern how we think about metrics and analytics. The principles are grounded both in Facebook’s company values and in research principles on transparency and accountability.

We developed metrics to comport with these principles, but we also wanted to ensure we had meaningful input from experts who had studied transparency in the context of governance. This involved both reviewing the existing academic research on data transparency specifically and on governance models more generally, and building a formal process for seeking and incorporating expert feedback.

To ensure we could appropriately balance benefits and risks of transparency in creating valid and informative metrics, we established a formal process to solicit feedback and provide a public assessment of our metrics. We established the Data Transparency Advisory Group (DTAG), which comprises international experts in measurement, statistics, criminology, and governance, with the core function of providing an independent, public assessment of our Community Standards Enforcement Report (CSER) specifically and our measurement efforts related to content moderation more broadly.

Defining research-based criteria for data transparency

A long history of social science research has contributed to our understanding of the effects of transparency, including economic and political science research on the role of transparency in building effective and efficient governance (Stiglitz, 2000; Lindstedt and Naurin, 2010; Brunetti and Weder, 2003) and political philosophy and legal theory on the role of transparency and open deliberation in the civilizing effect on political behavior and generating procedural fairness (Cohn, White, and Sanders, 2000; Elster, 1998).

These bodies of research have been reviewed and applied to social media in several different contexts, notably in the Santa Clara Principles, a theory- and research-based set of recommendations for tech companies. Taken together, this social science and legal literature focuses on transparency as a key tool to ensure accountability and fairness in institutions charged with defining and enforcing rules and standards. We support the spirit of the Santa Clara Principles on Transparency and Accountability in Content Moderation and, informed by the DTAG report’s findings on the challenges of content moderation at scale, are committed to continuing to share more about how we enforce our Community Standards in the future.

At the same time, it is worth noting that transparency is not without its limitations and risks, especially when that transparency involves data and metrics. Karr (2008) summarizes three key concerns, typically associated with public use of government information, which also apply to the data that technology companies would release on content moderation:

  • Addressing an inherent tension between the comprehensiveness of the data and the comprehensibility of information to individuals not intimately familiar with underlying processes.
  • Managing the trade-off between providing appropriate levels of detail to ensure the data is useful while also ensuring there are appropriate protections for private or confidential information.
    • The risk of misinterpretation is higher for data collected as part of operational processes. These data are often not structured the way purpose-built data sets (e.g., census data) are, and so they can be difficult to interpret. This issue often arises with crime enforcement statistics, which are typically derived from police reports and court data that individuals in those systems use to execute their core functions rather than for measurement purposes.
  • Some information that is valuable for transparency is also sensitive and/or dangerous to release publicly.

Drawing insights from this research on transparency, we developed metrics focused on two distinct but related components.

First, does the metric reflect how we think, prioritize, and assess ourselves internally? This criterion matters because it allows external scrutiny and public assessment of the problem space and the outcomes of our efforts, which can enable accountability.

Second, does the metric allow insight into our processes or actions? Although aggregate counts are not sufficient to provide process insights, the metrics we release provide the information needed to understand the frequency and nature of actions we take and for others to judge for themselves whether those actions appear appropriate in the context of the overall problem space.

Approach to external assessment

Naturally, this process resulted in metrics that balanced some of the external requests and suggestions with the technical constraints and realities of enforcement at scale. The next part of the measurement process then related to whether the metrics we viewed as most relevant and operationally feasible were also metrics that met the objective of allowing insight and ultimately public accountability for our moderation processes. This is why we established the DTAG.

The DTAG was asked to answer this core question: Do the publicly available metrics provide accurate and meaningful measures of the problem area and Facebook’s activities to address it? It was not set up to audit Facebook’s data labeling, data use, or privacy protection practices. As such, the DTAG’s findings reflect its review of the processes and information provided to the group, but not unrestricted access to Facebook systems and processes. This approach helped balance the critical role of external review of our processes with protections for user privacy and complexities in the underlying data architecture that would have limited relevance for external analysts.

To support their assessment, Facebook provided the DTAG with detailed confidential information about how we develop our policies, how we detect violations of these policies at scale, and how we have developed measurement-specific processes to sample, label, and quantify violations of policies. These measurement processes were intended to create analytically rigorous data that would accurately capture the scale and scope of content violations on the platform for different types of violations.

The DTAG members were not given access to user data and did not audit the underlying data used to construct the metrics, but they were provided with more detailed breakdowns of the counts and inputs into the metrics. They also had access to the engineers and data scientists who designed and executed these metrics to answer questions or concerns and provide any requested details on how the metrics were developed.

The group reviewed, assessed, and provided feedback about our approach to measurement along two parallel tracks. First, we worked closely with the group to present our existing metrics, how and why we developed them, technical constraints, as well as best practices in how these metrics should be defined and calculated. This step was critical to bring a range of outside expertise directly into our analytic process — for technical statistical applications, but also to ensure these metrics would comport with best practices. We used their findings both to improve how we defined our metrics and to ensure the information we released alongside the numbers usefully informed readers about our processes and practices.

Second, the group was charged with providing an external report detailing our methods in the context of our processes to provide an external assessment of whether these metrics were in fact a reasonable approach to measuring violations of Facebook’s rules and standards. The DTAG was among the first set of external technical experts who were exposed to the scale and scope of Facebook’s measurement operation and then permitted to freely publish their independent assessment. They recently released their external report including these assessments and their overall recommendations.

This report was reviewed only to ensure that no information defined in advance as confidential was included. The findings and recommendations are those of the DTAG and represent an external perspective on the metrics and measures contained in the CSER specifically and on Facebook’s efforts to conduct rigorous measurement of violations of its rules and standards more generally.

Summary of DTAG findings and recommendations

After a rigorous review, the DTAG found that our processes are reasonable and that our core metrics are consistent with best practices from other settings. In particular, the DTAG noted that with hundreds of thousands of pieces of content added every minute, the processes combining automated and human detection and review were appropriate. The findings thus also highlighted the inherent trade-offs and technical challenges that must be balanced in building an effective detection and enforcement regime at scale. The DTAG concluded that the metrics in the CSER are reasonable ways of measuring violations of our Community Standards and that they comport with best practices in the most analogous setting: metrics of crime currently published by a number of governmental agencies globally. For more detailed information on the DTAG findings, see this Newsroom post.

Based on its assessment, the DTAG also offered a number of suggestions that Facebook is now systematically reviewing and determining how best to address. The DTAG offered 15 recommendations in its report.

Table 1 below summarizes the DTAG recommendations and how Facebook is incorporating its feedback into our broader transparency and measurement efforts. Of the 15 recommendations:

  • Five recommendations will be implemented in upcoming reports, briefings, or other settings.
  • Six recommendations are being actively explored to determine how best to operationalize the suggestion.
  • Four recommendations are being addressed through alternative solutions that respond to the underlying concerns or issues the DTAG raised, because the approaches the DTAG proposed may not be feasible or optimal given other constraints.

DTAG recommendation #1: Release accuracy rates. Measuring accuracy is complex for a number of reasons. One major reason is that there is a range of “ambiguous” content — specific kinds of text, images, or videos about which reasonable people might disagree when assessing whether they violate our Community Standards. Given this complexity, it can be difficult to define whether content was “accurately” labeled and therefore to establish standard measures of accuracy based on false positives, false negatives, true positives, and true negatives. To address this issue, we are working to refine our policies to reduce the amount of content that is “ambiguous.” In practice, though, given the range of topics and issues covered under the Community Standards, there will always be ambiguous content, so in parallel we are exploring what could serve as a meaningful metric for accuracy that is robust to the inclusion (or exclusion) of ambiguous content.
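To illustrate why ambiguity matters for accuracy measurement, the minimal sketch below computes precision and recall for a handful of hypothetical enforcement decisions under two conventions: counting ambiguous items as violating or as non-violating. The labels, field names, and numbers are invented for illustration and do not reflect Facebook’s actual review data.

```python
def precision_and_recall(decisions, treat_ambiguous_as_violating):
    """
    Hypothetical sketch: precision and recall of enforcement decisions when
    "ambiguous" items are counted as violating versus not violating.
    Each record holds the action taken ("remove" or "keep") and a reviewer
    label ("violating", "benign", or "ambiguous"). All data are invented.
    """
    def is_violating(label):
        if label == "ambiguous":
            return treat_ambiguous_as_violating
        return label == "violating"

    true_pos = sum(1 for d in decisions if d["action"] == "remove" and is_violating(d["label"]))
    false_pos = sum(1 for d in decisions if d["action"] == "remove" and not is_violating(d["label"]))
    false_neg = sum(1 for d in decisions if d["action"] == "keep" and is_violating(d["label"]))
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return precision, recall


decisions = [
    {"action": "remove", "label": "violating"},
    {"action": "remove", "label": "ambiguous"},
    {"action": "keep",   "label": "ambiguous"},
    {"action": "keep",   "label": "benign"},
]
print(precision_and_recall(decisions, treat_ambiguous_as_violating=True))   # (1.0, 0.666...)
print(precision_and_recall(decisions, treat_ambiguous_as_violating=False))  # (0.5, 1.0)
```

Even on this toy data, the two conventions yield noticeably different precision and recall, which is exactly the sensitivity a robust accuracy metric would need to address.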

DTAG recommendation #2: Release review-and-appeal rates and reversal rates separately from an accuracy metric. Facebook is planning to release metrics that capture the amount of content that is appealed and how much content it restores. These metrics will help provide transparency about additional aspects of the governance and moderation process distinct from the accuracy rates discussed above.

DTAG recommendation #3: Provide information about the percentage of posts that are actioned by automation and the percentage actioned by humans. Such a metric could help highlight that while Facebook may want to increase automatic detection of content that violates our standards, in many cases we may not want to increase automatic actions for specific violation types. We currently share the percentage of content that is detected before anyone reports it, but humans then review some of this content, and how useful automation technology is in removing content varies. For instance, automated actions might be more appropriate for imagery depicting adult sexual activity than for content that may be hate speech or harassment, which requires greater language and cultural context to assess. As a result, we agree it may be helpful in the future to more explicitly distinguish automated detection from automated actions, along with their relative accuracy, and we will continue to explore ways to do this in a manner that is both meaningful and accurate.
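The distinction at issue can be made concrete with a small, hypothetical sketch that computes two separate rates from the same set of actioned content: the share detected proactively (before any user report) and the share actioned without human review. The records and field names are invented for illustration only.

```python
def detection_and_action_rates(actioned_content):
    """
    Hypothetical sketch distinguishing two rates discussed above:
      - proactive detection rate: share of actioned content flagged by
        automated systems before any user report
      - automated action rate: share of actioned content acted on without
        human review
    The field names and records are invented for illustration.
    """
    total = len(actioned_content)
    proactively_detected = sum(1 for c in actioned_content if c["detected_by"] == "automation")
    automatically_actioned = sum(1 for c in actioned_content if c["actioned_by"] == "automation")
    return proactively_detected / total, automatically_actioned / total


# Toy data: content can be detected by automation yet still actioned by a human reviewer.
actioned_content = [
    {"detected_by": "automation",  "actioned_by": "automation"},
    {"detected_by": "automation",  "actioned_by": "human"},
    {"detected_by": "user_report", "actioned_by": "human"},
]
print(detection_and_action_rates(actioned_content))  # (0.666..., 0.333...) on this toy data
```

Keeping the two rates separate matters because a high proactive detection rate does not imply a high automated action rate; much proactively detected content is still routed to human review.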

DTAG recommendation #4: Check reviewers’ judgments not only against an internal “correct” interpretation of the Community Standards but also against users’ interpretations of the Community Standards. Although such comparisons are useful in some settings, we do not believe they represent meaningful measures against which Facebook would operate. Our internal research suggests that users are often unaware of the Community Standards or do not understand them or the processes by which they are applied. Moreover, when users report content, the rate at which those reports can be actioned is very low. There are three likely reasons for these low rates: a lack of context on Facebook’s side that would illuminate what is wrong with the reported content; misunderstanding about how we apply our Community Standards; and, in some cases, abusive mass reporting. Taken together, these findings suggest that Facebook should do more to help users understand the rules and inform their reporting. We have already begun some of this work and, in that context, will explore ways to more systematically research how users interpret the Community Standards across the broad range of cultural and regional settings in which our platforms are used.

DTAG recommendation #5: Report prevalence measures not only as a percentage of the total estimated number of views but also as a percentage of the total estimated number of posts. This was one of the most technically challenging suggestions, and we discussed it extensively with the DTAG. The idea was to develop a violating content rate: a measure of prevalence in which the unit of observation is an individual piece of content and the measure is the fraction of all content that is violating. This rate would supplement the current prevalence measure, which is based on views of content. Our current viewership-based prevalence metric is a “consumption metric,” indicating how much violating content is (intentionally or not) consumed, whereas the proposed content-based prevalence measure would be a “production metric,” indicating how much of the material that exists on the platform is violating. In discussions, we explored whether it was feasible to estimate the number of distinct posts that contribute to violating viewership. Initially, we explored whether a content-based metric could be constructed with the Hansen-Hurwitz estimator, using the sampling rate of views and the number of views each sampled post received in the given time frame. The DTAG agreed, however, that such an approach would be limited because: (1) the uncertainty on this population estimate would likely be very large, since the viewership distribution of material is heavily skewed; (2) the views-based prevalence sampling has been optimized to minimize error by focusing on material that is more likely to be viewed, so content that is viewed by no one would skew the distribution; and (3) such a count would require sampling continuously in real time or risk missing content that is proactively removed. Under these conditions, the underlying assumption that the number of views is proportional to the population of content would not hold. Moreover, the degree to which such a sample might be biased would differ across types of violations. These discussions with the DTAG made clear that constructing a consistent estimate of a violating content rate would require an entirely new and separate labeling and measurement effort. Given this, we view the metric as an interesting addition but not a priority relative to other suggestions for which we are working to improve or expand metrics. We are exploring other ways to provide insight into the “production” of violating content through more targeted analysis and research.
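To make the estimator discussed above concrete, here is a minimal, hypothetical sketch of how a Hansen-Hurwitz estimate of the number of violating posts could be formed from a views-based sample. The function, field names, and numbers are invented for illustration and do not describe Facebook’s actual sampling pipeline.

```python
def hansen_hurwitz_violating_post_count(violating_view_draws, total_views, n_view_draws):
    """
    Hypothetical sketch of a Hansen-Hurwitz estimate of the number of distinct
    violating posts, formed from a views-based sample. All inputs are invented.

    violating_view_draws: for each sampled view whose post was labeled
        violating, the total number of views that post received in the window
        (non-violating draws contribute zero to the sum, so they are omitted)
    total_views:  total views on the platform in the window
    n_view_draws: total number of view draws in the sample, including draws
        of non-violating content

    Sampling views selects post i with per-draw probability
    p_i = post_views_i / total_views, so the estimator of the count of
    violating posts is (total_views / n_view_draws) * sum(1 / post_views_i).
    """
    weighted_sum = sum(1.0 / post_views for post_views in violating_view_draws)
    return (total_views / n_view_draws) * weighted_sum


# Toy usage: two violating draws, one of a post with 10 views and one of a
# post with 1,000 views, out of 5,000 sampled views and 1,000,000 total views.
print(hansen_hurwitz_violating_post_count([10, 1000], total_views=1_000_000, n_view_draws=5_000))
```

The 1/post_views weights make limitation (1) concrete: a single sampled view of a rarely viewed post receives an enormous weight, so the estimate’s variance is driven by the heavy skew of the viewership distribution, and posts that were never viewed cannot appear in the sample at all.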

DTAG recommendation #6: Explore ways of relating prevalence metrics to real-world harm. We do not believe such a relationship could be meaningfully established in the context of our Community Standards Enforcement Report, but we certainly view it as part of a broader research agenda that is active both inside and outside of Facebook. Extensive research by Facebook and external scholars has found that misinformation and disinformation are associated with a range of harms that vary by regional, political, and social context. While the causal relationship between online information and offline behaviors, including violence, is still being explored, both internal Facebook work and external scholarly research have highlighted risks from amplification and rapid spread of information facilitated by social media platforms. (See, for instance, work by Dunn et al., 2017, on health information.) Facebook continues to explore these issues both through internal research and through support of external research (e.g., our recent call for proposals).

DTAG recommendation #7: Explore ways of accounting for the seriousness of a violation in the prevalence and proactivity metrics. This suggestion is also part of a broader effort by both research and data science teams. But there are a number of complexities in constructing valid and reliable severity measures that apply across broad violation types and remain consistent in global contexts. This difficulty is similar to the challenge of measuring the severity of crime, both in defining meaningful concepts and in designing feasible and valid measures (see, for example, Greenfield and Paoli, 2013; Sherman et al., 2016; and Ramchand et al., 2009).

DTAG recommendation #8: Report prevalence measures in subpopulations. The DTAG also had a number of suggestions on releasing more disaggregated data related to Community Standards enforcement. Such breakdowns are feasible for count-based metrics but may not be for prevalence because of the current stratified sampling approach. Despite these technical constraints, we recognize the value of breaking the various metrics out by subpopulation, and we are exploring which breakdowns would be most useful and how we might present and share such information.
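As a rough illustration of the constraint, the sketch below computes an overall views-based prevalence estimate from stratified samples by weighting each stratum’s violating fraction by its share of views; the strata, shares, and fractions are invented for illustration.

```python
def overall_prevalence(strata):
    """
    Hypothetical sketch: an overall views-based prevalence estimate under
    stratified sampling, weighting each stratum's violating fraction by its
    share of total views. The strata and numbers are invented.
    """
    return sum(s["view_share"] * s["violating_fraction"] for s in strata)


strata = [
    {"view_share": 0.90, "violating_fraction": 0.001},
    {"view_share": 0.09, "violating_fraction": 0.010},
    {"view_share": 0.01, "violating_fraction": 0.050},
]
print(overall_prevalence(strata))  # ~0.0023 on this toy data

# A subpopulation estimate (one country or language, say) would need enough
# labeled sample within each stratum for that subpopulation; a sampling design
# optimized for the overall estimate does not guarantee those within-cell
# sample sizes.
```

This is why breakdowns that are straightforward for count-based metrics are harder for prevalence: the sampling design is tuned to the overall estimate, not to every subpopulation-by-stratum cell.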

DTAG recommendation #9: Report actioned content and proactively actioned content as a proportion of estimated violating content. This recommendation relies on the creation of a content rate metric discussed in recommendation #5. We are exploring ways we can more transparently discuss and quantify how actioned content and prevalence rates are related to help readers better understand how to compare metrics based on distinct units of observation (e.g., compare views-based rates to content-based counts).

DTAG recommendation #10: Break out actioned content measures by type of action taken. We agree that this additional detail would be useful in understanding the way different policy and enforcement tools are used to balance various principles of maintaining voice on important issues — such as discussions of suicide or graphic depictions of human rights violations — while providing protections for users who might be made to feel unsafe or emotionally triggered by such content. We are exploring appropriate breakdowns of this metric to ensure it meaningfully captures some of these issues.

DTAG recommendation #11: Explore ways of accounting for changes in the Community Standards and changes in technology when reporting metrics in the CSER. This is an important point, and we agree with the DTAG. Policy changes could drive changes in metrics, too. Currently, much of our narrative discussion focuses on changes in technology, and we are exploring how to include policy changes in the narrative, as well as how to account for these changes, to allow for consistency over time. In the interim, we have added a Recent Updates section to our Community Standards webpage, which allows readers to identify changes in the Community Standards over time.

DTAG recommendation #12: Explore ways to enhance bottom-up (as opposed to top-down) governance. The DTAG offered suggestions related to exploring additional partnerships, diversifying input into our processes, and integrating expert evaluations more broadly. These suggestions are a critical and growing component of our ongoing efforts in the content moderation and governance space. We are exploring a range of ways to engage experts and conduct evaluations like the DTAG’s — in particular, through our Content Policy Research Initiative workshops and funding opportunities — to help researchers better understand and study our policies and processes. We have also engaged in a range of collaborative processes in designing oversight mechanisms, including an open solicitation for public comments.

DTAG recommendation #13: Enhance components of procedural justice in the Community Standards enforcement and appeal-and-review process.
Over the past year, Facebook has engaged in dozens of sessions to help experts and users better understand how we set the rules and enforce them at scale. Facebook also conducts user research, involving both qualitative and quantitative methods, to understand what users think about the rules and how we could communicate those rules more clearly. We have also built, and continue to scale, an appeals process and have increased communication with users about how we enforce these rules, consistent with principles of procedural justice. As part of this work, Facebook continues to research and test ways to better inform users of our rules consistent with the principles of procedural justice and transparency (see, for example, work with Tyler et al., 2018). Work in this area has been, and will continue to be, a core aspect of Facebook’s content governance efforts.

DTAG recommendation #14: Publicly release anonymized or otherwise aggregated versions of the data used to calculate prevalence and other metrics in the CSER.
We are exploring ways to make accessible — either via public release or based on a more moderated application-based process (similar to access to sensitive government data) — anonymized or otherwise aggregated versions of the data used to calculate prevalence, content actioned, and proactive detection rate metrics.

DTAG recommendation #15: Modify the formatting, presentation, and text of CSER documents to make them more accessible and intelligible to readers. This recommendation, though not related to the metrics themselves, is critical for identifying ways that we could meaningfully enhance transparency. We incorporated many of these suggestions into our narrative descriptions and explanations, including improving the clarity of some of the descriptions, discussing how policy changes affected movements in the metrics, and generally improving the accessibility of the report. Many of these changes will be reflected in upcoming reports, and we will continue to consider how best to ensure that the language and details that accompany the metrics are expressed as clearly as possible.

Next steps and future collaborations

Facebook will release the third iteration of its Community Standards Enforcement Report, which reflects a number of the DTAG’s recommendations along with Facebook’s own work to expand its measurement efforts. Facebook will continue to explore ways to expand and improve its Community Standards enforcement transparency by working with external experts. We have conducted dozens of engagements with researchers globally to increase awareness of, and research on, our policies (including explicit efforts to support more extensive research collaborations).

We are building innovative ways to share data, and in parallel, we are working to identify new ways to expand research that can improve understanding of Facebook’s scale and constraints while preserving the independence of external analysis. Working with experts to ensure we are developing, enforcing, and reporting on our Community Standards will continue to be a key tool in ensuring that Facebook is a safe and inclusive platform globally.

References

Brunetti, Aymo and Weder, Beatrice (2003). “A Free Press Is Bad News for Corruption,” Journal of Public Economics, 87(7–8): 1801–24

Cohn, E. S., White, S. O. & Sanders, J. (2000). Distributive and procedural justice in seven nations. Law and Human Behavior, 24, 553-579.

Dawes, Sharon S. (2010). Stewardship and Usefulness: Policy Principles for Information-Based Transparency. Government Information Quarterly, Volume 27, Issue 4, Pages 377-383.

Efron, B. (1987). “Better Bootstrap Confidence Intervals,” Journal of the American Statistical Association, 82(397): 171–185. doi:10.2307/2289144.

Elster, J. (1998). Deliberation and constitution making. In Deliberative Democracy, ed. Jon Elster. New York: Cambridge University Press, pp. 97-122.

Greenfield, V. A. and Paoli, L. (2013). “A Framework to Assess the Harms of Crimes.” British Journal of Criminology, 53(5): 864–886.

Karr, A. F. (2008). Citizen access to government statistical information. In Digital Government (pp. 503-529). Springer, Boston, MA.

Lindstedt, C. & Naurin, D. (2010). Transparency is not enough: Making transparency effective in reducing corruption. International Political Science Review, 31(3), 301-322.

Ramchand, R., MacDonald, J. M., Haviland, A., et al. (2009). “A Developmental Approach for Measuring the Severity of Crimes,” Journal of Quantitative Criminology, 25: 129.

Sherman, L., Neyroud, P. W. & Neyroud, E. (2016). “The Cambridge Crime Harm Index: Measuring Total Harm from Crime Based on Sentencing Guidelines,” Policing: A Journal of Policy and Practice, 10(3): 171–183.

Stiglitz, Joseph E. (2000). “The Contributions of the Economics of Information to Twentieth Century Economics,” Quarterly Journal of Economics, 115(4): 1441–78.

Tyler, T. R., Boeckmann, R J., Smith, H J. & Huo, Yuen J. (1997). Social Justice in a Diverse Society. Boulder: Westview Press.

Tyler, T. R. (2000). Social justice: Outcome and procedure. International Journal of Psychology, 35, 117-125.

Tyler, T. R. (2006). Psychological perspectives on legitimacy and legitimation. Annual Review of Psychology, 57, 375-400.