Justin Cheng is a Research Scientist within Facebook Core Data Science (CDS), a research and development team working on improving Facebook’s processes, infrastructure, and products that enable more than 1.5 billion people to communicate with one another every day. This work was done in collaboration with Jonathan Chang, a PhD student at Cornell University, and Cristian Danescu-Niculescu-Mizil, an Assistant Professor at Cornell University.
Conversations, whether online or offline, involve two perspectives: a speaker’s intention (that is, what the speaker wanted to convey) and others’ perception of the speaker’s words (that is, what others think the speaker wanted to convey). Occasionally, a speaker’s message may be perceived differently from how it was intended, and such misperception can negatively affect the remainder of the conversation, for example by increasing unfriendly or even hostile behavior.
One common form of misperception is the confusion between facts and opinions, which has previously been observed in the context of news media. In this work, we extend this investigation to online discussions, a conversational setting in which people actively engage with one another rather than passively consuming content. By surveying Facebook users about their intentions behind comments they wrote on public Pages and their perceptions of others’ comments, we discovered factors that are indicative of misperception and the effects of misperception on the future trajectory of the conversation.
This work is described in our paper “Don’t Let Me Be Misunderstood: Comparing Intentions and Perceptions in Online Discussions,” to be published at the Web Conference 2020.
The key difficulty in any study of intentions is that only one person knows with certainty the intent behind a comment—namely, the author of the comment. As such, we surveyed users who had recently (at the time of the survey) participated in a conversation in the comments section of a public Page post to ask them directly about their intention in commenting on the post. Specifically, we asked them whether they intended to share/seek facts, and whether they intended to share/seek opinions. We additionally surveyed users who had recently replied to a comment on a public Page post about their perceptions of the comment they replied to. All survey questions used a five-point Likert scale (from 1 — not intended/perceived at all, to 5 — definitely intended/perceived).
The survey takes discussions from the comments section of public Page posts and asks the authors of comments (like Alice, above) what they intended and readers of comments (like Bob) what they perceived.
We used the responses to these surveys to study three research questions: how well perceptions match intentions, how intentions and perceptions differ in the language associated with them, and how intentions and perceptions relate to the future trajectory of the conversation.
Comparing intention and perception survey responses allows us to explore how closely perceptions and intentions match each other. If perceptions perfectly captured intentions, then for each conversational goal, people would be as likely to report perceiving (or not perceiving) that goal as they were to report intending it (or not intending it). Therefore, the mean response scores for perceptions and intentions would be the same.
By contrast, a higher mean score for perceptions would imply that respondents were more likely to perceive that goal than to intend it, meaning that goal is systematically overestimated. Likewise, a higher mean score for intentions would imply that respondents were less likely to perceive a goal than to intend it, meaning that goal is systematically underestimated.
We find that people tend to overestimate how often others intend to share opinions, with a mean response of 3.77 for perception of opinion sharing and 3.44 for intention to share an opinion. By contrast, intention and perception scores for fact sharing do not differ significantly. This corroborates prior work from the news media setting, which found that people are more likely to misidentify factual statements as opinions than vice versa.
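This kind of mean comparison can be sketched in a few lines. The responses below are hypothetical stand-ins (the survey data itself is not public), and the Welch t-statistic is just one reasonable way to check whether the perception–intention gap is significant:

```python
# Minimal sketch of the mean-comparison analysis, on hypothetical
# 1-5 Likert responses; the real survey data is not public.
from math import sqrt
from statistics import mean

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    ma, mb = mean(a), mean(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Hypothetical scores: perceived opinion sharing vs. intended opinion sharing.
perceived = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]
intended = [3, 4, 3, 3, 4, 3, 4, 3, 3, 4]

gap = mean(perceived) - mean(intended)
print(f"mean gap = {gap:.2f}, t = {welch_t(perceived, intended):.2f}")
```

A positive gap, as in this toy sample, would indicate that the goal is systematically overestimated; a gap near zero (as we observed for fact sharing) would indicate that perceptions track intentions.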
Intentions and perceptions also differ at a linguistic level. For example, we find that people tend to use explicitly factual language (such as “in fact” and “actually”) when intending to share a fact, but such language does not affect whether others perceive them as sharing a fact. This may relate to the previously observed bias toward perceiving opinions: Even if the commenter tries to double down on the apparent factuality of the comment through the use of explicitly factual language, this seems to do little to sway the reader, who may already be inclined toward perceiving an opinion.
Other linguistic differences we find include interrogative language (question words, like “why” and “how”) being negatively correlated with perceived fact sharing but uncorrelated with intended fact sharing, and second-person pronouns being correlated with perceived but not intended fact-seeking.
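The general shape of this linguistic analysis is to mark whether each comment contains a feature and correlate that indicator with the survey scores. The sketch below uses invented lexicons and comments purely for illustration; the paper's actual feature sets and data differ:

```python
# Illustrative sketch of correlating a linguistic feature with survey
# scores. Lexicons and example comments are invented, not the paper's.
import re

FACTUAL = {"in fact", "actually"}  # toy "explicitly factual" lexicon

def has_feature(text, lexicon):
    t = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", t) for term in lexicon)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical (comment, intended-fact-sharing score) pairs.
data = [
    ("Actually, the vote was in 2016.", 5),
    ("Why would anyone believe that?", 2),
    ("In fact, the study says otherwise.", 4),
    ("I just love this page.", 1),
]
feats = [1 if has_feature(c, FACTUAL) else 0 for c, _ in data]
scores = [s for _, s in data]
print(f"r(factual language, intention) = {pearson(feats, scores):.2f}")
```

Running the same correlation against perception scores, and finding it near zero where the intention correlation is positive, is the pattern behind the "factual language reflects only intention" finding above.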
An example comment annotated with various linguistic features that indicate fact-sharing intention. The instance of factual language (“actually”) reflects only intention; all other features reflect both intention and perception of fact sharing in this comment.
Finally, intentions and perceptions differ with respect to one key conversational outcome: how uncivil the conversation eventually becomes. Specifically, when a comment is intended to share a fact, the resulting conversation is more likely to turn uncivil, whereas when a comment is perceived as sharing a fact, the resulting conversation is less likely to turn uncivil.
Two observations point to misperception as one likely cause of this difference. The first is that people tend to perceive that others are sharing opinions more often than they intend to. The second is that perceived opinion sharing is correlated with greater incivility. In other words, uncivil behavior may follow comments that are intended to share a fact because these comments tend to be misperceived as sharing an opinion (under this explanation, we would further expect that comments that are correctly perceived as sharing a fact would not tend to lead to incivility).
To verify this, we consider all comments that were intended to share a fact and categorize them by whether they were correctly perceived as fact sharing or misperceived as opinion sharing. As expected, we find that only the misperceived cases tend to be followed by uncivil behavior.
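The categorization step can be sketched as a simple split over paired survey responses. The threshold and field names below are assumptions for illustration, not the paper's exact coding scheme:

```python
# Illustrative sketch: split comments intended to share a fact by whether
# the reader's perception agreed. Threshold and field names are assumed.

def categorize(responses, threshold=4):
    """responses: dicts of 1-5 Likert scores for the same comment."""
    correct, misperceived = [], []
    for r in responses:
        if r["intended_fact"] < threshold:
            continue  # only consider comments intended to share a fact
        if r["perceived_fact"] >= threshold:
            correct.append(r)  # correctly perceived as fact sharing
        elif r["perceived_opinion"] >= threshold:
            misperceived.append(r)  # misperceived as opinion sharing
    return correct, misperceived

sample = [
    {"intended_fact": 5, "perceived_fact": 5, "perceived_opinion": 2},
    {"intended_fact": 4, "perceived_fact": 2, "perceived_opinion": 5},
    {"intended_fact": 1, "perceived_fact": 1, "perceived_opinion": 5},
]
ok, mis = categorize(sample)
print(len(ok), len(mis))  # → 1 1
```

Comparing downstream incivility rates between the two resulting groups is what isolates misperception, rather than fact-sharing intent itself, as the factor associated with uncivil behavior.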
Comments intended to share a fact can be correctly perceived as sharing a fact or misperceived as sharing an opinion. Those in the latter category tend to be followed by uncivil behavior, explaining why intention to share a fact has an overall association with future incivility.
Our results might suggest strategies for promoting healthier interactions on online discussion platforms. For instance, classifiers that predict intentions and perceptions could signal to people when a comment they are writing may be misperceived by others and suggest strategies (based on the results of our linguistic analysis) for reducing this risk. Still, user studies would be needed to guide the design of such interventions to minimize the risk of unintended negative consequences.
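As a toy illustration of what such an intervention might look like, the heuristic below flags a draft comment whose framing leans opinion-like. The marker lists, weights, and threshold are entirely invented; a real system would use a trained classifier and careful user studies, as noted above:

```python
# Toy sketch of a "this may be misperceived" signal for a draft comment.
# Marker lists and the scoring rule are invented for illustration only.

OPINION_MARKERS = {"i think", "i believe", "honestly", "obviously"}
FACT_MARKERS = {"according to", "the data", "in fact", "source"}

def misperception_risk(draft):
    """Crude score in [0, 1]: higher means more likely read as opinion."""
    t = draft.lower()
    opinion = sum(m in t for m in OPINION_MARKERS)
    fact = sum(m in t for m in FACT_MARKERS)
    return opinion / (opinion + fact) if opinion + fact else 0.5

draft = "Honestly, obviously turnout rose in 2018."
if misperception_risk(draft) > 0.5:
    print("Heads up: this may read as opinion; consider citing a source.")
```

The suggestion text itself could draw on the linguistic findings above, for example nudging writers away from framing that readers associate with opinion sharing.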
There are also several open questions that arise from our findings, which could serve as the basis for future research in this direction. While we have defined “misperception” as merely a misalignment between intentions and perceptions, it is unclear whether such misalignment arises primarily from a mistake on the part of the reader (interpreting an opinion where there is only a fact) or on the part of the commenter (intending to share a fact but framing it in an overly opinionated way). Context might also play a role: Fact sharing might be more prone to misinterpretation in the context of a more controversial post, for instance. It would also be interesting to explore how perceptions change over time—or whether they change at all.
Finally, while our present work looks only at Facebook Pages, the same methodology could be extended to other types of social platforms, both elsewhere on Facebook (e.g., Groups) and beyond.