Measuring the Behavioral Effects of Misinformation Exposure on Twitter
By Daniele Bellutta
Photo credit @rachelbotsman (Twitter, posted 9/13/21 11:38am)
From the tactics used to propagate conspiracy theories to the strategies for correcting people’s misconceptions, social cybersecurity research has studied nearly every facet of how misinformation operates in human society. Crucially, however, the ultimate stage of misinformation’s impact on democracies, namely its translation into influence on people’s beliefs and behaviors, has received distressingly little attention compared to the engineering of powerful systems for identifying and labeling false stories. Furthermore, the research that has examined misinformation’s effects on the electoral process has thus far yielded conflicting results.
Though at least two survey studies [1, 2] have linked belief in false stories to voters defecting from one political party to another, other work has called into question the effectiveness of online misinformation at changing people’s ideologies and reaching wide audiences. Two studies that analyzed people’s online information “diets” found that low-credibility Web sites constituted only a small part of most people’s information consumption [3] and that the vast majority of misinformation was viewed by a strikingly small proportion of people [4]. Similarly, though a 2020 survey [5] found that exposure to COVID-19 misinformation reduced respondents’ intentions to get vaccinated, a study of Twitter users who interacted with Russian troll accounts in late 2017 [6] failed to find a significant effect on those people’s ideologies. Given these conflicting accounts of misinformation’s impact on voters’ perceptions and practices, democracies around the world are in dire need of more behavioral research tracing misinformation’s pathway from appearing on screens to swaying citizens’ minds.
To explore how Twitter users change their behavior after exposure to potential misinformation, we collected tweets from users who interacted with low-credibility Web sites or the Twitter accounts those sites controlled [7]. Over the course of two weeks, we identified a sample of 2,500 users who shared links to low-credibility sites and 2,500 users who replied to tweets published by those sites’ accounts. For each user, we collected fifty tweets sent before and fifty tweets sent after the user’s interaction with the low-credibility source. Since we were interested in analyzing changes in human behavior, we then used the BotHunter machine learning model [8] to filter bots out of our data set. Finally, we used the NetMapper software [9] to quantify various features of how people wrote their tweets, such as the number of expletives they used.
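As a rough illustration of this collection and filtering step, the sketch below windows a user’s timeline around the moment of exposure and drops likely bots. The tweet record fields, the in-memory list, and the 0.6 bot-probability cutoff are assumptions made for the example, not details reported in the study.

```python
from datetime import datetime, timezone

BOT_THRESHOLD = 0.6  # hypothetical bot-probability cutoff, not the study's actual value
WINDOW = 50          # tweets kept on each side of the interaction

def split_timeline(tweets, exposure_time):
    """Return the 50 tweets before and 50 tweets after the exposure timestamp."""
    ordered = sorted(tweets, key=lambda t: t["created_at"])
    before = [t for t in ordered if t["created_at"] < exposure_time][-WINDOW:]
    after = [t for t in ordered if t["created_at"] >= exposure_time][:WINDOW]
    return before, after

def keep_user(bot_probability):
    """Keep only users the bot classifier does not flag as likely automated."""
    return bot_probability < BOT_THRESHOLD

# Toy usage with a two-tweet timeline
timeline = [
    {"created_at": datetime(2021, 3, 1, tzinfo=timezone.utc), "text": "sunny today #weather"},
    {"created_at": datetime(2021, 3, 3, tzinfo=timezone.utc), "text": "wow #breaking"},
]
before, after = split_timeline(timeline, datetime(2021, 3, 2, tzinfo=timezone.utc))
```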
As can sometimes happen in research, our first interesting observation occurred accidentally during our data processing. Approximately 71% of the users who shared links to low-credibility sites appeared to be bots, which was a much larger proportion than the 51% of users who had replied to the corresponding Twitter accounts. This incidental result suggests that malicious bots are more likely to simply share links to misinformation than engage with existing content by replying to tweets published by low-credibility accounts.
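To put the gap between those two bot rates in rough statistical perspective, one could run a standard two-proportion z-test as sketched below. This check is purely illustrative and was not part of the study’s analysis; it simply takes the reported percentages and the 2,500-user samples at face value.

```python
from math import sqrt

def two_proportion_z(hits1, n1, hits2, n2):
    """Z statistic for the difference between two independent proportions."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Roughly 71% of 2,500 link sharers vs. 51% of 2,500 repliers flagged as bots
z = two_proportion_z(round(0.71 * 2500), 2500, round(0.51 * 2500), 2500)
print(f"z = {z:.1f}")  # |z| far above 1.96 would make chance an unlikely explanation
```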
During our planned analysis of the collected tweets, we then made two significant findings. First, exposure to low-credibility information appeared to be more effective at changing what people discussed than at changing how they phrased their tweets. When comparing the similarity of each user’s hashtag choices immediately before and after exposure to a baseline similarity value computed from before the interaction, we found a statistically significant drop in similarity amongst some users. In other words, interacting with a low-credibility source was associated with a greater change in a user’s choice of hashtags. This result stood in contrast to the lack of significant change in features describing how a user’s tweets were written, such as their reading difficulty. Misinformation may therefore influence people over time by shifting what they think about and discuss rather than by immediately causing behavioral changes.
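A minimal sketch of this kind of comparison appears below. It measures hashtag overlap with Jaccard similarity and derives the baseline by splitting the pre-exposure window in half; both of those choices, along with the helper names, are assumptions made for illustration rather than the exact metric used in the study.

```python
import re

def hashtags(tweets):
    """Collect the lowercased hashtags used across a window of tweet texts."""
    return {tag.lower() for text in tweets for tag in re.findall(r"#(\w+)", text)}

def jaccard(a, b):
    """Set overlap between two hashtag vocabularies (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def hashtag_shift(before, after):
    """Compare pre/post-exposure similarity against a pre-exposure-only baseline."""
    half = len(before) // 2
    baseline = jaccard(hashtags(before[:half]), hashtags(before[half:]))
    exposure = jaccard(hashtags(before), hashtags(after))
    return baseline - exposure  # larger values suggest a bigger topical shift

# Toy example: the user pivots from #weather chatter to #election talk
before = ["sunny again today #weather", "still raining #weather #boston"]
after = ["can't believe this thread #election", "#turnout numbers look odd"]
print(hashtag_shift(before, after))
```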
Our second notable result revealed that the type of exposure to misinformation matters for how much it may influence a person’s behavior. As it turned out, the previously mentioned change in people’s hashtag choices was only statistically significant for users who had replied to tweets sent by the low-credibility accounts, not for those who had simply shared links. This suggests that greater engagement with low-credibility content (such as replying to a tweet rather than just tweeting or retweeting a link) could signal a greater chance that a person’s behavior will change. Our work consequently provides empirical evidence for the intuition that misinformation generating a lot of active engagement should be more concerning than links to false stories that are simply being shared and re-shared.
Since at least 2016, the pervasiveness of online misinformation has occupied part of the public consciousness through repeated media reporting and public warnings. The susceptibility of mass social networking sites to the rapid spread of lies and conspiracies has generated grave concerns in democratic societies, where effective decision-making relies on a well-informed electorate. Our analysis of Twitter users’ tweets before and after exposure to low-credibility information contributes to the small but necessary body of work studying the influence of misinformation on citizens’ behaviors and beliefs. As research into misinformation’s ultimate effects on democratic societies continues, we also hope to help resolve the conflicting accounts of precisely how misinformation impacts people’s actions and ideologies.
References
1. Gunther, R., Beck, P.A., & Nisbet, E.C. (2019). “‘Fake news’ and the defection of 2012 Obama voters in the 2016 presidential election”. Electoral Studies.
2. Zimmermann, F., & Kohring, M. (2020). “Mistrust, disinforming news, and vote choice: A panel survey on the origins and consequences of believing disinformation in the 2017 German parliamentary election”. Political Communication 37(2): 215–237.
3. Guess, A.M., Nyhan, B., & Reifler, J. (2020). “Exposure to untrustworthy websites in the 2016 US election”. Nature Human Behaviour 4: 472–480.
4. Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). “Fake news on Twitter during the 2016 U.S. presidential election”. Science 363(6425): 374–378.
5. Loomba, S., de Figueiredo, A., Piatek, S.J., de Graaf, K., & Larson, H.J. (2021). “Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA”. Nature Human Behaviour 5: 337–348.
6. Bail, C.A., Guay, B., Maloney, E., Combs, A., Hillygus, D.S., Merhout, F., Freelon, D., & Volfovsky, A. (2020). “Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017”. Proceedings of the National Academy of Sciences 117(1): 243–250.
7. Bellutta, D., & Carley, K.M. (2021). “Comparing the behavioral effects of different interactions with sources of misinformation”. Presented at SBP-BRiMS 2021.
8. Beskow, D., & Carley, K.M. (2018). “Bot-hunter: A tiered approach to detecting and characterizing automated activity on Twitter”. Presented at SBP-BRiMS 2018.
9. Netanomics (2021). NetMapper. https://netanomics.com/netmapper/
