
Teaching People to Counter Misinformation, Not Just Spot It
By Catherine King
Associated Publication:
King, C. & Carley, K. M. Promoting Social Corrections: A Media Literacy Intervention for Misinformation on Social Media. In International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, pp. 223-232. Cham: Springer Nature Switzerland, 2025.
Image Credit: DALL-E
Keywords: misinformation interventions, media literacy, social corrections, user behavior
New research explores training people to respond to or report false or misleading information when they see it online.
Can we train people to speak up when they see misinformation on social media?
Most media literacy efforts focus on training people to distinguish true information from false. However, learning to recognize accurate information doesn’t automatically stop inaccurate information from spreading online. Many of us have seen false or misleading content in our feeds but quickly scrolled past it, sometimes deciding it was not worth engaging with.
Our recent study explores a different approach to media literacy: training people to actively respond to misinformation when they see it online rather than ignoring it. This work covers both “social corrections,” everyday direct actions such as commenting with accurate information, and lower-effort actions such as reporting misleading content.
Study design
We ran an experiment with a group of government analysts who had signed up for a social cybersecurity training program. They completed a short, interactive session on how and why to counter misinformation. Before and after the training, participants were shown social media posts explicitly labeled as false and asked whether and how they would respond. They could select one or more of the actions described in the table below.

They were also asked whether, and how, the identity of the poster and the platform the post appeared on would influence their response. The goal wasn’t to make these already highly skilled analysts better fact-checkers, but to see whether the training could make them more willing to intervene.
Finding 1: People became more willing to speak up
After the training, we found that participants were more likely to say they would act when encountering misinformation in their feeds, especially using higher-effort actions such as commenting publicly or messaging the poster privately. This shift mainly came from people who were already taking low-effort steps (like reporting posts) before the training.
Finding 2: Social context is important
Overwhelmingly, people said they were much more likely to correct someone they knew than a stranger. Many mentioned that close friends and family members felt more worth the effort. At the same time, people were less likely to counter posts expressing very extreme false beliefs (e.g., flat Earth), feeling that any effort would be pointless or would only spark conflict. These findings reinforce the idea that misinformation is not purely a content problem but also a social one.
Finding 3: Social media type and design matter
Some participants also noted that platform features would influence whether they acted. Some felt more comfortable correcting others on platforms that offer anonymity (like Reddit), since anonymity reduced the risk of sparking conflict. Others said that if a platform had easy-to-use reporting features and a history of taking reports seriously, they would be more likely to report misleading content.

Implications for platforms and policymakers
These findings point to several future directions for platform design and media literacy efforts. Potential next steps include:
- Investing in platform functionality that makes it easier for users to correct and report misleading posts and accounts.
- Designing and testing media literacy programs that focus on potential response strategies rather than just misinformation detection, especially in communities where trust and existing relationships can support constructive engagement.
Taken together, these steps emphasize that user intervention can complement, rather than replace, existing moderation and fact-checking systems.
Takeaways
This research highlights the importance of empowering everyday people to speak up in their own online communities:
- Community Engagement: When individuals share their thoughts and experiences, it can strengthen community bonds and foster open dialogue.
- Support for Moderation Efforts: User interventions can complement existing content moderation and fact-checking strategies, potentially making them more effective.
- Increased Awareness: Active participation by empowered users can help identify and combat misinformation, leading to a more informed online community.
- Diverse Perspectives: Everyday users bring unique viewpoints that can enrich discussions and contribute to a better understanding of the issues.
- Collective Action: When people act, even in small ways, those efforts can add up to meaningful collective responses to misinformation.
