Carnegie Mellon University
February 10, 2026

DebunkBot Uses AI to Make Belief In Conspiracy Theories Crumble

Developed by Carnegie Mellon University’s Thomas Costello, the paper describing DebunkBot has been awarded the AAAS Newcomb Cleveland Prize.

By Jason Bittel

From the moon landing and John F. Kennedy’s assassination to the “fixing” of the NFL, some conspiracy theories have a surprising amount of sway in the public consciousness. At the same time, the internet and social media have created a landscape in which new conspiracy theories can spread and thrive at an alarming pace, sometimes leading to real-world consequences.

A study published in the journal Science shows that after chatting with a large language model known as DebunkBot, many people end up significantly updating their beliefs. The American Association for the Advancement of Science announced today that the study, which was published in 2024, will receive its top honor, the Newcomb Cleveland Prize. The prize has been awarded since 1923 and is reserved for the Association’s most outstanding paper of the year.

“For a very long time, I’ve wondered why people believe such different things about the world,” said Thomas Costello, lead author of the study and assistant professor in the Department of Social and Decision Sciences at Carnegie Mellon University. “And conspiracy beliefs are a particularly interesting and fruitful area of inquiry, because they’re so extreme.”

However, when Costello began looking across the scientific literature, he learned that most research focused on preventing people from going down the conspiracy theory rabbit hole in the first place — not what was effective for getting them out once they were in there. And what he really wanted to test was how people who were firm in their conspiracy belief would respond when presented with an onslaught of information.

The trouble with that, of course, is that most everyday people don’t possess the highly specific knowledge needed to rebut any given conspiracy theory.

“And this is where generative AI comes in,” said Costello, “because it has been trained on not just the whole internet, but books, research articles and official reports from investigatory agencies.”

Next, Costello and his colleagues developed a methodological pipeline that allowed users to interact with a large language model that had been guided to persuade. A user begins by describing the conspiracy belief in as much detail as they like and then rates how much they believe in the conspiracy. In a matter of moments, DebunkBot responds with a calm, cool and collected breakdown of points that explain why the theory is improbable, unlikely or factually untrue. (Unless, of course, the claim is provably true.)

In many trials, one volley of information was enough to convince users, or at least begin to erode their confidence in the belief. However, DebunkBot also allows for an ongoing dialogue, with users able to ask for more information or to take issue with certain points.

“People enter into a back-and-forth conversation, and in some cases, it can become argumentative,” said Costello. “This is a place where AI may have an advantage over humans — because the AI model is tireless.”

After talking with DebunkBot, participants reduced their belief in the conspiracy theory of their choice by an average of 20 percent. What’s more, follow-ups revealed that they remained nearly as skeptical two months later, which means the effect of the interaction with DebunkBot is durable.

In fact, Costello said the results were so strong — four to five times larger than his team had been expecting — that it prompted them to re-check all the data. “I was genuinely surprised by it,” he said.

Since the study was published, Costello has been contacted by teachers who have implemented DebunkBot in their classrooms as a lesson on critical thinking. And now that the model has been updated to use the LLM known as Gemini — in the study, the model relied on GPT-4, which was the state-of-the-art LLM at the time — DebunkBot is better equipped to squash conspiracy beliefs than ever.

“You can use it for yourself, as a critical thinking aid or for finding blind spots,” said Costello. “Or you can use it if someone is saying something that you think is B.S. or you need an external mediator.”

Costello has also built on the work with an array of new papers. For instance, one study showed that DebunkBot was successful in lessening belief in conspiracy theories whether it was characterized as being AI or a human. Another study showed DebunkBot to also be effective at combatting conspiracies about more recent events, such as those surrounding the attempted assassination of President Trump. And another preprint showed that “participants who reported feeling persuaded overwhelmingly cited the AI’s rational, evidence-focused approach,” hinting at the secret behind DebunkBot’s success.

While certainly not a cure-all for conspiracy theories in the digital age, the award-winning findings offer a cause for hope.

“The results give us cause for optimism about individual people’s propensity and willingness to update their beliefs when shown good arguments,” said Costello.


Costello and his coauthors, Gordon Pennycook and David Rand, will accept the Newcomb Cleveland Prize at the AAAS Annual Meeting, which will take place in Phoenix, Arizona, from Feb. 12 to 14, 2026.