Faculty Spotlight: Aydin Mohseni
By Stefanie Johndrow
Aydin Mohseni (DC 2015) is an assistant professor in the Department of Philosophy. His research focuses on the philosophy of science and on formal and social epistemology. His work is informed by game and decision theory and Bayesian statistics.
Tell me about your scholarly work.
My research uses applied mathematics to address classically philosophical problems. I’m especially interested in how the norms of “individual” and “collective” inquiry differ — particularly in science. For instance, a change in research practice that makes perfect sense for a single study can, when adopted across an entire field, have surprising or even counterproductive effects.
This line of work bridges theory and practice. I study how we can form reliable beliefs and design better methods of inquiry, often in collaboration with interdisciplinary groups such as the Institute for Complex Social Dynamics, where I’m a core member, and with advocates of the Open Science movement working to address the replication crisis.
For decades, science has faced structural problems that undermine its reliability and efficiency. We now have a clearer view of what’s gone wrong — and how to fix it — but we’ve also learned that reforms interact in subtle ways. Science is a dynamic system of incentives and norms; even well-intentioned changes can backfire.
More recently, I’ve been applying similar mathematical tools to questions about agency, values, and goals in artificial intelligence — especially in AI ethics and safety. Here, too, we must ask what it means for a complex system to represent and realize human values.
How is your scholarly work adding to the greater field?
In studying the replication crisis, my work makes two main contributions.
First, I model how methodological reforms can have counterintuitive system-level effects. One example is what I call the “interventional Simpson’s paradox”: just as the familiar statistical version involves subgroup effects reversing at the aggregate level, a reform that improves the reliability of each individual study can, when applied across the research ecosystem, worsen reliability overall. I characterize the conditions under which we should expect this reversal to occur.
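To give a feel for the kind of reversal involved, here is a minimal sketch of the familiar statistical Simpson’s paradox that the interventional version generalizes. The counts are invented for illustration and are not drawn from Mohseni’s models: treatment A beats treatment B within every subgroup, yet B looks better in aggregate.

    # Classical Simpson's paradox with invented counts: A wins in every
    # subgroup, but the pooled numbers favor B because the two arms are
    # spread unevenly across subgroups.
    groups = {
        "subgroup_1": {"A": (90, 100), "B": (800, 1000)},
        "subgroup_2": {"A": (300, 1000), "B": (20, 100)},
    }

    totals = {"A": [0, 0], "B": [0, 0]}
    for name, arms in groups.items():
        for arm, (successes, trials) in arms.items():
            totals[arm][0] += successes
            totals[arm][1] += trials
            print(f"{name} {arm}: {successes / trials:.2f}")

    for arm, (successes, trials) in totals.items():
        print(f"overall {arm}: {successes / trials:.2f}")

Mohseni’s point is that the same structure can arise for interventions: a reform that helps within each study can still hurt at the level of the whole field.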
Second, I argue for a broader conception of “epistemic goods” in science. Replication is vital, but it’s not the only aim. If we made our publication threshold so stringent that every published result replicated, we would achieve perfect reliability but never discovery. Progress in science requires balancing competing goals: exploration vs. confirmation, novelty vs. trustworthiness. My models analyze how those trade-offs unfold at the population level and what that means for designing better reforms.
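The trade-off can be made concrete with a toy model (my illustration, built on assumed numbers, not Mohseni’s analysis): suppose 10 percent of tested hypotheses are true and every study uses the same fixed design. Tightening the significance threshold used for publication then raises the share of published findings that are true, but shrinks the number of true discoveries that ever get published.

    # Toy reliability-vs-discovery trade-off under assumed numbers:
    # 10% of tested hypotheses are true, and the fixed study design has
    # a standardized detectable effect (noncentrality) of 2.5.
    from scipy.stats import norm

    prior_true = 0.10   # fraction of tested hypotheses that are actually true
    delta = 2.5         # noncentrality of the fixed study design

    for alpha in (0.05, 0.005, 0.0005):
        power = norm.sf(norm.isf(alpha) - delta)   # P(significant | hypothesis true)
        ppv = power * prior_true / (power * prior_true + alpha * (1 - prior_true))
        true_discoveries = 1000 * prior_true * power   # per 1000 studies run
        print(f"alpha={alpha}: share of published findings that are true = {ppv:.2f}, "
              f"true discoveries per 1000 studies = {true_discoveries:.0f}")

In this sketch, stricter thresholds push reliability toward 1 while the flow of genuine discoveries falls, which is the exploration-versus-confirmation tension the models study at the population level.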
In AI alignment, I use game and decision theory to understand and predict the behavior of advanced AI systems. At present, frontier models don’t look like the rational agents these theories describe — their “beliefs” and “values” are often incoherent. But that may change as systems improve. If future AI agents approximate rationality, we face two puzzles: Which theory of rational action will they converge on, and which should they? If we can answer that, we might be able to forecast and guide AI behavior even when it surpasses us in many domains.
How did you become interested in this topic?
I’ve always wanted to understand the natural world as deeply as possible and to contribute, in some small way, to humanity’s collective understanding. That impulse led me to philosophy of science and, eventually, to mathematical modeling — first as a master’s student at Carnegie Mellon, then during my Ph.D. at UC Irvine. Both programs encouraged combining philosophical analysis with formal tools, and I was lucky to have mentors who valued both rigor and application. I like that mathematics forces you to make your assumptions explicit — it’s a kind of intellectual honesty that philosophy, at its best, also aspires to.
What are you most excited to accomplish as a faculty member at Carnegie Mellon?
Two things stand out. First, building a community around the interdisciplinary project of improving the sciences through mathematical and philosophical methods. The aim is a systematic, evidence-based approach to reform that integrates philosophy, statistics, and policy.
Second, teaching Carnegie Mellon students — who are exceptionally bright and reflective — to think more clearly and deeply about the major problems of our time. Helping them develop precision of thought and intellectual courage is one of the most rewarding parts of the job.
What are your goals for the next generation of scholars?
I’d like probabilistic reasoning — particularly Bayesian reasoning — to become a standard part of everyone’s cognitive toolkit, not just in statistics, machine learning and econometrics but in any domain where we make sense of uncertain evidence and act on it.
Understanding probability and decision theory provides principled ways to weigh evidence, compare trade-offs, and act effectively in a complex world. These tools can help identify where our efforts can do the most good.
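As a small worked instance of the kind of evidence-weighing he has in mind (with numbers invented purely for illustration), Bayes’ rule combines a prior with how much likelier the observed evidence is under one hypothesis than another.

    # Bayes' rule with invented numbers: a hypothesis starts at a 1% prior
    # probability, and the observed evidence is ten times likelier if the
    # hypothesis is true than if it is false.
    prior = 0.01
    p_evidence_if_true = 0.80
    p_evidence_if_false = 0.08
    posterior = p_evidence_if_true * prior / (
        p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    )
    print(f"posterior = {posterior:.3f}")  # ~0.092: strong evidence, still improbable

Even strong evidence leaves an initially improbable hypothesis fairly unlikely, and that kind of calibration is part of what a Bayesian habit of mind provides.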
Bayesian reasoning is, as a widespread practice, relatively young: its core ideas were developed by Thomas Bayes and Pierre-Simon Laplace in the eighteenth century, but they only gained broad traction in the late twentieth and early twenty-first centuries. I see this way of thinking as essential to humanity’s chances of making wiser collective decisions. It unifies belief and action under uncertainty, and I believe that if future generations internalize that logic, it will improve our prospects as a species.
The Faculty Spotlight series features new and junior faculty at the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University. Stay tuned for our next installment to learn more about the dynamic and engaging research and scholarly work being conducted in the college.