
Researchers Consider Ways To Add Fairness in Automated Systems
Anupam Datta, an associate professor at Carnegie Mellon University, is leading a $3 million National Science Foundation-funded project to improve automated decision-making systems, which affect everything from online advertising and health care to criminal justice.
"A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness," said Datta, who is based at Carnegie Mellon's Silicon Valley Campus. "These explanations then inform fixes to the system to avoid future violations."
The team includes Matthew Fredrikson, assistant professor of computer science at Carnegie Mellon; Ole Mengshoel, principal systems scientist in electrical and computer engineering at Carnegie Mellon; Helen Nissenbaum, professor of information science at New York University; Thomas Ristenpart, associate professor of computer science at Cornell University; and Michael C. Tschantz, senior researcher at the International Computer Science Institute in Berkeley, California, who received his Ph.D. in computer science from Carnegie Mellon in 2012.
Mengshoel said that defining what privacy and fairness mean for a system that uses machine learning and artificial intelligence can be a challenge.
"But doing so is critical," Mengshoel said, "since these methods are increasingly used to power automated decision systems."
Datta and Tschantz previously conducted research showing significantly fewer women than men were shown online ads promising them help getting jobs paying more than $200,000, raising questions about the fairness of targeting ads online.
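Audits like the one described above compare how often an ad is shown to otherwise-similar profiles that differ in one attribute. As an illustrative sketch only (the impression counts below are invented, and the actual study used a more sophisticated experimental methodology), the core statistical question can be framed as a two-proportion z-test:

```python
import math

def two_proportion_z(shown_a, n_a, shown_b, n_b):
    """Two-sided two-proportion z-test on ad-impression rates.

    Returns (z, p_value): a large |z| and small p_value indicate the
    difference in rates is unlikely to be due to chance alone.
    """
    p_a, p_b = shown_a / n_a, shown_b / n_b
    # Pooled rate under the null hypothesis that both groups are shown
    # the ad at the same underlying rate.
    p_pool = (shown_a + shown_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF, using math.erf.
    p_value = 1 - math.erf(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: impressions of a high-paying-job ad across
# 10,000 simulated male and 10,000 simulated female profiles.
shown_men, n_men = 1852, 10000
shown_women, n_women = 311, 10000

z, p = two_proportion_z(shown_men, n_men, shown_women, n_women)
print(f"rate(men)={shown_men/n_men:.3f}  "
      f"rate(women)={shown_women/n_women:.3f}  z={z:.1f}  p={p:.2g}")
```

A significant result like this flags a disparity but does not by itself explain its cause, which is exactly the gap the project's explanation-driven approach aims to address.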
Fredrikson said the current project will also work on ways to balance intellectual property rights with the privacy of users.
"This project will be a great opportunity to ... improve machine learning to be more privacy friendly," Fredrikson said.