Data science takes on racial bias
Amid a wrenching nationwide conversation on race and policing, a panel of expert data scientists asks, “Are our algorithms racially biased?”
Their answer is, not surprisingly, yes. More importantly, however, they discuss the many ways that data-intensive sciences – from medicine to artificial intelligence to criminology – can better understand and anticipate bias and the ways it manifests in society.
Join moderator Anthony Kinslow II, PhD, an entrepreneur and lecturer in civil and environmental engineering, and his guests, Sharad Goel, an assistant professor of management science and engineering with expertise in computer science, sociology and the law, and Allison Koenecke, a doctoral candidate in the Institute for Computational and Mathematical Engineering.
Goel founded and directs the Stanford Computational Policy Lab, which specializes in addressing bias in the data platforms that shape governmental decision-making. One of the group’s efforts, the Stanford Open Policing Project, uses data approaches to analyze policing to inform and inspire meaningful reform. Koenecke works on the Fair Speech project, which studies increasingly popular speech recognition algorithms and the ways they may be biased against Black and non-native speech.
In the end, all agree that the problem is real, it’s harmful, and it must be addressed. The answers will not be easy, but they are out there, and the reward for success will be a safer, fairer, and more just America.
Stanford Engineering’s Engineering for All explores how engineering can become more inclusive in education and in design. Watch the full Engineering for All video series.