Colloquium on Artificial Intelligence Research and Optimization
Representative Individuals and Fair Machine Learning
Dr. Clinton Castro, Florida International University
Virtual (Zoom; see link below)
May 04, 2022 - 03:00 pm
Zoom link: https://lsu.zoom.us/j/91825043573
It is common to demand that machine learning systems be fair in the following sense: the chances of someone judged by the system receiving an erroneous judgement (e.g., being falsely identified as a criminal) should be independent of their sensitive attributes (e.g., their race, gender, and ethnicity). This paper identifies two problems with this common demand and develops methods for solving them. One problem, which we call "the other reference class problem," involves identifying how, exactly, to categorize sensitive attributes, given that our categorizations of individuals affect the probabilities we assign to them. The other problem, which we call "the equal probabilities talk problem," involves making sense of the imperative to equalize chances. I argue that these two problems are deeply intertwined and offer a method for solving them.
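To make the fairness criterion in the abstract concrete, here is a minimal sketch (not from the talk, and with entirely hypothetical data) of one common way to operationalize it: comparing the false-positive rate, the chance of an erroneous positive judgement, across groups defined by a sensitive attribute.

```python
# Minimal sketch (not from the talk): checking whether the chance of an
# erroneous judgement is independent of a sensitive attribute, here by
# comparing false-positive rates across groups. All data are hypothetical.

def false_positive_rate(labels, predictions):
    """Fraction of true negatives that were falsely judged positive."""
    negative_preds = [p for y, p in zip(labels, predictions) if y == 0]
    if not negative_preds:
        return 0.0
    return sum(negative_preds) / len(negative_preds)

def error_rate_gap(labels, predictions, groups):
    """Largest difference in false-positive rate between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = false_positive_rate([labels[i] for i in idx],
                                       [predictions[i] for i in idx])
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: y = true label, yhat = the system's judgement,
# group = a sensitive-attribute category (here just "A" vs. "B").
y     = [0, 0, 1, 0, 0, 1, 0, 0]
yhat  = [1, 0, 1, 0, 1, 1, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = error_rate_gap(y, yhat, group)
```

Note that the value of `gap` depends on how individuals are partitioned into groups in the first place; that dependence is precisely the kind of issue the "other reference class problem" described above raises.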
Clinton Castro is an assistant professor of philosophy and director of the Certificate in Ethics, Artificial Intelligence & Big Data at Florida International University (FIU) in Miami, Florida. His primary areas of study are information ethics, fair machine learning, and epistemology. His recently published book, Algorithms and Autonomy (co-authored with Adam Pham and Alan Rubel), examines how algorithms in criminal justice, education, housing, elections, and beyond affect autonomy, freedom, and democracy.