Maritza Johnson Joins BIML to Discuss Social Justice, Bias, and ML
As our MLsec work makes abundantly clear, data play a huge role in the security of an ML system. We estimate that roughly 60% of all security risk in ML can be directly associated with data. And data are biased in ways that lead to serious social justice problems, including racism, sexism, classism, and xenophobia. We’ve read a few ML bias papers (see the BIML Annotated Bibliography for our commentary). It turns out that social justice in ML is a thorny and difficult subject.
We were joined this week by Maritza Johnson, a computer scientist and the inaugural director of a new center for data science, AI, and society at the University of San Diego. Maritza assigned us some homework (reading Chapters One and Four of Data Feminism and this blog entry, and watching Coded Bias), and then led us in a very interesting and far-ranging conversation on bias in ML.
We recorded our conversation with Maritza, which you can listen to; a video of the conversation appears below.