Lack of representation in AI puts vulnerable people at risk
In April 2019, the AI Now Institute at New York University published the research report Discriminating Systems: Gender, Race, and Power in AI. The report concluded that there is a diversity crisis in the AI sector across gender and race, and the authors called for an acknowledgement of the serious risks posed by this power imbalance. AI researchers find time and time again that bias in AI reflects historical patterns of discrimination. When that same pattern of discrimination repeats itself in the very workforce that calls it out as a problem, it’s time to wake up and call out the bias within the workforce itself.
The most striking and illustrative example came less than a month before the report: Stanford launched its Institute for Human-Centered Artificial Intelligence, and of the 121 faculty members featured on its web page, not a single person was black.
A full year before these findings, in an interview with MIT Technology Review, Timnit Gebru described the core of the problem eloquently, already referring to a “diversity crisis”:
There is a bias to what kinds of problems we think are important, what kinds of research we think are important, and where we think AI should go. If we don’t have diversity in our set of researchers, we are not going to address problems that are faced by the majority of people in the world. When problems don’t affect us, we don’t think they’re that important, and we might not even know what these problems are, because we’re not interacting with the people who are experiencing them.
Timnit Gebru is an Ethiopian computer scientist and the technical co-lead of the Ethical Artificial Intelligence Team at Google. She cofounded the Black in AI community of researchers in 2016 after she attended an artificial intelligence conference and noticed that she was the only black woman out of 8,500 delegates.
While I’m generally not surprised by systemic discrimination, I am surprised by the lack of self-awareness on display when representative groups within AI are put together. It is quite safe to assume that most AI researchers have read the AI Now Institute’s report, have been interviewed by newspapers about it, and have stood on stage addressing it. Specialists see the problem, yet often fail to see how they may be part of it.
This is why I was especially concerned to see the composition of a new AI advisory group at DIGG, Sweden’s government agency for digitalization of the public sector. The 8-person team appears to consist of 7 white men and 1 white woman, all around 40 years of age or older. In Sweden, one of the world’s most gender-egalitarian countries, this is especially disappointing.
Of course it doesn’t have to be like this. And it shouldn’t. In less than 30 minutes, my friend Marcus Österberg and I put together a list of 15 female and/or nonwhite AI experts in Sweden. We obviously shouldn’t stop there, and as many point out today, including Sarah Myers West, Meredith Whittaker and Kate Crawford in their research paper, “a focus on ‘women in tech’ is too narrow and likely to privilege white women over others”. From the report:
We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI. The vast majority of AI studies assume gender is binary, and commonly assign people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.
In the end, the diversity crisis is about power. It affects who benefits from the development of AI-powered tools and services. We need a more equitable focus on including under-represented groups. Because if those in power fail to see the issues of vulnerable people, and keep failing to see how representation matters, the issues of the most privileged will remain the only ones on the agenda.
Further reading/listening
The research report: AI Now Institute, Discriminating Systems: Gender, Race, and Power in AI.
Understand the problem of bias: Can you make AI fairer than a judge? Play our courtroom algorithm game, an interactive article by Karen Hao.
“We’re in a diversity crisis”: cofounder of Black in AI on what’s poisoning algorithms in our lives (MIT Technology Review)
Podcast, CBC Radio: How algorithms create a ‘digital underclass’. Princeton sociologist Ruha Benjamin argues bias is encoded in new tech. Benjamin is also the author of Race After Technology.
Podcast, Clear+Vivid with Alan Alda: Is There a Revolution for Women In Science? Are Things Finally Changing? I can really recommend this episode, in which Alan Alda and his producers speak with some of today’s most outspoken advocates for professional women in the STEM fields, including Melinda Gates, Jo Handelsman, Nancy Hopkins, Hope Jahren, Pardis Sabeti, Leslie Vosshall, and many more.
In Swedish, by Marcus Österberg: DIGG:s AI-referensgrupp 88% vita män (“DIGG’s AI reference group: 88% white men”).