From Dr. Frankenstein to Jurassic Park’s misguided geneticists, popular culture is full of stories about scientists so absorbed by their creations that they neglect to consider the consequences, while films like “Her” and “Ex Machina” represent our anxieties about what intelligent machines mean for humankind.
Today, AI and robotics face a more realistic version of the problem: the homogeneity of scientists and engineers – in terms of ethnicity, culture and gender – may lead them to design artificial intelligence systems without considering the effects on people different from themselves.
In 2015, three out of four engineers in Europe were men, with figures as high as 85% in Switzerland and Ireland. In the US, only 17% of computer science graduates are women.
That’s not so surprising, given how few women there are in the field and how gender stereotypes continue to flourish in science. The figures are even worse in AI, where research has drifted away from a focus on how the technology can improve people’s lives. In general, many women tend to be drawn to work that benefits their communities, while men tend to be more interested in algorithms and technical properties. As men have come to dominate AI, research has narrowed toward solving technical problems rather than the big questions.
Major companies in the sector, after receiving thousands of applications for AI and data science roles, reported that only 0.1% came from women. And even men working in AI who want opportunities for their young daughters often see the situation simply as: “we’ll hire the best, but we’re not going to go out and seek diversity”.
However, you can’t claim to be hiring the best if you have no diversity at all. Statistically, a number like 0.1% is striking, since women make up about half of the world’s population and artificial intelligence, in and of itself, is genderless and sexless.
Much has been made of the tech industry’s lack of women engineers and executives, but homogeneity poses a unique problem in AI. To teach computers about the world, researchers need to gather massive data sets of almost everything. If these data sets aren’t sufficiently broad, companies can end up creating AIs with built-in biases.
From a machine learning perspective, if we don’t think about gender inclusiveness, the inferences a system makes are more likely to be biased towards the majority group – in our case, affluent white males. And if un-diverse data goes in, then close-minded, inside-the-box, not-very-good results come out.
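The mechanism is simple enough to show in a deliberately toy sketch. The group names and numbers below are invented for illustration, and the “model” just memorizes how often it has seen each group, but the effect is the same one real systems exhibit: a model trained on skewed data is far more reliable on the group it saw most.

```python
from collections import Counter

# Hypothetical training set: 95% of samples come from one demographic
# group ("group_A"), only 5% from another ("group_B").
training_data = [("group_A", "recognized")] * 95 + [("group_B", "recognized")] * 5

# A naive "model" that simply records how often it saw each group.
seen = Counter(group for group, _label in training_data)

def confidence(group):
    # Confidence scales with how well-represented the group was in training;
    # real models are subtler, but they show the same directional skew.
    return seen[group] / sum(seen.values())

print(confidence("group_A"))  # 0.95
print(confidence("group_B"))  # 0.05
```

Nothing in the code is malicious; the skew comes entirely from what went into `training_data`.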
A number of embarrassing AI incidents have happened because of incomplete or flawed datasets. To mention one: Tay, Microsoft’s chatbot released earlier this year, had plenty of un-diverse input and eventually reflected the inclinations of the worst the Internet has to offer. Within 24 hours of its public release, internet users realized that Tay would learn from its interactions, so they tweeted insulting, racist, nasty things at it. The bot simply incorporated that language into its model and started spewing out more of the same.
In most cases, systems aren’t built out of malevolence, but a problem arises whenever a homogeneous group builds a system that is applied to people not represented among its builders.
The bottom line is that if everyone teaching computers to act like humans is a man, then these machines will have a view of the world that is narrow by default, and possibly biased. This is why the industry as a whole needs to do a better job of classifying gender and other diversity signals in training datasets.
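One concrete, if simplified, form such classification could take is a dataset audit: before training, count how each group is represented and flag anything below a chosen share. The group labels, counts, and 20% threshold below are assumptions made up for illustration, not an industry standard.

```python
from collections import Counter

# Hypothetical demographic metadata for a 1,000-sample training set.
sample_groups = ["male"] * 820 + ["female"] * 150 + ["nonbinary"] * 30

def audit(groups, threshold=0.2):
    """Return each group whose share of the data falls below `threshold`."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < threshold}

print(audit(sample_groups))  # {'female': 0.15, 'nonbinary': 0.03}
```

A report like this doesn’t fix a skewed dataset, but it makes the skew visible before the model inherits it.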
Robotics and artificial intelligence don’t just need more women, they need more diversity overall: companies should make efforts to attract people from a wide range of racial and ethnic backgrounds, of different age groups, as well as people with disabilities.
Cultural diversity matters, too: for example, the Japanese are largely at ease with robots, with no Terminator complex about artificial intelligence taking over the world. This owes much to Shinto, which perceives humans and non-human entities as equally important parts of a common whole. As for fields of application, the US has a heavy focus on artificial intelligence in the military, while Europeans are more interested in applications supporting the elderly and people with disabilities.
To conclude, AI systems are built by humans, trained on data that one way or another comes from humans, and optimized for goals that humans set – and every person in that loop brings a bit of their own bias. So as we build machines to make our lives better and easier, it’s always important to ask whose lives we are trying to improve, and in what ways.