AI, Ethics, and the Importance of Diversity in Machine Learning
When Harper Reed gave a speech on AI at a SAS conference last week, his main thesis was about using AI for good. There are massive ethical implications in allowing AI to make critical decisions with full autonomy. He used the story of a Chinese start-up to illustrate what can happen when teams don't consider diversity and ethics.
The start-up was using AI to predict a person's age from a single image of their face. They were very excited about the technology and boasted that it was over 90% accurate. The team started their demo with the founders' faces. Sure enough, the AI predicted their ages perfectly. Next, they passed it over to Harper (click the link to see what he looks like), and the AI predicted his age to be 140 years old. He tried again and got the same result.
The founders were so confused! Why did it work well in testing, but not during the demo? After asking some questions about their dataset, Harper found that the training set contained only images of Asian males in their mid-twenties. It perfectly represented the co-founders because the images had been collected from friends and a network of people they knew. As a result, the AI worked very well for young Asian males, and terribly for the red-haired Harper Reed.
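A simple guard against this failure mode is to audit the composition of the training set before training at all. Below is a minimal sketch of such an audit, assuming each image comes with basic demographic metadata; the field names, example records, and 5% threshold are illustrative assumptions, not details from Harper's story:

```python
from collections import Counter

# Hypothetical metadata for each training image; in practice this would be
# loaded from an annotation file that sits alongside the images.
metadata = [
    {"age_group": "20-29", "sex": "male", "ethnicity": "east_asian"},
    {"age_group": "20-29", "sex": "male", "ethnicity": "east_asian"},
    {"age_group": "60-69", "sex": "female", "ethnicity": "white"},
    # ... one entry per training image
]

def audit_composition(records, fields, warn_below=0.05):
    """Print the distribution of each field and flag under-represented groups."""
    n = len(records)
    for field in fields:
        counts = Counter(r[field] for r in records)
        print(f"\n{field}:")
        for value, count in counts.most_common():
            share = count / n
            flag = "  <-- under-represented" if share < warn_below else ""
            print(f"  {value:15s} {count:6d} ({share:.1%}){flag}")

audit_composition(metadata, fields=["age_group", "sex", "ethnicity"])
```

Had the start-up run even this crude a check, the near-total absence of women, older faces, and non-Asian ethnicities would have been obvious long before the demo.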
There are many examples of AI outputs being misogynistic or racist. Joy Buolamwini, a researcher at the MIT Media Lab, has been fighting bias in algorithms since 2015. While experimenting with facial recognition software, she noticed that it was not detecting her face. She initially assumed the technology was still in its infancy and would have some bugs, but her white colleagues did not have any issues. Joy put on a plain white mask, one that looks nothing like a human face, and the software detected it immediately.
Facial recognition for iPhones is one thing, but imagine if the pedestrian-detection software in a self-driving car were less likely to recognize Black pedestrians than white ones. Joy has created the Algorithmic Justice League to help companies and researchers avoid building bias into their algorithms and to ensure that AI doesn't benefit only those who are represented in its datasets.
The examples above do not show that AI researchers are inherently racist, building solutions only for those who look like them. They instead open up a conversation about how our unconscious biases can lead us to create programs and algorithms that serve only people like us. Training datasets need to represent a diverse group of people, and building diverse datasets starts with diverse AI and machine learning teams.
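Representativeness can also be checked on the output side, by disaggregating evaluation metrics by group rather than reporting a single headline accuracy like the start-up's "over 90%". Here is a minimal sketch for the age-prediction case; the evaluation records, group labels, and 5-year tolerance are illustrative assumptions:

```python
from collections import defaultdict

# Illustrative evaluation records: (true_age, predicted_age, group).
# In a real pipeline these would come from running the model on a held-out set.
results = [
    (25, 26, "young_asian_male"),
    (24, 24, "young_asian_male"),
    (35, 140, "white_male"),
    (52, 48, "black_female"),
    # ...
]

def accuracy_by_group(records, tolerance=5):
    """Report the share of predictions within `tolerance` years, per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for true_age, pred_age, group in records:
        totals[group] += 1
        if abs(pred_age - true_age) <= tolerance:
            hits[group] += 1
    for group in sorted(totals):
        print(f"{group:20s} {hits[group] / totals[group]:.1%} "
              f"({totals[group]} samples)")

accuracy_by_group(results)
```

An aggregate score hides exactly the failure Harper exposed; a per-group breakdown surfaces it before the product ships.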
Diversity is a complex and increasingly political subject. Companies are aware of its benefits, but often fall short on their commitments. When it comes to AI, diversity is not an option; it's a requirement. The product will not work, and companies will not succeed, if their algorithms are biased against a large part of the customer base.
In the examples above, the algorithms' failures were immediately visible: they simply did not respond to people of colour. With that feedback, we need leaders in every field of AI to prioritize diversity and ensure their organizations are focused on building a fair and free society for all.
Mitchell Johnstone
Director of Strategy
Mitch is a strategic AI leader with 7+ years of experience transforming businesses through high-impact AI/ML projects. He combines deep technical acumen with business strategy, honed in roles ranging from AI product management to entrepreneurial ventures. His portfolio includes proven success in driving product development, leading cross-functional teams, and navigating complex enterprise software landscapes.