AI with a heart: How artificial intelligence can uncover biases

Today’s forward-thinking businesses—and particularly their HR teams—are well versed in the benefits of artificial intelligence and machine learning.
Many have already started leveraging these technologies to support their most important business needs, including automating processes, gaining insight through data analysis, and engaging with customers and employees.
The use of AI has been a boon for busy HR professionals in a time when a single job listing can easily get thousands of applicants.
Reading through so many résumés is a daunting task, whether you’re at a small company with a few job postings or an industrial giant with hundreds.
Leveraging AI to read and evaluate applicants and make hiring recommendations can make this work far easier and more efficient.
Bias free?
However, a key part of any AI strategy is ensuring that systems are free from bias—and this is especially true during the hiring process, to avoid discriminating against qualified candidates.
In 2018, Amazon discarded a system for screening résumés because it was penalizing women; listing experiences such as having played on a women’s chess team or having attended a women’s college caused applicants to be downgraded.
AI models are only as good as the datasets they’re trained on. Amazon’s system was trained on the company’s own historical hiring data, and because most of those past hires were male, the algorithm learned to associate successful applications with male-oriented language.
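To see how this kind of skew creeps in, here is a deliberately simplified sketch (synthetic data, plain word counts rather than a real machine-learning model): a scorer trained on a male-skewed hiring history ends up penalizing any résumé that contains the word “women’s,” even though the rest of the résumé is identical.

```python
from collections import Counter

# Synthetic résumé snippets with past hiring outcomes that mirror a
# male-skewed history. All data here is invented, for illustration only.
training = [
    ("led engineering team executed projects", True),
    ("captain of chess club executed deliverables", True),
    ("executed aggressive growth strategy", True),
    ("captain of women's chess club", False),
    ("attended women's college led projects", False),
    ("organized women's coding society", False),
]

def word_weights(data):
    """Give each word a crude score: how often it appears in hired
    résumés minus how often it appears in rejected ones."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in data:
        (hired if was_hired else rejected).update(text.split())
    return {w: hired[w] - rejected[w] for w in set(hired) | set(rejected)}

def score(text, weights):
    """Score a résumé by summing the weights of its words."""
    return sum(weights.get(w, 0) for w in text.split())

weights = word_weights(training)
neutral = "captain of chess club led projects"
gendered = "captain of women's chess club led projects"
print(score(neutral, weights), score(gendered, weights))
```

Because “women’s” appears only in the rejected examples, it picks up a negative weight, and the gendered résumé scores lower than the otherwise identical neutral one. Real systems use far more sophisticated models, but the underlying failure mode is the same: the model faithfully reproduces whatever pattern the historical data contains.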
AI systems also often discriminate against people of color. Research by Joy Buolamwini and Timnit Gebru revealed that commercial gender-classification algorithms are most accurate for lighter-skinned men and least accurate for darker-skinned women, with error rates for the latter group reaching roughly 35 percent.