Bias in machine learning can be a problem even for companies with plenty of experience with AI, like Amazon. According to a report from Reuters, the e-commerce giant scrapped an internal project that was trying to use AI to vet job applications after the software consistently downgraded female candidates.
Because AI systems learn to make decisions by looking at historical data, they often perpetuate existing biases. In this case, that bias was the male-dominated working environment of the tech world. According to Reuters, Amazon’s program penalized applicants who attended all-women’s colleges, as well as any resumes that contained the word “women’s” (as might appear in the phrase “women’s chess club”).
The team behind the project reportedly intended to speed up the hiring process. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those,” an unnamed source familiar with the work told Reuters. When the company realized the software was not producing gender-neutral results, it was tweaked to remove this bias. However, those involved could not be sure other biases had not crept into the program, and as a result it was scrapped entirely last year.
The program was scrapped at the beginning of last year
Speaking to The Verge, a source at Amazon said that the program had only ever been used in trials, and was never used independently or rolled out to larger groups. The source also noted that the project was abandoned for a number of different reasons, not just the gender bias issue, which was eventually fixed. In a statement, an Amazon spokesperson confirmed that the program was never used in an official capacity, telling The Verge, “This was never used by Amazon recruiters to evaluate candidates.”
Over the past few years, as artificial intelligence has been deployed in more and more contexts, researchers have become increasingly vocal about the dangers of bias. Prejudices about gender and race can easily creep into a range of AI programs — everything from facial recognition algorithms to those used by the courts and hospitals.
In most cases, these programs are simply perpetuating existing biases. With Amazon’s CV scanner, for example, a human recruiter might be equally prejudiced against female candidates on a subconscious level. But by passing these biases on to a computer program, we make them less visible and less open to correction. That’s because we tend to trust decisions from machines, and because AI programs can’t explain their thinking.
Despite this, many startups working on AI recruitment tools explicitly sell their services as a way to avoid bias, because, they say, prejudices can be coded out. Amazon is apparently thinking along these lines too, as Reuters reports that the company is having another go at building an AI recruitment tool, this time “with a focus on diversity.”
Update Wednesday October 10th, 11:20AM ET: Updated with additional comment.
Update Thursday October 11th, 5:43PM ET: Updated with official Amazon comment.