Google wants to make sure AI advances don’t leave anyone behind

The company says its new research initiative will tackle bias in AI and make the technology more accessible

Westfield Hosts Interactive Artificial Intelligence Storytelling For Kids At Pop-Up Indoor Park
“Gather round children, and ye may share my AI wisdoms.”
Photo by Jeff Spicer / Getty Images

For every exciting opportunity promised by artificial intelligence, there’s a potential downside that is its bleak mirror image. We hope that AI will allow us to make smarter decisions, but what if it ends up reinforcing the prejudices of society? We dream that technology might free us from work, but what if only the rich benefit, while the poor are dispossessed?

It’s issues like these that keep artificial intelligence researchers up at night, and they’re also the reason that Google is launching an AI initiative today to tackle some of these same problems. The new project is named PAIR (it stands for “People + AI Research”) and its aim is to “study and redesign the ways people interact with AI systems” and try to ensure that the technology “benefits and empowers everyone.”

Google wants to help everyone from coders to users

It’s a broad remit, and an ambitious one. Google says PAIR will look at a number of different issues affecting everyone in the AI supply chain — from the researchers who code algorithms, to the professionals like doctors and farmers who are (or soon will be) using specialized AI tools. The tech giant says it wants to make AI user-friendly, and that means not only making the technology easy to understand (getting AI to explain itself is a known and challenging problem) but also ensuring that it treats its users equally.

It’s been noted time and time again that the prejudices and inequalities of society often become hard-coded in AI. This might mean facial recognition software that doesn’t recognize dark-skinned users, or a language processing program that assumes doctors are always male and nurses are always female.

Usually this sort of issue is caused by the data that artificial intelligence is trained on. Either the information it has is incomplete, or it’s prejudiced in some way. That’s why PAIR’s first real news is the announcement of two new open-source tools — called Facets Overview and Facets Dive — which make it easier for programmers to examine datasets.

A screenshot of Facets Dive — an open-source tool for examining data used by AI.
Image: Google / PAIR

In the screenshot above Facets Dive is being used to test a facial recognition system. The program is sorting the testers by their country of origin and comparing errors with successful identifications. This allows a coder to quickly see where their dataset is falling short, and make the relevant adjustments.
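The kind of analysis described above — breaking test results down by group and comparing error rates — can be sketched in a few lines of plain Python. This is an illustrative example, not the Facets API itself; the `error_rate_by_group` function and the sample data are hypothetical:

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the error rate per group from (group, was_correct) results."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_correct in records:
        totals[group] += 1
        if not was_correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical face-recognition test results: (country of origin, identified correctly?)
results = [
    ("US", True), ("US", True), ("US", True), ("US", False),
    ("IN", True), ("IN", False), ("IN", False), ("IN", False),
]

rates = error_rate_by_group(results)
```

A large gap between groups — here, a far higher error rate for one country than another — is exactly the signal that tells a coder where their dataset is falling short.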

Currently, PAIR has 12 full-time staff. That’s a small team considering the scale of the problem, but Google says PAIR is really a company-wide initiative — one that will draw in expertise from the firm’s various departments.

More open-source tools like Facets will be released in the future, and Google will also be setting up new grants and residencies to sponsor related research. It’s not the only big organization taking these issues seriously (see also: the Ethics and Governance of Artificial Intelligence Fund and Elon Musk-funded OpenAI), but it’s good to see Google join the fight for a fairer future.