
When algorithms go wrong we need more power to fight back, say AI researchers

The public doesn’t have the tools to hold algorithms accountable

Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold these systems accountable when they fail. That’s one of the major conclusions in a new report issued by AI Now, a research group affiliated with New York University that counts employees of tech companies like Microsoft and Google among its members.

The report examines the social challenges of AI and algorithmic systems, homing in on what researchers call “the accountability gap” as this technology is integrated “across core social domains.” They put forward ten recommendations, including calling for government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week) and “truth-in-advertising” laws for AI products, so that companies can’t simply trade on the reputation of the technology to sell their services.

Big tech companies have found themselves in an AI gold rush, charging into a broad range of markets, from recruitment to healthcare, to sell their services. But, as AI Now co-founder Meredith Whittaker, leader of Google’s Open Research Group, tells The Verge, “a lot of their claims about benefit and utility are not backed by publicly accessible scientific evidence.”

Whittaker gives the example of IBM’s Watson system, which, during trial diagnoses at Memorial Sloan Kettering Cancer Center, gave “unsafe and incorrect treatment recommendations,” according to leaked internal documents. “The claims that their marketing department had made about [their technology’s] near-magical properties were never substantiated by peer-reviewed research,” says Whittaker.

2018 has been a year of “cascading scandals” for AI

The authors of AI Now’s report say this incident is just one of a number of “cascading scandals” involving AI and algorithmic systems deployed by governments and big tech companies in 2018. Others range from accusations that Facebook helped facilitate genocide in Myanmar, to the revelation that Google is helping to build AI tools for military drones as part of Project Maven, to the Cambridge Analytica scandal.

In all these cases there has been public outcry as well as internal dissent in Silicon Valley’s most valuable companies. The year saw Google employees quitting over the company’s Pentagon contracts, Microsoft employees pressuring the company to stop working with Immigration and Customs Enforcement (ICE), and employee walkouts from Google, Uber, eBay, and Airbnb protesting issues involving sexual harassment.

Whittaker says these protests, supported by labor alliances and research initiatives like AI Now’s own, have become “an unexpected and gratifying force for public accountability.”

This year saw widespread protests against the use of AI, including Google’s involvement in building drone surveillance technology.
Photo by John Moore/Getty Images

But the report is clear: the public needs more. The danger to civic justice is especially clear when it comes to the adoption of automated decision systems (ADS) by government. These include algorithms used for calculating prison sentences and allotting medical aid. Usually, say the report’s authors, this software is introduced with the purpose of cutting costs and increasing efficiency. But the result is often systems that make decisions which cannot be explained or appealed.

AI Now’s report cites a number of examples, including that of Tammy Dobbs, an Arkansas resident with cerebral palsy who had her Medicaid-provided home care cut from 56 hours to 32 hours a week without explanation. Legal Aid successfully sued the State of Arkansas, and the algorithmic allocation system was judged unconstitutional.

Whittaker and fellow AI Now co-founder Kate Crawford, a researcher at Microsoft, say the integration of ADS into government services has outpaced our ability to audit these systems. But, they say, there are concrete steps that can be taken to remedy this. These include requiring technology vendors that sell services to the government to waive trade secrecy protections, thereby allowing researchers to better examine their algorithms.

“If we want public accountability we have to be able to audit this technology.”

“You have to be able to say, ‘You’ve been cut off from Medicaid, and here’s why,’ and you can’t do that with black box systems,” says Crawford. “If we want public accountability, we have to be able to audit this technology.”

Another area where immediate action is needed, say the pair, is the use of facial recognition and affect recognition. The former is increasingly being used by police forces in China, the US, and Europe. Amazon’s Rekognition software, for example, has been deployed by police in Orlando and Washington County, even though tests have shown that the software can perform differently across different races. In a test where Rekognition was used to identify members of Congress, it had an error rate of 39 percent for non-white members compared to only five percent for white members. As for affect recognition, where companies claim their technology can scan someone’s face and read their character and even intent, AI Now’s authors say companies are often peddling pseudoscience.

Despite these challenges, though, Whittaker and Crawford say that 2018 has shown that when the problems of AI accountability and bias are brought to light, tech employees, lawmakers, and the public are willing to act rather than acquiesce.

With regard to the algorithmic scandals incubated by Silicon Valley’s biggest companies, Crawford says: “Their ‘move fast and break things’ ideology has broken a lot of things that are pretty dear to us, and right now we have to start thinking about the public interest.”

Says Whittaker: “What you’re seeing is people waking up to the contradictions between the cyber-utopian tech rhetoric and the reality of the implications of these technologies as they’re used in everyday life.”