The problem with AI ethics

Is Big Tech’s embrace of AI ethics boards actually helping anyone?

Illustration by Alex Castro / The Verge

Last week, Google announced that it is creating a new external ethics board to guide its “responsible development of AI.” On the face of it, this seemed like an admirable move, but the company was hit with immediate criticism.

Researchers from Google, Microsoft, Facebook, and top universities objected to the board’s inclusion of Kay Coles James, the president of right-wing think tank The Heritage Foundation. They pointed out that James and her organization campaign against anti-discrimination laws for LGBTQ groups and sponsor climate change denial, making her unfit to offer ethical advice to the world’s most powerful AI company. An open petition demanding James’ removal was launched (it currently has more than 1,700 signatures), and as part of the backlash, one member of the newly formed board resigned.

Google has yet to say anything about all of this (it didn’t respond to multiple requests for comment from The Verge), but to many in the AI community, it’s a clear example of Big Tech’s inability to deal honestly and openly with the ethics of its work.

Ethics boards and charters aren’t changing how companies operate

This might come as a surprise if you’ve followed recent debates over AI ethics. In the past few years, tech companies certainly seem to have embraced ethical self-scrutiny: establishing ethics boards, writing ethics charters, and sponsoring research in topics like algorithmic bias. But are these boards and charters doing anything? Are they changing how these companies work or holding them accountable in any meaningful way?

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.

“Most of the ethics principles developed now lack any institutional framework,” Wagner tells The Verge. “They’re non-binding. This makes it very easy for companies to look [at ethical issues] and go, ‘That’s important,’ but continue with whatever it is they were doing beforehand.”

Think of it like CEO Jack Dorsey’s repeated assurances that he’s thinking hard about Twitter’s problems with abuse, harassment, and neo-Nazis. He keeps thinking, and things on the site stay pretty much the same. At a certain point, all of this contemplation looks like a substitute for actual policy change.

An ethics explosion

Google isn’t the only company with an ethics board and charter, of course. Its London AI subsidiary DeepMind has one, too, though it’s never revealed who’s on it or what they’re up to. Microsoft has its own AI principles, and it founded its AI ethics committee in 2018. Amazon has started sponsoring research into “fairness in artificial intelligence” with the help of the National Science Foundation, while Facebook has even co-founded an AI ethics research center in Germany.

Despite their proliferation, these programs tend to share fundamental weaknesses, says Rumman Chowdhury, a data scientist and lead for responsible AI at management consultancy Accenture. One of the most significant is a lack of transparency.

Chowdhury notes that many of society’s most important institutions, from universities to hospitals, have, over time, developed effective review boards that represent the institutions’ values in the public interest. But in the case of Big Tech, it’s unclear whose interests are being represented.


“It’s not that people are against governance bodies, but we have no transparency into how they’re built,” Chowdhury tells The Verge. With regard to Google’s most recent board, she says, “This board cannot make changes, it can just make suggestions. They can’t talk about it with the public. So what oversight capabilities do they have?”

When boards do make interventions, only the company that operates them really knows why. When Microsoft set up its AI ethics oversight committee Aether, for example, the company said that “significant sales” had been cut off because of the group’s recommendations. But it’s never explained which customers or applications were vetoed. Where exactly does Microsoft draw the line on unethical uses of AI? Only the company itself knows.

A report last year from research institute AI Now said there’s been a “rush to adopt” ethical codes, but no corresponding introduction of mechanisms that can “backstop these ... commitments.” The report points out that there’s also scant evidence, at either the corporate or the individual level, that these codes and charters have much effect.

One study from 2018 tried to test whether codes of conduct could influence programmers’ ethical decision-making. It quizzed two groups of developers with a set of hypothetical problems they might face at work. Before answering, one group was told to consider a code of ethics issued by the Association for Computing Machinery, while the other group was simply told the fictional firm they worked for had strong ethical principles. The study found that priming test subjects with the code of ethics “had no observed effect” on their answers.

Things are equally discouraging when considering how companies act. Google, for example, only created its AI ethics charter after employees objected to its work helping the Pentagon design analytics tools for drones. Even after that, it continued developing its censored Chinese search engine, a project that will probably involve AI to some degree and that many believe infringes on human rights. (Google says work on this project has stopped, but reports from employees say otherwise.)

IBM offers similar evidence. The company has been vocal about its ethical efforts in AI, and last year, it created an ethnically diverse dataset to help remove racial bias from facial recognition systems. But reports from The Intercept have detailed how, as recently as 2016, the company’s surveillance tech was used by police forces in the Philippines, where thousands have been killed in “extrajudicial executions” as part of a brutal war on drugs. An interest in ethical algorithms doesn’t stop companies from assisting deeply unethical causes.

These problems don’t mean AI ethics boards should be done away with completely. These are hard problems, and discussing best practices for algorithmic systems raises awareness of their potential flaws. With so much expertise working for big tech companies, it would be foolish to shut these firms out of the conversation. But if we are to believe that these projects will be enough to keep society safe from the most harmful effects of new technology, we need to go further.

“Trust us, we’re plenty ethical”

Part of the problem is that Silicon Valley is convinced that it can police itself, says Chowdhury.

“It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she says. The cultural influences of libertarianism and cyberutopianism have made many engineers distrustful of government intervention. But now these companies have as much power as nation states without the checks and balances to match. “This is not about technology; this is about systems of democracy and governance,” says Chowdhury. “And when you have technologists, VCs, and business people thinking they know what democracy is, that is a problem.”

The solution many experts suggest is government regulation. It’s the only way to ensure real oversight and accountability. In a political climate where breaking up big tech companies has become a presidential platform, the timing seems right.

Jack Poulson, a former researcher at Google who resigned over the company’s work on its censored Chinese search engine, says that even the recent backlash over Google’s ethics board is something of a distraction compared to this bigger issue.


“In a sense, we are playing into their hands by polarizing [the issue of] ethics,” Poulson tells The Verge. “The major battle is accountability. And accountability is less likely if we polarize what is currently bipartisan concern over Big Tech.”

Legislation has been shown to work, too. Last week, Facebook was charged by the US government with allowing advertisers to target ads at users based on their ethnicity. It marks the first time federal anti-discrimination laws have been applied in this area. As Ben Carson, secretary of the Department of Housing and Urban Development, put it: “Using a computer to limit a person’s housing choices can be just as discriminatory as slamming a door in someone’s face.” It’s not hard to imagine biased AI systems being prosecuted along similar lines.

The difficult task now is finding crossovers between existing legislation and the harmful impacts of new technology. After that, new legislation needs to be drafted to fill the gaps.

Chowdhury says that since the Cambridge Analytica scandal broke last year, there’s been a renewed interest among US lawmakers to tackle these problems. But a major problem is a lack of familiarity with the technology. As was demonstrated during Mark Zuckerberg’s congressional hearings, the people in charge of reining in these tech companies often know little about the tech itself.

“They are trying their best, absolutely trying their best, to grapple with something that is completely outside their wheelhouse,” says Chowdhury of US politicians. “But a lot of the power — the intellectual knowledge — is held by the very bodies they would regulate.”

That could mean that, as with the establishment of ethics boards, these firms get to shape the way they’re regulated. As Poulson puts it, “Why do tech companies get to choose their own critics?” It’s a dynamic that’s long worked in tech’s favor, and we’re seeing it in action right now, with Zuckerberg calling for regulation of Facebook but on the company’s own terms.

Wagner notes that whatever steps are made to regulate AI, the thinking has to be global, not US-centric. The technologies being developed by Google, Amazon, and others will be deployed around the world, and decisions made in America will affect more than just Americans. “We saw this with stuff like Facebook going into Myanmar,” he says. “This is a global challenge.”