DeepMind launches new research team to investigate AI ethics


The Google-owned company will publish research on the effects of AI on society

Photo by Paul Marotta/Getty Images for TechCrunch

Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problem of managing bias in AI systems, the coming economic impact of automation, and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including The AI Now Institute at NYU, and the Leverhulme Centre for the Future of Intelligence.

In a blog post announcing the team, co-leads Verity Harding and Sean Legassick write that DMES will help DeepMind “explore and understand the real-world impacts of AI.” As examples of what this work will look like, they point to investigations into racism in criminal justice algorithms and discussions of topics like the ethics of crash decisions made by driverless cars. “If AI technologies are to serve society, they must be shaped by society’s priorities and concerns,” write Harding and Legassick.

DeepMind itself is only too aware of these challenges, having been criticized last year for its work with the UK’s National Health Service (NHS). A deal DeepMind struck with three London hospitals in 2015, under which it processed medical data belonging to 1.6 million patients, was ruled illegal this year by the UK’s data watchdog, in part because the company failed to inform patients that their data was being used. DeepMind later said it had “underestimated the complexity of the NHS and of the rules around patient data,” and it hired new independent ethical reviewers to examine any future deals.

Although the creation of DMES is evidence that DeepMind is actively and openly considering how AI will affect society, the company will continue to face questions about the ethical implications of its own work. DMES researchers will reportedly operate in parallel to the staff building DeepMind’s products rather than having a hand in their creation, and the company’s internal ethics review board remains shrouded in mystery. There are, naturally, limits to how transparent a private company working on cutting-edge technology can be, but presumably DMES can tackle this topic as well.