These are the projects Elon Musk is funding to prevent killer AI

Elon Musk has donated millions to the Future of Life Institute, and now the organization is putting that money to use by funding research into keeping artificial intelligence "robust and beneficial" — i.e., not something that will turn on humanity, Skynet-style. The institute announced this week that it will issue grants to 37 research teams, whittled down from a pool of around 300 applicants. The teams are tackling the killer AI problem from different angles, including teaching AI to learn what humans want, aligning robots' interests with our own, and keeping AI under human control.

The institute provided this summary highlighting some of the grants:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under "meaningful human control"
  • A new Oxford-Cambridge research center for studying AI-relevant policy

A full list of the grant winners, as well as descriptions of their projects, is available at the institute's site.

About $7 million in total will be split among the research teams, with funding coming from Musk and the Open Philanthropy Project. Most of the projects should begin work this September, and the institute intends to keep them funded for up to three years. The program also has $4 million remaining, which will be distributed later as the institute determines which areas of research look most promising.

The research also addresses more practical questions

While there's a lot of talk about this research being meant to prevent a Terminator situation, the institute is now explicitly trying to play down that kind of language. "The danger with the Terminator scenario isn't that it will happen, but that it distracts from the real issues posed by future AI," Future of Life Institute president Max Tegmark says in a statement. "We're staying focused, and the 37 teams supported by today's grants should help solve such real issues." It's a fair point: the institute is thinking about some very practical concerns, including how to optimize AI's economic impact so that it doesn't create further income inequality by destroying jobs. It's also looking into how AI should handle ethical dilemmas, such as choosing between harmful outcomes in an unavoidable car crash.

That said, there are still some killer robot concerns in the mix. Musk, the CEO of Tesla and SpaceX, has stated quite clearly that this is a concern of his. He has brought up Terminator in the past while discussing the evolution of AI, and he has said that AI has the potential to be "more dangerous than nukes." Part of the Future of Life Institute's goal is to prevent that possibility by making sure AI remains under human control and follows humanity's best interests. These first 37 projects are the beginning of that work. It may seem early, but Musk and others behind the institute would rather see the research done now than after powerful AI already exists.