Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla

Musk will remain a donor to the nonprofit organization, which focuses on AI safety and ethics

Photo: Elon Musk presents SpaceX plans to colonize Mars. Mark Brake/Getty Images

Tech billionaire Elon Musk is leaving the board of OpenAI, the nonprofit research group he co-founded with Y Combinator president Sam Altman to study the ethics and safety of artificial intelligence.

The move was announced in a short blog post, which explains that Musk is leaving in order to avoid a conflict of interest between OpenAI’s work and the machine learning research Tesla is doing to develop autonomous driving. “As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon,” says the post. Musk will stay on as a donor to OpenAI and will continue to advise the group.

The blog post also announced a number of new donors, including video game developer Gabe Newell, Skype founder Jaan Tallinn, and the former US and Canadian Olympians Ashton Eaton and Brianne Theisen-Eaton. OpenAI said it was broadening its base of funders in order to ramp up investments in “our people and the compute resources necessary to make consequential breakthroughs in artificial intelligence.” For those concerned about the near-term impact of AI on areas like surveillance and propaganda, this work is crucial.

OpenAI was founded just two years ago, but has quickly become a significant voice in the global machine learning community. Its research has been wide-ranging, from teaching computers to control robots with minimal instruction (known as “one-shot learning”) to creating AI agents that play the popular video game Dota (a more daunting challenge than board games like chess).

Just this week, the institute contributed to a multidisciplinary report outlining the ways AI could be used maliciously over the next five years. (A different and far more immediate set of concerns than the specter of evil superintelligent AI sometimes raised by figures including Musk himself.) But for an organization dedicated to keeping advances in artificial intelligence safe, OpenAI certainly has its work cut out for it.