Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla

Photo by Mark Brake/Getty Images

Tech billionaire Elon Musk is leaving the board of OpenAI, the nonprofit research group he co-founded with Y Combinator president Sam Altman to study the ethics and safety of artificial intelligence.

The move was announced in a short blog post, explaining that Musk is leaving in order to avoid a conflict of interest between OpenAI’s work and the machine learning research done by Tesla to develop autonomous driving. “As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon,” says the post. Musk will stay on as a donator to OpenAI and will continue to advise the group.

The blog post also announced a number of new donors, including video game developer Gabe Newell, Skype founder Jaan Tallinn, and the former US and Canadian Olympians Ashton Eaton and Brianne Theisen-Eaton. OpenAI said it was broadening its base of funders in order to ramp up investments in “our people and the compute resources necessary to make consequential breakthroughs in artificial intelligence.” For those concerned about the near-term impact of AI on areas like surveillance and propaganda, this work is crucial.

OpenAI was founded just two years ago, but has quickly become a significant voice in the global machine learning community. Its research has been wide-ranging, including teaching computers to control robots with minimal instruction (known as “one-shot learning”) and the creation of AI agents to play the popular video game Dota (a more daunting challenge than board games like chess).

Just this week, the institute contributed to a multi-disciplinary report outlining the ways AI could be used maliciously over the next five years. (A different and much more real set of concerns than the specter of evil superintelligent AI sometimes raised by figures including Musk himself.) But for an organization concerned with keeping advances in artificial intelligence safe, OpenAI certainly has its work cut out for it.


makes it sound like Tesla is working on unethical AI…

But then you thought twice.

to be fair, the headline has changed since I first read it.

When working on AI for self-driving cars, their creators have to make ethically difficult decisions. They need to define rules for how to react in dangerous situations. E.g. sometimes the AI has to decide whether to hit a pedestrian (which may kill them) or an oncoming truck (which may kill the car’s driver/passengers).

When drivers have to make these decisions in fractions of a second, it’s hard to judge them. But when the decision is made by AI designers ahead of time, it’s different. People will want to know what their car will do in unexpected situations. And those rules may differ depending on who made the software.
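A toy sketch of what such designer-defined rules might look like, assuming a hypothetical cost-based policy. Every name, probability, and weight below is invented for illustration; no real autonomous-driving stack is being described:

```python
# Hypothetical illustration: choosing among unavoidable-collision options
# by minimizing a designer-assigned expected-harm score. The weights are
# the ethical choices the comment describes; a different vendor could
# ship different weights and get a different answer.

def expected_harm(option):
    """Sum of (probability of fatality * designer-assigned weight) per party."""
    return sum(p * w for p, w in option["risks"])

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected-harm score."""
    return min(options, key=expected_harm)

options = [
    {"name": "swerve_into_truck",
     "risks": [(0.6, 1.0)]},   # 60% chance of a passenger fatality
    {"name": "brake_toward_pedestrian",
     "risks": [(0.3, 1.0)]},   # 30% chance of a pedestrian fatality
]

print(choose_maneuver(options)["name"])  # → brake_toward_pedestrian
```

The point of the sketch is exactly the commenter’s concern: the outcome is fully determined by numbers someone chose in advance, not by a split-second human judgment.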

To be fair, I would never buy a car with ethical AI unless forced to. If a car must kill 10 orphan kids, a puppy and a kitten to save me… well.

Somebody give this man an Honesty Nobel.

.. And Elon needs more time for his part time job assembling Model 3s, which he does as recent immigrant ‘Nole Ksum’.

"Say, Nole, how about that Union guy, pretty good idea, huh?"
"What? No! Union bad, take your money. Elon’s a good guy, he looks out for us. Hand me that rubber door trim piece."

Musk will stay on as a donator to OpenAI and will continue to advise the group.

Interesting word choice. I’m not sure I’ve ever seen "donator" used in writing outside of legalese before, and it’s used there purely to intimidate the people reading the document or to make it sound impressive.
