Elon Musk and scientists agree: we need to make sure AI helps humanity

Open letter warns that we need to maintain 'meaningful control'

Academics and experts around the world are calling for researchers building artificial intelligence to focus "not only on making AI more capable, but also on maximizing the societal benefit of AI." An open letter, drafted by the Future of Life Institute (FLI) and signed by leading scientific and industry figures including Stephen Hawking and Elon Musk, warns that companies and engineers must do more to ensure that "AI systems … do what we want them to do."

Musk, who has previously expressed fears that artificial intelligence could lead to a Terminator-style future, tweeted a link to the letter followed by a retweet that encapsulates a common — if sensational — worry surrounding AI development: "First question asked of AI; 'Is there a god?' First AI answer; 'There is now'." The message from the FLI was more measured, emphasizing potential benefits while mentioning only "potential pitfalls" and stressing the need for researchers to retain meaningful control over their creations.

20 years of AI research have created concrete results — but we're just getting started

The letter notes that 20 years of research into intelligent agents that "perceive and act in some environment" has produced concrete advances in fields including "speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems." This, the letter states, has created a "virtuous cycle" in which the AI industry rewards even small improvements with "large sums of money," encouraging even more investment:

"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

"the eradication of disease and poverty are not unfathomable."

Signatories include academics from Harvard, MIT, Oxford, and Cambridge; staff from Silicon Valley tech companies including Google and Amazon; and employees of venture capital firms such as the Founders Fund and Thiel Capital.

A separate document also outlines suggested research priorities for building "robust and beneficial artificial intelligence." These include developing economic tools such as labor market forecasting; research into machine ethics ("How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost?"); guidance on the development of autonomous weapons; and methods of creating "robust" AI, meaning artificial intelligence whose operations humans can understand and over which they retain "meaningful control."