Thieves are now using AI deepfakes to trick companies into sending them money

So AI crimes are a thing now

Illustration by Alex Castro and Grayson Blackmon / The Verge

It seems like every few days there’s another example of a convincing deepfake going viral, or another free, easy-to-use piece of software (some of it made for mobile) that can generate video or audio designed to trick someone into believing a piece of virtual artifice is real. But according to The Wall Street Journal, the proliferation of deepfake technology is beginning to have serious financial and legal ramifications.

The publication reported last week that the chief executive of a UK energy company was tricked into wiring €220,000 (about $240,000) to a Hungarian supplier because he believed his boss was instructing him to do so. But the energy company’s insurance firm, Euler Hermes Group SA, told the WSJ that a fraudster had used deepfake software to mimic the voice of that boss, an executive at the firm’s German parent company, and demand that the payment be sent within the hour.

“The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” a Euler Hermes spokesperson later told The Washington Post. The phone call was accompanied by an email, and the energy firm’s CEO obliged. The money is now gone, moved through accounts in Hungary and Mexico and then dispersed around the world, the Post reports.

Deepfakes have been used to steal from companies in at least three cases

Later, when the thieves made a second request, the energy firm’s CEO called his actual boss, only to find himself fielding calls from the fake and the real versions of the man simultaneously. That tipped him off to the theft in progress. Euler Hermes declined to name the energy firm or its German parent company.

This may not be the first time something like this has happened. According to the Post, cybersecurity firm Symantec says it has come across at least three cases of deepfake voice fraud used to trick companies into sending money to a fraudulent account. At least one of those cases, which appears to be distinct from the one Euler Hermes has confirmed, resulted in millions of dollars in losses, Symantec told the Post.

The situation highlights the fraught nature of AI research, especially around the artificial generation of video and audio. While none of the big Silicon Valley companies with large, capable AI research divisions is openly developing deepfake video software, some are working diligently on synthetic audio.

Google’s controversial Duplex service uses AI to mimic the voice of a real human being so that it can make phone calls on a user’s behalf. A number of smaller startups, many of them based in China, offer similar services for free on smartphones, sometimes under questionable privacy and data collection terms. Meanwhile, researchers at tech companies and in academia are trying to develop software that can detect deepfakes, while others are exploring just how little data it takes to generate, and put to use, a convincing deepfake.

In other words, deepfakes are here, and they can be dangerous. We’re just going to need better tools to sort out the real from the fake.