Last week, numerous news outlets reported that a string of European politicians had been tricked by a sophisticated Russian plot. Parliamentarians from the UK, Latvia, Lithuania, and Estonia had all arranged video calls with a hoaxer claiming to be Leonid Volkov, chief of staff to imprisoned Russian anti-Putin politician Alexei Navalny. As the politicians tell it, they fell victim to a digital fake: a doppelgänger using “deepfake” technology created specially to trick them — the latest example of Russia’s misinformation campaigns in the West.
But the Russian men who orchestrated the calls say the “deepfake” claim is itself misinformation. Speaking to The Verge, the hoaxers say their imitation Volkov was created using effects no more sophisticated than makeup and artfully obscure camera angles.
The case of the fake deepfake
“Meet Leonid Volkov, Russian opposition leader,” says Vladimir Kuznetsov in a recent video call with The Verge, introducing his colleague and partner-in-crime Alexei Stolyarov, a man who does indeed bear a passing resemblance to Leonid Volkov.
The pair say they tricked their way into various meetings with European politicians and even a live interview on Latvian TV. They did so by cold-calling and emailing their targets from fake addresses, using a real picture of Volkov as their digital avatar. As proof, the pair shared some of this correspondence with The Verge. They’ve also uploaded a meeting between “Volkov” and Ukrainian politicians to YouTube and say more videos are coming.
“I didn’t need to prepare much to look like the real Volkov,” says Stolyarov. “I just had some brushes and some colors and that was enough.”
Kuznetsov and Stolyarov are better known as Vovan and Lexus: a pair of self-described “pranksters” who have a history of fooling Western politicians and celebrities. Over the years, the pair have tricked their way into phone calls with the likes of Justin Trudeau, Elton John, Bernie Sanders, Lindsey Graham, and Boris Johnson, each time aiming to catch these figures off-guard and tease potentially embarrassing statements out of them.
“We wouldn’t prank Putin. We don’t want to harm our country”
Although the pair have denied any official connections to the Kremlin, there’s no doubt their work is useful to and supported by the Russian government. In the past, they’ve had their own show on Russian state TV and their antics approvingly covered by state news. They plainly know which side their bread is buttered. As Stolyarov told The Guardian a few years back: “We wouldn’t prank Putin. We don’t want to harm our country. We don’t want unrest here; we don’t want to do anything that would help the enemies of Russia.”
Creating unrest elsewhere, though, is par for the course. “Our work is to prank high officials and celebrities and to make a lot of fun and publish it to social media,” says Stolyarov.
The pair say they chose to impersonate Volkov for a number of reasons. Firstly, because of the newsworthiness of Navalny. After leading the most substantial opposition to Russian president Vladimir Putin in years, Navalny has been imprisoned. He recently ended a 24-day hunger strike and continues to criticize the Russian government despite the expectation that his nationwide political movement will be outlawed soon. Secondly, because of Stolyarov’s resemblance. And thirdly, because the real Volkov hasn’t sought many meetings with Western politicians, meaning few had any familiarity with how he looks and sounds. An unspoken fourth motivation is to ridicule anti-Putin forces, in line with the duo’s politics.
Kuznetsov and Stolyarov have carried out numerous pranks over the years (you can browse their YouTube channel for a selection) though not all have gained media attention. What made this particular campaign stand out is its connection with deepfakes — AI-generated or manipulated media that many fear will be used as a tool of political misinformation.
Despite fears, deepfakes have yet to make a major political impact
Over the years, experts have warned about a so-called “infopocalypse,” where the quality and availability of deepfakes make it impossible for the public to distinguish truth from fiction. So far, this dire prediction has not come to pass. The most damaging effects of deepfakes have been in the generation of nonconsensual pornography. And although politics has, indeed, been dogged in recent years by hoaxes and misinformation, such incidents almost always revolve around video and images edited the old-fashioned way. Nevertheless, the specter of deepfakes still haunts politics — as this recent incident shows.
It’s not clear when the fake Volkov calls were first blamed on AI technology, but it seems Volkov himself might be the source. Latvian politician Rihards Kols posted on Facebook on April 22nd that he had been tricked into taking a call in March with an unknown prankster, and shared two pictures supposedly showing the real and fake Volkovs. Volkov then reposted the images that same day, ascribing the prank to Vovan and Lexus, but also suggesting that AI was used. “Looks like my real face — but how did they manage to put it on the Zoom call?” he wrote, according to a Facebook translation. “Welcome to the deep-fake era.”
From here it seems the line spread. The next day, Kols, who is chair of Latvia’s Foreign Affairs Committee, posted a statement on Twitter co-signed by his counterparts from Lithuania and Estonia. The trio warned about the threat posed by “deepfake technologies” and said the pranks were “targeted attacks on the Kremlin’s critics.”
“We encourage all to remain vigilant yet open”
“During the past few months, disinformation operations with the use of manipulated and artificial intelligence (AI) generated media were carried out against Estonian, Latvian, Lithuanian and the United Kingdom’s politicians, non-governmental organisations, and media representatives,” says the statement. “We encourage all to remain vigilant yet open to communication, as representatives of true democracies always should be!”
Kuznetsov and Stolyarov say they’re surprised the prank was described as a deepfake, not least because the image they used in their accounts — the one Volkov himself identified as fake — was taken from a real video. “It was his real picture but he denied it was him,” says Stolyarov, before Kuznetsov adds that perhaps Volkov didn’t like the picture because he looked “too fat.”
Whether politicians blamed the trick on deepfakes out of genuine confusion or more self-serving reasons isn’t clear. Certainly it’s less embarrassing to be fooled by a sophisticated AI fake than by a couple of pranksters with a convincing email manner. But the incident does illustrate that the fear of deepfakes is having as much of an effect on misinformation as the technology itself. For those tricked by Kuznetsov and Stolyarov — aka Vovan and Lexus — blaming new technology may simply have been a matter of saving face.