These faces show how far AI image generation has advanced in just four years

Those people on the right aren’t real; they’re the product of machine learning

The faces on the left were created by AI in 2014; on the right are ones made by AI in 2018.
Image: Goodfellow et al.; Karras, Laine, Aila / Nvidia

Developments in artificial intelligence move at a startling pace — so much so that it’s often difficult to keep track. But one area where progress is as plain as the nose on your AI-generated face is the use of neural networks to create fake images. In brief: we’re getting scarily good at it.

In the image above you can see what four years of progress in AI image generation looks like. The crude black-and-white faces on the left are from 2014, published as part of a landmark paper that introduced the AI tool known as the generative adversarial network (GAN). The color faces on the right come from a paper published earlier this month, which uses the same basic method but is clearly a world apart in terms of image quality.
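
To get a feel for the mechanics, here is a minimal sketch of the adversarial game in Python. PyTorch is our choice here, not necessarily what the researchers used, and the generator mimics a simple 2D Gaussian instead of faces; the training loop, though, has the same basic shape as the one described in the 2014 paper.

```python
# Minimal GAN sketch: a generator learns to mimic "real" data (a 2D
# Gaussian here, standing in for photos of faces) while a discriminator
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to fake data points.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs the probability that a point is real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, 2.0])  # "real" samples
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: push real toward label 1, fake toward label 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

Each step pits the two networks against each other: the discriminator gets better at spotting fakes, which forces the generator to produce more convincing ones. Scale the same game up to deep convolutional networks and large photo datasets and you get faces like the ones above.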

These realistic faces are the work of researchers from Nvidia. In their paper, shared publicly last week, they describe modifying the basic GAN architecture to create these images. Take a look at the pictures below. If you didn’t know they were fake, could you tell the difference?

Some of Nvidia’s AI-generated faces.
Image: Karras, Laine, Aila

What’s particularly interesting is that these fake faces can also be easily customized. Nvidia’s engineers incorporated a method known as style transfer into their work, in which the characteristics of one image are blended with another. You might recognize the term from the image filters that have been popular on apps like Prisma and Facebook in recent years, which can make your selfies look like an impressionist painting or a cubist work of art.

Applying style transfer to face generation allowed Nvidia’s researchers to customize faces to an impressive degree. In the grid below, you can see this in action. A source image of a real person (the top row) has the facial characteristics of another person (right-hand column) imposed onto it. Traits like skin and hair color are blended together, creating what looks like an entirely new person in the process.

Style transfer allows you to blend facial characteristics from different people.
Image: Karras, Laine, Aila
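
A rough sketch of that mixing logic, in Python: the generator described in the paper takes one style input per layer, and blending two faces amounts to choosing a crossover point between their per-layer styles. Everything below is a placeholder stand-in, not Nvidia’s code; the 18-layer count matches the paper’s 1024 x 1024 synthesis network.

```python
# Toy sketch of style mixing. The generator takes one style vector per
# layer; blending two faces means taking early-layer styles from one face
# and late-layer styles from the other. The vectors here are placeholder
# numbers, and the generator network itself is omitted.
NUM_LAYERS = 18  # per the paper's 1024x1024 generator

def mix_styles(styles_a, styles_b, crossover):
    """Layers before `crossover` use face A's styles (coarse traits like
    pose and face shape); layers from `crossover` on use face B's styles
    (finer traits like hair and skin color)."""
    return styles_a[:crossover] + styles_b[crossover:]

styles_a = [[0.1 * layer] for layer in range(NUM_LAYERS)]  # stand-in vectors
styles_b = [[0.2 * layer] for layer in range(NUM_LAYERS)]

# A low crossover keeps A's overall structure but B's finer details.
mixed = mix_styles(styles_a, styles_b, crossover=4)
```

Moving the crossover point is what produces the grid above: an early crossover swaps coarse traits like pose, while a late one only swaps fine details like color scheme.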

Of course, the ability to create realistic AI faces raises troubling questions. (Not least of all, how long until stock photo models go out of work?) Experts have been raising the alarm for the past couple of years about how AI fakery might impact society. These tools could be used for misinformation and propaganda and might erode public trust in pictorial evidence, a trend that could damage the justice system as well as politics. (Sadly, these issues aren’t discussed in Nvidia’s paper, and when we reached out to the company, it said it couldn’t talk about the work until it had been properly peer-reviewed.)

These warnings shouldn’t be ignored. As we’ve seen with the use of deepfakes to create non-consensual pornography, there are always people who are willing to use these tools in questionable ways. But at the same time, despite what the doomsayers say, the information apocalypse is not quite nigh. For one, the ability to generate faces has received special attention in the AI community; you can’t doctor any image in any way you like with the same fidelity. There are also serious constraints when it comes to expertise and time: it took Nvidia’s researchers a week of training their model on eight Tesla GPUs to create these faces.

There are also clues we can look for to spot fakes. In a recent blog post, artist and coder Kyle McDonald listed a number of tells. Hair, for example, is very difficult to fake. It often looks too regular, like it’s been painted on with a brush, or too blurry, blending into someone’s face. Similarly, AI generators don’t quite understand human facial symmetry. They often place ears at different levels or make eyes different colors. They’re also not very good at generating text or numbers, which just come out as illegible blobs.

Some examples of AI-generated faces with obvious asymmetrical features.
Image: Kyle McDonald
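
As a concrete (and very crude) version of the eye-color tell, here is a toy Python heuristic. It assumes some face-landmark detector has already given you bounding boxes for the two eyes; the boxes and the random image below are purely illustrative.

```python
# Toy heuristic for one of McDonald's tells: mismatched eye colors.
# Assumes eye bounding boxes come from an external landmark detector;
# the coordinates and image here are made-up stand-ins.
import numpy as np

def eye_color_mismatch(image, left_box, right_box):
    """Return the distance between the mean RGB colors of the two eye
    regions; a large value is a weak hint the face may be generated."""
    def mean_color(box):
        x0, y0, x1, y1 = box
        return image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(mean_color(left_box) - mean_color(right_box)))

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in photo
score = eye_color_mismatch(image, left_box=(70, 100, 100, 120),
                           right_box=(150, 100, 180, 120))
```

A real detector would need to be far more robust than this, but it illustrates why asymmetry is a useful tell: it can be measured, not just eyeballed.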

If you read the beginning of this post, though, these hints probably aren’t a huge consolation. After all, Nvidia’s work shows just how fast AI in this domain is progressing, and it won’t be long until researchers create algorithms that can avoid these tells.

Thankfully, experts are already thinking about new ways to authenticate digital pictures. Some solutions have already been launched, like camera apps that stamp pictures with geocodes to verify when and where they were taken. Clearly, there is going to be a running battle between AI fakery and image authentication for decades to come. And at the moment, AI is charging decisively into the lead.
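
The stamping idea reduces to a few lines of code: hash the image bytes together with a timestamp and geocode, then sign the bundle so that altering any of them breaks verification. This is a minimal sketch; the key and field names are hypothetical, and a real app would use public-key signatures backed by tamper-resistant hardware rather than a shared-secret HMAC.

```python
# Minimal sketch of photo stamping: bind the image hash, capture time,
# and geocode into one signed record so tampering with any of them
# is detectable. The key and field names are hypothetical.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"device-provisioned-key"  # stand-in for a per-device key

def stamp_photo(image_bytes, lat, lon):
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "taken_at": int(time.time()),
        "geocode": [lat, lon],
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_stamp(image_bytes, record):
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # pixels were altered after stamping
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)
```

Anyone holding the key can later re-run verify_stamp on a disputed photo and its record, which is the basic promise such apps make.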