These faces show how far AI image generation has advanced in just four years

The faces on the left were created by AI in 2014; on the right are ones made by AI in 2018.
Image: Goodfellow et al; Karras, Laine, Aila / Nvidia

Developments in artificial intelligence move at a startling pace — so much so that it’s often difficult to keep track. But one area where progress is as plain as the nose on your AI-generated face is the use of neural networks to create fake images. In brief: we’re getting scarily good at it.

In the image above you can see what four years of progress in AI image generation looks like. The crude black-and-white faces on the left are from 2014, published as part of a landmark paper that introduced the AI tool known as the generative adversarial network (GAN). The color faces on the right come from a paper published earlier this month, which uses the same basic method but is clearly a world apart in terms of image quality.
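The adversarial idea behind a GAN is simple enough to demonstrate on toy data: a generator learns to produce samples, while a discriminator learns to tell them apart from real ones, and each improves against the other. The sketch below is an illustrative one-dimensional toy, not Nvidia’s model: the “real” data is a Gaussian centered at 4.0, the generator and discriminator are single linear units, and the gradients are written out by hand so it runs on the standard library alone.

```python
import math
import random

random.seed(0)

# Toy GAN: real data ~ N(4.0, 0.5). The generator maps noise z ~ N(0, 1)
# to x = w*z + b; the discriminator is a logistic classifier
# D(x) = sigmoid(u*x + c). Both are trained with plain SGD using
# hand-derived gradients of the standard GAN losses.

def sigmoid(t):
    t = max(-30.0, min(30.0, t))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-t))

w, b = 1.0, 0.0      # generator parameters
u, c = 0.0, 0.0      # discriminator parameters
lr = 0.03

for step in range(3000):
    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    x_real = random.gauss(4.0, 0.5)
    x_fake = w * random.gauss(0.0, 1.0) + b
    d_real = sigmoid(u * x_real + c)
    d_fake = sigmoid(u * x_fake + c)
    u += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator update: ascend log D(fake) (the non-saturating loss).
    z = random.gauss(0.0, 1.0)
    x_fake = w * z + b
    d_fake = sigmoid(u * x_fake + c)
    grad_x = (1 - d_fake) * u   # d/dx of log D(x)
    w += lr * grad_x * z
    b += lr * grad_x

fakes = [w * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean_fake = sum(fakes) / len(fakes)
print(mean_fake)  # drifts from 0 toward the real mean of 4.0
```

The generator starts out producing samples around 0 and is pushed toward the real distribution purely by the discriminator’s feedback; scaling that same tug-of-war up to deep convolutional networks and photo datasets is what produces the faces above.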

These realistic faces are the work of researchers from Nvidia. In their paper, shared publicly last week, they describe modifying the basic GAN architecture to create these images. Take a look at the pictures below. If you didn’t know they were fake, could you tell the difference?

Some of Nvidia’s AI-generated faces.
Image: Karras, Laine, Aila

What’s particularly interesting is that these fake faces can also be easily customized. Nvidia’s engineers incorporated a method known as style transfer into their work, in which the characteristics of one image are blended with another. You might recognize the term from the image filters that have been popular on apps like Prisma and Facebook in recent years, which can make your selfies look like an impressionist painting or a cubist work of art.

Applying style transfer to face generation allowed Nvidia’s researchers to customize faces to an impressive degree. In the grid below, you can see this in action. A source image of a real person (the top row) has the facial characteristics of another person (right-hand column) imposed onto it. Traits like skin and hair color are blended together, creating what looks to be an entirely new person in the process.

Style transfer allows you to blend facial characteristics from different people.
Image: Karras, Laine, Aila

Of course, the ability to create realistic AI faces raises troubling questions. (Not least of all, how long until stock photo models go out of work?) Experts have been raising the alarm for the past couple of years about how AI fakery might impact society. These tools could be used for misinformation and propaganda and might erode public trust in pictorial evidence, a trend that could damage the justice system as well as politics. (Sadly, these issues aren’t discussed in Nvidia’s paper, and when we reached out to the company, it said it couldn’t talk about the work until it had been properly peer-reviewed.)

These warnings shouldn’t be ignored. As we’ve seen with the use of deepfakes to create non-consensual pornography, there are always people who are willing to use these tools in questionable ways. But at the same time, despite what the doomsayers say, the information apocalypse is not quite nigh. For one, the ability to generate faces has received special attention in the AI community; you can’t doctor any image in any way you like with the same fidelity. There are also serious constraints when it comes to expertise and time. It took Nvidia’s researchers a week of training on eight Tesla GPUs to create these faces.

There are also clues we can look for to spot fakes. In a recent blog post, artist and coder Kyle McDonald listed a number of tells. Hair, for example, is very difficult to fake. It often looks too regular, like it’s been painted on with a brush, or too blurry, blending into someone’s face. Similarly, AI generators don’t quite understand human facial symmetry. They often place ears at different levels or make eyes different colors. They’re also not very good at generating text or numbers, which just come out as illegible blobs.

Some examples of AI-generated faces with obvious asymmetrical features.
Image by Kyle McDonald

If you read the beginning of this post, though, these hints probably aren’t a huge consolation. After all, Nvidia’s work shows just how fast AI in this domain is progressing, and it won’t be long until researchers create algorithms that can avoid these tells.

Thankfully, experts are already thinking about new ways to authenticate digital pictures. Some solutions have already launched, such as camera apps that stamp pictures with geocodes to verify when and where they were taken. Clearly, there is going to be a running battle between AI fakery and image authentication for decades to come. And at the moment, AI is charging decisively into the lead.

Comments

Agreed.

I personally believe that as AI image generation accelerates, some open standard for image verification, similar to an HTTPS certificate, will eventually appear in order to preserve some sort of reliability of pictorial evidence. The apps mentioned in the last paragraph are only really a start, and they would need to be universal and secure in order to work.
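To make the idea concrete, here is a minimal sketch of what that kind of verification could look like, using only Python’s standard library. A real standard along the lines of the HTTPS analogy would use public-key signatures and device certificates, so that anyone could verify without holding a secret; here an HMAC with a shared key stands in for the signature, purely to illustrate the tamper check. All of the names and the key are hypothetical.

```python
import hashlib
import hmac

# Hypothetical sketch: a trusted camera signs the image bytes plus its
# capture metadata at the moment the photo is taken; a verifier holding
# the key can later confirm that neither has been altered. A real
# standard would use public-key signatures instead of a shared secret,
# but stdlib HMAC-SHA256 demonstrates the same verify-on-hash idea.

SECRET_KEY = b"camera-device-key"  # stand-in for a device certificate

def sign_image(image_bytes: bytes, metadata: str) -> str:
    payload = hashlib.sha256(image_bytes).digest() + metadata.encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, metadata: str, signature: str) -> bool:
    expected = sign_image(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw image bytes..."
meta = "2018-12-17T12:00:00Z;lat=40.7,lon=-74.0"

sig = sign_image(photo, meta)
print(verify_image(photo, meta, sig))            # True
print(verify_image(photo + b"edit", meta, sig))  # False: pixels changed
print(verify_image(photo, meta + "x", sig))      # False: metadata changed
```

Any edit to the pixels or to the geocode/timestamp metadata invalidates the signature, which is the property an open verification standard would need to guarantee.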

It freaks me out to think about how this will be used for propaganda once they start animating this.

You’ll only be able to trust the reality you see in front of you at that point.

Are you asking why you won’t be able to trust the news once we can use AI to generate faces that are indistinguishable from the real thing and can move to say/do anything we want?

No I’m asking why you would trust what you see in front of your face.

I was hoping that’s what the ‘why’ was for

I didn’t mean that it would be appropriate for making worldly decisions or anything like that. I was just trying to illustrate the idea that news would basically be worthless at that point.

Well no, you could trust authorities, just like you do now for other subjects.

Different colored eyes seems like a problem that could be solved with one line of code. Ears might be a little more difficult; hair more difficult still.

We seem to be pretty close to the point where you could make up a new human being. Even a low-quality headshot like this would be pretty useful online for someone with bad intentions.

If you read the beginning of this post, though, these hints probably aren’t a huge consolation. After all, Nvidia’s work shows just how fast AI in this domain is progressing, and it won’t be long until researchers create algorithms that can avoid these tells.

I think you’re taking something very hard and complicated and waving the wand of AI Progress over it. I wouldn’t bet against these problems being "solved", but assuming they are trivial and will be solved imminently is unwise.

If you dig into AI training and neural networks, "small" things like this are miles apart from each other. The kind of AI that can sort-of drive a car in GTA is lightyears behind the kind of AI that powers a self-driving car. The kind of AI that can write a simple article based on some metrics is far, far away from an AI that actually understands language and can put together sentences.

You are now leaving Uncanny Valley. Come back soon.

They often place ears at different levels or make eyes different colors.

I’m surprised that someone who analyzes faces doesn’t appear to have heard of heterochromia before. It’s a natural thing, it’s beautiful, and it’s not that uncommon. That AI guy on the right could totally be a real person.

The only source I could find on an overall rate is this one, which states that some form of it happens in one out of every 6,000 people – but that includes people who have eyes that are very slightly different from each other, plus people who have eyes with only small discolored sections, rather than ones like that face. It would appear that two different solid colors is exceptionally rare, based on what I read.

Two completely different colors as opposed to one eye just having a large patch of a different color are surely very rare (thanks for looking it up). But at these resolutions, I think it’s difficult to see if two eyes are really completely different or if it’s the latter. A famous example is actress Kate Bosworth: If you look closely you can see that both her eyes are blue, but her right eye has a large patch of brown in it. In smaller photos, it looks completely brown. So at least at the photo sizes provided in the article, I don’t think you could really determine that.

In any case, I just feel that it’s odd to conclude that a photo is fake based on that alone.

ACKCHEWALLLYYYYYYYY

This really creeps me out. I’ve resisted being creeped out by new tech and AI, but for some reason this one gets to me.

I can’t believe those aren’t real people. It’s like they generated a person but that person doesn’t exist. It’s weird and feels unsettling.

Just wait… this will get integrated into dating apps where you get a ‘beauty rating’.

Tinder has been heavily populated with AI-generated profiles and selfies for at least the past year.

I imagine it’s a great way for people to do research on what is attractive, and then use it for all sorts of purposes. Great.

So AI face generation in 2018 just produces photographs? How is this any different from AI generation in 2014?

You see no difference?

Thanks, I hate it
