OpenAI’s image generator DALL-E can now edit human faces

The feature was previously off-limits for fears of misuse

Illustration by Alex Castro / The Verge

OpenAI is letting users of its AI art generator program DALL-E edit images with human faces. This feature was previously off-limits due to fears of misuse, but, in a letter sent to DALL-E’s million-plus users, OpenAI says it’s opening up access after improving its filters to remove images that contain “sexual, political, and violent content.”

The feature will let users edit images in a number of different ways. They can upload a photograph of someone and generate variations of the picture, for example, or they can edit specific features, like changing someone’s clothing or hairstyle. The feature will no doubt be useful to many users in creative industries, from photographers to filmmakers.

“With improvements in our safety system, DALL·E is now ready to support these delightful and important use cases — while minimizing the potential of harm from deepfakes,” said OpenAI in its letter to customers announcing the news.

The decision is part of an ongoing negotiation between the makers of AI art generators and their users as they try to navigate the technology’s potential harms. As a well-funded company with links to tech giants like Microsoft, OpenAI has taken a relatively cautious approach. But the company has been outflanked by rivals like Stable Diffusion, which place fewer restrictions on users. This leads to quicker development of the technology but also makes malicious applications far easier. Stable Diffusion, for example, is already being used to generate pornographic deepfakes of celebrities.

Such explicit material should be easy for OpenAI to block with DALL-E. The company’s terms of use also forbid users from uploading images of people without their consent (though this is essentially impossible to enforce proactively under its current access model). However, no content filter is perfect, and there may be harmful use cases that are more subtle than nonconsensual pornography.