
Stable Diffusion made copying artists and generating porn harder and users are mad

Changes to the AI text-to-image model make it harder for users to mimic specific artists’ styles or generate NSFW output, but offer other functional improvements.


An AI-generated image of a person in a spacesuit crouching next to a puddle and a group of ducks.
An image generated using Stable Diffusion Version 2.
Image: Stability AI

Users of AI image generator Stable Diffusion are angry about an update to the software that “nerfs” its ability to generate NSFW output and pictures in the style of specific artists.

Stability AI, the company that funds and disseminates the software, announced Stable Diffusion Version 2 early this morning European time. The update re-engineers key components of the model and improves certain features like upscaling (the ability to increase the resolution of images) and in-painting (context-aware editing). But the changes also make it harder for Stable Diffusion to generate certain types of images that have attracted both controversy and criticism. These include nude and pornographic output, photorealistic pictures of celebrities, and images that mimic the artwork of specific artists.


“They have nerfed the model,” commented one user on a Stable Diffusion subreddit. “It’s kinda an unpleasant surprise,” said another on the software’s official Discord server.

Users note that asking Version 2 of Stable Diffusion to generate images in the style of Greg Rutkowski — a digital artist whose name has become literal shorthand for producing high-quality images — no longer creates artwork that closely resembles his own. (Compare these two images, for example.) “What did you do to greg😔,” commented one user on Discord.

Changes to Stable Diffusion are notable, as the software is hugely influential and helps set norms in the fast-moving generative AI scene. Unlike rival models like OpenAI’s DALL-E, Stable Diffusion is open source. This allows the community to quickly improve the tool and lets developers integrate it into their products free of charge. But it also means Stable Diffusion has fewer constraints on how it’s used and, as a consequence, has attracted significant criticism. In particular, many artists, like Rutkowski, are annoyed that Stable Diffusion and other image-generating models were trained on their artwork without their consent and can now reproduce their styles. Whether this sort of AI-enabled copying is legal is something of an open question. Experts say training AI models on copyright-protected data is likely legal, but that certain use cases could be challenged in court.

A grid of images showing side-by-side comparisons of AI generated artwork created using different versions of Stable Diffusion.
A comparison of Stable Diffusion’s ability to generate images resembling specific artists.
Image: lkewis via Reddit

Stable Diffusion’s users have speculated that Stability AI made the changes to the model to mitigate such potential legal challenges. However, when The Verge asked Stability AI founder Emad Mostaque in a private chat whether this was the case, he did not answer. Mostaque did confirm, though, that Stability AI has not removed artists’ images from the training data (as many users have speculated). Instead, the model’s reduced ability to copy artists is a result of changes made to how the software encodes and retrieves data.

“There has been no specific filtering of artists here,” Mostaque told The Verge. (He also expanded on the technical underpinning of these changes in a message posted on Discord.)

What has been removed from Stable Diffusion’s training data, though, are nude and pornographic images. AI image generators are already being used to generate NSFW output, including both photorealistic and anime-style pictures. However, these models can also be used to generate NSFW imagery resembling specific individuals (known as non-consensual pornography) and images of child abuse.

Discussing the changes to Stable Diffusion Version 2 in the software’s official Discord, Mostaque notes that this latter use case is the reason for filtering out NSFW content. “can’t have kids & nsfw in an open model,” says Mostaque (as the two sorts of images can be combined to create child sexual abuse material), “so get rid of the kids or get rid of the nsfw.”

One user on Stable Diffusion’s subreddit said the removal of NSFW content was “censorship” and “against the spirit philosophy of Open Source community” [sic]. Said the user: “To choose to do NSFW content or not, should be in the hands of the end user, no [sic] in a limited/censored model.” Others, though, noted that the open-source nature of Stable Diffusion means nude training data can easily be added back into third-party releases and that the new software doesn’t affect earlier versions: “Do not freak out about V2.0 lack of artists/NSFW, you’ll be able to generate your favorite celeb naked soon & anyway you already can.”

Although the changes to Stable Diffusion Version 2 have annoyed some users, many others praised its potential for deeper functionality, such as the software’s new ability to produce content that matches the depth of an existing image. Others said the changes did make it harder to quickly produce high-quality images, but that the community would likely add back this functionality in future versions. As one user on Discord summarized the changes: “2.0 is better at interpreting prompts and making coherent photographic images in my experience so far. it will not make any rutkowski titties though.”

Mostaque himself compared the new model to a pizza base that lets anyone add ingredients (i.e., training data) of their choice. “A good model should be usable by everyone and if you want to add stuff add stuff,” he said on Discord.

Mostaque also said future versions of Stable Diffusion would use training datasets that allow artists to opt in or opt out — a feature many artists have requested, and one that could help mitigate some criticism. “We are trying to be super transparent as we improve the base models and incorporate community feedback,” Mostaque told The Verge.

A public demo of Stable Diffusion Version 2 can be accessed here (though due to high demand the model may be inaccessible or slow).