Sexually explicit AI-generated images of Taylor Swift have been circulating on X (formerly Twitter) over the last day in the latest example of the proliferation of AI-generated fake pornography and the challenge of stopping it from spreading.
One of the most prominent examples on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy. The post was live on the platform for around 17 hours prior to its removal.
But as users began to discuss the viral post, the images spread further and were reposted across other accounts. Many still remain up, and a deluge of new graphic fakes has since appeared. In some regions, the term “Taylor Swift AI” began trending, promoting the images to wider audiences.
A report from 404 Media found that the images may have originated in a Telegram group where users share explicit AI-generated images of women, often made with Microsoft Designer. Users in the group reportedly joked about how the images of Swift went viral on X. On Monday, 404 Media reported that the loopholes used to generate the images had been addressed.
Microsoft Responsible AI Engineering Lead Sarah Bird confirmed the changes, saying, “We are committed to providing a safe and respectful experience for everyone. We are continuing to investigate these images and have strengthened our existing safety systems to further prevent our services from being misused to help generate images like them.”
X’s policies regarding synthetic and manipulated media and nonconsensual nudity both explicitly ban this kind of content from being hosted on the platform. While representatives for X, Swift, and the NFL have not responded to our requests for comment, X did post a public statement almost a day after the incident began, though without mentioning the Swift images specifically.
Swift’s fan base has criticized X for allowing many of the posts to remain live for as long as they have. In response, fans have flooded the hashtags used to circulate the images with messages promoting real clips of Swift performing, in an effort to bury the explicit fakes.
The incident speaks to the very real challenge of stopping deepfake porn and AI-generated images of real people. Some AI image generators have restrictions in place that prevent nude, pornographic, and photorealistic images of celebrities from being produced, but many others lack such safeguards. The responsibility of preventing fake images from spreading often falls to social platforms — something that can be difficult to do under the best of circumstances, and even harder for a company like X that has hollowed out its moderation capabilities.
The company is currently being investigated by the EU regarding claims that it’s being used to “disseminate illegal content and disinformation” and is reportedly being questioned regarding its crisis protocols after misinformation about the Israel-Hamas war was found being promoted across the platform.
Update January 25th, 1:06PM ET: Added findings from 404 Media.
Update January 26th, 5:31AM ET: Added X’s general statement on posting nonconsensual nudity.
Update January 29th, 5:06PM ET: Added update from 404 Media and Microsoft’s statement about safety changes.