Stability AI, TikTok, and several governments signed an agreement to fight AI-generated CSAM.

The resolution doesn’t offer many specifics, but it’s the latest sign that AI-generated images of child sexual abuse are becoming a priority for platforms and law enforcement — even if it’s still framed as an “emerging” problem.

The UK announced the agreement ahead of an AI safety summit later this week. Agencies from Italy, Germany, the US, South Korea, and Australia signed it, as did OnlyFans, Snapchat, and a variety of child safety nonprofits.