Artists can now use this data ‘poisoning’ tool to fight back against AI scrapers.

The University of Chicago’s Glaze Project has released Nightshade v1.0, which enables artists to sabotage generative AI models that ingest their work for training.

Nightshade makes invisible pixel-level changes to images that trick AI models into reading them as something else, corrupting the models' image output. For example, it can cause a model to identify a cubist style as a cartoon.
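To make the idea of "invisible pixel-level changes" concrete, here is a minimal toy sketch, not Nightshade's actual method (which optimizes a targeted perturbation against specific text-to-image models), showing how an image can be altered by amounts too small for a viewer to notice. The file names and noise level here are hypothetical.

```python
# Toy illustration only: Nightshade's real technique crafts a targeted
# perturbation via optimization; this sketch just shows a generic
# imperceptible pixel-level change. File names and noise level are hypothetical.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("artwork.png").convert("RGB"), dtype=np.int16)

# Offsets of a few intensity levels (out of 255) are invisible to the eye,
# but a carefully crafted perturbation of similar size can change what a
# model learns from the image during training.
rng = np.random.default_rng(seed=0)
perturbation = rng.integers(-2, 3, size=original.shape, dtype=np.int16)

poisoned = np.clip(original + perturbation, 0, 255).astype(np.uint8)
Image.fromarray(poisoned).save("artwork_shaded.png")
```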

It’s out now for Windows PCs and Apple Silicon Macs.


A screenshot taken from a University of Chicago research paper showing examples of AI images corrupted using Nightshade.
Here are a few examples of what happened during testing when an AI image generator was repeatedly poisoned.
Image: University of Chicago