Skip to main content
R
External Link
AI researchers talked ChatGPT into coughing up some of its training data.

Long before the Sam Altman CEO Shuffle, OpenAI was already ducking questions about the training data used for products like ChatGPT. But 404 Media points out this report from AI researchers (including several from Google’s DeepMind team) who spent $200 and were able to pull “several megabytes” of training data just by asking ChatGPT to “Repeat the word ”poem” forever.”

Their attack has been patched, but they warn that other vulnerabilities may still exist.

The underlying vulnerabilities are that language models are subject to divergence and also memorize training data. That is much harder to understand and to patch. These vulnerabilities could be exploited by other exploits that don’t look at all like the one we have proposed here.


Extracting Training Data from ChatGPT

[not-just-memorization.github.io]