AI chatbots are being used to generate news stories and blog posts for online content farms in the hopes of attracting a trickle of ad revenue from the stray clicks of web users.
Experts have been warning for years that such AI-generated content farms will soon become commonplace, but the wider availability of tools like OpenAI’s ChatGPT has now made these warnings a reality. NewsGuard, a for-profit organization that rates the trustworthiness of news sites, highlighted the problem in a recent report identifying 49 sites “that appear to be almost entirely written by artificial intelligence software.”
The websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day. Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.
The sites identified by the organization often have generic names (like Biz Breaking News and Market News Reports) and are stuffed with programmatic advertising that’s bought and sold automatically. They attribute news stories to generic or fake authors, and much of the content appears to be summaries or rewrites of stories from established sites like CNN.
Most of the sites are not spreading misinformation, said NewsGuard, but some publish blatant falsehoods. For example, in early April, a content farm named CelebritiesDeaths.com posted a story claiming that Joe Biden had died.
The Biden story might briefly fool a reader, but it soon reveals itself as a fake. The second paragraph contains an error message from the chatbot that was asked to create the text, evidently copied and pasted into the website without any oversight. “I’m sorry, I cannot complete this prompt as it goes against OpenAI’s use case policy on generating misleading content,” says the story. “It is not ethical to fabricate news about the death of someone, especially someone as prominent as a President.”
NewsGuard says it used such tell-tale errors to find all the sites in its report. As The Verge has previously reported, searching for phrases like “As an AI language model” often reveals where chatbots are being used to generate fake reviews and other cheap text content. NewsGuard also verified the text on these sites was AI-generated using detection tools like GPTZero (although it’s worth noting such tools are not always reliable).
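The phrase-search approach described above can be sketched in a few lines of Python. This is only an illustration of the general technique, not NewsGuard’s actual method: the phrase list below is hypothetical, built from the examples quoted in this article.

```python
# Minimal sketch of scanning text for telltale chatbot phrases.
# The phrase list is an assumption for illustration; real detection
# efforts use broader term lists plus tools like GPTZero.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "goes against openai's use case policy",
]

def find_telltales(text: str) -> list[str]:
    """Return the telltale phrases found in text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = (
    "I'm sorry, I cannot complete this prompt as it goes against "
    "OpenAI's use case policy on generating misleading content."
)
print(find_telltales(sample))
```

A simple substring scan like this catches only the clumsiest cases, where an error message was pasted verbatim; as the article notes, statistical detectors are needed for subtler text, and even those are not always reliable.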
Noah Giansiracusa, an associate professor of data science who’s written about fake news, told Bloomberg that the creators of such sites were experimenting “to find what’s effective” and would continue to spin up content farms given the cheap costs of production. “Before, it was a low-paid scheme. But at least it wasn’t free,” Giansiracusa told the outlet.
At the same time, as Giansiracusa noted, many established news outlets are also experimenting with using AI to lower the production costs of content — sometimes with undesirable outcomes. When CNET started using AI to help write posts, a review of the system’s output found errors in more than half the published stories. The pressure to use AI is increasing at a time when online news is facing a wave of layoffs and shutdowns.
You can read the full report from NewsGuard here.