Media execs haven’t learned a thing from these AI tests

Media bosses keep promising AI will benefit journalists. But who’s actually using the tools — and to what end?

Image: The Verge

Over the last eight months, disparate segments of the public have clamored to integrate generative AI software like OpenAI’s ChatGPT into their daily lives — and especially into their work.

Everyone from doctors and online marketers to students and tennis announcers is experimenting with bringing AI tools into the fold. Aspiring millionaire spammers are using chatbots to speed up their junk generation, while artists are using AI art tools like Midjourney to beat out human competition. At least one lazy lawyer tried — and failed — to cut down on the research they needed to do. The promise of maximizing output and saving time is driving much of the “experimentation.”

News outlets are among the institutions that have latched onto this vision of AI-assisted scale and speed. For years, AI tools have been used for things like corporate earnings reports and short sports stories — formulaic dispatches that deliver the bare minimum. But now that powerful large language models are widely available, news publishers want more from them, and they’re twisting themselves into a pretzel to justify deploying AI tools with little process or oversight. The result has been a slew of pivots that undermine their core mission of providing accurate and expert information.

Executives at news outlets have used similar language to try to explain why generative AI tools are needed in the newsroom. At the heart of their reasoning is the implication that they have a duty to learn how they can use AI-generated writing — that because the outlet covers technology, it must also use AI systems in its own publishing process.

Here’s G/O Media editorial director Merrill Brown in an internal email to editorial staff after an error-ridden AI article was published on Gizmodo last week:

“We are both a leading technology company and an editorial organization that covers technology in world class fashion across multiple sites. So it is utterly appropriate — and in fact our responsibility — to do all we can to develop AI initiatives relatively early in the evolution of the technology.”

And here’s former CNET editor-in-chief Connie Guglielmo in a public memo to readers earlier this year after the discovery of AI-generated stories containing a litany of errors:

“There’s still a lot more that media companies, publishers and content creators need to discover, learn and understand about automated storytelling tools, and we’ll be at the front of this work.

In the meantime, expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we’re known for. The process may not always be easy or pretty, but we’re going to continue embracing it – and any new tech that we believe makes life better.”

Both statements promise that generative AI is being tested to make journalists’ work quicker and easier. Guglielmo, for example, said the test was designed to “see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective.” In fact, CNET news reporters and product reviewers were some of the last to know what was happening at the Red Ventures-owned outlet. The only CNET staff members who got to use the AI tool were those on the CNET Money team, a siloed group of employees who primarily produce personal finance explainers that drive traffic via Google search.

The use case for AI tools has been to fill the internet with lower-quality versions of content that already exists

Likewise, after G/O Media published one of its first AI-generated stories last week — an inaccurate list of Star Wars movies and TV shows — it became clear that editorial staff was not in the driver’s seat. James Whitbrook, an editor of the section under which the list appeared, tweeted that he didn’t even know of the article’s existence until 10 minutes before it went live. Other G/O Media employees I spoke with say the same: editorial staff had nothing to do with the rollout of technology that’s supposed to help them do their jobs. Some did not even realize AI-generated stories had been published on the same sites where their bylines appear.

Both Guglielmo and Brown say that it’s our job as tech reporters to experiment with generative AI software in our work and that learning how to effectively use these tools will bolster the journalism that readers want. Yet the way AI tools have been applied suggests the opposite. At G/O Media-owned website The Inventory, dozens of articles bearing the byline “The Inventory Bot” have been published this week, some with odd text formatting and prose that sounds like an ad, not a human recommendation. The BuzzFeed bot has been used to churn out repetitive SEO-bait travel guides after CEO Jonah Peretti said the company would “lead the future of AI-powered content and maximize the creativity of our writers, producers, and creators and our business.” The first use cases for these powerful AI tools have so far been to fill the internet with lower-quality versions of content that already exists.

It’s not surprising that executives’ plan for generative AI is to try to do more with less — the financial underpinnings of digital media mean that spending less time producing stories is good for business, even if it’s terrible for your reputation. Reducing the time it takes to produce stories, explainers, and product roundups means each click comes at a lower cost. AI-generated articles don’t have to be good, or even accurate, to be filled with ads and rank in Google search. This is why the AI “experiments” are happening in public — careful, accurate material comes second to monetizable content. If media outlets truly wanted to learn about the power of AI in newsrooms, they could test tools internally with journalists before publishing. Instead, they’re skipping straight to the potential for profit.

One way journalists have tried to wrest control of AI tools in their workplaces is through unions. In May, CNET staff announced they were forming a union in part to have a voice in how AI tools would be used. Earlier this week, the Writers Guild of America, East issued a statement demanding an end to AI-generated stories on G/O Media sites. (Disclosure: The Verge’s editorial team is also unionized with the Writers Guild of America, East.)

The initial damage has already been done

But in both cases, the initial damage has already been done. The sloppy deployment of tools, a lack of oversight leading to embarrassing errors, and audiences’ mounting distrust are adding up. It doesn’t make generative AI seem useful for publishers — it makes it look like a liability.

That’s part of the problem, too: the technology is legitimately impressive. There’s a way to experiment thoughtfully, and I’m open to the idea that generative AI tools could expand what journalists are capable of or help artists in their creative process. But that is not the example media executives are setting. They have an amazing technology at their disposal, and all they can come up with is something cheaper, nakedly desperate, and simply more boring. It’s what science fiction author Ted Chiang, in an essay in The New Yorker, described as “sharpening the knife blade of capitalism.” In other words: more of the same in an industry that doesn’t know what to do with itself.