
A college student used GPT-3 to write fake blog posts and ended up at the top of Hacker News


He says he wanted to prove the AI could pass as a human writer


Illustration by Alex Castro / The Verge

College student Liam Porr used the language-generating AI tool GPT-3 to produce a fake blog post that recently landed in the No. 1 spot on Hacker News, MIT Technology Review reported. Porr was trying to demonstrate that the content produced by GPT-3 could fool people into believing it was written by a human. And, he told MIT Technology Review, “it was super easy, actually, which was the scary part.”

So to set the stage in case you’re not familiar with GPT-3: It’s the latest version of a series of AI autocomplete tools designed by San Francisco-based OpenAI, and has been in development for several years. At its most basic, GPT-3 (which stands for “generative pre-trained transformer”) auto-completes your text based on prompts from a human writer.

My colleague James Vincent explains how it works:

Like all deep learning systems, GPT-3 looks for patterns in data. To simplify things, the program has been trained on a huge corpus of text that it’s mined for statistical regularities. These regularities are unknown to humans, but they’re stored as billions of weighted connections between the different nodes in GPT-3’s neural network. Importantly, there’s no human input involved in this process: the program looks and finds patterns without any guidance, which it then uses to complete text prompts. If you input the word “fire” into GPT-3, the program knows, based on the weights in its network, that the words “truck” and “alarm” are much more likely to follow than “lucid” or “elvish.” So far, so simple.
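That "fire" example can be sketched in a few lines. This is a toy illustration, not the real GPT-3: the weights below are made up to show the idea that a model scores candidate next words and samples in proportion to learned probabilities.

```python
import random

# Made-up weights for illustration only; in GPT-3 these would be
# implicit in billions of learned connection weights.
NEXT_WORD_WEIGHTS = {
    "fire": {"truck": 0.45, "alarm": 0.40, "lucid": 0.10, "elvish": 0.05},
}

def complete(prompt: str, rng: random.Random) -> str:
    """Pick the next word for `prompt` in proportion to its weights."""
    weights = NEXT_WORD_WEIGHTS[prompt]
    words = list(weights)
    return rng.choices(words, weights=[weights[w] for w in words], k=1)[0]

rng = random.Random(0)
samples = [complete("fire", rng) for _ in range(1000)]
# "truck" and "alarm" dominate; "lucid" and "elvish" show up rarely.
```

Sampling many completions makes the statistical regularity visible: the common continuations win most of the time, but the unlikely ones still occasionally appear.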

Here’s a sample from Porr’s blog post (with a pseudonymous author), titled “Feeling unproductive? Maybe you should stop overthinking.”

Definition #2: Over-Thinking (OT) is the act of trying to come up with ideas that have already been thought through by someone else. OT usually results in ideas that are impractical, impossible, or even stupid.

Yes, I would also like to think I would be able to suss out that this was not written by a human, but there’s a lot of not-great writing on these here internets, so I guess it’s possible that this could pass as “content marketing” or some other content.

OpenAI decided to give access to GPT-3’s API to researchers in a private beta, rather than releasing it into the wild at first. Porr, a computer science student at the University of California, Berkeley, found a PhD student who already had access to the API and agreed to work with him on the experiment. Porr wrote a script that gave GPT-3 a blog post headline and intro. It generated a few versions of the post, and Porr chose one for the blog, copy-pasting GPT-3’s output with very little editing.
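Porr hasn’t published his script, but the workflow he describes is simple enough to sketch. In this hypothetical version, `call_gpt3` is a stand-in for the real (private-beta) GPT-3 API, which this code does not actually call; the function and its outputs are invented for illustration.

```python
def call_gpt3(prompt: str, n: int = 3) -> list[str]:
    """Stand-in for the GPT-3 API: a real script would send `prompt`
    to the API and receive `n` generated continuations back."""
    return [f"{prompt} ... generated draft {i + 1}" for i in range(n)]

def draft_post(headline: str, intro: str) -> str:
    """Feed the model a headline and intro, get a few versions back,
    and pick one to publish (here, simply the first)."""
    prompt = f"{headline}\n\n{intro}"
    versions = call_gpt3(prompt)
    return versions[0]

post = draft_post(
    "Feeling unproductive? Maybe you should stop overthinking.",
    "We all feel unproductive sometimes.",
)
```

The human's job in this loop is only to supply the headline and intro and to curate among the generated drafts, which is what made the result cheap to produce.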

The post went viral within a few hours, Porr said, and the blog drew more than 26,000 visitors. He wrote that only one person reached out to ask if the post was AI-generated, although several commenters did guess GPT-3 was the author. But, Porr says, the community downvoted those comments.


He suggests that GPT-3 “writing” could replace content producers (ha ha, these are the jokes, people; of course that could not happen, I hope). “The whole point of releasing this in private beta is so the community can show OpenAI new use cases that they should either encourage or look out for,” Porr writes. It’s also notable that he still doesn’t have access to the GPT-3 API even though he’s applied for it, admitting to MIT Technology Review, “It’s possible that they’re upset that I did this.”