Artificial intelligence is helping old video games look like new

Modders are taking advantage of AI tools to update old graphics

The recent AI boom has had all sorts of weird and wonderful side effects as amateur tinkerers find ways to repurpose research from universities and tech companies. But one of the more unexpected applications has been in the world of video game mods. Fans have discovered that machine learning is the perfect tool to improve the graphics of classic games.

The technique being used is known as “AI upscaling.” In essence, you feed an algorithm a low-resolution image, and, based on training data it’s seen, it spits out a version that looks the same but has more pixels in it. Upscaling, as a general technique, has been around for a long time, but the use of AI has drastically improved the speed and quality of results.

“It was like witchcraft,” says Daniel Trolie, a teacher and student from Norway who used AI to update the visuals of 2002 RPG classic The Elder Scrolls III: Morrowind. “[It] looked like I just downloaded a hi-res texture pack from [game developers] Bethesda themselves.”

Trolie is a moderator of the r/GameUpscale subreddit, one of several places, alongside specialist forums and chat apps like Discord, where fans share tips and tricks on how best to use these AI tools.

Browsing these forums, it’s apparent that the modding process is a lot like restoring old furniture or works of art. It’s a job for skilled craftspeople, requiring patience and knowledge. Not every game is a good fit for upscaling, and not every upscaling algorithm produces similar results. Modders have to pick the right tool for the job before putting in hundreds of hours of work to polish the final results. It’s a labor of love, not a quick fix.

Despite the work involved, it’s still much faster than previous methods: a single dedicated modder can restore a game’s graphics in a few weeks, rather than a team working for years. As a consequence, there’s been an explosion of new graphics for old games over the past six months or so.

The range of titles is impressive, including Doom, Half-Life 2, Metroid Prime 2, Final Fantasy VII, and Grand Theft Auto: Vice City. Even more recent fare like 2010’s Mass Effect 2 has received the AI-upscaling treatment. In each case, though, these are unsanctioned upgrades, meaning it takes a bit of extra know-how to install the new visuals.

Actually creating these AI graphics takes a lot of work, explains a modder who goes by the name hidfan. He tells The Verge that the updated Doom visuals he made took at least 200 hours of work to tweak the algorithm’s output and clean up the final images by hand.

In Doom, as with many video games, the majority of the visuals are stored as texture packs. These are images of rocks, metal, grass, and so on that are pasted onto the game’s 3D maps like wallpaper onto the walls of a house. Just as with wallpaper, these textures have to tessellate perfectly, or players can spot where one image starts and another begins.
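Upscaling can break that tiling, because the algorithm processes each texture in isolation and may alter its edges. As a rough illustration, here is a minimal sketch, assuming NumPy and Pillow, of the kind of edge check a modder might run on an upscaled texture; the filename and tolerance are hypothetical, not part of any modder’s actual toolchain.

```python
import numpy as np
from PIL import Image

def tiles_seamlessly(path: str, tolerance: float = 2.0) -> bool:
    """Return True if a texture's opposite edges match closely enough to tile."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    # Mean absolute color difference between the edges that will meet
    # when the texture is repeated across a wall or floor.
    horizontal_seam = np.abs(pixels[:, 0] - pixels[:, -1]).mean()
    vertical_seam = np.abs(pixels[0, :] - pixels[-1, :]).mean()
    return horizontal_seam <= tolerance and vertical_seam <= tolerance

print(tiles_seamlessly("upscaled_rock_texture.png"))
```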

Because AI upscaling algorithms tend to introduce a lot of noise into their output, says hidfan, a lot of manual editing is still required. The same is true for the visuals of characters and enemies. Hidfan says that cleaning up just a single monster takes between five and 15 hours, depending on how complex its animations are.

That’s something to remember when looking at these updates or any project that uses machine learning. Just because AI is involved, doesn’t mean human labor isn’t.

Updated Doom graphics created by hidfan. On the left is the original image; on the right is the AI-enhanced version.

But how does the process actually work? Albert Yang, CTO of Topaz Labs, a startup that offers a popular upscaling service used by many modders, says it’s pretty straightforward.

You start by taking a type of algorithm known as a generative adversarial network (GAN) and train it on millions of pairs of low-res and high-res images. “After it’s seen these millions of photos many, many times it starts to learn what a high resolution image looks like when it sees a low resolution image,” Yang tells The Verge.

One part of the algorithm tries to re-create this transition from low-res to high-res, while another part compares its output against real high-res images from the training data, rejecting the result whenever it can spot the difference. This feedback loop is how GANs improve over time.
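To make that loop concrete, here is a heavily simplified sketch in PyTorch. This is a toy illustration, not Topaz Labs’ actual model: the tiny networks, learning rates, and random stand-in data are all placeholders, and real super-resolution GANs such as ESRGAN are far deeper and combine this adversarial loss with pixel and perceptual losses.

```python
import torch
import torch.nn as nn

# Toy generator: 32x32 low-res in, 64x64 "high-res" out.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Toy discriminator: scores a 64x64 image as real (1) or upscaled (0).
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 1),  # assumes 64x64 inputs
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def training_step(low_res, high_res):
    real = torch.ones(high_res.size(0), 1)
    fake = torch.zeros(low_res.size(0), 1)

    # Discriminator: learn to spot the difference between real
    # high-res images and the generator's upscaled output.
    upscaled = generator(low_res)
    d_loss = (bce(discriminator(high_res), real) +
              bce(discriminator(upscaled.detach()), fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: adjust so its output gets scored as "real" -- the
    # feedback loop described above.
    g_loss = bce(discriminator(upscaled), real)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# One step on random stand-in data: eight 32x32 crops paired with
# their 64x64 originals. A real pipeline trains on millions of pairs.
print(training_step(torch.rand(8, 3, 32, 32), torch.rand(8, 3, 64, 64)))
```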

Using AI to upscale images is a relatively simple task, but it perfectly illustrates the core advantage of machine learning. While traditional algorithms rely on rules defined by humans, machine learning techniques create their own rules by learning from data.

A comparison of a traditional upscaling technique (“nearest neighbor”) and the AI-enhanced version (“ESRGAN”).
Image: kingdomakrillic.tumblr.com

In the case of upscaling algorithms, these rules are often pretty simple. If you want to upscale a 50 x 50-pixel image to double its size, for example, a traditional algorithm just inserts new pixels between the existing ones, choosing each new pixel’s color from an average of its neighbors. To give a very simplified example: if you have a red pixel on one side and a blue pixel on the other, the new pixel in the middle comes out purple.
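That toy example can be written out in a few lines of NumPy. This is a sketch of simple linear interpolation along one row of pixels, not any particular product’s implementation.

```python
import numpy as np

red = np.array([255, 0, 0], dtype=np.float32)
blue = np.array([0, 0, 255], dtype=np.float32)

def upscale_row_2x(row: np.ndarray) -> np.ndarray:
    """Double a row of pixels by inserting the average of each neighboring pair."""
    out = [row[0]]
    for left, right in zip(row[:-1], row[1:]):
        out.append((left + right) / 2)  # new pixel: average of its neighbors
        out.append(right)
    return np.stack(out)

# A red pixel next to a blue one gains a purple pixel in between.
print(upscale_row_2x(np.stack([red, blue])))
# [[255.    0.    0. ]
#  [127.5   0.  127.5]
#  [  0.    0.  255. ]]
```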

This sort of method is simple to code and execute, but it’s a one-size-fits-all approach that produces mixed results, says Yang.

The algorithms created by machine learning are much more dynamic by comparison. Topaz Labs’ Gigapixel upscaling doesn’t just look at neighboring pixels; it looks at whole sections of images at a time. That allows it to better re-create larger structures, like the outlines of buildings and furniture or the edges of a racetrack in Mario Kart.

“This larger perceptual field is the major reason [AI upscaling algorithms] perform so much better,” says Yang.
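That “perceptual field” is usually called the receptive field in the machine learning literature: each stacked convolution widens the patch of the input image that influences a single output pixel. A back-of-the-envelope calculation shows how quickly it grows; the layer counts here are illustrative, since Gigapixel’s actual architecture isn’t public.

```python
def receptive_field(num_layers: int, kernel_size: int = 3) -> int:
    """Receptive field of a stack of stride-1 convolutions."""
    return 1 + num_layers * (kernel_size - 1)

for layers in (1, 4, 16):
    size = receptive_field(layers)
    print(f"{layers:>2} conv layers -> each output pixel sees a {size}x{size} patch")
```

Compare that 33 x 33-pixel view after 16 layers with a traditional interpolation method, which only ever looks at a pixel’s immediate neighbors.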

Updating game graphics is more than just a technical challenge, though. It’s often about salvaging memories. Replaying the favorite video games of one’s youth can be a surprisingly bittersweet experience: the memories are intact, but the games themselves seem strangely ugly and raw. “Was I really impressed by those graphics?” you ask yourself, wondering if you’ve lost the capacity to enjoy such games altogether.

Take the Final Fantasy series, for example. These were titles I played extensively as a child. Just hearing songs from their soundtracks can transport me back to specific in-game moments and locations. But playing the games again as an adult is a weird experience. I usually don’t get too far when I try, despite the treasured place they hold in my memory. They just look bad.

Modder Stefan Rumen, who used AI upscaling to improve the graphics of Final Fantasy VII, explains that new display technology is as much to blame for this as outdated graphics.

“With the pixel/low polygon graphics of yesteryear, the old TV monitors helped gloss over many imperfections,” he says. “Your mind finished the job and filled in the gaps [but] modern displays show these old games in their un-filtered roughness.”

Luckily, these early games are also the perfect target for AI upscaling. In the case of the Final Fantasy series, that’s partly because of their extensive use of pre-rendered backgrounds, which means modders have to process fewer images. The visuals also occupy a “sweet spot” in terms of detail, says Rumen.

“They’re not as low-res as pixel art, meaning there’s more information for the machine learning to do its magic, but it’s not a too high resolution that an upscale wouldn’t be needed,” he says. The results speak for themselves.

Rumen says that Final Fantasy VII isn’t actually a game he played when he was young. (“I was a PC kid.”) But by updating the graphics, he’s making these classics accessible once more. They’ve convinced me, anyway. I’ve just downloaded Rumen’s AI graphics pack myself and am getting ready to play FFVII again.

Correction April 18th, 11:00AM ET: An earlier version of this article mentioned a SNES emulation mod as an example of AI upscaling. As far as we know, this mod does not use machine learning, and the mention has been removed from the article. We regret the error.