Deepfake satellite imagery poses a not-so-distant threat, warn geographers

Satellite imagery can be seen as inherently trustworthy because of the expense of creating it.
Image: DigitalGlobe via Getty Images

When we think of deepfakes, we tend to imagine AI-generated people. This might be lighthearted, like a deepfake Tom Cruise, or malicious, like nonconsensual pornography. What we don’t imagine is deepfake geography: AI-generated images of cityscapes and countryside. But that’s exactly what some researchers are worried about.

Specifically, geographers are concerned about the spread of fake, AI-generated satellite imagery. Such pictures could mislead in a variety of ways. They could be used to create hoaxes about wildfires or floods, or to discredit stories based on real satellite imagery. (Think about reports on China’s Uyghur detention camps that gained credence from satellite evidence. As geographic deepfakes become widespread, the Chinese government can claim those images are fake, too.) Deepfake geography might even be a national security issue, as geopolitical adversaries use fake satellite imagery to mislead foes.

The US military warned about this very prospect in 2019. Todd Myers, an analyst at the National Geospatial-Intelligence Agency, imagined a scenario in which military planning software is fooled by fake data that shows a bridge in an incorrect location. “So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” said Myers.

The first step to tackling these issues is to make people aware there’s a problem in the first place, says Bo Zhao, an assistant professor of geography at the University of Washington. Zhao and his colleagues recently published a paper on the subject of “deep fake geography,” which includes their own experiments generating and detecting this imagery.

Bo Zhao and his colleagues at the University of Washington were able to create their own AI-generated satellite imagery (above).

The aim, Zhao tells The Verge over email, “is to demystify the function of absolute reliability of satellite images and to raise public awareness of the potential influence of deep fake geography.” He says that although deepfakes are widely discussed in other fields, his paper is likely the first to touch upon the topic in geography.

“While many GIS [geographic information system] practitioners have been celebrating the technical merits of deep learning and other types of AI for geographical problem solving, few have publicly recognized or criticized the potential threats of deep fake to the field of geography or beyond,” write the authors.

Far from presenting deepfakes as a novel challenge, Zhao and his colleagues locate the technology in a long history of fake geography that dates back millennia. Humans have been lying with maps for pretty much as long as maps have existed, they say, from mythological geographies devised by ancient civilizations like the Babylonians, to modern propaganda maps distributed during wartime “to shake the enemy’s morale.”

One particularly curious example comes from so-called “paper towns” and “trap streets.” These are fake settlements and roads inserted by cartographers into maps in order to catch rivals stealing their work. If anyone produces a map which includes your very own Fakesville, Ohio, you know — and can prove — that they’re copying your cartography.

“It is a centuries-old phenomenon,” says Zhao of fake geography, though new technology produces new challenges. “It is novel partially because the deepfaked satellite images are so uncannily realistic. The untrained eyes would easily consider they are authentic.”

It’s certainly easier to produce fake satellite imagery than fake videos of humans. Lower resolutions can be just as convincing, and satellite imagery as a medium is inherently believable. This may be due to what we know about the expense and origin of these pictures, says Zhao. “Since most satellite images are generated by professionals or governments, the public would usually prefer to believe they are authentic.”

As part of their study, Zhao and his colleagues created software to generate deepfake satellite images, using the same basic AI method (a technique known as generative adversarial networks, or GANs) used in well-known programs like ThisPersonDoesNotExist.com. They then created detection software that was able to spot the fakes based on characteristics like texture, contrast, and color. But as experts have warned for years regarding deepfakes of people, any detection tool needs constant updates to keep up with improvements in deepfake generation.
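The kind of detection the researchers describe can be illustrated with a simple feature-based sketch. The code below is my own toy illustration, not the authors' software: it computes per-channel color means and a crude contrast measure (luminance standard deviation) in pure Python, the sort of low-level statistics a classifier could be trained on to separate real from generated imagery.

```python
import math

def image_features(pixels):
    """Compute simple color and contrast features from an image given as
    a list of rows of (r, g, b) tuples. These are hypothetical stand-ins
    for the texture/contrast/color cues a deepfake detector might use."""
    flat = [px for row in pixels for px in row]
    n = len(flat)
    # Per-channel color means.
    means = tuple(sum(px[c] for px in flat) / n for c in range(3))
    # "Contrast": standard deviation of per-pixel luminance.
    lum = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in flat]
    mean_l = sum(lum) / n
    contrast = math.sqrt(sum((v - mean_l) ** 2 for v in lum) / n)
    return means, contrast

# Two tiny synthetic "images": a flat grey patch and a checkerboard.
flat_img = [[(128, 128, 128)] * 4 for _ in range(4)]
checker = [[(255, 255, 255) if (x + y) % 2 else (0, 0, 0)
            for x in range(4)] for y in range(4)]

_, flat_contrast = image_features(flat_img)
_, checker_contrast = image_features(checker)
```

A real detector would feed many such features (plus learned ones) into a trained classifier; the point of the sketch is only that fakes can betray themselves through measurable statistical fingerprints.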

For Zhao, though, the most important thing is to raise awareness so geographers aren’t caught off-guard. As he and his colleagues write: “If we continue being unaware of and unprepared for deep fake, we run the risk of entering a ‘fake geography’ dystopia.”

Comments

Any image can be edited. Duh. Welcome to 1987.

What an astonishing insight… please tell me more, duh

He’s not completely wrong though. You can just whip out Photoshop and create some good-looking fake satellite images with relatively low effort if you wanted to – completely AI free. So all the listed "dangers" aren’t new and exclusive to AI-generated fakes, and it’s actually not half as scary as this article makes it out to be. What’s the actual worry here that a bad actor with an AI can do that he cannot already do with more traditional tools?

I get the idea it would be trivial to scale using non-AI methods, since most fakes use traditional algos on public data, much like how AI trains on existing public data to form the model.

The article points precisely to the advantage AI-faked sat data has over traditional tools: it is much harder to spot the fake.

*Welcome to 1846.

We need a digital signature straight from the camera’s image sensor or the satellite.

I like this. Something like a hash of the image’s contents that’s signed with the satellite’s private key and can be verified with the satellite’s public key.

Celebrities and photographers should probably start doing something similar to verify something is authentically theirs.

It would greatly increase the value of celebrity nude leaks if there was cryptographic proof they are genuine!

This is effectively what NFTs are for.

While they’re currently being abused to certify original, technically unique digital art (which doesn’t prevent the creator from actually duplicating it and assigning a new NFT), the obvious use case is validating authenticity.

  1. There is no monopoly on earth observation. Any attempted fake could quickly be disproven by any other imaging provider, some of which have commercially available daily updating global maps.
  2. I highly doubt military and intelligence services would fall for someone else’s fake maps, having their own satellites to provide them with data.
  3. China claiming something is fake… are you going to believe the CCP or imagery from multiple independent sources?

If you’re in China you’ll believe the CCP. And if you’re distrustful of the U.S. you’ll probably believe the CCP too.

Despite living in an era of unprecedented information, we still have climate change deniers, holocaust deniers, Moon landing deniers, flat earthers, and QAnon, etc. Never underestimate the power of ignorance or confirmation bias.

By that metric they wouldn’t need fancy deepfaked satellite imagery at all to tell whichever lies they want to.
What I was getting at is that this alleged new threat doesn’t change anything or introduce any new significant challenges. Nobody is suddenly going to start believing the CCP more if they now have their own maps that show idyllic pastures where their concentration camps should be.

Regarding your second point, you can thank the revocation of the fairness doctrine for that as much as the internet. Both have made all these (mis-)information bubbles, i.e. rightwing propaganda cable TV and talk radio, and social media echo chambers possible in the first place, where actual facts are treated as mere opposing opinions.

This was actually one of the things people were talking about when it came to deepfakes for political scandals: it costs way less to produce rumors, and those rumors are just as effective at influencing people.

Who needs deep fakes when you can just claim the existence of WMDs?

This plus GPS spoofing.

We effectively already accept "faked" satellite imagery from Google and others where government facilities are frequently removed from published images. That being said, there are certain levels of security access which allow viewing of the original images through Google and other services.

I find it hard to believe that faking a static satellite image is going to be all that effective if the user sourcing it is always looking for the latest data, like pulling up Google Earth: it will have the latest images, which, unless altered from within Google, are accurate.

It seems this would be more of an issue if someone decided to replace historical satellite images with fakes, and even then a lot of mistakes would be required for it to have the desired malicious effect.

This seems like a non-issue.

More likely, this would be used as information warfare or disinformation between governments.

In order to detect such deepfake images, Mayachitra Inc has technologies to handle them.

Our demo web apps measure the integrity of a given image:
