
Hacking nuclear systems is the ultimate cyber threat. Are we prepared?

Nightmare scenario

Illustrations by Alex Castro and William Joel



The nuclear plant employees stood in rain boots in a pool of water, sizing up the damage. Mopping up the floor would be straightforward, but cleaning up the digital mess would be far from it.

A hacker in an adjacent room had hijacked a simulated power plant, turning its industrial controls against it to flood the cooling system.

Officials from three different Swedish nuclear plants, brought in to defend against an array of cyberattacks, needed a couple of hours to disconnect the industrial computer (known as a programmable logic controller) running the system and coordinate its repair.
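What the defenders were fighting was, at bottom, a feedback loop doing exactly what it was told. The exercise’s actual controller logic hasn’t been published, and real PLCs are programmed in ladder logic or other IEC 61131-3 languages rather than Python, but a toy sketch along these lines shows how a tampered setpoint turns an ordinary cooling loop into a flooding mechanism. Every name and number below is invented for illustration:

# Toy model of a PLC-style control loop for a coolant tank.
# Purely illustrative: real PLCs run ladder logic or other IEC 61131-3
# languages, and all of these names and values are made up.

TANK_CAPACITY_L = 10_000.0

def valve_position(level_l: float, setpoint_l: float) -> float:
    """Proportional control: open the inlet valve when below the setpoint."""
    error = setpoint_l - level_l
    return max(0.0, min(1.0, error / 1_000.0))  # clamp to 0 (closed)..1 (open)

def run_loop(setpoint_l: float, steps: int = 50) -> None:
    level = 5_000.0  # starting coolant level, in liters
    for step in range(steps):
        valve = valve_position(level, setpoint_l)
        level += valve * 300.0  # inflow while the valve is open
        level -= 100.0          # steady outflow to the cooling circuit
        if level > TANK_CAPACITY_L:
            print(f"step {step}: tank overflows, water on the floor")
            return
    print(f"level settled safely at {level:.0f} L")

run_loop(setpoint_l=6_000.0)   # normal operation: settles below capacity
run_loop(setpoint_l=99_000.0)  # hijacked setpoint: the loop dutifully floods

Nothing in the loop is “broken”: the controller faithfully chases whatever setpoint it is given. An attacker who can write to it doesn’t need to defeat the physics, only the trust.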

Though the exercise was conducted in a simulated coal plant, not a nuclear one, the tactile nature of the demonstration — the act of donning rubber boots to fix the flooding — drove home the potential physical consequence of a cyberattack on critical infrastructure. “The next step for them is to go back home and train in their real environment,” Erik Biverot, a former lieutenant colonel in the Swedish army who planned the event, told The Verge.

The drill, which took place this past October at a research facility 110 miles southwest of Stockholm, was the most technically sophisticated cyber exercise in which the UN’s nuclear watchdog — the International Atomic Energy Agency (IAEA) — has participated.

“The next step for them is to go back home and train in their real environment.”

Security experts say more of these hands-on demonstrations are needed to get an industry traditionally focused on physical protection to think more creatively about growing cyber threats. The extent to which their advice is heeded will determine how prepared nuclear facilities are for the next attack.

“Unless we start to think more creatively, more inclusively, and have cross-functional thinking going into this, we’re going to stay with a very old-fashioned [security] model which I think is potentially vulnerable,” said Roger Howsley, executive director of the World Institute for Nuclear Security (WINS).

The stakes are high for this multibillion-dollar sector: a cyberattack combined with a physical one could, in theory, lead to the release of radiation or the theft of fissile material. However remote the possibility, the nuclear industry doesn’t have the luxury of banking on probabilities. And even a minor attack on a plant’s IT systems could further erode public confidence in nuclear power. It is this cruelly small room for error that motivates some in the industry to imagine what, until fairly recently, was unimaginable.


The Nuclear Threat Initiative, a Washington-based nonprofit co-founded by Ted Turner, has tallied about two dozen cyber incidents since 1990, at least 11 of which were malicious. Those include a December 2014 attack in which suspected North Korean hackers stole blueprints for South Korean nuclear reactors and estimates of radiation exposure to local residents. The affected power company, which provides 30 percent of the country’s electricity, responded by carrying out cyber drills at plants around the country.

In another attack, hackers posing as a Japanese university student sent malicious emails to researchers at the University of Toyama Hydrogen Isotope Research Center, one of the world’s top research sites for tritium, the radioactive isotope used in hydrogen bombs. From November 2015 to June 2016, the hackers stole over 59,000 files, according to media reports, including research related to the ill-fated Fukushima nuclear plant.

Any list of cyber incidents in the nuclear sector, however, is very likely incomplete. The US Nuclear Regulatory Commission, for example, requires operators to report only those cyber incidents that affect a plant’s safety, security functions, or emergency preparedness, a rule that excludes potentially significant attacks on IT systems. It is, in general, extremely difficult for a hacker to breach the tightly guarded control systems covered by that reporting requirement, but far less challenging to penetrate a plant’s non-critical IT networks.

From November 2015 to June 2016, the hackers stole over 59,000 files

“We are absolutely undercounting [the number of non-safety-related incidents] and we’re not looking so we can’t pretend that our count is accurate,” said Robert M. Lee, a former Air Force cyber officer and founder of Dragos, a firm specializing in industrial control systems (ICS) cybersecurity. By probing their networks for more of these lower-level threats, nuclear operators can bolster their security, he added.

Regulatory requirements have strengthened US nuclear plants’ cybersecurity, and most plants were built decades ago on analog systems that are shielded from direct internet-based attacks. But the growing digitization of the industry is opening up new potential vectors for hackers.

One of the first known cyber incidents at a nuclear plant took place in 1992 when rogue programmer Oleg Savchuk deliberately infected the computer system of a plant in Lithuania with a virus. Savchuk was arrested and became a cautionary footnote in the history of nuclear security. It would take a set of much more seismic events to illuminate the danger of cyber threats to nuclear operators.

In March 2007, with US energy regulators looking on, engineers at the Idaho National Lab showed how 21 lines of computer code could cripple a huge generator, as journalist Kim Zetter recounts in her book Countdown to Zero Day. It was only through this jaw-dropping experiment, known as Aurora, that some energy industry officials came to accept that digital tools are capable of physical destruction.
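Idaho National Lab has never released the Aurora test code, but the mechanism is public: the attack rapidly opened and reclosed a generator’s circuit breaker so the machine reconnected out of phase with the grid, and each out-of-sync reclosure hammered it with torque. A back-of-envelope sketch, using an assumed frequency drift chosen purely for illustration, shows why a fraction of a second is enough:

import math

# Back-of-envelope model of the Aurora effect. The real test code has
# never been released; the 0.5 Hz drift below is an assumption for
# illustration. A generator cut loose from a 60 Hz grid drifts in
# frequency, and the torque transient on reclosing grows roughly with
# sin(phase difference).

DRIFT_HZ = 0.5  # assumed slip between generator and grid while open

for ms in (50, 100, 250, 500):
    seconds = ms / 1000.0
    # Phase slip accumulated while the breaker is open, in degrees.
    slip_deg = (360.0 * DRIFT_HZ * seconds) % 360.0
    # Relative severity of the reclosing transient (1.0 = worst case).
    shock = abs(math.sin(math.radians(slip_deg)))
    print(f"breaker open {ms:3d} ms -> {slip_deg:5.1f} deg out of phase, "
          f"shock ~{shock:.2f} of worst case")

At those assumed numbers, half a second of drift already puts the machine 90 degrees out of phase, the worst case, and analyses of the test have noted that an open-close cycle that quick can slip inside the delay window of conventional protective relays.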

Before Aurora, “there were many people who simply denied the concept that any kind of physical damage could be caused or triggered by a cyber event,” Marty Edwards, an ICS expert who helped design the experiment, told The Verge. Two years later, the destructive potential shown in Aurora became a reality: in 2009, the famed Stuxnet attack planted a formidable computer worm inside Iran’s Natanz enrichment facility, destroying about 1,000 centrifuges. The United States and Israel are suspected of being behind the attack, which used portable media to deliver malware to “air gapped” systems, or those with no direct or indirect connections to the internet. In doing so, the attackers refuted the notion that such systems were immune to hacking.

Attackers managed the improbable feat of breaching and manipulating a nuclear facility’s heavily protected industrial controls

Stuxnet’s creators used four “zero-days,” or previously unknown software exploits, whereas most big cyberattacks use at most one. The attackers managed the improbable feat of breaching and manipulating a nuclear facility’s heavily protected industrial controls. In the process, they changed the cybersecurity conversation in the nuclear industry, prompting new regulations and more investment in defenses.

As instructive as Stuxnet was, nuclear officials can only learn so much from one attack, and because successful attacks are rare, there is a small pool of data from which to learn. For some, the answer is to create their own attacks in a controlled environment.

The exercise conducted this past October took advantage of the high-tech environment provided by Sweden’s Defense Research Agency. Officials from the IAEA and at least 20 of its member countries, including the US and China, watched on TV screens as offensive and defensive cyber teams did battle. The defenders grappled with everything from straightforward denial-of-service attacks to the more insidious scenario of a contractor’s laptop exposing a facility to malware.

In one instance, the organizers used an actual Siemens programmable logic controller. In another, they modeled an attack on the 2015 hack of the Ukrainian power grid, one of the biggest energy-sector attacks since Stuxnet.

The Swedes meticulously documented what amounted to a scientific experiment. Audio and video captured participants’ every move and may later be analyzed by a research team. The biggest early takeaway from the experiment, however, was decidedly low-tech: participants had to trust each other to navigate a stressful environment.

The IT specialists who participated normally handle cyber incidents individually rather than as a team, according to Biverot. For each participant, knowing that “I can give this guy a call if I’m in trouble” would be invaluable during a security incident, he told The Verge.

“It’s very important to understand the link between what’s happening in cyberspace and what’s happening in real life.”

Security experts say there is no substitute for putting an organization’s cyber teams under the gun in an intense, credible scenario. “It’s very important to understand the link between what’s happening in cyberspace and what’s happening in real life,” said Dennis Granåsen, a senior scientist at the Defense Research Agency. “If you don’t do that, it’s very easy to just think of these exercises as a game where you need to perform and get a good score and that’s it.”

The less that exercises seem like a game to participants, the better prepared they’ll be for the real thing. The challenge, however, is that exercises as technically rigorous as the Swedish one have not been the norm across the global nuclear sector. They can be expensive, take many months to plan, and may require bringing in outside cyber expertise to drill plant personnel. Exercise programs are maturing and incorporating more red-teaming, but experts say more work is needed.

Without outside help, many operators will struggle to keep pace with cyber threats, according to Roger Brunt, a former top official at the UK’s Office for Nuclear Regulation. For that reason, Britain’s larger nuclear operators have recently begun hiring security firms to probe their computer networks for vulnerabilities, he said.


While safety and security are paramount at nuclear plants, business considerations also come into play: many plants, including the vast majority of the 61 in the US, are privately owned. The financial and reputational damage that a successful cyberattack could wreak has led some executives to rehearse such attacks in advance.

Two weeks before the Swedish exercise, a group of lawyers, insurers, and nuclear executives huddled in central London to consider an alarming scenario: malware had hit a workstation at a nuclear plant, triggering a shutdown of the reactor and a power cut for nearby residents during a dangerous heat wave.

Whereas the Swedish drill was geeks and computer code, the London one was lawyers and the lofty words of judges and defendants.

A fictional power company was on mock trial for decisions its executives had taken in the lead-up to the made-up incident. They had failed to ensure that software at the plant was updated and that employees were trained in security. Despite an eloquent defense from the executives, the judges found the company criminally and civilly liable for the $1.7 billion in economic and other damages caused by the power cut, and for the 10 people who died in the heat wave.

Howsley said he was surprised at the criminal verdict, having assumed the bar for finding security practices criminally negligent would be higher. But that may be where legal norms are headed, given that companies like Uber and Anthem have been sued for allegedly shoddy cybersecurity regimes.

“Accountability is going to drive better behavior.”

Among nuclear executives, “accountability is going to drive better behavior” on cybersecurity, said Kathryn Rauhut, a lawyer and nonresident fellow at the Stimson Center, which hosted the exercise.

Rauhut said that when drawing up the exercise, she considered several scenarios that might spur strong interest from nuclear executives. Nothing resonates like the threat of a civil or criminal lawsuit for bad security practices. “The CEOs said, ‘Whoa, this is huge. I didn’t know I was liable,’” she told The Verge.

Howsley, a 35-year veteran of the nuclear industry, has seen the industry adapt its safety standards after the 1986 Chernobyl disaster, its security standards after the September 11th attacks, and its cybersecurity standards after Stuxnet. The guessing game of where the next threat might come from can be maddening.

“Someone once said to me, ‘The future is actuarial, history is forensic,’” said Howsley, a cerebral Englishman with a PhD in botany. “If something awful happens at 3 o’clock this afternoon, people will look back and say, ‘How did we allow this to happen?’ But we forget all the things that we worried about and didn’t happen.”


As training in the lab and boardroom continues, hackers in the real world are sharpening their skills. The years since Stuxnet have seen an uptick in advanced hacking operations targeting energy infrastructure. The Ukrainian power grid has been a playground for hackers, some of whom analysts have traced to Russia.  

A year after the December 2015 attack, which cut power for 225,000 people, the Ukrainian grid was hit again in what Dragos says was an even more sophisticated operation. “Adversaries are getting smarter, they are growing in their ability to learn industrial processes and codify and scale that knowledge, and defenders must also adapt,” states the firm’s analysis of the attack.

Just last week, industrial automation giant Schneider Electric acknowledged that hackers had exploited a flaw in its Triconex safety system software at an industrial plant, causing the plant to shut down. The company has declined to identify the plant. Triconex systems are used at a variety of facilities, including oil, gas, and nuclear plants.

This changing digital landscape is prompting governments and energy companies to get more ambitious in how they drill for attacks. The goal is tighter communication and unalloyed trust between the government and operators of critical infrastructure, the vast majority of which is privately owned in the US.

“Adversaries are getting smarter.”

In the event of a serious cyberattack, nuclear operators would need to have agencies on speed dial to mitigate the damage. In the waning days of the Obama administration, US and British officials tested these lines of communication in an unprecedented exercise they called Ionic Shield.

On a conference call in November 2016, officials at the White House and Downing Street watched as a piece of malware hit the administrative networks of hypothetical nuclear plants in the US and Britain. Participants tested how well they could pass the word of a spreading attack through the chain of command and take corrective action. Communication between the two governments and between government and industry went well, according to Caitlin Durkovich, a former official for the Department of Homeland Security (DHS).

However, Durkovich told The Verge, “I think we walked away with the sense we need to improve how the industry here [in the US] is communicating with the industry there [in Britain], especially as it relates to sharing threat information.”


In June 2017, DHS officials warned the energy industry that hackers had targeted the computer network of the Wolf Creek nuclear facility in Kansas. The threat was limited and did not involve safety or other critical systems, security experts told The Verge, but it served as a reminder that nuclear facilities are still very much in hackers’ crosshairs.

“The threat is not going to go away,” Howsley said. “It will get more subtle.”

Some hackers play the long game, lingering on peripheral networks for months in the hope of gaining a foothold in more critical systems. For network defenders, maintaining urgency in the absence of regular, successful attacks can be difficult. The shock value of events like Aurora and Stuxnet lasts only so long before those who study them fall back into their routines. Rigorous exercises based on unnerving scenarios are critical to keeping engineers and cyber specialists on their toes.

As Phyllis Schneck, a former DHS official, put it, “You need to keep showing people a good cyber explosion every once in a while.”

Correction: A previous version of this story indicated that the Stuxnet attack was propagated through a USB drive. Though the virus was indeed propagated through portable media, it is unconfirmed whether the media used to breach the “air-gapped” Iranian control system was a USB drive, a CD, a laptop, or a similar device.

Reporting for this story was supported by a grant from The Pulitzer Center on Crisis Reporting.