A look inside the Department of Homeland Security's cyberhub
The building where the Department of Homeland Security tracks every cyber attack against the US is surprisingly bland. With its neutral exterior and circular drive, I wasn’t even sure we were at the right place until I saw our press liaison standing in the lobby. There are no signs to distinguish it from the generic office park that surrounds it, and the doorman wouldn’t even confirm if DHS had an office inside.

The National Cybersecurity and Communications Integration Center, better known by its abbreviation, NCCIC, opened in 2009 as a place where DHS could monitor cyber threats across government agencies and critical infrastructure, such as power grids and dams. If an attacker ends up on the Department of Agriculture’s network, or a government employee surfs to a malicious website, the NCCIC is supposed to detect it. Until recently, the government relied on its own information gathering, as well as partnerships with outside companies, to monitor its networks and stay ahead of digital threats. But now, DHS is restructuring its work because of a law passed this past December as part of a huge omnibus bill: the Cybersecurity Information Sharing Act, or CISA. The legislation focuses the agency on building a more comprehensive cyber threat detection system, one fed by information that companies share with the government.

After the lobby and a short elevator ride, a bare, fluorescently lit hall led to an office door with a sign advertising weekly office doughnuts. Phyllis Schneck, the deputy under secretary for cybersecurity and communications at DHS, greeted us and took us through to the main NCCIC floor.

The NCCIC looks like the movie version of NASA mission control: long tables with computers and screens. Employees include ex-military personnel and IT professionals. They speak to each other as they work, just like normal coworkers, except they’re coordinating the nation’s cybersecurity defenses. The workstations face four massive screens with mini displays inside them, showing different measurements, like the number of attacks being launched against unsecured critical infrastructure hubs and the network status of every agency. If suspicious traffic shows up on one of the four big screens, a person responds and loops in others, Schneck explained. Alerts are signaled by a change in a screen’s color: green means everything is all clear; red means things aren’t looking so good. When we arrived, the entire center had been switched into "declassified mode," so we didn’t observe any real-time cyber threats.

DHS doesn’t only monitor US government agencies; it also assists in investigations abroad. Following the hacking of a power grid in Ukraine, for example, DHS sent employees to the country to investigate, and the agency later issued a report on the incident.

DHS doesn’t just watch for human threats. It also monitors natural phenomena, like sunspots, fires, floods, and typhoons. The agency tracks the sun because, as Schneck explained, a catastrophic solar event could affect satellite communications; computers, she said, are essentially big magnets that respond to radiation.

As CISA gets going, the agency will start integrating private companies’ cyber experiences into its malware prevention system. Though DHS says this will better protect the US government and its entities from attacks, not everyone agrees. Privacy advocates worry about companies sharing personal data with the government, while others have concerns about disrupting corporate workflows. Consumers also might not feel comfortable with the idea that their data could be given to DHS without their permission.

Schneck says information sharing could greatly improve America’s cybersecurity defenses if done correctly. For instance, if a company realizes its employees are being spammed with malicious phishing emails, it could send information about the sender and the emails’ contents to DHS, which would alert cybersecurity bureaus around the world to the threat’s existence and, hopefully, thwart the attackers. Jeh Johnson, the secretary of homeland security, has equated information sharing to the "see something, say something" of cybersecurity.

The government has pointed to its biggest cybersecurity failure as the primary evidence of CISA’s merits: the data breaches at the Office of Personnel Management. Investigators discovered that the same group that hacked health insurer Anthem last year and stole information on millions of Americans also compromised OPM. If Anthem had shared information about the attacks with the government, like IP addresses and the attackers’ malicious links, OPM officials could have monitored their own network for the same indicators, potentially protecting OPM from the attacks, or at least detecting them sooner. (Attackers were inside OPM’s network for months.) This, government officials said, is exactly where CISA would have come into play. In the weeks and months following the OPM disaster, lawmakers began pushing hard for cyber information sharing.

DHS heralded CISA as the legislative solution the country needed. The bill encouraged companies to voluntarily share information about malicious cyber encounters by offering them immunity from related lawsuits, so they could share users’ data without worrying about being sued for privacy infringements.

But while this might entice companies to participate, privacy advocates say it leaves consumers in the dark. Only a small amount of data is needed to thwart an attack, Mark Jaycox, civil liberties legislative lead for the Electronic Frontier Foundation, said in an interview with The Verge. Oversharing could easily happen and remain undisclosed to the public. Beyond that, the list of companies sharing information is secret, too.

"We don’t know what companies have joined or asked to join [the information sharing]," Jaycox said. "We’re now approaching another side of this bill and why it was terrible — the transparency aspects."No consumers would feel comfortable using a service if they know their information, including personal details,  is going directly into DHS’ threat intelligence arsenal, Katie Moussouris, former chief policy officer at HackerOne, said in an interview. Just look at what happened to trust in technology companies after Edward Snowden’s leaks. A Pew Research Center study from 2014 found that 91 percent of US adults thought they had lost control of their personal data to companies.

Despite privacy concerns, the bill passed and is now law. Only six companies have signed up to fully participate, The Associated Press reported in March.

Following our tour of the NCCIC, Schneck took us to her office. She insisted that privacy was, and continues to be, a major consideration, and that the agency asks for only the essential technical details when incident information is shared.

"We brief privacy folks all the time," she said. "Look, it’s not worth protecting something if we’re giving up our rights as Americans."

So what information goes into the NCCIC? The agency published guidelines in February to clarify: companies should share anything that helps investigate an incident or vulnerability, but information isn’t considered directly related to a cyber threat if "it is not necessary to assist others [to] detect, prevent, or mitigate" the threat. With a phishing email, for example, a company should send DHS the sender’s address, the malicious URL, any malicious attachments, the contents of the email, and any additional details that could help thwart future attacks. The names and email addresses of the email’s targets should not be shared, the agency says. Whether companies will follow these guidelines remains to be seen, though the system’s guaranteed secrecy makes it difficult for the public to check.
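In practice, the guidelines amount to a simple filtering rule: keep the technical indicators, drop the targets’ identifying details. Here is a rough sketch of what that rule might look like in code; the field names are hypothetical illustrations, not from any official DHS schema.

```python
# Hypothetical sketch of the filtering the DHS guidance describes:
# technical details about a phishing email are kept, while the targets'
# identifying information is stripped before anything is shared.
# Field names are illustrative only, not an official DHS format.

SHAREABLE_FIELDS = {"sender", "malicious_url", "attachment_hash", "subject", "body"}

def prepare_indicator(incident: dict) -> dict:
    """Keep only threat-relevant fields; drop the targets' personal details."""
    return {k: v for k, v in incident.items() if k in SHAREABLE_FIELDS}

incident = {
    "sender": "attacker@example.com",
    "malicious_url": "http://example.com/phish",
    "attachment_hash": "d41d8cd98f00b204e9800998ecf8427e",
    "subject": "Urgent: reset your password",
    "body": "Click here to keep your account active.",
    "target_name": "Jane Doe",           # personal detail: not shared
    "target_email": "jane@example.org",  # personal detail: not shared
}

shared = prepare_indicator(incident)
assert "target_email" not in shared
```

Whether real submissions are filtered this cleanly is exactly what privacy advocates say the public can’t verify.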

Sharing this information will go a long way toward keeping US entities and interests secure, Schneck said. She thinks of malware in terms of biological viruses: just as a flu vaccine helps the body fight off influenza, feeding indicators of compromise into DHS’ cyber center will act as a vaccine of sorts, letting the NCCIC more quickly detect malware and other cyber threats across the country’s networks.

But while that might be the case, companies aren’t liable if they overshare users’ data, and consumers won’t readily know if that’s even happening. Some companies also say they’ve already created a system around warrant requests and will share user information if legally obliged. They don’t really need voluntary information sharing, Moussouris said.

Schneck remains idealistic, however. She hopes that greater information sharing will eventually allow the agency to detect and thwart attacks, especially state-sponsored ones, without a human ever getting involved.

Her vision is probably even more ambitious than it sounds. DHS needs to persuade more companies to share data and improve its technology to better detect evolving threats, and even then, humans may still be needed to judge when data is sensitive. A fully autonomous threat detection system remains a long way off. Maybe that’s a good thing, though, as we grapple with the current system and the questions it introduces.

Photography: Amelia Holowaty Krales / The Verge