Astronauts will soon be able to use a supercomputer to help run science experiments on the International Space Station. The Spaceborne Computer, a joint project between NASA and Hewlett Packard Enterprise, launched to the ISS in 2017. It’s been limited to running diagnostic tests, figuring out how well a computer built for Earth could survive in space.
Now it will be available to process data for space-based experiments, which should save researchers on the ground valuable time. It will also save precious bandwidth in the tightly controlled stream of data that NASA manages between the ISS and the ground. The exact experiments the supercomputer will run in the coming months have not yet been disclosed.
No one expected the Spaceborne Computer would actually be used in experiments when it launched. The computer was originally scheduled to return to Earth early next year, in February or March. But then a Soyuz rocket failed dramatically last month, forcing astronauts headed to the ISS to make an emergency landing. In addition to complicating plans for crewing the ISS, the failure put the cargo schedule back in flux, giving the computer a little more time in space.
That development was an unexpected but welcome opportunity for the people working on the project. “We quietly clapped our hands and said this was really cool,” says Mark Fernandez, High-Performance Computing Technology Officer for HPE.
The idea is that by processing data in space, researchers don’t have to go through the time-consuming business of downloading large data sets from the ISS to Earth and then processing them on the ground to figure out whether they contain an exciting result. Instead, the system can preserve bandwidth by crunching the data in space and delivering the result to researchers almost instantly.
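The bandwidth argument is easy to see with some toy arithmetic. In this sketch, the downlink rate and data set sizes are invented for illustration; the article doesn't give actual figures for the ISS data stream.

```python
# Toy illustration of the bandwidth argument: downlinking a raw data set
# versus downlinking only the processed result. All numbers are hypothetical.
DOWNLINK_MBPS = 300          # assumed downlink share, in megabits per second

def transfer_seconds(size_megabytes: float) -> float:
    """Time to downlink a payload of the given size at DOWNLINK_MBPS."""
    return size_megabytes * 8 / DOWNLINK_MBPS

raw_dataset_mb = 50_000      # a hypothetical 50 GB experiment data set
result_mb = 1                # the summarized result after on-orbit processing

print(f"Raw data set: {transfer_seconds(raw_dataset_mb):.0f} s")
print(f"Result only:  {transfer_seconds(result_mb):.3f} s")
```

With these made-up numbers, shipping the raw data ties up the link for over twenty minutes, while the processed result takes a fraction of a second.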
This could be especially helpful to astronauts on future deep space missions, such as missions to Mars, where the increased distance to Earth makes transmitting data incredibly difficult. On Mars missions, a signal can take as long as 24 minutes to travel one way between Mars and Earth.
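That delay is just light-travel time over the Earth–Mars distance, which is why it can't be engineered away. A quick check, using commonly cited approximate orbital distances (not figures from the article):

```python
# One-way light-time delay between Earth and Mars.
# Distances are approximate public orbital values, used here for illustration.
SPEED_OF_LIGHT_KM_S = 299_792  # km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way signal travel time in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

closest_km = 54.6e6    # approximate minimum Earth–Mars distance
farthest_km = 401e6    # approximate maximum Earth–Mars distance

print(f"Closest approach: {one_way_delay_minutes(closest_km):.1f} min")
print(f"Farthest:         {one_way_delay_minutes(farthest_km):.1f} min")
```

The result ranges from roughly 3 minutes at closest approach to just over 22 minutes at the planets' farthest separation, consistent with the article's "as long as 24 minutes."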
The Linux-based computer is powerful and resilient, having successfully managed to operate aboard the ISS for over a year. It is able to make over a trillion calculations per second, which HPE engineers estimate is about 30 times faster than a typical laptop. It weighs 124 pounds on Earth, but on the ISS it is weightless, requiring just a few bolts to keep it in place on the space station’s ceiling.
One of the things that makes this system unique in the world of space computing is that its computers weren’t explicitly designed for space. That’s unlike the computing hardware already aboard the ISS, which was stringently engineered to withstand the rigors of spaceflight. “These are the same pizza box-sized servers that you would see in any data center,” Fernandez says. “There were no changes whatsoever to the hardware.”
What did change was the software. To adapt to the swiftly changing conditions as the ISS zooms around its orbit at 17,130 miles per hour, Fernandez and his team wrote software that is constantly on the lookout for potential issues, analyzing the system for dropped information, power fluctuations, and any other signal that all might not be well. So far, the system has adapted to power failures, including one triggered by a smoke alarm on the station (a false alarm), and another one that happened when an astronaut accidentally kneed the system’s emergency power shutoff. There have been issues, though.
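The article doesn't disclose how HPE's monitoring software actually works, but the kind of health check it describes can be sketched like this. Every name, threshold, and signal below is invented for illustration; the real system presumably watches far more telemetry.

```python
# Hypothetical sketch of a self-monitoring health check of the kind described
# above. Thresholds, field names, and units are invented for illustration.
from dataclasses import dataclass

@dataclass
class HealthReading:
    voltage: float          # measured supply voltage, in volts
    dropped_packets: int    # data errors observed since the last check

NOMINAL_VOLTAGE = 12.0      # assumed nominal supply voltage
VOLTAGE_TOLERANCE = 0.5     # allowed deviation before flagging a fluctuation
MAX_DROPPED_PACKETS = 10    # allowed data loss before flagging dropped info

def assess(reading: HealthReading) -> list[str]:
    """Return a list of anomaly descriptions; an empty list means nominal."""
    problems = []
    if abs(reading.voltage - NOMINAL_VOLTAGE) > VOLTAGE_TOLERANCE:
        problems.append(f"power fluctuation: {reading.voltage:.2f} V")
    if reading.dropped_packets > MAX_DROPPED_PACKETS:
        problems.append(f"dropped data: {reading.dropped_packets} packets")
    return problems

# A healthy reading raises no alerts; a brownout raises one.
print(assess(HealthReading(voltage=12.1, dropped_packets=2)))   # []
print(assess(HealthReading(voltage=10.8, dropped_packets=2)))
```

In a real system, a loop would take readings continuously and throttle or safely shut down the machine when anomalies accumulate, which matches the power-failure behavior the article describes.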
Researchers are particularly interested in learning why nine of the computer’s 20 solid-state drives failed. On Earth, these drives are generally considered steady workers, but something in the space environment doesn’t agree with them. The failures weren’t enough to sideline the computer entirely, but they’re still something the researchers want to investigate.
Eventually, the supercomputer will come back down to Earth, and when it does, researchers will conduct what amounts to an autopsy. It’ll be taken apart and thoroughly scanned to understand how the harsh conditions of space — unpredictable levels of radiation, subatomic particles, and unstable power — can affect a commercial off-the-shelf computer. The team, Fernandez says, will literally go over the machine with a microscope to figure out what went wrong, and how to fix it for the next time a supercomputer launches into space.