
NASA, SpaceX and HPE team up to send a supercomputer to space

This article was originally published on the HPE blog here.

SpaceX's CRS-12 mission launched from Kennedy Space Center, Florida, sending its Dragon spacecraft to the International Space Station (ISS) National Lab. Aboard the Dragon is an HPE supercomputer.

This supercomputer, called the Spaceborne Computer, is part of a year-long experiment by HPE and NASA to run a high-performance commercial off-the-shelf (COTS) computer system in space, something that has never been done before. The goal is for the system to operate seamlessly in the harsh conditions of space for one year, roughly the time it will take to travel to Mars.

Advancing the Mission to Mars

Because computing capability in space is limited, many of the calculations needed for space research projects are still done on Earth, which means data must constantly be transmitted to and from space. This approach works for exploration on the moon or in low Earth orbit (LEO), where astronauts can be in near real-time communication with Earth, but once they travel farther out toward Mars, they will experience much larger communication latencies.

This could mean it would take up to 20 minutes for communications to reach Earth and then another 20 minutes for responses to reach astronauts. Such a long communication lag would make any on-the-ground exploration challenging and potentially dangerous if astronauts are met with any mission critical scenarios that they’re not able to solve themselves.
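The "up to 20 minutes" figure follows directly from the Earth-Mars distance and the speed of light. A quick back-of-the-envelope check (the distances are approximate published orbital figures, not values from this article):

```python
# Rough one-way light-travel time between Earth and Mars.
# Distances are approximate: Mars ranges from about 54.6 million km
# (closest approach) to roughly 401 million km (farthest) from Earth.
C_KM_S = 299_792  # speed of light in km/s

for label, dist_km in [("closest", 54_600_000), ("farthest", 401_000_000)]:
    minutes = dist_km / C_KM_S / 60
    print(f"{label}: ~{minutes:.1f} min one way")
```

At closest approach the one-way delay is only about 3 minutes, but near the far end of the range it exceeds 22 minutes, which is why a round-trip question and answer can take 40 minutes or more.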

A mission to Mars will require sophisticated onboard computing resources that are capable of extended periods of uptime. To meet these requirements, we need to improve technology’s viability in space in order to better ensure mission success. By sending a supercomputer to space, HPE is taking the first step in that direction. Future phases of this experiment will eventually involve sending other new technologies and advanced computing systems, like Memory-Driven Computing, to the ISS once we learn more about how the Spaceborne Computer reacts in space.

Lessons from the Mission to the Moon

When the United States successfully put two men on the moon, it captivated the world and inspired technological advancements from the microchip to memory foam. The mission to Mars is the next opportunity to propel technological innovation into the next frontier. The Spaceborne Computer experiment will not only show us what needs to be done to advance computing in space, it will also spark discoveries for how to improve high-performance computing (HPC) on Earth and potentially have a ripple effect in other areas of technology innovation.

HPC in Space

The Spaceborne Computer is built on HPE Apollo 40-class systems with a high-speed HPC interconnect running an open-source Linux operating system. Though there are no hardware modifications to these components, we created a unique water-cooled enclosure for the hardware and developed purpose-built system software to address the environmental constraints and reliability requirements of supercomputing in space. Generally, for NASA to approve computers for space, the equipment must be "ruggedized" or hardened to withstand the conditions there.

Think radiation, solar flares, subatomic particles, micrometeoroids, unstable electrical power and irregular cooling. Physical hardening takes time and money, and adds weight, so HPE took a different approach: "hardening" the systems with software. HPE's system software manages real-time throttling of the computer systems based on current conditions and can mitigate environmentally induced errors. Even without traditional ruggedizing, the system still passed at least 146 safety tests and certifications in order to be NASA-approved for space.
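HPE has not published the Spaceborne Computer's actual control logic, but the idea of software hardening, throttling compute based on environmental readings rather than relying on physical shielding, can be sketched roughly like this. Every name and threshold below is invented for illustration:

```python
# Hypothetical sketch of software-based "hardening": cap CPU speed based
# on environmental conditions instead of physical shielding. All names
# and thresholds are invented for illustration only; they are not HPE's
# actual Spaceborne Computer logic.

def choose_cpu_limit(radiation_rate: float, inlet_temp_c: float) -> float:
    """Return a CPU frequency cap as a fraction of maximum speed."""
    if radiation_rate > 100.0 or inlet_temp_c > 45.0:
        return 0.0   # conditions too hostile: idle and checkpoint state
    if radiation_rate > 10.0 or inlet_temp_c > 35.0:
        return 0.5   # elevated risk: throttle to half speed
    return 1.0       # nominal conditions: run at full speed

print(choose_cpu_limit(5.0, 25.0))   # nominal conditions
print(choose_cpu_limit(50.0, 25.0))  # elevated radiation
```

The point of the design is that a control loop like this can respond to transient events (a solar flare, a cooling hiccup) in real time and then return to full performance, trading temporary throughput for reliability without any added weight.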
