New HPE Pointnext capabilities accelerate transition to memory-driven computing
Thu, 21st Jun 2018

Hewlett Packard Enterprise today announced the launch of an incubation practice with specialised skills for solving big data problems through Memory-Driven Computing, leveraging the expertise of Hewlett Packard Labs and HPE Pointnext.

Through HPE Pointnext advisory and professional services capabilities, the company will work with customers to explore Memory-Driven Computing applications and deliver proofs-of-concept that will demonstrate dramatic performance gains.

Designed to improve performance and efficiency, accelerating intellectual discovery and opening new business opportunities, Memory-Driven Computing is a new computing architecture that puts memory, not processing, at the centre of the computing platform.

Hewlett Packard Labs is developing the technology innovations needed to enable Memory-Driven Computing as part of The Machine research project.

Using the new architecture, organisations will be able to process vast amounts of data faster and reduce the time to extract insight, from days to hours, hours to minutes, minutes to seconds, ultimately delivering real-time intelligence.
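
To make the architectural shift concrete, the following is a minimal, hypothetical sketch in Python (not HPE code) contrasting a storage-centric pattern, where every query re-reads records from disk, with a memory-centric pattern, where the working set is loaded once and every subsequent query is answered from memory. The file name and record layout are assumptions made purely for illustration.

    # Hypothetical illustration only: re-reading data from storage per query
    # versus keeping the whole working set resident in memory.

    def load_records(path):
        """Read comma-separated records from storage."""
        with open(path) as f:
            return [line.rstrip("\n").split(",") for line in f]

    def storage_centric(queries, path="bookings.csv"):
        """Conventional pattern: each query pays the storage-I/O cost again."""
        return [sum(1 for r in load_records(path) if r[0] == q) for q in queries]

    def memory_centric(queries, path="bookings.csv"):
        """Memory-centric pattern: load once, then answer every query from memory."""
        rows = load_records(path)
        return [sum(1 for r in rows if r[0] == q) for q in queries]

At the scale the article describes, the same idea extends from a single machine's RAM to a fabric-attached pool of shared memory spanning many processors.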

HPE artificial intelligence, data and innovation global vice president Beena Ammanath says, “We believe that all data is valuable. Our vision for Memory-Driven Computing is to enable customers to capture, keep and refine every last bit of their data, up to 10,000 times faster than yesterday's solution.

“The introduction of HPE Pointnext capabilities for Memory-Driven Computing will accelerate our ability to bring Memory-Driven Computing technologies to our customers and help them solve some of their most complex problems more quickly.”

HPE Pointnext has a team of emerging technology and AI experts working hand in hand with its first commercial Memory-Driven Computing customer, Travelport, a commerce platform that provides distribution, technology, payment and other solutions for the $7 trillion global travel industry.

Speed and accuracy are essential to delivering the right travel experience, and today's travellers expect the best answers instantly.

Partnering with HPE, Travelport has built out a large-scale compute capability, internally referred to as the “wall of compute,” which has been able to keep pace with the ever-increasing demand for online travel searches.

The volume of shopping requests continues to double every 18 months and will reach an average of three shopping requests per month for every person on the globe by 2020.

Today, that already translates to moving over 125 terabytes of data in and out of Travelport's data centres each day.

In April 2018, Travelport began the first commercial Memory-Driven Computing engagement, installing an HPE Superdome Flex in-memory computing system and leveraging HPE's expertise to rearchitect key Travelport algorithms using Memory-Driven Computing programming techniques.

HPE Pointnext is providing guidance to Travelport around IT deployment strategy, sharing expertise and lessons from previous proofs-of-concept to improve Travelport's ability to program applications for Memory-Driven Computing environments.

Additionally, HPE experts are helping Travelport and future customers identify a performance baseline for infrastructure upgrades, drive cost-benefit analyses for the transformation journey and port, tune, re-architect and refactor applications for Memory-Driven Computing.
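
As a rough illustration of what re-architecting an algorithm for a large shared-memory pool can look like, the hedged sketch below places one “fare table” in shared memory so that several worker processes attach to the same copy rather than each loading its own. It uses Python's standard multiprocessing.shared_memory module as a stand-in for a fabric-attached memory pool; the data, names and sizes are invented for illustration and this is not Travelport's or HPE's code.

    # Hypothetical sketch: one large in-memory table shared across worker
    # processes, standing in for a fabric-attached shared-memory pool.
    import numpy as np
    from multiprocessing import Pool, shared_memory

    SHM_NAME = "fare_table"          # illustrative name for the shared segment
    N_FARES = 1_000_000              # illustrative table size

    def build_shared_table():
        """Place the fare table in shared memory once; workers attach, not copy."""
        shm = shared_memory.SharedMemory(name=SHM_NAME, create=True,
                                         size=N_FARES * 8)
        table = np.ndarray((N_FARES,), dtype=np.float64, buffer=shm.buf)
        table[:] = np.random.default_rng(0).uniform(50, 2000, N_FARES)
        return shm

    def cheapest_in_range(bounds):
        """Worker: attach to the shared table and scan its slice in place."""
        lo, hi = bounds
        shm = shared_memory.SharedMemory(name=SHM_NAME)
        table = np.ndarray((N_FARES,), dtype=np.float64, buffer=shm.buf)
        best = float(table[lo:hi].min())
        shm.close()
        return best

    if __name__ == "__main__":
        shm = build_shared_table()
        chunks = [(i, i + 250_000) for i in range(0, N_FARES, 250_000)]
        with Pool(4) as pool:
            print("cheapest fare:", min(pool.map(cheapest_in_range, chunks)))
        shm.close()
        shm.unlink()

The design point is the one the article emphasises: the data is placed in memory once, and the algorithms are reorganised to operate on it in place rather than shuttling it through storage.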

HPE compute solutions vice president and general manager Randy Meyer says, “Travelport handles incredibly huge data sets and demanding workloads. A Memory-Driven Computing approach will help Travelport quickly uncover insights in those data sets, as well as realise performance gains.”

As HPE Pointnext continues to work with customers on Memory-Driven Computing solutions, the team will unveil additional innovations and offerings for Memory-Driven Computing adoption, beginning with rapid assessments and proof-of-value services next year.

Accessing Memory-Driven Computing

Through a collaboration between HPE Pointnext, Hewlett Packard Labs and HPE's own Global IT organisation, HPE is introducing a Memory-Driven Computing operating and development environment for customers and developers around the world.

The Memory-Driven Computing Sandbox will feature HPE Superdome Flex with Software-Defined Scalable Memory, a new system enhancement under development and a key technology output of The Machine research project.

Software-Defined Scalable Memory includes new software and firmware advances that enable the industry-leading Superdome Flex memory fabric to address significantly larger pools of shared memory than previously possible.

The technology makes it possible to compose memory on the fabric and to scale to 96 terabytes, all while delivering faster and more resilient performance.

“In the Memory-Driven Computing Sandbox, our customers' developer teams will be able to experiment right along with the Hewlett Packard Labs team,” says Kirk Bresniker, VP, HPE Fellow and chief architect for Memory-Driven Computing.

“With our existing case studies, we have seen how quickly innovation is sparked when Memory-Driven Computing technologies get into developers' hands, and their work helps inspire and shape our research agenda.

“This program will support rapid testing for customers and will provide early access to emerging technologies, directly from the Labs, before broad commercial availability. It's a 21st-century mash-up between Bill Hewlett and Dave Packard's edict to ‘design for the engineer on the next bench' and today's agile development methodologies.”

As part of its strategy to commercialise innovations from The Machine research project, HPE will look for opportunities to bring select benefits of Memory-Driven Computing—like Software-Defined Scalable Memory—to customers faster.

Under The Machine research project, HPE is advancing toward a single architecture to manage a customer's entire enterprise, from every edge to any cloud.

Through this work, HPE aims to one day power a mesh of precision systems tailored to individual workloads, from distributed systems that sit at the edge to mission-critical systems at the core of its customers' operations, all connected through a secure common operating platform.