Computerworld

HPE demos ‘The Machine’: Prototype 160TB memory-centric computer

(Image: HPE)

Hewlett Packard Enterprise has given a prototype of its new memory-centric computer architecture, dubbed ‘The Machine’, its first public airing in Washington DC.

It has 160TB of addressable memory, but HPE says the architecture could easily scale to “an exabyte-scale single-memory system and, beyond that, to a nearly-limitless pool of memory – 4,096 yottabytes… 250,000 times the entire digital universe today.”
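
Those figures are straightforward to sanity-check. The short Python sketch below uses only the numbers quoted above plus the decimal definition of a yottabyte (10^24 bytes); it shows that 4,096 yottabytes is roughly 4.1 x 10^27 bytes, would take about 92 bits to address at byte granularity, and, divided by 250,000, implies a “digital universe” of about 16 zettabytes.

    import math

    # Rough sanity check of the quoted capacity figures (decimal units assumed).
    YB = 10 ** 24                          # one yottabyte in bytes
    pool = 4_096 * YB                      # the quoted 4,096-yottabyte memory pool

    print(f"pool size: {pool:.2e} bytes")                                     # ~4.10e+27
    print(f"address width: ~{math.ceil(math.log2(pool))} bits")               # ~92 bits
    print(f"implied 'digital universe': {pool / 250_000 / 10**21:.1f} ZB")    # ~16.4 ZB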

The Machine abandons the traditional computer architecture of a central processor with peripheral storage, replacing it with a vast fabric of non-volatile semiconductor memory that simultaneously fulfils the functions of long-term storage and conventional computer memory, and that makes the data it holds available to multiple processors.
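
The article does not describe The Machine’s programming interfaces, but the shift it sketches can be illustrated in general terms: rather than copying data from storage into each processor’s private memory and writing results back, software addresses a shared, persistent pool directly at byte granularity. The minimal Python sketch below uses an ordinary memory-mapped file as a stand-in for fabric-attached memory; the path and size are hypothetical and the example is illustrative only, not HPE’s API.

    import mmap
    import os

    PATH = "/tmp/fabric_stand_in.bin"   # hypothetical file standing in for fabric-attached memory
    SIZE = 1 << 20                      # 1 MiB, purely for illustration

    # Traditional model: data is copied from storage into a private buffer, then written back later.
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)
    with open(PATH, "rb") as f:
        buf = f.read(SIZE)              # a private in-memory copy of the stored data

    # Memory-centric model (illustrative): the data is mapped once, then read and updated
    # in place at byte offsets, with no separate load-from / store-to-storage step.
    fd = os.open(PATH, os.O_RDWR)
    with mmap.mmap(fd, SIZE) as region:
        region[0:5] = b"hello"          # update in place at a byte offset
        print(bytes(region[0:5]))       # other processes mapping the region see the same bytes
    os.close(fd)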

According to HPE, “By eliminating the inefficiencies of how memory, storage and processors interact in traditional systems today, memory-driven computing reduces the time needed to process complex problems from days to hours, hours to minutes, minutes to seconds – to deliver real-time intelligence.”

It claims to have already achieved 1,000-fold improvements over traditional architectures with certain types of analysis.

Jaap Suermondt, vice president of software and analytics at HPE, who is responsible for The Machine’s software stack and applications, told Computerworld in August 2016 that The Machine was being designed to cater for the growing requirement to perform analytics on ever-larger datasets, and in the belief that traditional computing architectures would no longer be able to scale to meet demand.

In a briefing ahead of the prototype’s unveiling, Kirk Bresniker, chief architect at Hewlett Packard Labs, told Computerworld that HPE would show it had succeeded not just in realising the individual parts of The Machine’s architecture, but in combining them at scale and using the prototype to solve real-world, highly data-intensive problems.

“Back in December we had everything working, but only one of each. What we will demonstrate next week is achieving the scale we set out to achieve,” he said.

‘More capacity than anyone expected’

“We have a prototype with more capacity than anyone would have expected us to achieve. We have 160 TB of memory on the fabric, 1280 cores of ARM compute on the fabric,” he said.

“We have a photonic interconnect connecting a rack scale infrastructure that consists of 100 gigabit four colour coarse wavelength division multiplexing and it is all running a pretty interesting security analysis workload looking for subtle advanced persistent threats in the enterprise DNS architecture; threats that we experience at HPE every day.”
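
Taking the quoted numbers at face value, a couple of simple ratios follow. The Python sketch below assumes nothing beyond the figures given here, plus the common arrangement of a 100 gigabit-per-second link being carried as four 25 Gbit/s wavelengths; it is illustrative arithmetic, not a description of HPE’s fabric design.

    TB = 10 ** 12   # one terabyte in bytes (decimal)

    fabric_memory = 160 * TB
    arm_cores = 1_280

    # Shared fabric memory per core, if divided evenly (illustrative only):
    print(f"{fabric_memory / arm_cores / 10**9:.0f} GB per core")   # 125 GB

    # A 100 Gbit/s link on four CWDM wavelengths implies 25 Gbit/s per colour:
    print(f"{100 / 4:.0f} Gbit/s per wavelength")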

Central to commercial realisation of The Machine will be the development of non-volatile memory. Suermondt told Computerworld last year that there were a number of technologies for non-volatile memory approaching commercialisation that HPE was examining.

Bresniker said those investigations were continuing but the prototype used standard DRAM with the non-volatility being simulated by the use of an uninterruptible power supply.

“We continue our work in earnest looking at all the technologies coming out, but DRAM was the right answer for us to learn as much as we could as quickly as we could about having that much memory on a fabric,” he said. “That was a reasonable compromise for us.”

Commercialisation in stages

HPE is still not saying anything definite about the realisation of a commercial product. Asked, “Will it be 2020, 2025, 2030?” Bresniker said: “I think it will be sooner than 2030. … You will see individual technologies coming out of our research and development that will make our commercial platforms better, and that will happen sooner rather than later.”

He added: “We have demonstrated the advantages, the efficacy of our approach. When we started this was not necessarily a proven thing. We had our hypotheses that when we brought together large next-generation memories, task-specific computation and adapted existing software and developed new software, we would get an important speed-up factor, and what we have established through our emulation and simulation on the prototype is ranges of improvement that are 10, 100 times better.

10,000 times faster

“In extreme cases of teams doing brand new approaches to Monte Carlo analysis [a technique used to understand the impact of risk and uncertainty in financial, project management, cost, and other forecasting models] an order of 10,000 times improvement, and that is enough for us to say this is an approach that has merit, that we need to understand.”
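
Monte Carlo analysis of this kind is a natural fit for a large shared memory pool: it generates enormous numbers of random scenarios and then aggregates them, so keeping the whole scenario set resident in memory avoids streaming it through storage. The following is a minimal, generic Python/NumPy sketch of that style of workload, not HPE’s code or benchmark; all parameters are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical model: a million one-year scenarios, each the sum of twelve
    # normally distributed monthly returns (parameters invented for illustration).
    n_scenarios = 1_000_000
    monthly = rng.normal(loc=0.005, scale=0.04, size=(n_scenarios, 12))
    annual = monthly.sum(axis=1)

    # Aggregate the scenarios into a simple risk figure: the 5th-percentile outcome.
    var_95 = np.percentile(annual, 5)
    print(f"simulated 95% value-at-risk: {-var_95:.1%}")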

He added that, while the prototype used off-the-shelf systems-on-chip (Cavium’s dual-socket-capable ARMv8-A ThunderX2), the photonic fabric was proprietary and much of the other hardware used programmable devices, all of which would have to be replicated in commercial silicon before a commercial product could be produced. “Those design cycles are of the order of two to three years,” he said.

HPE’s largest R&D project ever

HPE says the project to develop The Machine is the largest in its history, and the company has been talking about it for several years. In her keynote presentation at HPE’s Discover 2016 event in London, CEO Meg Whitman said she believed in-memory computing represented “the next inflexion point in information technology.

“The ability to capture, analyse and store massive amounts of data at speeds that are unthinkable today has the potential to transform everything from healthcare to education, transportation and retail.”

While commercial realisation of The Machine might be several years away Whitman said the concepts underpinning it were already being implemented in commercial HPE products.

“You have already seen the impact of the program. One of the first examples was the launch last March [2016] of our new ProLiant servers with persistent memory.

"We are also taking steps to futureproof other products like HPE Synergy systems to accept future photonics now in advanced developments.

“Our roadmap for the next several years includes the introduction of Machine technologies like silicon photonics, advanced non-volatile memory and memory fabric.”

Hyperbole?

HPE seems to be positioning The Machine as the key to unlocking the secrets of the universe. Commenting on the unveiling of the prototype, Whitman said: “The secrets to the next great scientific breakthrough, industry-changing innovation, or life-altering technology hide in plain sight behind the mountains of data we create every day.”

HPE says that the potential 4,096 yottabyte capacity of its architecture would make it possible “to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google’s autonomous vehicles; and every data set from space exploration all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds.”

The Machine live on the web

HPE has scheduled a live webcast, via Whitman’s Facebook page, of the prototype’s announcement at 4:20am on 17 May, AEST. Its promotion for the webcast said astronauts travelling to Mars would need to be “guided by the most powerful computing system the world has ever seen,” but “the incremental increases we are seeing in computing power will not meet the demands of the challenge. We need a major computer upgrade, and Hewlett Packard Enterprise has the answer.”