Computerworld

The future of high performance storage

In the past 12 months, there’s been a lot of hype about new advancements in storage, writes Gartner’s Julia Palmer

In the past 12 months, there’s been a lot of hype about new advancements in storage that include new forms of non-volatile memory (such as Z-NAND from Samsung and 3D XPoint from Intel and Micron).

There’s also been growing adoption of newer interfaces such as non-volatile memory express (NVMe) and more modern software, which seek to reduce the imbalance in data centre IT infrastructure created by CPU advancements of the past three decades. These interfaces promise low-latency, high-performance data access to applications, particularly for SQL/NoSQL data, high performance computing (HPC) workloads and big data applications.
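
To see why parallel access matters at the application level, consider the rough sketch below; the file name, block size and worker count are illustrative assumptions, not a reference design. It fans a batch of reads out across a thread pool, and on a device with deep hardware queues, such as an NVMe SSD, keeping many requests in flight at once is what lets software approach the interface’s advertised latency and throughput.

import os
from concurrent.futures import ThreadPoolExecutor

PATH = "/data/sample.bin"   # hypothetical data file on an NVMe device
BLOCK = 4096                # read size in bytes (illustrative)

def read_block(fd, offset):
    # os.pread does not move the shared file position, so workers can
    # read independent offsets from one descriptor without coordination.
    return os.pread(fd, BLOCK, offset)

fd = os.open(PATH, os.O_RDONLY)
try:
    offsets = [i * BLOCK for i in range(1024)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        blocks = list(pool.map(lambda off: read_block(fd, off), offsets))
finally:
    os.close(fd)

Issued one at a time, the same batch of reads would serialise on device latency; fanning the requests out is a simple way for applications to exploit the parallelism these interfaces expose.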

These innovations have the potential to bridge the compute-storage performance gap in the data centre, but you need to be wary of the hype surrounding these new technologies. As well as offering benefits, this unproven, fast-evolving market change will also create uncertainty for leaders responsible for IT infrastructure planning.

Faster interfaces between compute and storage that leverage parallel access can reduce storage access latencies, but they may also require significant additional investment or upgrades, which, in turn, can disrupt enterprise IT refresh cycles. Protect your investments by seeking warranties from vendors before deploying these technologies in production.

The cumulative effects of these technologies have the potential to be far reaching; however, they’re in a nascent stage today. That immaturity and lack of proven deployments should be carefully weighed against the potential benefits.

Achieving storage-class memory

Many emerging memory technologies have striven to marry high performance and cost-efficient density with non-volatility for persistence, in order to be designated as storage-class memory. This is a class of technologies with read/write performance closer to that of DRAM (server memory), but which are persistent, enabling them to store information without power. Most of these technologies fail in this pursuit, and every one of them has taken longer to mature than initially expected.

Today’s challenge is to rival the technology and massive manufacturing scale of the DRAM and NAND flash (performance storage) markets by introducing a new tier of storage-class memory that capably blends the attributes of both with minimal compromise. The ultimate goal is a scalable, persistent memory technology that can achieve access speeds closer to those of the CPU for only a few dollars per gigabyte.

Software needs to become more lightweight

Significant progress is being made in storage-class memory and high-speed interfaces that can deliver latencies in the tens of microseconds. However, the traditional storage software stack, optimised as it is for serial devices, continues to be a limiting factor in harnessing the full capabilities of this interface innovation.

Traditional storage software uses inefficient locking mechanisms for multi-threading, which limit its ability to perform parallel I/O tasks at scale. Moreover, it adds complex, higher-level abstractions, such as blocks and pages, rather than addressing data at the byte level.

For storage-class memory applications to function effectively, the storage software needs to become more lightweight, with low overhead, and optimised for parallel access at finer read/write granularity.
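
As a minimal sketch of what byte-level granularity means to an application, the snippet below updates eight bytes in place through a memory mapping rather than rewriting a whole block or page. The path is a hypothetical file assumed to sit on persistent-memory-backed storage (for example, exposed through a DAX-style mount), and production persistent-memory libraries use finer-grained cache flushing than the generic flush shown here.

import mmap
import os

PATH = "/mnt/pmem0/record.bin"   # assumed persistent-memory-backed file
SIZE = 4096

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)           # make sure the file is large enough to map

with mmap.mmap(fd, SIZE) as buf:
    # Update only the eight bytes that changed; no block or page
    # abstraction sits between the application and the data.
    buf[128:136] = (42).to_bytes(8, "little")
    buf.flush()                  # ask the OS to write the mapped range back
os.close(fd)

The contrast with the traditional stack described above is that nothing here passes through a block or page layer: the application addresses the data directly, which is the kind of low-overhead path storage-class memory is meant to enable.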

The operating system, hypervisor and storage software needed to support storage-class memory are in the early stages of evolution, which will further inhibit market adoption. Gartner expects the full potential of these innovations to be harnessed over the next two years.

Julia Palmer is a research director at Gartner, focused on emerging storage and hyperconverged strategies and technologies, addressing both software-defined and traditional data centre storage. She will be speaking at the upcoming Gartner IT Infrastructure, Operations and Data Centre Summit in Sydney next week (15-16 May 2017).