Understanding how new solid state drive technologies can benefit the data center

By combining SSDs and HDDs in the right mix, performance gains are possible while keeping costs under control

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Solid state drives (SSDs) can help IT managers maximize storage efficiency in a rapidly evolving data center environment. New technologies such as Vertical NAND (V-NAND), Non-Volatile Memory Express (NVMe), and PCI Express (PCIe) help SSDs deliver high bandwidth and low latency, while hard disk drives (HDDs) still offer efficient storage of large quantities of data with lower performance demands.

The key to maximizing efficiency and savings is aligning performance and capacity to dollars spent. By combining SSDs and HDDs in the right mix, performance gains are possible while keeping costs under control.

According to analyst firm IDC, about 90% of the world’s data is considered “cold,” meaning it is accessed infrequently after capture. The remaining 10% of the world’s data is “hot,” meaning it is captured and then accessed frequently. Take Twitter, for example: recent Tweets are pushed into feeds and are liked, retweeted, or favorited, making them “hot,” while most Tweets older than a week “cool down” but remain searchable.

It is needlessly expensive to store all data in high-performance, low-latency storage devices, hence the use of tiered storage architectures, where each class of storage provides unique performance qualities that are best-suited to the data in that tier:

  • CPU cache and in-memory processing form the “hottest” tier, with small amounts of data in flight.
  • A “hot” tier handles data spilled from memory to storage, supporting high-performance writes. PCIe NVMe SSDs offer unprecedented transactional speeds and write endurance necessary for these demands.
  • A “warm” tier with increased data capacity uses 2-bit and 3-bit MLC Serial ATA (SATA) SSDs as they still offer solid transactional performance and endurance with lower cost per gigabyte.
  • A “cold” tier archives the bulk of the data in HDDs at the lowest cost per gigabyte.

Data should flow naturally from the “hot” to “warm” tiers and eventually to the “cold” tier. Should archival data suddenly find itself in higher demand, it can be migrated back to the “warm” or “hot” tier for processing. This approach allows each tier to be fully optimized around the right technologies, increasing overall data center performance without driving unnecessary costs.
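
To make the idea concrete, the sketch below shows one way a simple migration policy might classify data by recency and access count. The tier names and thresholds here are hypothetical illustrations, and in real deployments this logic typically lives in the storage or caching layer rather than in application code.

```python
from dataclasses import dataclass, field
import time

# Hypothetical tier labels and thresholds for illustration only.
HOT, WARM, COLD = "pcie_nvme_ssd", "sata_ssd", "hdd"

@dataclass
class StoredObject:
    key: str
    tier: str = COLD
    last_access: float = field(default_factory=time.time)
    access_count: int = 0

def choose_tier(obj: StoredObject, now: float) -> str:
    """Place recently and frequently accessed data on faster media."""
    age = now - obj.last_access
    if age < 3600 and obj.access_count > 100:   # heavy access within the last hour
        return HOT
    if age < 7 * 24 * 3600:                     # touched within the last week
        return WARM
    return COLD                                 # archival, "cold" data

def on_access(obj: StoredObject) -> None:
    obj.access_count += 1
    obj.last_access = time.time()
    new_tier = choose_tier(obj, obj.last_access)
    if new_tier != obj.tier:
        # A real system would queue a background migration job here.
        print(f"migrating {obj.key}: {obj.tier} -> {new_tier}")
        obj.tier = new_tier
```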

Better SSDs with V-NAND technology

When it comes to NAND flash technology, it is important to understand the evolution of NAND and the performance, endurance, and cost differences between versions. For years, NAND flash advances made it possible to pack more and more bits into each cell.  But at some point NAND flash cells became so tightly packed they actually interfered with each other, reducing reliability. Smaller cells also became more susceptible to wear, and NAND flash endurance began to reach a limit.

With V-NAND technology, cell towers are created by stacking multiple layers, with scaling shifted from 2D to 3D. Rather than making cells smaller as in 2D NAND, V-NAND features relaxed intercell dimensions while still achieving significantly higher capacities with stacking. The result is that V-NAND delivers improved performance and endurance over planar NAND.

V-NAND technology also enables increased performance and endurance in the data center. The larger cell geometry lowers the error correction (ECC) requirements seen in the smaller planar NAND geometries. As a result, V-NAND SSDs operate with less energy than traditional planar NAND SSDs, and far less energy than HDDs with their spinning motors. Faster V-NAND flash also allows SSDs to take full advantage of faster interfaces.

V-NAND-based SSDs also deliver higher endurance, thanks in part to the reduced ECC requirements and lower energy consumption.

Depending on the application, the benefits can include more users accessing data on the same network, improved response times for data analytics, and higher drive-write endurance on the SSD.

Improve speed and performance with PCIe and NVMe

While huge strides have been made to improve NAND structure for better endurance, capturing the full performance gains requires improvements to the software interface connecting the SSD to the computer.

Non-Volatile Memory Express (NVMe) and PCI Express (PCIe) SSD technologies are transforming the speed and performance of data centers. With the PCIe interface and NVMe protocol, storage subsystems deliver higher bandwidth and lower latency and avoid performance bottlenecks, all of which drive high-caliber data center performance. The switch from SATA or SAS to a PCIe interface gives data centers substantially more bandwidth than was possible with the earlier-generation SATA interface. PCIe SSDs can also be connected directly to the CPU without a host bus adapter, further reducing latency.
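
For a rough sense of the gap, the sketch below computes theoretical line-rate bandwidth for SATA III versus a four-lane PCIe 3.0 link, accounting only for the 8b/10b and 128b/130b line encodings; real-world throughput will be somewhat lower once protocol overhead is included.

```python
# Back-of-the-envelope bandwidth comparison (line rate minus encoding overhead only).

def sata3_bandwidth_mb_s() -> float:
    # SATA III: 6 Gb/s line rate with 8b/10b encoding -> ~600 MB/s usable
    return 6e9 * (8 / 10) / 8 / 1e6

def pcie3_bandwidth_mb_s(lanes: int = 4) -> float:
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s per lane
    return 8e9 * (128 / 130) / 8 / 1e6 * lanes

print(f"SATA III:    ~{sata3_bandwidth_mb_s():.0f} MB/s")
print(f"PCIe 3.0 x4: ~{pcie3_bandwidth_mb_s(4):.0f} MB/s")
# SATA III:    ~600 MB/s
# PCIe 3.0 x4: ~3938 MB/s
```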

In addition to the electrical interface, operating systems also need an improved software interface for higher storage performance. Historically, SSDs and HDDs used the Advanced Host Controller Interface (AHCI), which became a bottleneck for SSDs because it was originally designed for high-latency HDDs on the SATA interface. The AHCI stack adds extra translation layers and, by design, supports only a single queue with up to 32 outstanding commands. SSDs are inherently capable of much higher transfer speeds at lower latencies, but without an optimized software interface they cannot reach their full potential.

The road there has been bumpy, and is finally starting to smooth out. Before any standard approach existed, SSD vendors incorporating PCIe interfaces had to write proprietary drivers to achieve improved performance. NVMe emerged as a new specification, using a simplified, low-latency stack between the application and the SSD to reduce I/O overhead and deliver higher performance and improved efficiency. Where AHCI supports a single queue of 32 commands, NVMe includes a vastly improved queueing system that supports up to 65,535 I/O queues, each allowing up to 65,536 outstanding commands. The transition to NVMe and PCIe SSDs yields improved random and sequential performance compared with SATA SSDs using the AHCI protocol.
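
The difference in command ceilings is easy to quantify. The short sketch below multiplies the queue counts and depths taken from the two specifications; actual devices and drivers expose far fewer queues in practice.

```python
# Command-ceiling comparison between AHCI and NVMe, using specification maximums.
AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536   # max I/O queues and entries per queue

print(f"AHCI: up to {AHCI_QUEUES * AHCI_DEPTH} outstanding commands")
print(f"NVMe: up to {NVME_QUEUES * NVME_DEPTH:,} outstanding commands")
```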

Data centers that depend on high transactional input/output operations per second (IOPS) can experience up to four times faster performance with these new and improved technologies. Now, data centers can maximize speed and improve performance using PCIe and NVMe SSDs, particularly in “hot” tiers with the most frequently accessed data.

Handling the most intense transactions

At the single drive level, bringing in V-NAND SSDs with PCIe interfaces and NVMe drivers can substantially boost transactional speeds at critical points in a data center. With data increasing exponentially, understanding what data is “hot” and what is “cold” becomes important to devising an architecture to handle it cost effectively.

A tiered storage approach uses technology most cost effectively. In most data centers, the “cold” tier with HDDs already exists, and a “warm” tier of mid-range SSDs may be informally in place. Adding a “hot” tier with high-performance, high-endurance SSDs and focusing the most intense transactions there can take data center performance to new levels without breaking the bank.
