Server technology in the year 2021: Part 1

Servers get flashy, optical and green in the age of tera-architectures

With IT organisations purchasing computing power at ever-higher densities in blade chassis, racks and even containers, it’s time to take a look at the direction of server technologies over the next decade.

Greater densities and higher virtual machine proliferation are driving the trend toward fabric-based and Cloud computing.

New tools and technologies are being developed to speed the adoption of virtualisation, which Gartner puts at around 40 per cent penetration in the enterprise. The analyst firm expects that figure to double over the next few years as vendors deliver increasingly automated server provisioning and capacity management offerings.

The need to maximise performance yield and lower management costs is also driving the development of higher density racks.

Gartner said enterprises can expect 50kW racks by next year and 100kW racks by 2015. These racks are likely to support internal optical interconnects, and most will require liquid cooling.

This article examines five emerging technologies — some still in the embryonic stage — which have the potential to transform the data centre.

Optical System Buses: This refers to the use of optical signalling to replace electrical connections in system buses. Optical transducers will appear in memory, interconnects and processor modules, significantly reducing pin counts while boosting performance.

Gartner believes optical system buses have the potential to displace technologies such as HyperTransport and QuickPath Interconnect through single or dual-fibre interfaces.

Gartner predicts server density and performance scaling will continue through to at least 2022, supported in part by a transition to optical system buses. Gartner analyst, Carl Claunch, said racks using an internal optical fabric could contain 1,000 or more servers, all interconnected with an optical backplane at high bus speeds.

“This technology will allow data centre processing capacity to continue on an exponential path,” he said.

Vendors investing in this technology include IBM, Intel, Kotura and Lightfleet. In fact, Lightfleet recently delivered a prototype to Microsoft Research — a 32-blade cluster using crisscrossing beams of light in 8-inch cubes as the cluster interconnect.

Tera-architectures: This refers to extremely large-scale computing systems that self-assemble from components and implement resilience through a software architecture designed to detect and automatically respond to component failure.

Software has to be written specifically for this environment. While virtual infrastructure decouples software from the underlying hardware architecture, it still relies on systems that preserve state, and for scalability, flexible provisioning and fault tolerance that state data must be managed in real time.

The tera-architecture approach eliminates this overhead by fully automating the management of state. By significantly reducing the burden of managing hardware, Claunch said tera-architectures promise to cut the cost of computing by a factor of 10 compared with traditionally designed and managed systems.

He said these technologies are emerging in large, global-scale data centres built to service Internet loads. Google (App Engine) and Microsoft (Windows Azure) have both introduced development environments that start to decouple software from hardware failures in a scalable environment. The open-source Apache Hadoop software can also be used.

The first step toward tera-architectures is the implementation of virtualisation across the data centre. Claunch said tera-architectures require code development methods that create stateless services, as well as programming approaches that can scale to extreme size.
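To make the stateless-service requirement concrete, the sketch below uses the MapReduce programming model of Apache Hadoop, one of the options cited above. It is the standard word-count pattern rather than anything specific to Claunch's comments: because neither the map nor the reduce function keeps state between calls, the framework can kill a failed task and rerun it on another node without coordination, which is the property tera-architectures depend on.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Stateless: each call sees one input record and emits (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Also stateless: sums the counts for one word, so it is safe to rerun anywhere.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output path must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

All persistent state lives in the distributed file system rather than in the tasks themselves, which is what lets a job of this kind survive individual server failures without operator intervention.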

Servers using flash memory for system speedup: This technology sits between RAM and rotating disk in both performance and price. It is faster but more expensive than mechanical disk drives, and slower but less expensive than RAM. Gartner believes it can be implemented as a new tier of storage with the potential to improve system and application performance. To exploit the value of flash as a new type of memory, Claunch said, the server must provide space for the flash chips and some form of access mechanism.

"No standard exists for this yet which hampers the speed of adoption," he said. "Adoption will accelerate when a consistent access interface exists and both the Windows and Linux operating systems support it. This will happen by 2014."

Claunch said companies facing performance challenges with existing systems should look to flash memory as a way to resolve them, but warned that only limited vendor support exists today, although that will change over the next three years. Data placed in flash should be selected carefully, rather than forcing entire file systems onto it. Flash can also enable the use of slower, less-expensive processors.
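That advice to select data carefully amounts to treating flash as a middle tier and promoting only demonstrably hot data to it. The sketch below is a hypothetical illustration of such placement logic, not any vendor's interface or the standard access mechanism Claunch anticipates: the Store interface, the in-memory stand-ins and the promotion threshold are all assumptions made for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of selective flash placement: frequently read keys are
// copied to a flash-backed tier, everything else stays on rotating disk.
public class TieredPlacement {

    /** Minimal abstraction over a storage tier (flash- or disk-backed). */
    interface Store {
        byte[] read(String key);
        void write(String key, byte[] value);
        boolean contains(String key);
    }

    /** In-memory stand-in for a tier, used only so the demo below runs. */
    static class MapStore implements Store {
        private final Map<String, byte[]> data = new HashMap<>();
        public byte[] read(String key) { return data.get(key); }
        public void write(String key, byte[] value) { data.put(key, value); }
        public boolean contains(String key) { return data.containsKey(key); }
    }

    private final Store flashTier;       // small, fast, expensive
    private final Store diskTier;        // large, slow, cheap
    private final Map<String, Integer> reads = new HashMap<>();
    private final int promoteThreshold;  // reads before a key is copied to flash

    TieredPlacement(Store flashTier, Store diskTier, int promoteThreshold) {
        this.flashTier = flashTier;
        this.diskTier = diskTier;
        this.promoteThreshold = promoteThreshold;
    }

    /** Serve reads from flash when possible; promote keys that prove to be hot. */
    byte[] read(String key) {
        if (flashTier.contains(key)) {
            return flashTier.read(key);          // hot path: flash latency
        }
        byte[] value = diskTier.read(key);       // cold path: disk latency
        int count = reads.merge(key, 1, Integer::sum);
        if (count >= promoteThreshold) {
            flashTier.write(key, value);         // promote only demonstrably hot data
        }
        return value;
    }

    /** Writes always land on disk; flash holds read-mostly copies only. */
    void write(String key, byte[] value) {
        diskTier.write(key, value);
        if (flashTier.contains(key)) {
            flashTier.write(key, value);         // keep the flash copy coherent
        }
    }

    public static void main(String[] args) {
        TieredPlacement store = new TieredPlacement(new MapStore(), new MapStore(), 3);
        store.write("hot-record", "frequently read value".getBytes());
        for (int i = 0; i < 5; i++) {
            store.read("hot-record");            // third read triggers promotion to flash
        }
        System.out.println("On flash tier: " + store.flashTier.contains("hot-record"));
    }
}
```

A production tier would also need an eviction policy and wear management for the flash devices, which is part of why a consistent access interface matters so much for adoption.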

Next in Part 2: Alternative Servers using low-power processors, and server digital power module management.
