How a data center works, today and tomorrow

The future of data centers will rely on cloud, hyperconverged infrastructure and more powerful components

A data center is a physical facility that enterprises use to house their business-critical applications and information. As data centers evolve, it’s important to think long term about how to maintain their reliability and security.

Data center components

Data centers are often referred to as a singular thing, but in reality they are composed of a number of technical elements: routers, switches, security devices, storage systems, servers, application delivery controllers and more. These components store and manage the systems most critical to a company’s continuous operations. Because of that, the reliability, efficiency, security and constant evolution of a data center are typically a top priority.

Data center infrastructure

In addition to technical equipment, a data center requires a significant amount of facilities infrastructure to keep the hardware and software up and running. This includes power subsystems, uninterruptible power supplies (UPS), ventilation and cooling systems, backup generators, and cabling to connect to external network operators.

Data center architecture

Any company of significant size will likely have multiple data centers, possibly in multiple regions. This gives the organization flexibility in how it backs up its information and protects against natural and manmade disasters such as floods, storms and terrorist threats. How a data center is architected involves some of the most difficult decisions because the options are nearly unlimited. Some of the key considerations are:

  • Does the business require mirrored data centers?
  • How much geographic diversity is required?
  • What is the necessary time to recover in the case of an outage?
  • How much room is required for expansion?
  • Should you lease a private data center or use a co-location/managed service?
  • What are the bandwidth and power requirements?
  • Is there a preferred carrier?
  • What kind of physical security is required?

Answers to these questions can help determine where to build data centers and how many are needed. For example, a financial services firm in Manhattan likely requires continuous operations, as any outage could cost millions. The company would likely decide to build two data centers in close proximity, such as one in New Jersey and one in Connecticut, that are mirror sites of one another. An entire data center could then be shut down with no loss of operations because the entire company could run off just one of them.

However, a small professional-services firm may not need instant access to information and can have a primary data center in its offices and back the information up to an alternate site across the country on a nightly basis. In the event of an outage, it would start a process to recover the information but would not have the same urgency as a business that relies on real-time data for competitive advantage.

While data centers are often associated with enterprises and web-scale cloud providers, in fact any company can have a data center. For some SMBs, the data center could be a room located in their office space.

To help IT leaders understand what type of infrastructure to deploy, in 2005 the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published standards for data centers, which defined four discrete tiers with design and implementation guidelines. A tier one data center is basically a modified server room, whereas a tier four data center has the highest levels of system reliability and security. A complete description of each tier can be found here (http://www.tia-942.org/content/162/289/About_Data_Centers) on the TIA-942.org website.
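To put those reliability levels in concrete terms, the short sketch below converts an availability percentage into allowed downtime per year. The tier percentages used here are the often-quoted availability targets, not figures taken from the standard itself, so treat the output as illustrative.

```python
# Illustrative only: convert commonly quoted tier availability targets
# into maximum downtime per year. The percentages are assumptions,
# not values quoted from the TIA-942 standard.
HOURS_PER_YEAR = 24 * 365

tier_availability = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, pct in tier_availability.items():
    downtime_hours = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"{tier}: {pct}% uptime -> about {downtime_hours:.1f} hours of downtime per year")
```

Run as-is, this works out to roughly 29 hours of downtime a year at the low end and well under an hour at the high end, which is why the tier choice matters so much.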

Future data center technologies

As is the case with all things technology, data centers are currently undergoing a significant transition, and the data center of tomorrow will look significantly different from the one most organizations are familiar with today.

Businesses are becoming increasingly dynamic and distributed, which means the technology that powers data centers needs to be agile and scalable. As server virtualization has increased in popularity, the amount of traffic moving laterally across the data center (East-West) has dwarfed traditional client-server traffic, which moves in and out (North-South).

This is playing havoc with data center managers as they attempt to meet the demands of this era of IT. But as the Bachman Turner Overdrive song goes, “B-b-b-baby, you just ain't seen n-n-nothin' yet”.

Here are the key technologies that will evolve data centers from static, rigid environments that hold companies back into fluid, agile facilities capable of meeting the demands of a digital enterprise.

Public clouds

Historically, businesses had a choice of building their own data center, using a hosting vendor or using a managed service partner. These approaches shifted ownership and the economics of running a data center, but the long lead times required to deploy and manage technology remained. The rise of Infrastructure as a Service (IaaS) from cloud providers such as Amazon Web Services and Microsoft Azure gives businesses an option to provision a virtual data center in the cloud with just a few mouse clicks. ZK Research data shows that over 80% of companies are planning hybrid environments, meaning the joint use of private data centers and public clouds.
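As a rough illustration of how quickly infrastructure can be stood up this way, the sketch below uses AWS’s Python SDK (boto3) to launch a single virtual server. The machine image ID, instance type and region are placeholders; a real deployment would also need credentials, networking and security groups in place.

```python
# Minimal sketch: provisioning a virtual server on an IaaS cloud with boto3.
# The AMI ID, region and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "virtual-data-center-demo"}],
    }],
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```

The point is less the specific API than the lead time: what once took weeks of procurement becomes a single scripted call.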

Software-defined networking (SDN)

A digital business can only be as agile as its least agile component, and that’s often the network. SDN can bring a level of dynamism never experienced before. (Here is a deeper dive on SDN.)
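As a sketch of what that dynamism looks like in practice, the snippet below pushes a forwarding rule to an SDN controller through its northbound REST API, rather than configuring switches one by one. The endpoint and payload format are hypothetical stand-ins; real controllers such as OpenDaylight or ONOS each define their own APIs.

```python
# Hypothetical sketch: programming the network via an SDN controller's
# northbound REST API. The URL and JSON schema are made up for illustration.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/flows"  # hypothetical endpoint

flow_rule = {
    "switch": "leaf-01",
    "match": {"src_ip": "10.0.1.0/24", "dst_ip": "10.0.2.0/24"},
    "action": "forward",
    "priority": 100,
}

resp = requests.post(CONTROLLER, json=flow_rule, timeout=5)
resp.raise_for_status()
print("Flow rule installed:", resp.json())
```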

Hyperconverged infrastructure (HCI)

One of the operational challenges of data centers is having to cobble together the right mixture of servers, storage and networks to support demanding applications. Then, once the infrastructure is deployed, IT operations needs to figure out how to scale up quickly without disrupting the application. HCI simplifies this by providing an easy-to-deploy appliance, based on commodity hardware, that can scale out by adding more nodes to the deployment. Early use cases for HCI revolved around desktop virtualization, but it has recently expanded to other business applications such as unified communications and databases.

Containers

Application development is often slowed down by the length of time it takes to provision the infrastructure it runs on. This can significantly hamper an organization’s ability to move to a DevOps model. Containers are a method of virtualizing an entire runtime environment that allows developers to run applications and their dependencies in a self-contained system. Containers are very lightweight and can be created and destroyed quickly, so they are ideal for testing how applications run under certain conditions.
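A minimal sketch of that lightweight lifecycle, using the Docker SDK for Python (and assuming a local Docker daemon is available): a container is created, runs a command in an isolated runtime, and is removed the moment it exits.

```python
# Minimal sketch: create, run and discard a container with the Docker SDK
# for Python (pip install docker). Assumes a local Docker daemon is running.
import docker

client = docker.from_env()

# Run a throwaway container; remove=True deletes it as soon as it exits.
output = client.containers.run(
    image="python:3.11-slim",
    command=["python", "-c", "print('hello from an isolated runtime')"],
    remove=True,
)

print(output.decode().strip())
```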

Microsegmentation

Traditional data centers have all the security technology at the core, so as traffic moves in a North-South direction, it passes through the security tools and the business is protected. The rise of East-West traffic means this traffic bypasses firewalls, intrusion prevention systems and other security tools, enabling malware to spread very quickly. Microsegmentation is a method of creating secure zones in a data center where resources can be isolated from one another, so that if a breach happens, the damage is minimized. Microsegmentation is typically done in software, making it very agile.
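A toy sketch of the idea, with hypothetical zone names and rules: workloads are grouped into zones, and East-West traffic is denied unless an explicit rule allows communication between two zones.

```python
# Toy sketch of microsegmentation: workloads are grouped into zones and
# east-west traffic is denied unless a rule explicitly allows it.
# Zone names, ports and rules are hypothetical.
ALLOWED_FLOWS = {
    ("web", "app"): {443},
    ("app", "db"): {5432},
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: permit traffic only if an explicit rule covers it."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_allowed("web", "app", 443))   # True: explicitly allowed
print(is_allowed("web", "db", 5432))   # False: web may not reach the database directly
```

The default-deny posture is the key point: a compromised workload in one zone cannot simply wander laterally into another.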

Non-volatile memory express (NVMe)

Everything is faster in a world that is becoming increasingly digitized, and that means data needs to move faster. Traditional storage protocols such as the Small Computer System Interface (SCSI) and Advanced Technology Attachment (ATA) have been around for decades and are reaching their limits. NVMe is a storage protocol designed to accelerate the transfer of information between systems and solid-state drives, greatly improving data transfer rates.
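To make the difference concrete, the back-of-the-envelope sketch below compares how long it takes to move a dataset over roughly representative interface bandwidths (about 0.6 GB/s for a SATA SSD versus about 3.5 GB/s for NVMe over four PCIe 3.0 lanes). The figures are rounded assumptions, not benchmarks.

```python
# Back-of-the-envelope sketch: time to move a dataset over different
# storage interfaces. Bandwidth figures are rounded assumptions, not benchmarks.
interfaces_gb_per_s = {
    "SATA SSD (~0.6 GB/s)": 0.6,
    "NVMe over PCIe 3.0 x4 (~3.5 GB/s)": 3.5,
}

dataset_gb = 500  # hypothetical dataset size

for name, bandwidth in interfaces_gb_per_s.items():
    seconds = dataset_gb / bandwidth
    print(f"{name}: {seconds / 60:.1f} minutes to move {dataset_gb} GB")
```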

GPU (graphics processing unit) computing

Central processing units (CPUs) have powered data center infrastructure for decades, but Moore’s Law is running into physical limits. New workloads such as analytics, machine learning and IoT are also driving the need for a new type of compute model that exceeds what CPUs can do. GPUs, once used only for games, operate fundamentally differently, as they are able to process many threads in parallel, making them ideal for the data center of the not-too-distant future.
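A small sketch of why that parallelism matters: the same element-wise computation written as a serial loop (the way a single CPU thread walks through it) and as one data-parallel operation, which is the style that maps naturally onto thousands of GPU threads. NumPy on a CPU is used here only as a stand-in; running on an actual GPU would require a library such as CUDA or CuPy.

```python
# Sketch: the same element-wise operation written serially and in a
# data-parallel style. The parallel form is what maps onto many GPU
# threads; NumPy on a CPU is only a stand-in for that style.
import numpy as np

x = np.random.rand(100_000)

# Serial view: one element at a time, the way a single CPU thread works.
serial = np.empty_like(x)
for i in range(x.size):
    serial[i] = x[i] * 2.0 + 1.0

# Data-parallel view: every element is independent, so the whole array
# can be processed by many threads at once.
parallel = x * 2.0 + 1.0

print("Results match:", np.allclose(serial, parallel))
```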

Data centers have always been critical to the success of businesses of almost all sizes, and that won’t change. However, the number of ways to deploy a data center and the enabling technologies are undergoing a radical shift. To help build a roadmap to the future data center, recall that the world is becoming increasingly dynamic and distributed. Technologies that accelerate that shift are the ones that will be needed in the future. Those that don’t will likely stick around for a while but will be increasingly less important.
