Cisco bets state-of-the-art data center on UCS

Cisco has a new green data center built on integrated blade architecture

Cooling plant

To cool the data center, Cisco uses an air-side economizer design that reduces the need for mechanical chilling by simply ducting filtered outside air through the facility when the outdoor temperature is low enough. The design saves energy and money and, of course, is very green.

To understand how that works, you need a handle on the main components of the cooling system: the pre-chilling external towers, the internal chillers and the air handlers.

The first stage includes three 1,000-ton cooling towers on the roof of the facility, where water is cooled by dripping it down over a series of louvers in an open-air environment and then collected and fed to the chillers in a closed loop. (Pic 6. Tony Fazackarley in front of the cooling towers.)

That pre-cooled water is circulated through five chillers (three 1,000-ton and two 500-ton machines), reducing the amount of refrigeration required to cool water in a second closed loop that circulates from the chillers to the air handlers. (The chillers don't use CFC coolant, another green aspect of the facility.) (Pic 7. One of five chillers.)

A series of valves activated by cranks spun by chains makes it possible to connect any tower to any chiller via any pump, a redundancy precaution. And on the green side, the chillers have variable frequency drives, meaning they can operate at lower speeds when demand is lower, reducing power consumption. (Pic 8. The pumps used to circulate the cooling fluids; note the chains hanging from the valves that can be used to reconfigure the system on the fly.)
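The energy savings from those variable frequency drives follow from the standard pump and fan affinity laws, under which shaft power scales roughly with the cube of speed. A quick illustration in Python (the specific speed fractions are just examples):

```python
# Pump/fan affinity laws: flow scales with speed, power scales roughly
# with the cube of speed. Running a drive at reduced speed therefore
# cuts power consumption far more than proportionally.
def relative_power(speed_fraction: float) -> float:
    """Approximate power draw as a fraction of full-speed power."""
    return speed_fraction ** 3

for s in (1.0, 0.8, 0.7, 0.5):
    print(f"{s:.0%} speed -> about {relative_power(s):.0%} of full power")
```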

The chillers feed coils in the big, boxy air handlers, which pull in hot air from the data halls and route conditioned air back to the computing rooms. So far, nothing too outlandish for a large, modern data center. But here is where the air-side economizer design comes into play, a significant piece of the green story. (Pic 9. Air handlers play a key role in the air-side economizer design, making it possible to cool the facility using fresh, outside air.)

When the outside temperature is below 78 degrees Fahrenheit, the chillers are turned off and louvers on the back of the air handlers are opened to let fresh air in, which is filtered, humidified or dehumidified as needed, and passed through the data halls and out another set of vents on the far side.
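As a rough illustration of how that mode switch works, here is a minimal, hypothetical sketch; the 78-degree threshold comes from Cisco's description, but the two-degree mixed-mode band and all of the names are assumptions, not the facility's actual control logic.

```python
# Illustrative sketch (not Cisco's control logic) of how an air-side
# economizer controller might pick an operating mode based on the
# outdoor dry-bulb temperature.

FREE_AIR_THRESHOLD_F = 78.0   # chillers off below this outdoor temperature
MIXED_MODE_BAND_F = 2.0       # hypothetical band where both modes blend

def cooling_mode(outdoor_temp_f: float) -> str:
    """Return the cooling mode for a given outdoor temperature (deg F)."""
    if outdoor_temp_f < FREE_AIR_THRESHOLD_F:
        # Open the louvers on the air handlers; filter and (de)humidify
        # outside air, push it through the data halls, exhaust it on the far side.
        return "free-air"
    elif outdoor_temp_f < FREE_AIR_THRESHOLD_F + MIXED_MODE_BAND_F:
        # Blend outside air with mechanically chilled air.
        return "mixed"
    else:
        # Too warm outside: run the chillers and recirculate.
        return "chilled"

print(cooling_mode(65))   # free-air
print(cooling_mode(79))   # mixed
print(cooling_mode(95))   # chilled
```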

Fazackarley says they estimate that, even in hot Texas, they will be able to operate in so-called free-air mode 51 per cent of the time, while chillers will be required 47 per cent of the time and two per cent of the time they will use a mix of the two.

Savings in cooling costs are expected to be $600,000 per year, a huge win on the balance sheet and in the green column.

When online, DC2 should boast a Power Usage Effectiveness (PUE) rating of 1.25. PUE is the ratio of total facility power to the power delivered to the IT equipment, so it indicates how much of a data center's power goes to computing versus cooling and other overhead; a PUE of 1.0 would mean no overhead at all.
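A quick sanity check of that ratio, using made-up numbers (the 2,000 kW IT load below is purely illustrative, not a Cisco figure):

```python
# PUE = total facility power / IT equipment power (1.0 would be ideal).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers: if the IT gear drew 2,000 kW, a PUE of 1.25 means
# the whole facility, including cooling and other overhead, draws 2,500 kW.
it_load_kw = 2000.0
facility_kw = it_load_kw * 1.25

print(pue(facility_kw, it_load_kw))                 # 1.25
print(f"overhead: {facility_kw - it_load_kw} kW")   # 500.0 kW for cooling, losses, etc.
```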

How good is a PUE of 1.25? "Very good, as it requires a very high level of IT and physical infrastructure optimization in tandem," says Bruce Taylor, vice president of Uptime Institute Symposia. "But keep in mind a new data center usually has a 'bad' utilization effectiveness ratio because of the standard practice of building the physical facility, including the power and cooling systems, prior to its actually being needed, to allow for capacity demand growth. Leaders like Intel are able to design facilities that tightly couple the IT hardware and the electrical and mechanical systems that power and cool it."

And Taylor is a fan of the air-side economizer design: "Wherever it is feasible to use 'free' outside air in the management of thermals, that increases effectiveness and energy efficiency."

Other green aspects of the facility:

* Solar cells on the roof generate 100 kilowatts of power for the office spaces in the building.
* A heat pump provides heating/cooling for the office spaces.
* A lagoon captures gray water from lavatory wash basins and the like and is used for landscape irrigation.
* Indigenous, drought-resistant plants on the property reduce irrigation needs.

(Pic 10. The roof-top solar arrays provide electricity for the office spaces.)

Data halls

The data halls, of course, haven't yet been filled with computing gear; for now they hold just the empty racks that will accept the UCS chassis. While there is no raised floor, the concrete slab has been tiled to mimic the standard raised-floor layout to help the teams properly position equipment. (Pic 11. Tony Fazackarley and Jim Cribari in a data hall with the racks that will accept the UCS systems. Note the tiles on the concrete slab mimic the typical raised floor dimensions.)

Air can't be circulated through the floor, but Cisco uses a standard hot/cold aisle configuration, with cold air pumped down from above and hot air sucked up out of the top of the racks through chimneys that extend part way to the high ceiling above the cold air supply. The idea, Cribari says, is to keep the air stratified to avoid mixing. The rising hot air either gets sucked out in free-air mode or is directed back to the air handlers for chilling.

Power bus ducts run down each aisle and can be reconfigured as necessary to accommodate different needs. As currently designed, each rack gets a three-phase, 240-volt feed.

All told, this facility can accommodate 240 UCS clusters (120 in each hall). A cluster is a rack with five UCS chassis in it, each chassis holding eight server blades and up to 96GB of memory. That's a total of 9,600 blades, but the standard blade has two sockets, each of which can support up to eight processor cores, and each core can support multiple virtual machines, so the potential scale is enormous. The initial install will be 10 UCS clusters, Cribari says.
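Using only the numbers above, the blade and core counts work out as follows (the VMs-per-core figure at the end is an illustrative assumption, not Cisco's plan):

```python
# Capacity arithmetic using the figures quoted above.
clusters = 240            # 120 per data hall
chassis_per_cluster = 5   # five UCS chassis per rack
blades_per_chassis = 8
sockets_per_blade = 2
cores_per_socket = 8      # up to eight cores per socket

blades = clusters * chassis_per_cluster * blades_per_chassis
cores = blades * sockets_per_blade * cores_per_socket

print(blades)   # 9600 blades at full build-out
print(cores)    # 153600 physical cores

# Each core can host multiple virtual machines; at a purely illustrative
# 4 VMs per core, that would be over 600,000 VMs.
print(cores * 4)  # 614400
```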

Network-attached storage will be interspersed with the servers in each aisle, creating what Cribari calls virtual blocks or Vblocks. The Vblocks become a series of clouds, each with compute, network and storage. (Pic 12, UCS racks.)

The UCS architecture reduces cable plant needs by 40 per cent, Cribari says. Each chassis in a cluster is connected to a top-of-rack access switch using a 10Gbps Fibre Channel over Ethernet (FCoE) twinax cable that supports storage and network traffic.

From that switch, storage traffic is sent over a 16Gbps connection to a Cisco MDS SAN switch, while network traffic is forwarded via a 40Gbps LAN connection to a Cisco Nexus 7000 switch. In the future, it will be possible to use FCoE to carry integrated storage/LAN traffic to the Nexus and just hang the storage off of that device.
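For clarity, the per-rack cabling described above can be summarized as a simple data structure; this is a schematic sketch, not an actual switch configuration, and the dictionary keys are invented for illustration. The link speeds and device roles come from the article.

```python
# Schematic summary of the per-rack connectivity described above.
rack_fabric = {
    "chassis_uplinks": {          # each UCS chassis to the top-of-rack switch
        "media": "twinax",
        "protocol": "FCoE",       # storage and LAN traffic share one cable
        "speed_gbps": 10,
    },
    "top_of_rack_uplinks": {
        "storage": {"target": "Cisco MDS SAN switch", "speed_gbps": 16},
        "lan": {"target": "Cisco Nexus 7000", "speed_gbps": 40},
    },
}

for name, link in rack_fabric["top_of_rack_uplinks"].items():
    print(f"{name}: {link['speed_gbps']} Gbps to {link['target']}")
```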

The cable reduction not only saves on upfront costs – the company estimates it will save more than a million dollars on cabling in this facility alone – but it also simplifies implementation, eases maintenance and takes up less space in the cabinet. The latter increases air circulation so things run cooler and more efficiently.

That air circulation, in fact, is what enables Cisco to put up to five chassis in one rack, Cribari says. That's a total of about 13 kilowatts per rack, "but we can get away with it because the machines run cooler without all that cabling and air flow is better."
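Combining the 13-kilowatt rack load with the three-phase, 240-volt feed mentioned earlier gives a rough feel for the electrical numbers involved; the unity power factor below is a simplifying assumption for illustration.

```python
import math

# Rough per-rack electrical arithmetic. The 13 kW load and the
# three-phase 240 V feed come from the article; the power factor of 1.0
# is a simplifying assumption.
rack_load_w = 13_000
chassis_per_rack = 5
line_to_line_v = 240
power_factor = 1.0

print(rack_load_w / chassis_per_rack)  # about 2600 W per chassis

# Approximate line current for a balanced three-phase load:
# I = P / (sqrt(3) * V_LL * PF)
current_a = rack_load_w / (math.sqrt(3) * line_to_line_v * power_factor)
print(round(current_a, 1))  # roughly 31 A per phase
```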

Put to use

When all is said and done and Texas DC2 comes online, it will be married to Texas DC1 in an active/active configuration – creating what Cisco calls a Metro Virtual Data Center (MVDC) – that will enable critical applications to live in both places at once for resiliency, Cribari says.

With MVDC, which will be emulated in a pair of data centers in the Netherlands as well, traffic arrives at and data is stored in two locations, Cribari says. Applications that will implement MVDC include critical customer-facing programs, such as Cisco.com to safeguard order handling, and apps that are central to operations, such as the company's demand production program.

Cisco is currently trialing MVDC using applications in DC1 and a local colocation facility.

DC2 will otherwise serve as a private internal cloud, supporting what the company calls Cisco IT Elastic Infrastructure Services, or CITEIS. "It's basically targeted at the infrastructure-as-a-service layer, combining compute, storage, and networking," Manville says. "CITEIS should be able to service 80 per cent of our x86 requirements, but we think there are still going to be some real high-end production databases we'll have to serve with dedicated environments, and maybe not even virtualized, so using UCS as a bare-metal platform."

The virtualization technology of choice for CITEIS is VMware supporting a mix of Linux and Windows. Regarding the operating system choice, Manville says "there is no religion about that. We'll use whatever is needed, whatever works."

While Manville says cloud tech will account for half of his TCO expectations, the other half will stem from capabilities baked into UCS, many of which improve operational efficiencies.

When you plug a blade into a UCS chassis, for example, the UCS Manager residing in the top-of-rack switch delivers a service profile that configures everything from the IP address to the BIOS, the type of network and storage connections to be used, the security policies and even the bandwidth QoS levels.

"We call it a service profile instead of a server profile because we look more at what the apps that will be supported on the blade will require," says Jackie Ross, vice president of Cisco's Server Access and Virtualization Group.

Once configured, service profiles can be applied to any blade, and storage and network connections can be changed as needed without having to physically touch the machine; any blade can access Ethernet, Fibre Channel, FCoE, etc., Ross says.
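As a rough illustration of what such a profile carries, here is a hypothetical sketch based only on the attributes mentioned above; the class and field names are invented for illustration and do not reflect the actual UCS Manager API or schema.

```python
# Illustrative stand-in for the kind of settings a service profile carries.
# This is NOT the UCS Manager API; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    name: str
    ip_address: str
    bios_settings: dict = field(default_factory=dict)
    network_connectivity: list = field(default_factory=list)   # e.g. Ethernet, FCoE
    storage_connectivity: list = field(default_factory=list)   # e.g. Fibre Channel
    security_policies: list = field(default_factory=list)
    qos_bandwidth_mbps: int = 0

    def apply_to(self, blade_slot: str) -> None:
        # In UCS the manager pushes the profile to the blade; here we just log it.
        print(f"Applying profile '{self.name}' to blade {blade_slot}")

# Because the profile is decoupled from the hardware, the same profile can be
# re-applied to any blade without physically touching the machine.
web_tier = ServiceProfile(
    name="web-tier",
    ip_address="10.0.0.25",
    network_connectivity=["Ethernet", "FCoE"],
    storage_connectivity=["Fibre Channel"],
    security_policies=["default-acl"],
    qos_bandwidth_mbps=4000,
)
web_tier.apply_to("chassis-3/blade-5")
```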

That flexibility speeds provisioning, aiding agility, Manville says. The goal is to get to 15-minute self-service provisioning. "We have this running but haven't turned it over to the application developers for various chargeback and other authorization issues. But our sys admins are seeing significant productivity gains by being able to provision virtual machines in an automated fashion."

Taken all together, the broad new IT strategy – including the build-out of Texas DC2 and the shift to a highly virtualized cloud environment driven by the company's new computing tools – is ambitious and, if Cisco pulls it all off, will be quite an accomplishment.

Cisco is definitely taking the long view. There is enough real estate at the DC2 complex, and enough headroom in the core infrastructure, to double the "raised floor" space in coming years.
