The Australian data centre landscape

The current state of data centre affairs

IDC analyst, Matthew Oostveen

Data centre infrastructure has changed dramatically over the past few years. While many firms still use large-scale servers, the movement has been toward scale-out deployments using hundreds, if not thousands, of blades and racks. Additional equipment such as storage systems, network devices, and power and cooling gear adds to the complexity and cost of this growing infrastructure. Data centre managers and chief financial officers are facing increasingly complex IT environments, and how power and cooling costs can be reduced is the 'hot' topic of the day. Coupled with this is the increased importance of sporting a 'green' corporate image. Ageing data centres are structurally outdated and inefficient, costing thousands, if not millions, of dollars extra to maintain each year.

IDC uses a data centre taxonomy which is based on the floor size of the facility and the types of security and redundancy employed. The first category, the server room, is a secondary computer location that is usually under IT control. These sites are typically less than 50 square metres and have some power, cooling and security capability. Server rooms offer significant opportunity for streamlining due to their large numbers. IDC research reveals that nearly 20 per cent of surveyed businesses have more than 10 server rooms.

Localised data centres can be either a primary or a secondary location, are usually less than 100 square metres, and require badge or PIN access. They have some power and cooling redundancy to ensure constant temperatures. Data centres exhibit economies of scale: the larger the facility, the cheaper it is to run equipment. In a localised data centre, 41 per cent of the capital outlay goes to building design and construction; that figure drops to 15 per cent for enterprise class facilities.
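To put those percentages in context, here is a minimal worked example in Python. The 41 and 15 per cent shares come from the figures above; the dollar amounts reuse the build-cost estimates quoted later in this piece and are assumptions for illustration only.

```python
# Illustrative arithmetic only. The percentage shares come from the
# article; the capital-outlay figures reuse the build-cost estimates
# quoted later in the piece and are assumptions for this example.

facilities = {
    # name: (assumed total capital outlay in dollars,
    #        share spent on building design and construction)
    "localised": (10_000_000, 0.41),
    "enterprise class": (100_000_000, 0.15),
}

for name, (outlay, share) in facilities.items():
    construction = outlay * share
    print(f"{name}: ${construction:,.0f} of ${outlay:,.0f} "
          f"({share:.0%}) goes to design and construction")
```

On these assumed figures, the smaller facility spends $4.1 million of $10 million on design and construction, while the enterprise facility spends $15 million of $100 million, a far smaller proportion of a far larger budget.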

The next two categories are what we typically think of when someone says 'data centre'. A mid-tier data centre is the primary server location for an organisation. It is a large room, though often less than 500 square metres, with superior, redundant cooling systems, and it is protected by multiple levels of physical and digital security.

Finally, there is the enterprise class data centre, which is not common in Australia. An enterprise class data centre is, in most cases, the primary server location for an organisation. It is very often in excess of 500 square metres, has advanced cooling systems and redundant power, and is protected by multiple levels of physical and digital security. Enterprise class facilities are expensive to run, with nearly 30 per cent incurring up to US$500,000 per month in operating costs.
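Taken together, the four tiers are distinguished mainly by floor size. As a rough sketch, the Python snippet below classifies a facility against the thresholds quoted above; the hard boundaries are an assumption, since IDC's taxonomy also weighs security and redundancy, which floor area alone cannot capture.

```python
def classify_facility(floor_area_m2: float) -> str:
    """Rough tier classification by floor size alone, using the
    thresholds quoted in the article ('typically', 'usually' and
    'often' in the original, so the cut-offs are approximate)."""
    if floor_area_m2 < 50:
        return "server room"
    if floor_area_m2 < 100:
        return "localised data centre"
    if floor_area_m2 < 500:
        return "mid-tier data centre"
    return "enterprise class data centre"

# Example: a 350 square metre primary facility
print(classify_facility(350))  # mid-tier data centre
```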

Data centres are capital-intensive facilities that require large operating budgets to maintain. Australia has some of the oldest data centres in the Asia Pacific region, which is significant because old data centres are more expensive to maintain, less reliable, and often unable to cope with the demands placed on them by modern servers and storage.

Despite their age, IDC research shows that very few CIOs (fewer than 10 per cent) intend to build a new facility. The reason is cost: new facilities range upwards of $10 million for localised facilities and more than $100 million for enterprise class centres. Instead of building new, CIOs are looking for ways to upgrade and refit their existing data centres.

In response to customer demands, vendors are ramping up offerings and introducing new technologies for data centres. IDC recognises four levels of solutions which customers are using to address their power and cooling issues:

  1. System: Solutions at the system level span a wide gamut, from processors to software. Server solutions include low-voltage processors and memory, processors with power-throttling capabilities, and improved power supplies. IDC also includes server virtualisation, as well as power management software, in this category.

  2. Rack: Rack-level solutions include blanking panels and reducing or rearranging cables to improve air flow through racks; network sensors that monitor and provide alerts for temperature, humidity, brownouts and blackouts, loss of heating, ventilation and air conditioning, and so on (a minimal monitoring sketch follows this list); and rack enclosures and supplemental cooling units situated overhead, next to, in front of, or behind the rack, such as in-row cooling.

  3. Room: This category encompasses solutions around the layout, design and infrastructure of data centres, including hot/cold aisle configuration and containment, heating, ventilation and air-conditioning (HVAC) system modernisation, cabling reduction and floor plenum clean-out, and even the build-out of new space.

  4. Services: Power and cooling services are part of a broad set of data centre services that includes energy efficiency analysis, computational fluid dynamics, thermal assessments and thermal zone mapping, architectural and engineering design services, and even data centre co-location or hosting.
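To make the rack-level monitoring idea concrete, here is a minimal polling-loop sketch in Python. The thresholds and the sensor-reading function are hypothetical stand-ins; a real deployment would query actual rack sensors (over SNMP or a vendor API) rather than the simulated values used here so the example runs on its own.

```python
import random
import time

# Hypothetical alert thresholds; a real deployment would take these from
# the facility's design envelope and the sensor vendor's guidance.
MAX_TEMP_C = 27.0
MAX_HUMIDITY_PCT = 60.0

def read_rack_sensor() -> tuple[float, float]:
    """Stand-in for a real sensor query. Returns simulated
    (temperature in C, relative humidity in %) readings."""
    return random.uniform(20.0, 30.0), random.uniform(40.0, 70.0)

def poll_rack(rack_id: str, samples: int = 5, interval_s: float = 1.0) -> None:
    """Poll one rack a fixed number of times and print an alert
    whenever a reading breaches its threshold."""
    for _ in range(samples):
        temp, humidity = read_rack_sensor()
        if temp > MAX_TEMP_C:
            print(f"ALERT {rack_id}: temperature {temp:.1f}C > {MAX_TEMP_C}C")
        if humidity > MAX_HUMIDITY_PCT:
            print(f"ALERT {rack_id}: humidity {humidity:.1f}% > {MAX_HUMIDITY_PCT}%")
        time.sleep(interval_s)

poll_rack("rack-07")
```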

Matthew Oostveen is research manager at IDC Australia.
