Gartner: 10 critical IT trends for the next five years

IT was forced to support tablets, IM and wireless networks - and more such technologies are on the horizon

Trying to stay ahead of the curve when it comes to IT issues is not a job for the faint of heart. That point was driven home at Gartner's annual IT Symposium in Orlando, Florida, where analyst David Cappuccio outlined what he called "new forces that are not easily controlled by IT are pushing themselves to the forefront of IT spending."

The forces of cloud computing, social media/networking, mobility and information management are all evolving at a rapid pace. These evolutions are largely happening despite the controls IT normally places on the use of technologies, Cappuccio stated. "IT was forced to support tablets, and end users forced them to support IM and wireless networks a few years ago. And more such technologies are on the horizon," he said.

Cappuccio's presentation listed ten critical trends and technologies that will impact IT during the next five years. The summaries below are taken from his report:

1. Disruption: Business users expect the same level of IT performance and support as they experience with consumer-based applications and services. Business-user demand for customer satisfaction is far outstripping what IT support organizations can supply. IT organizations must invest in the development of IT service desk analyst skills and attributes, and organize appropriately to increase IT's perceived value to the rest of the organization. Business-user satisfaction can be a moving target, but enabling higher levels of productivity at the IT service desk level demonstrates that the IT organization cares about the business, and that it is committed to ensuring that users meet their goals and objectives. While a focus on traditional training, procedures, security access, knowledge management and scripts is warranted, a focus on next-generation support skills will be paramount to meeting the needs and expectations of the business more efficiently.

2. Software Defined Networks: SDN is a means to abstract the network, just as server virtualization abstracts the server. It transforms network configuration from a box-and-port-at-a-time exercise into a flow-at-a-time model linked to the application, and it gives operators programmatic control. With SDN, the controller has a view of the entire network topology, both its virtual and physical components (switches, firewalls, application delivery controllers and so on), and presents that abstracted view for provisioning and managing the network connections and services that applications and operators require.

OpenFlow is a good example of such a generalized SDN protocol: it provides a generic API that any network operator can use to create control and management schemes based on the application requirements of the organization. There will be other OpenFlow-style SDN protocols, designed from the ground up around application-level logic rather than the traditional network paradigm of protocol-, device- and link-based thinking.

Used along with encapsulation, SDN protocols like OpenFlow can dynamically extend a private cloud into a hybrid model by masking enterprise-specific IP addresses from the cloud provider's infrastructure. SDN also promises to allow service providers to offer dynamically provisioned WAN services, potentially across multi-provider, multi-vendor networks. Of course, there is the potential for significant organizational disruption as traditional network skills begin to shift and alignment with specific vendor products or platforms becomes less rigid.
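
To make the flow-level, programmatic control concrete, here is a minimal sketch of an application pushing a flow rule through an SDN controller's northbound REST interface. The controller URL, endpoint and field names are hypothetical illustrations, not any particular controller's API (OpenDaylight, ONOS and others each define their own).

```python
import json
import urllib.request

# Hypothetical northbound REST endpoint for an SDN controller; real
# controllers expose similar ideas through their own, differently named APIs.
CONTROLLER_URL = "http://sdn-controller.example.com:8181/flows"

def push_flow(switch_id, src_ip, dst_ip, priority=100):
    """Install a flow-level rule linking an application's traffic
    (src -> dst) to a forwarding action, instead of configuring
    boxes and ports one at a time."""
    flow = {
        "switch": switch_id,  # a virtual or physical switch in the topology
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip},
        "action": "forward",  # could also be drop, mirror, redirect, ...
        "priority": priority,
    }
    request = urllib.request.Request(
        CONTROLLER_URL,
        data=json.dumps(flow).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example (commented out, since the controller above is fictional):
# push_flow("edge-switch-01", "10.0.1.5", "10.0.2.9")
```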

3. Bigger data and storage: A fact that data centers have lived with for many years remains true today: data growth continues unabated. From an IT perspective, the main issue is not awareness of the problem but prioritization of it. We have spent so many years dealing with this, and surviving, that storage management projects are usually initiated from the bottom up rather than the top down, relegating many of them to "skunkworks" status with little long-term funding.

Leading-edge firms have realized the problem and are beginning to focus on storage utilization and management as a means to reduce floor space usage and energy usage, improve compliance and improve controls on growth within the data center. Now is the time to do this, because most of the growth during the next five years will be in unstructured data -- the most difficult to manage from a process or tool point of view. Technologies that will become critical over the next few years are in-line deduplication, automated tiering of data to get the most efficient usage patterns per kilowatt, and flash or SSD drives for higher-end performance optimization, but with significantly reduced energy costs. NAND pricing continues to improve at a rapid pace, moving from $7,870 per gigabyte in 1997 down to $1.25 per gigabyte today -- and this trend will continue.
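
As a rough illustration of two of those technologies, the sketch below assigns data sets to tiers by access frequency and deduplicates blocks by hashing them. The tier names and thresholds are illustrative assumptions, not any product's policy engine.

```python
import hashlib

# Illustrative tiering thresholds (accesses per day); a real policy would
# also weigh latency, cost per gigabyte and energy per kilowatt.
TIERS = [
    (1000, "flash/SSD"),
    (50, "performance disk"),
    (0, "capacity/archive disk"),
]

def choose_tier(accesses_per_day):
    """Automated tiering: keep hot data on fast media, cold data on cheap media."""
    for threshold, tier in TIERS:
        if accesses_per_day >= threshold:
            return tier

def dedupe(blocks):
    """In-line deduplication: store each unique block once, keyed by its hash."""
    store = {}
    for block in blocks:
        store.setdefault(hashlib.sha256(block).hexdigest(), block)
    return store

if __name__ == "__main__":
    print(choose_tier(5000))  # -> flash/SSD
    print(choose_tier(3))     # -> capacity/archive disk
    blocks = [b"header", b"payload", b"payload", b"header"]
    print(f"{len(dedupe(blocks))} unique blocks out of {len(blocks)}")
```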

4. Hybrid Clouds: Vendors increasingly use cloud computing as a marketing label for many old technologies and offerings, devaluing the term and trend. Although cloud computing is a natural evolution of various enterprise and Web-based technologies and trends, it is a mistake to simply relabel these older technologies as "cloud computing." This new computing model drives revolutionary changes in the way solutions are designed, built, delivered, sourced and managed.

Cloud computing is heavily influenced by the Internet and the vendors that have sprung from it. Companies such as Google deliver various services built on a massively parallel architecture that is highly automated, with reliability provided via software techniques rather than highly reliable hardware. Although cost is a potential benefit for small companies, the biggest benefits of cloud computing are built-in elasticity and scalability, which reduce barriers and enable these firms to grow quickly. A hybrid cloud service is composed of services that are combined either for increased capability beyond what any one of them has (aggregating services, customizing them or integrating them together) or for additional capacity.

There is an emerging trend in hybrid data centers whereby growth is looked at from the perspective of application criticality and locality. For example, if a data center is nearing capacity, rather than beginning a project to define and build another site, workloads are assessed based on criticality to the business, risk of loss and ease of migration, and a determination is made to move some workloads to colocation facilities, hosting or even a cloud-type service. This frees up floor space in the existing site for future growth, both solving the scale problem and potentially deferring capital spending for years. An alternative is for older data centers to begin migrating critical work off-site, reducing downtime risks and business interruptions while freeing up the old data center for additional non-critical work or for a slow, in-place retrofit project.
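
A minimal sketch of that workload assessment might look like the following; the workloads, weights and cutoffs are illustrative assumptions, not Gartner's methodology.

```python
# Score each workload on business criticality, risk of loss and ease of
# migration, then decide what stays in the primary data center and what
# moves to colocation, hosting or a cloud-type service.

WORKLOADS = [
    # name, criticality (1-5), risk of loss (1-5), ease of migration (1-5)
    ("ERP",           5, 5, 1),
    ("test/dev",      1, 1, 5),
    ("batch reports", 2, 2, 3),
]

def placement(criticality, risk_of_loss, ease_of_migration):
    stay_score = criticality + risk_of_loss - ease_of_migration
    if stay_score >= 5:
        return "primary data center"
    if stay_score >= 1:
        return "colocation/hosting"
    return "cloud-type service"

for name, criticality, risk, ease in WORKLOADS:
    print(f"{name:13s} -> {placement(criticality, risk, ease)}")
```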

5. Client server: In the PC world of the last quarter century, both the operating system and application were primarily resident on the desktop (some large and complex applications such as ERP were located on servers that could be remote from clients). Today, anything goes! The operating system -- as well as the application -- can be executed on the PC or a server -- or streamed to a PC when needed. Choice of architecture is dependent on user needs and the time frame for implementation. No longer does one size fit all.

Regarding Windows 8 deployments, 90% of enterprises will bypass broad-scale deployment and will focus on optimized Windows 8 deployments only on specific platforms (e.g., mobile, tablet). Servers, meanwhile, have been undergoing a long-term evolutionary process, moving from stand-alone pedestals to rack-mounted form factors in rack cabinets. The latest step in x86 server hardware evolution is the blade server, which has taken hardware from single servers with internal peripherals in a rack cabinet to a number of denser servers in a single chassis with shared backplane, cooling and power resources. A true component design allows for the independent addition of even more granular pieces such as processors, memory, storage and I/O elements.

As blades have grown, so has the marketing push from server providers to position blades as the next, most advanced step in server evolution and even, in some cases, as the ultimate server solution. It always takes a closer examination of multiple factors (required density, power and cooling efficiency requirements, high availability, workload and so on) to reveal where blade, rack and skinless servers really do have advantages. Moving forward, this evolution will split into multiple directions as appliance use increases and specialty servers begin to emerge (e.g., analytics platforms).

6. The Internet of Things: This is a concept that describes how the Internet will expand as physical items such as consumer devices and physical assets are connected to the Internet. The vision and concept have existed for years; however, there has been acceleration in the number and types of things that are being connected and in the technologies for identifying, sensing and communicating. Key advances include: 

Embedded sensors: Sensors that detect and communicate changes (e.g., accelerometers, GPS, compasses, cameras) are being embedded not just in mobile devices but in an increasing number of places and objects; a brief sketch of this sense-and-communicate pattern follows this list.

Image recognition: Image recognition technologies strive to identify objects, people, buildings, places, logos and anything else that has value to consumers and enterprises. Smartphones and tablets equipped with cameras have pushed this technology from mainly industrial applications to broad consumer and enterprise applications.

NFC payment: NFC allows users to make payments by waving their mobile phone in front of a compatible reader. Once NFC is embedded in a critical mass of phones for payment, industries such as public transportation, airlines, retail and healthcare can explore other areas in which NFC technology can improve efficiency and customer service.
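
To ground the sense-and-communicate pattern referenced above, here is a minimal sketch of a device reading a (simulated) embedded sensor and publishing the values upstream as JSON. The gateway URL and payload format are hypothetical; a production device would more likely use MQTT, CoAP or a vendor's cloud API.

```python
import json
import random
import time
import urllib.request

# Hypothetical ingestion endpoint for sensor readings.
GATEWAY_URL = "http://iot-gateway.example.com/readings"

def read_accelerometer():
    """Stand-in for an embedded sensor driver: return x/y/z acceleration in m/s^2."""
    return {"x": random.gauss(0, 0.1), "y": random.gauss(0, 0.1), "z": 9.81}

def publish(device_id, reading):
    """Communicate the sensed change upstream as a timestamped JSON document."""
    payload = json.dumps({
        "device": device_id,
        "timestamp": time.time(),
        "reading": reading,
    }).encode("utf-8")
    request = urllib.request.Request(
        GATEWAY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example (commented out, since the gateway above is fictional):
# publish("sensor-42", read_accelerometer())
```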

7. Appliance madness: Organizations are generally attracted to appliances when they offer hands-off solutions to application and functional requirements, but organizations are also repelled by appliances when they require additional investments (time or software) for management functions. Thus, successful appliance products must not only provide a cost-effective application solution, they must require minimum management overhead.

Despite the historical mixed bag of successes and failures, vendors continue to introduce appliances to the market because the appliance model represents a unique opportunity for a vendor to have more control of the solution stack and obtain greater margin in the sale. In short, appliances aren't going away any time soon. But what's new in appliances is the introduction of virtual appliances. A virtual appliance enables a server vendor to offer a complete solution stack in a controlled environment, but without the need to provide any actual hardware. We see virtual appliances gaining popularity and fully expect to see a broad array of virtual appliance offerings emerge during the next five years. However, the growth in virtual appliances will not kill physical appliances; issues such as physical security, specialized hardware requirements and ecosystem relations will continue to drive physical requirements.

The very use of the appliance terminology creates great angst for some vendors and users -- particularly for physical appliances. Strictly speaking, a highly integrated platform like Oracle's Exadata or VCE Vblock is not a true appliance; these are factory integrated systems that will require some degree of configuration and tuning, even when the software stack is integrated; they will never fit the classic notion of a "pizza box." But while such systems will not be consumed as appliances, they are certainly packaged and sold in a very appliance-like manner. Many other physical appliances will be more faithful to the concept -- they will be plug & play devices that can only deliver a very prescribed set of services.

8. Complexity: The sources of complexity within IT are easy to spot. They include the 1,600 initialization parameters that can be supplied when starting an Oracle database and the 2,300 pages of manuals required to use a Cisco switch. The complexity increases, though, when we combine several elements, such as Microsoft Exchange running on VMware. What makes this complexity worse is that we are not getting our money's worth: historical studies suggest that IT organizations actually use only roughly 20% of the features and functions in a system. The result is a large amount of IT debt, whose high "keeping the lights on" maintenance costs divert needed funds from projects that could enhance business competitiveness.

9. Evolution toward the virtual data center: As we enter the third phase of virtualization (phase 1: mainframe/Unix; phase 2: basic x86), we see that the higher the proportion of virtualized instances, the greater the workload mobility across distributed and connected network nodes, validating fabric and cloud computing as viable architectures. As more of the infrastructure becomes virtualized, we are reshaping IT infrastructure itself. Eventually the "fabric" will have the intelligence to analyze its own properties against policy rules that create optimum paths, change them to match changing conditions, and do so without requiring laborious parameter adjustments. x86 virtualization is effectively the most important technology innovation behind the modernization of the data center. With it comes a sea change in how we view the roles of compute, network and storage elements: from physical and hardwired to logical and decoupled from the applications they serve.
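
As a toy illustration of that kind of policy-driven fabric, the sketch below recomputes an optimum path when a link is congested, weighting latency against utilization. The topology, metrics and policy weights are invented for the example.

```python
import heapq

# node -> {neighbor: (latency_ms, utilization)}; invented topology in which
# the short A-B-D route is heavily utilized.
LINKS = {
    "A": {"B": (2, 0.9), "C": (5, 0.1)},
    "B": {"D": (2, 0.2)},
    "C": {"D": (3, 0.2)},
    "D": {},
}

# Policy rule: how strongly to penalize latency vs. congestion.
POLICY = {"latency": 1.0, "congestion": 10.0}

def link_cost(latency_ms, utilization):
    return POLICY["latency"] * latency_ms + POLICY["congestion"] * utilization

def optimum_path(source, destination):
    """Dijkstra's algorithm over policy-weighted link costs."""
    queue, visited = [(0.0, source, [source])], set()
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == destination:
            return path, total
        if node in visited:
            continue
        visited.add(node)
        for neighbor, (latency, utilization) in LINKS[node].items():
            heapq.heappush(queue, (total + link_cost(latency, utilization),
                                   neighbor, path + [neighbor]))
    return None, float("inf")

print(optimum_path("A", "D"))  # routes around the congested A-B link
```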

10. IT demand: With the increased awareness of the environmental impact data centers can have, there has been a flurry of activity around the need for a data center efficiency metric. Most that have been proposed, including power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE), attempt to map a direct relationship between total facility power delivered and IT equipment power available. Although these metrics will provide a high-level benchmark for comparison purposes between data centers, what they do not provide is any criteria to show incremental improvements in efficiency over time. They do not allow for monitoring the effective use of the power supplied -- just the differences between power supplied and power consumed.

For example, a data center might be rated with a PUE of 2.0, an average rating. But if the data center manager began using virtualization to raise average server utilization from 10% to 60%, the data center would become more efficient with its existing resources, yet the overall PUE would not change at all. A more effective way to look at energy consumption is to analyze the effective use of power by existing IT equipment relative to the performance of that equipment. While this may sound intuitively obvious, a typical x86 server consumes between 60% and 70% of its total power load when running at very low utilization levels. Raising utilization has only a nominal impact on power consumed, yet a significant impact on effective performance per kilowatt.
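
The arithmetic behind that example can be sketched in a few lines. The wattage and performance figures below are illustrative assumptions consistent with the 60% to 70% idle draw mentioned above, not measured data.

```python
# PUE = total facility power / IT equipment power, so consolidating work onto
# busier servers leaves PUE untouched even though useful work per kilowatt
# improves dramatically. All figures are illustrative.

FACILITY_KW = 1000.0
IT_EQUIPMENT_KW = 500.0   # PUE = 2.0 before and after the change

def server_watts(utilization, idle_fraction=0.65, peak_watts=400.0):
    """A typical x86 server draws roughly 60-70% of peak power even when idle."""
    return peak_watts * (idle_fraction + (1 - idle_fraction) * utilization)

def work_per_kw(utilization, peak_work_units=100.0):
    """Relative useful work delivered per kilowatt at a given utilization."""
    return (peak_work_units * utilization) / (server_watts(utilization) / 1000.0)

for utilization in (0.10, 0.60):
    print(f"utilization {utilization:.0%}: "
          f"{server_watts(utilization):.0f} W drawn, "
          f"{work_per_kw(utilization):.0f} work units per kW, "
          f"PUE still {FACILITY_KW / IT_EQUIPMENT_KW:.1f}")
```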

Pushing IT resources toward higher effective performance per kilowatt can have the twofold effect of improving energy consumption (putting energy to work) and extending the life of existing assets through increased throughput. The PPE (power to performance effectiveness) metric is designed to capture this effect.

Follow Michael Cooney on Twitter: @nwwlayer8 and on Facebook.

Read more about data center in Network World's Data Center section.
