Cloud computing's present and future: What you need to know

Cloud computing is seeping into IT organizations and companies, often without any official approval or strategy.

I had the opportunity to participate in two conferences over the past couple of weeks. The first offered what were essentially headlines ripped from today's newspapers about the state of cloud computing in the real world; the second delivered a figurative text message from the future of cloud computing.

The first conference was "The Business of Cloud Computing," sponsored by Opal Events. It was a relatively small event, but the content represented perhaps the best end-user perspective on cloud computing I have ever seen at a conference. The second conference was Structure 2011, put on by GigaOM. It was also outstanding, and it offered a peek into the future of IT and just how wrenching a transformation cloud computing will impose on established IT practices.

What I took away from the two conferences is the sense that cloud computing is seeping into IT organizations and companies, often without any "official" approval or strategy, but with the undeniable momentum of a locomotive. Let me share some of the presentations and facts that struck me during this two-week period.

The Business of Cloud Computing: CIOs Share Successes

Steve Phillipott, CIO of Amylin Pharmaceuticals, presented one of the keynotes at "The Business of Cloud Computing." Over the past couple of years his organization has completely changed its way of doing business--driven, admittedly, by constraints like a tough business environment and completely full data centers. Amylin is moving all of its computing out of its local data center and into a remote, outsourced colocation provider.

At the same time, the pharmaceutical company is re-evaluating the deployment requirements for its apps, selecting non-core apps as candidates for SaaS migration and identifying strategic apps--those that require significant IT resources--as ones that will remain in-house. Amylin has also implemented a stringent financial model to evaluate deployment options. Its projections found that applications hosted in external public clouds would save, on average, 30 to 50 percent compared with internal deployment, and that applications written on Force.com were delivered in one-eighth to one-tenth the time the same applications would have required had they been written for internal deployment.
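To make the kind of comparison such a model performs concrete, here is a minimal sketch with entirely hypothetical numbers--none of them are Amylin's:

```python
# A purely illustrative deployment comparison of the kind a financial model makes.
# All figures below are invented for the example.
def three_year_cost(capex: float, annual_opex: float, years: int = 3) -> float:
    """Total cost of ownership over a fixed planning horizon."""
    return capex + annual_opex * years

in_house = three_year_cost(capex=400_000, annual_opex=150_000)   # hardware plus admin
public_cloud = three_year_cost(capex=0, annual_opex=180_000)     # pay-as-you-go

savings = 1 - public_cloud / in_house
print(f"in-house: ${in_house:,.0f}, cloud: ${public_cloud:,.0f}, savings: {savings:.0%}")
```

With these made-up inputs the savings land around 36 percent, which is in the same neighborhood as the 30 to 50 percent range Amylin projected.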

Joel Manfredo, CIO of Orange County, gave a presentation focused on green IT and what IT organizations will need to do in the future. The most striking thing about Manfredo's presentation: while leading cloud data centers operate at a PUE [power usage effectiveness] of around 1.2 (Manfredo is skeptical of the 1.07 announced by Facebook as part of its Open Compute Project), the average corporate data center runs at a PUE of 2.5. Manfredo's own data center currently runs at a PUE of 1.77. Of course, energy represents only around 25 percent of total data center costs; administration represents around 50 percent.
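For readers who haven't run into the metric, PUE is simply total facility power divided by the power that actually reaches the IT equipment, so 1.0 is the theoretical floor and lower is better. A quick back-of-the-envelope sketch, using a hypothetical 1,000 kW IT load (my number, not Manfredo's), shows what those ratios mean in practice:

```python
# PUE = total facility power / power delivered to IT equipment (1.0 = no overhead).
# With a hypothetical 1,000 kW IT load, the PUE figures cited above imply:
it_load_kw = 1000.0
for label, pue in [("leading cloud", 1.2), ("Manfredo's data center", 1.77),
                   ("average corporate", 2.5)]:
    facility_kw = it_load_kw * pue            # total power the facility draws
    overhead_kw = facility_kw - it_load_kw    # power lost to cooling, distribution, etc.
    print(f"{label:22s} PUE {pue:4.2f} -> {overhead_kw:6.0f} kW overhead")
```

At a PUE of 2.5, every kilowatt of useful computing drags along another 1.5 kilowatts of cooling and power-distribution overhead; at 1.2, the overhead is only 200 watts.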

One constant theme at the end-user-heavy Business of Cloud Computing conference was the ongoing battle for financial resources. One participant, who works for a highly profitable consumer products company, shared that a request to replace fifteen-year-old desktops was denied with the rationale "if it hasn't broken yet, it's probably good enough to go for another year." (This may have been hyperbole on his part, although it seemed that systems were used well past their sell-by date at his firm.) It was quite interesting to contrast this situation with the one Phillipott outlined, where he has been able to obtain sufficient resources to enable investment with the end goal of reducing total spend.

The Future of Cloud Computing: Structure 2011

The Structure 2011 event, by contrast, was like fast-forwarding to the future. The developments in cloud computing--and, more importantly, in cloud-based applications--are breathtaking. Presentations by various cloud service providers indicate that they are preparing for a future in which they will play a predominant role in IT deployment. A number of small, early-stage companies described approaches to computing that promise dramatic change in application development and configuration.

Werner Vogels, CTO of Amazon, cited a couple of striking facts during his "State of the Cloud" keynote. The first was that Amazon Web Services' [AWS] S3 storage service contained 339 billion objects at the end of Q1 2011. S3 contained 262 billion objects at the end of Q4 2010, so the service grew by roughly 29 percent in one quarter. I think that represents roughly a 176 percent compounded annual growth rate, which at the large numbers we're talking about is absolutely staggering. It's obvious that people are beginning to treat S3 as the world's largest external drive, with new use cases springing up on a daily basis.
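For the curious, here is the arithmetic behind those percentages; depending on how you round the quarterly figure, the annualized rate works out to roughly 176 to 180 percent:

```python
# Quarterly growth and annualized (compounded) growth for the S3 object counts
# cited above. Rounding the quarterly rate explains the ~176-180 percent range.
q4_2010 = 262e9   # objects at the end of Q4 2010
q1_2011 = 339e9   # objects at the end of Q1 2011

quarterly_growth = q1_2011 / q4_2010 - 1          # ~0.29, i.e. ~29 percent
annualized = (1 + quarterly_growth) ** 4 - 1      # ~1.8, i.e. ~180 percent

print(f"quarterly: {quarterly_growth:.1%}, annualized: {annualized:.0%}")
```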

The second striking fact noted by Vogels is that during November 2010, Amazon.com completely transitioned its web server tier to AWS. If one thinks about the enormous load Amazon experiences during the holiday shopping season (the company is by far the largest online retailer), it's clear that AWS is capable of managing huge application loads. Moreover, the fact that AWS supported all of Amazon.com's holiday shopping while still serving everyone else's cloud workloads indicates just how much infrastructure AWS has available.

Impact on IT: DevOps

On the subject of cloud computing's impact on IT organizations, one panel devoted to DevOps described it as a cultural change in IT, marked by a change in the relationship between application developers and infrastructure administrators. Frankly, I am a bit uncomfortable with any emerging trend within IT being characterized first and foremost as a cultural change. It seems to me that the phrase "cultural change" is used as shorthand for a range of process and organizational changes that result from managing the application lifecycle as an end-to-end, seamless process rather than as a series of disjointed way stations.

To my mind, DevOps results from the integration of operations requirements upstream in the project lifecycle, with operations personnel participating in projects much earlier than in the past, so that systems will be more robust and deliver higher SLAs.

In addition, many of the operations tasks traditionally performed by human administrators are captured in automation policies and code, so that when the application is in production it can autonomically respond to changing conditions. This change in timing and implementation alters the relationship between developers and operations--an undoubtedly positive effect--so that applications are more agile, scalable, robust, and, crucially, less expensive to operate. An important point is that applications operated in this manner tend to be part of a business value chain critical to overall company success. Another is that DevOps attacks the 50 percent of total data center costs that administration represents. While writing this post, I came across an interesting blog post that describes the transition to DevOps very clearly and outlines just how different it is from traditional IT organization processes.
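To make "operations tasks captured in code" concrete, here is a deliberately simplified, hypothetical scaling policy in Python; the thresholds and names are mine, not anything presented at the conference:

```python
# A minimal sketch (not any specific vendor's API) of operations knowledge
# expressed as code: a scaling policy that reacts to load instead of waiting
# for a human administrator to notice and intervene.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    min_servers: int = 2
    max_servers: int = 20
    scale_up_cpu: float = 0.75    # add capacity above 75% average CPU
    scale_down_cpu: float = 0.25  # shed capacity below 25% average CPU

    def desired_count(self, current_servers: int, avg_cpu: float) -> int:
        """Decide the next server count from the current load measurement."""
        if avg_cpu > self.scale_up_cpu:
            return min(current_servers + 1, self.max_servers)
        if avg_cpu < self.scale_down_cpu:
            return max(current_servers - 1, self.min_servers)
        return current_servers

policy = ScalingPolicy()
print(policy.desired_count(current_servers=4, avg_cpu=0.82))  # -> 5
print(policy.desired_count(current_servers=4, avg_cpu=0.10))  # -> 3
```

The point is not the specific thresholds but that the decision an administrator once made at 3 a.m. now lives in versioned, testable code.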

Platform-as-a-Service

Several sessions were devoted to PaaS [platform as a service]. This form of cloud computing has been the ignored stepsister of the more widely adopted IaaS and SaaS, but that is changing fast. At HyperStratus, we have concluded that PaaS will be the form mainstream IT organizations will ultimately use to implement elastic applications. This is because most enterprise developers will find designing applications that interact directly with low-level infrastructure resources too difficult. Leveraging PaaS frameworks will allow enterprise developers to focus on business logic while automatically achieving cloud computing's scalability and elasticity. Fittingly, there were a number of different PaaS offerings discussed, and one PaaS startup, dotcloud, won the Structure Launchpad competition.
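To illustrate the division of labor PaaS promises, here is a minimal, hypothetical sketch: the developer supplies only the request handler, while provisioning, load balancing, and elastic scaling are the platform's problem. The code isn't tied to any particular PaaS framework.

```python
# A minimal sketch of the PaaS promise: the developer writes only the request
# handler (business logic); capacity planning and scaling belong to the platform.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Business logic only: no server sizing, no infrastructure plumbing.
    body = b"Order received and queued for fulfillment.\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # Locally this runs on a toy server; on a PaaS the platform hosts and scales it.
    make_server("", 8000, application).serve_forever()
```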

APIs in Cloud Computing

I had the privilege of moderating a session devoted to the role of APIs in cloud computing. The use of external services that applications can call is exploding, and this has a number of implications. First, it improves developer productivity by reducing how much of an application needs to be developed from scratch. If an external service provides some set of functionality or data, that is less work for the application developer, who is then freed to work on the areas of the application unavailable as standardized services. In other words, the developer can work on the differentiating aspects of the application, which is where business value for the sponsoring company lives.
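As a simple, hypothetical illustration of that productivity gain, consider a developer who needs geocoding: one call to an external service (the endpoint and response fields below are invented) replaces an entire subsystem the team would otherwise have to build, host, and keep current.

```python
# A minimal sketch of consuming an external service instead of building the
# capability in-house. The URL and response fields are hypothetical.
import json
from urllib.parse import quote
from urllib.request import urlopen

def geocode(address: str) -> dict:
    """One API call stands in for an in-house geocoding subsystem."""
    url = "https://api.example.com/geocode?q=" + quote(address)
    with urlopen(url, timeout=5) as resp:
        return json.load(resp)

# coords = geocode("1 Main St, San Diego, CA")  # e.g. {"lat": ..., "lng": ...}
```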

Second, APIs provide a way for companies to realize revenues by offering interfaces to their systems, enabling unrelated web applications to generate transactions for those companies without the end user even realizing what is going on. Sears was cited as an example: by providing an API, the retailer allows external sites to sell the vast spread of merchandise Sears offers, which may give a troubled retailer a new source of revenue. To provide a perspective on how pervasive API calls can be within applications, in another presentation someone from Salesforce said that of its 500 million daily page views, fully 50 percent are served through the Salesforce API.

A third implication of APIs is the change in the architecture of applications. Instead of being monolithic chunks of code written by one organization, applications become assemblages of code and external services, which can pose challenges in terms of bug tracking, support responsibility, performance testing, and the like. As one person noted during the session, your application is only as strong as the weakest external service link.
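One pragmatic response, sketched below with invented service names, is to wrap every external call in a timeout and a fallback so that a slow or failed dependency degrades the application rather than breaking it:

```python
# A minimal sketch of defending against the "weakest link" in a composite
# application: timeouts plus graceful degradation around an external call.
# The service URL and fallback are illustrative only.
from urllib.error import URLError
from urllib.request import urlopen

def fetch_recommendations(user_id: str) -> list:
    try:
        with urlopen(f"https://recs.example.com/users/{user_id}", timeout=2) as resp:
            return resp.read().decode().splitlines()
    except (URLError, TimeoutError):
        # Degrade gracefully: show a generic list rather than failing the page.
        return ["bestsellers"]
```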

Overall, I couldn't help but compare the themes of these two conferences. While The Business of Cloud Computing indicated the enthusiasm of end-user organizations for cloud computing, it also illustrated the tentative pace of "official" adoption. Meanwhile, the companies represented at Structure are busily engaged in dismantling existing application design patterns and substituting vastly different approaches to architecting, implementing, and managing applications--and achieving success one application at a time. One can't help but feel that IT is being undermined from within, as individual application groups make decisions that fall outside the approved approaches, leaving the long-term implications of those decisions to be worked out by a later set of executives. I firmly believe that we are on the cusp of more change in IT than we have seen throughout its history, and those of us working in the field have the enviable opportunity to be immersed in this transformation.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.

Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO.com on Twitter @CIOonline
