As the data storage market in Australia surpasses $2 billion, the term Big Data is becoming increasingly irrelevant. It’s all big data now.
The ‘big’ part has lost a further degree of relevance: what it really signifies is the ever-growing complexity of the data and the massive amounts of processing power needed to cope with it, on a scale that would have been hard to imagine even a few years ago.
The Big Data revolution, if we should still be calling it that, raises more issues than just its need for heavy-duty storage and processing. Our growing ability to understand and make use of this massive resource is proving incredibly disruptive and is changing our entire approach to IT, even if it is still early days.
Stats meets compute
Size isn’t everything. The real driving force behind this revolution is the marriage of advanced statistical models and new compute technologies to better understand and benefit from data resources. At its centre is the ability to generate rules – algorithms – that look for patterns in data and solve problems in a fraction of the time required by traditional computing methods.
It’s not a new concept, just one that’s finding its way into more and more aspects of our daily lives. Google has had our search patterns figured out for a long time now and built them into its algorithm-driven search engine. Now Netflix uses algorithms to figure out what we want to watch next. Driverless cars, voice-driven assistants, next-day deliveries, high-speed trading – we owe these and many more everyday services and technologies to algorithms.
Just as apps have changed the way we interact with computers, algorithms are enabling a quantum leap in machine learning, enabling us to do some incredible things with big data.
While the size of this data and its ability to be processed so quickly are incredible, the real value lies firstly in the vast intelligence algorithms can provide, and secondly in how enterprises can monetise this. Gartner calls it the ‘algorithm economy’, and claims it will be the next big thing in big data. In the research firm’s own words: “data is dumb, algorithms are where the real value lies.”
More pressure on storage and processing
This next big thing puts even more pressure on data storage and processing resources. CIOs have been increasingly turning to public cloud resources to handle this demand, but in the same way that many organisations prefer to keep business-critical data in-house and away from the public cloud, the same logic is likely to be applied to algorithms. As their use becomes more important, so will the desire to keep them safe in-house and away from the cloud.
Somewhat ironically, this opens an area where algorithms should have a huge impact on IT infrastructure. Hybrid cloud, combining public and private cloud, is fast becoming the preferred model for the enterprise – Gartner recently predicted it would be the choice of 90 per cent of companies worldwide by 2020, with Australia following that trend.
But public and private are yet to work perfectly in tandem. Yes, you can keep some things on-premises and others in the cloud, but you can’t bounce them between the two as you please. The right algorithm might be able to break through this barrier and finally empower the enterprise to dip in and out as it pleases.
It’s a case of “physician, heal thyself.” The algorithm economy will complicate data storage and processing issues, but algorithms will also be necessary to solve those issues, leveraging big data insights to better manage the big data resources from which they come.
This is already happening – machine learning tools are now able to automatically balance storage demands and compute workloads across a public/private platform mix. It will become a significant part of the digital transformation process going on in the enterprise, and in government through the likes of the Digital Transformation Agency. Watch the infrastructure space as these algorithms begin to help it take a new shape.
Matt Young is SVP and head of Asia Pacific and Japan at Nutanix