NBN says it will employ machine learning to help it analyse the 38 terabytes of network health and performance data it gathers every day in order to take a proactive approach to mitigating faults across the collection of broadband technologies it is rolling out across Australia.
NBN’s executive general manager for IT strategy and architecture, Arun Kohli, told the Cebit Big Data and Analytics conference that the company had successfully trained machine learning models to help identify emerging network faults before they take down an end user’s service.
“To manage the health of the network we can go two ways,” Kohli told the conference. The “traditional telco way” is “being reactive”: the telco finds out about a fault from network data or receives a call from an end user reporting a problem. But because NBN has so much network data and is able to take advantage of advances in machine learning, the company wants “to move the dial from being reactive to proactive,” he said.
NBN last year announced the launch of a Tech Lab that it said would employ machine learning techniques to help deliver insights into pain points encountered by end users, leveraging a range of open source projects including Apache Spark, Kafka, Flume, Cassandra and JanusGraph.
The company last month revealed that it had struck R&D agreements with the University of Melbourne and the University of Technology Sydney that would focus on a number of areas including AI.
Kohli said that telcos have traditionally had access to significant amounts of data, but its scale and the cost of the infrastructure needed to make the most effective use of it have always been a challenge. On top of that, the time it took to centralise the data meant that in many cases telcos were limited to producing reports and compiling metrics.
“On top of the typical applications of insights, metrics, predictive analytics we have gone to the next level – of machine learning – to be more proactive for the end-user customer experience,” Kohli said.
“The amount of data which the network tells you about its health is quite big and it is growing,” Kohli said.
“So how do we use that? We can still write the reports, being reactive, but our focus is always… on the customer experience and the philosophy which we have taken is, for customer experience you have to be proactive.”
“Of course we do a lot of reporting, like any big enterprise,” Kohli said. However, the company has been training machine learning models to identify specific characteristics for connections that use the different technologies employed in the National Broadband Network rollout.
Not every fault on NBN’s network will lead to an immediate outage – and although an end user may be suffering performance degradation related to the company’s access network, they may not be aware of it.
The company combines geospatial data it obtains from third parties with its own network data to help pin down services likely to fail.
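The article does not describe how these data sets are joined, but the general idea can be sketched as blending environmental context with line-health telemetry into a per-service risk score. All field names, services and weights below are illustrative assumptions, not NBN's actual schema or model.

```python
# Hypothetical sketch: fuse third-party geospatial records with internal
# network telemetry to rank services by outage risk. Fields, services and
# weights are invented for illustration only.

# Third-party geospatial data, keyed by service ID (illustrative).
geo_data = {
    "SVC001": {"flood_zone": True, "vegetation_density": 0.8},
    "SVC002": {"flood_zone": False, "vegetation_density": 0.2},
}

# Internal network telemetry for the same services (illustrative).
network_data = {
    "SVC001": {"error_rate": 0.04, "retrains_per_day": 6},
    "SVC002": {"error_rate": 0.01, "retrains_per_day": 0},
}

def outage_risk(svc_id):
    """Blend environmental and line-health signals into a single score."""
    geo = geo_data[svc_id]
    net = network_data[svc_id]
    score = 0.3 if geo["flood_zone"] else 0.0
    score += 0.2 * geo["vegetation_density"]
    score += 5.0 * net["error_rate"]
    score += 0.05 * net["retrains_per_day"]
    return score

# Services ordered from highest to lowest risk, for proactive attention.
at_risk = sorted(geo_data, key=outage_risk, reverse=True)
print(at_risk)  # → ['SVC001', 'SVC002']
```

In practice such a score would come from a trained model rather than hand-picked weights; the sketch only shows the data-fusion shape of the problem.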
For technologies such as fibre to the node that rely on the copper phone lines already installed in households, NBN has successfully trained machine learning models to recognise a variety of impairment types – including in-premises issues such as bridged taps and outside plant issues such as cable pair corrosion – using spectral data.
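To make the spectral-data idea concrete: a bridged tap reflects signal and carves periodic notches into a copper line's per-tone frequency response, whereas corrosion tends to raise attenuation broadly. A minimal sketch, assuming per-tone attenuation data of the kind DSL diagnostics expose, might flag tap-like notches like this (the threshold and synthetic data are assumptions, not NBN's method):

```python
# Illustrative only: spotting a bridged-tap-like signature in per-tone
# attenuation data. Real DSL diagnostics use standardised test parameters
# (e.g. per-tone Hlog); the 10 dB depth threshold here is an assumption.

def find_notches(hlog, depth_db=10.0):
    """Return tone indices where attenuation dips sharply below its neighbours.

    hlog: per-tone channel attenuation in dB (more negative = more loss).
    A sharp, periodic set of notches hints at a bridged tap; broadly
    elevated loss across all tones points to issues like corrosion instead.
    """
    notches = []
    for i in range(1, len(hlog) - 1):
        neighbour_avg = (hlog[i - 1] + hlog[i + 1]) / 2.0
        if neighbour_avg - hlog[i] >= depth_db:
            notches.append(i)
    return notches

# Synthetic per-tone response (dB): flat line with two sharp notches.
hlog = [-20.0] * 32
hlog[8] = -45.0   # notch
hlog[24] = -45.0  # a second notch; regular spacing suggests a tap

print(find_notches(hlog))  # → [8, 24]
```

A production classifier would learn these signatures from labelled spectra rather than use a fixed rule, but the feature it keys on is the same.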
Kohli said the company has similarly trained models to help identify, at scale, which hybrid-fibre coaxial (HFC) modems are likely to “flap” – one of the key performance issues negatively affecting the end-user experience for households with HFC connections.
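“Flapping” is the pattern of a modem repeatedly dropping and re-establishing its link. A simple sketch of the underlying signal (before any predictive model is layered on top) is to count link-state transitions per modem over a time window; the event format and threshold below are illustrative assumptions:

```python
# Minimal sketch, assuming a flap is repeated up/down link transitions.
# The event format and min_transitions threshold are invented for
# illustration, not drawn from NBN's systems.

from collections import Counter

def flapping_modems(events, min_transitions=4):
    """Flag modem IDs whose link state changed at least min_transitions times.

    events: an iterable of (modem_id, state) pairs, assumed ordered by time.
    """
    last_state = {}
    transitions = Counter()
    for modem_id, state in events:
        if modem_id in last_state and last_state[modem_id] != state:
            transitions[modem_id] += 1
        last_state[modem_id] = state
    return [m for m, n in transitions.items() if n >= min_transitions]

events = [
    ("modemA", "up"), ("modemA", "down"), ("modemA", "up"),
    ("modemA", "down"), ("modemA", "up"),
    ("modemB", "up"), ("modemB", "up"),
]
print(flapping_modems(events))  # → ['modemA']
```

The machine learning step NBN describes goes further, predicting which modems are *likely* to start flapping; a counter like this would only supply the historical labels such a model trains on.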