This was the year reality became virtual – and it shows no sign of being a passing fad. Deloitte expects a surge in the use of virtual reality (VR) technology by gamers, predicting it will become a billion-dollar market globally this year, with hardware such as the recently launched Google Daydream, Oculus Rift, Samsung Gear and other brands accounting for about 70 per cent of market sales, and content sales making up the remaining 30 per cent.
However, gaming is just the tip of the iceberg for what VR is truly capable of. Imagine being able to “attend” a university lecture from the comfort of your couch, or being a trainee surgeon and watching an open-heart surgery in regional Australia from a hospital in Sydney. Imagine being able to dive into the Great Barrier Reef without getting wet, or watching the cricket from the viewpoint of the batter or bowler.
If some of this sounds unrealistic, think again: The National Basketball Association in the US is already streaming a live game a week in VR, Intel recently purchased VR company VOKE to help create “a truly immersive sports experience”, and in the UK a surgery to remove a tumour was streamed live in VR.
It’s important to note that these examples haven’t hit a critical mass in usage… yet. But once live VR reaches the mainstream – i.e. when streaming begins to match the size of the gaming VR market – it will come at the cost of increased network complexity and a lot more data traversing the network.
The live VR experience will ultimately succeed or fail on its ability to be fully immersive and seamless. You need to be able to turn your head and have the image turn with you – otherwise you risk disorientation and even nausea, as it’s an unsettling feeling to turn your head and not have the horizon move at the same pace. And when you depend on a live feed, you’re at the whim of the connection’s stability and latency.
So the question becomes: are our networks, as presently designed, capable of taking us where we need to go?
We can’t take the success of gaming VR applications as evidence of network capability. Gaming VR applications use “static” content that can be downloaded and played later, in many cases without a network connection. While these files can be huge, beyond the download itself the network is not under constant strain when the file is used. It’s when we get into that “live” element that we’ll begin seeing a completely different beast in terms of network demand.
The bandwidth required for a live feed is immense, and it can chew up the valuable bandwidth needed for other day-to-day activities, leading to poor performance across the network, lost productivity, and a poor experience for the user.
In short, it’s a huge leap from downloading static imagery and content to live-streaming a fully immersive and seamless experience. Enterprises that stand to benefit the most from what VR promises need to start considering how they can prepare for a very near future, because chances are their traditional networks weren’t designed for a VR world.
So how will they ensure a seamless experience for end-users without continually feeding the bandwidth beast, and how will they spot issues before customers get ‘seasick’?
Moving the content closer to the edge of the network is one way to reduce latency, but for a mass cross-country VR broadcast, such as the cricket, that’s a lot of edges. What’s required is an approach to networking that enables the network operator and the business to have full visibility and control of the content, and its use, regardless of where it’s housed.
Software-Defined Wide-Area Networking (SD-WAN) offers one pathway to deliver an ideal end-user experience. A software-defined architecture should give businesses and network operators the ability to ramp resources and priorities on the network up or down as required. If content at one “edge” is not being utilised in full, for example, the SD-WAN can shift those resources to the edge that needs them most – and it can be programmed to do so ahead of time.
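To make the idea concrete, here is a minimal, hypothetical sketch of the kind of policy an SD-WAN controller might apply: each edge site keeps what it needs plus a little headroom, and spare capacity is pooled towards the busiest site. All names, numbers and the allocation rule are illustrative assumptions, not a real SD-WAN API.

```python
# Illustrative sketch only: a toy rebalancing policy for edge sites.
# Real SD-WAN controllers expose far richer policy languages.

def rebalance(edges, reserve=0.1):
    """edges: dict of site -> {'capacity': Mbps, 'demand': Mbps}.
    Returns a dict of site -> allocated Mbps."""
    alloc = {}
    pool = 0.0
    for site, e in edges.items():
        # Each site keeps its demand plus a small reserve of headroom.
        need = min(e['demand'] * (1 + reserve), e['capacity'])
        alloc[site] = need
        pool += e['capacity'] - need
    # Hand the pooled spare capacity to the most constrained site,
    # capped at that site's physical capacity.
    busiest = max(edges, key=lambda s: edges[s]['demand'] / edges[s]['capacity'])
    alloc[busiest] = min(alloc[busiest] + pool, edges[busiest]['capacity'])
    return alloc

edges = {
    'sydney':    {'capacity': 1000, 'demand': 900},  # live VR broadcast
    'melbourne': {'capacity': 1000, 'demand': 200},
    'perth':     {'capacity': 1000, 'demand': 100},
}
print(rebalance(edges))
```

Because the policy is just software, it could equally be scheduled ahead of a known event – say, a cricket broadcast – rather than reacting after congestion appears.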
The WAN can also be optimised to ensure data is not sent and re-sent unnecessarily, lowering the amount of traffic on the network. While this may be impossible for the live video itself, data from other applications on the same network can be “de-duplicated” – that is, sent once and cached. This increases the bandwidth available, reduces latency and thus helps prevent a “glitchy” feed. Files can also be compressed where possible, ensuring the least amount of bandwidth is used at any one time.
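The de-duplication idea can be sketched in a few lines: split a byte stream into chunks, send each unique chunk once, and send a short reference for any chunk the far end has already cached. The chunk size and wire format below are simplified assumptions for illustration, not how any particular WAN optimiser works.

```python
import hashlib

CHUNK = 4096  # illustrative fixed chunk size

def dedupe_stream(data, cache):
    """Return the 'wire' representation of `data`: full chunks for new
    content, 8-byte digests for chunks already in the shared cache."""
    wire = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()[:8]
        if digest in cache:
            wire.append(('ref', digest))    # tiny reference, not the data
        else:
            cache[digest] = chunk
            wire.append(('raw', chunk))     # first sighting: send in full
    return wire

cache = {}
payload = b'x' * CHUNK * 3 + b'y' * CHUNK   # three identical chunks + one new
wire = dedupe_stream(payload, cache)
sent = sum(len(c) if kind == 'raw' else 8 for kind, c in wire)
print(f"{len(payload)} bytes reduced to {sent} bytes on the wire")
```

Repeated content – software updates, shared documents, retransmitted files – shrinks to a stream of references, which is exactly the bandwidth that live VR traffic can then reclaim.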
But ultimately it comes down to knowing what is happening on the network. Application-aware networking, coupled with the end-to-end visibility of network performance and end-user experience, allows application issues to be proactively detected and fixed before they impact end-users. For example, let’s say there are issues with bandwidth for a high-resolution stream. To combat the issue, the operator may be able to lower the quality slightly while maintaining a steady stream and preventing that herky-jerky motion that could lead to a nauseated end-user.
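The quality-versus-continuity trade-off described above can be sketched as a simple tier-selection rule: pick the highest bitrate that fits the measured bandwidth with some headroom, stepping down before the stream stalls. The tier values and headroom factor are illustrative assumptions.

```python
# Hypothetical sketch of stepping stream quality down to keep a live
# feed steady. Bitrate tiers are illustrative, not real product values.

TIERS_KBPS = [25000, 16000, 8000, 4000]  # high-resolution VR down to a fallback

def pick_tier(measured_kbps, headroom=0.8):
    """Choose the best tier using at most `headroom` of the measured link,
    so bursts and other traffic don't cause a stall."""
    budget = measured_kbps * headroom
    for tier in TIERS_KBPS:
        if tier <= budget:
            return tier
    return TIERS_KBPS[-1]  # lowest quality rather than no stream

print(pick_tier(35000))  # healthy link: highest tier
print(pick_tier(12000))  # congested link: step down, keep streaming
```

An operator with end-to-end visibility can apply exactly this kind of rule proactively, trading a little resolution for a steady horizon before the end-user ever feels queasy.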
We’re headed towards a world where, thanks to VR, we might be able to be in two places at once. We can’t expect something that revolutionary to be easy – and once live VR hits the mainstream, our networks will be put under intense strain. But Australia is well placed to get ahead of the curve on this nascent technology by rethinking and evolving the way networks are managed and operated: from static designs based on legacy requirements to a software-defined model.
Ian Raper is regional vice-president, product solutions APJ, at Riverbed Technology