Computerworld

NBN 101: Floating the submarine cable question

We take a deep look at the issue of international links to the internet and what this means for broadband investment in Australia

This article is part of Computerworld Australia's NBN 101 series, in which we take a look at the arguments surrounding the fibre-to-the-home (FTTH) network, and dissect them one by one. The articles are meant to be an overview of the debates central to the National Broadband Network (NBN) and other broadband infrastructure projects to give you a grounding as more and more media outlets and commentators speak out on the project. We encourage people to take the discussion further in the comments section.

In our first article we took a look at how Australia’s NBN plan compares to the rest of the world, drawing on statistics and graphs from the OECD, and then we strapped in for a tour of speeds. We also had a look at wireless technologies versus fibre optic, then delved into the economic argument for a high-speed national broadband network and how applications and potential service packages may play a role in the NBN. More recently, we discussed whether mobility is friend or foe.

Now we turn our attention to Australia’s international links to the internet, a topic that surprisingly popped up during the Federal Election.

Bedding down the submarine cable links

One of the conversations emanating out of the Federal Election campaign broadband discourse centred on our international links to the internet.

In short, the argument went that mass infrastructure investment in projects like the National Broadband Network (NBN), no matter how fast, would be ultimately bottlenecked by Australia's international links. Support for this argument has some convincing elements: 70 per cent of the content Australians access is based overseas and the submarine cable links connecting us to the rest of the digital world simply aren't abundant.

To build a network of the NBN’s scale without factoring in additional international links would, the argument goes, leave Australians with a proverbial pipe dream (excuse the pun).

Throughout the election campaign several commentators used this reasoning to various ends. Pro-NBNers, on the other hand, countered that Australian internet access would become much more local, as increased bandwidth afforded greater benefits for domestic communication and applications.

However, most commentators didn’t look beyond a very shallow interpretation, and in many ways that interpretation is fundamentally erroneous.

So let’s look a bit deeper.

Aside from the fact that the US is a highly successful mass content producer and has for some time put a lot of national effort into bolstering its IT industry, one of the reasons so much of the internet content Australians access is located offshore is the way our networks are architected.

IDC analyst David Cannon explains that when websites were mainly static content, it didn’t matter if updates to information took up to two or three days to complete. Around the late 90s and even early this century, a lot of this content was cached domestically, predominantly with hosting company Melbourne IT.

At the time, ISPs bought data from the big carriers – Telstra, Optus, AAPT, and WorldCom (now Verizon) – in two strands: domestic and international.

“All of it travelled back and forth locally at a cheap price,” Cannon said. “Then all of a sudden websites started to become more dynamic with multimedia capabilities. What was happening was the cached data wasn’t keeping up with what the websites were trying to achieve. Simultaneously there were the Southern Cross pipes coming onboard, making data far more accessible and much cheaper.

“What they were hoping to do was sell clear channel pipes to the ISPs, who could go to LA, peer with basically the internet and negotiate data rates themselves, and the telco would just sell them the pipe. That didn’t work out because they were really asking an arm and a leg. I remember selling the first Southern Cross pipe to connect.com, which was AAPT’s ISP.

“I think it was something like $US12,000 a month or something like that just for a 2 megabit per second (Mbps) pipe. Back then that was good capacity, but of course now that is just ridiculous. Then if you wanted to move to STM-1 or anything like that you were talking about hundreds of thousands of dollars US per month. It was just out of reach.”

To cut a long story short, things evolved - prices came down, broadband emerged and the dynamic content on websites meant it was better to go direct to the US for data instead of paying to cache it domestically. And we have lived with that architecture for the past 10 years.

So do we have a capacity bottleneck to access this data? Not even close.

As far back as early last year Robin Russell, CEO of the Australia-Japan Cable, wrote in an article that international networks are nowhere near being considered a capacity constraint.

“That proposition can be despatched immediately,” he wrote. “Each of the four networks that will be providing the bulk of international connections for Australia is capable of carrying at least a terabit per second of data. The total international capacity in use for the Australian market in 2009 is estimated to be around 300 gigabits per second. Accordingly, total capacity usage could double, then double again, then double again, and then double yet again before the capabilities of those networks were exhausted. It would therefore be difficult to say that international networks are a capacity bottleneck in the Australian market.”

The four cables he was referring to are:

  • Southern Cross Cable Network
  • Australia-Japan Cable
  • Telstra’s Endeavour
  • PIPE Networks’ PPC-1

There are other submarine cables but these are the four major transit routes for most of our internet traffic.
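For the technically minded, Russell’s arithmetic is easy to sanity-check. Below is a minimal sketch in Python using only the figures from his quote; note that at his stated lower bound of 1Tbps per cable, three doublings (to 2.4Tbps) fit comfortably, while the fourth (to 4.8Tbps) relies on the cables’ actual capability exceeding that minimum, which his “at least” allows for.

    # Headroom check using only the figures from Russell's quote.
    in_use_gbps = 300         # estimated Australian international usage, 2009
    capacity_gbps = 4 * 1000  # four cables at "at least" 1Tbps each (lower bound)

    doublings = 0
    while in_use_gbps * 2 ** (doublings + 1) <= capacity_gbps:
        doublings += 1

    print(doublings)  # -> 3: usage could double three times within the lower bound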

(See the images at the top of this story for the cable routes and maps.)

Moreover, there are other cable projects in the works. In July, it was announced that raw network bandwidth out of Australia would get a two-fold increase via a $US400 million undersea cable.

Data carriers Pacnet and Pacific Fibre are partnering to build the Pacific Fibre cable, a low-latency undersea fibre optic cable spanning Australia, New Zealand and the US.

The bandwidth of the new cable will be a minimum of two fibre pairs with 64 wavelengths per pair. Each wavelength has a throughput capacity of 40 gigabits per second (Gbps), for a total of 5.12 terabits per second (Tbps) of bandwidth.

The cable is estimated to be 13,600km long and will connect Sydney, Auckland and Los Angeles, bypassing the likes of Guam and Hawaii for the time being. The pipe can be upgraded to 12Tbps with 100G technology.
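That headline capacity follows directly from the quoted figures, and the same arithmetic with 100G optics yields the upgrade ceiling (12.8Tbps on a straight calculation, presumably quoted conservatively as 12Tbps). A quick check:

    # Pacific Fibre design capacity, from the figures quoted above.
    fibre_pairs = 2
    wavelengths_per_pair = 64
    gbps_per_wavelength = 40  # initial 40G optics

    total_gbps = fibre_pairs * wavelengths_per_pair * gbps_per_wavelength
    print(total_gbps / 1000)  # -> 5.12 Tbps with 40G wavelengths
    print(fibre_pairs * wavelengths_per_pair * 100 / 1000)  # -> 12.8 Tbps with 100G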

And in July, many analysts and observers backed the announced upgrade of the Asia Pacific Cable Network 2 (APCN2) from 10Gbps to 40Gbps.

So capacity is really not even close to being an issue at this stage. Will we need more in future? Most likely, yes, particularly if NBN Co makes good on promises to deliver peak speeds of 1Gbps. But the existing cables can be upgraded by swapping out the terminal equipment to increase the already abundant capacity.

Yet although we don’t have a bandwidth bottleneck by any stretch of the imagination, that doesn’t mean you will get top speeds from content that is located offshore.

Layer 10’s Dr Paul Brooks explains.

“Because of the round trip delay, you are not going to get 100Mbps of download from an international server regardless of how much capacity is sitting there unused,” Brooks said. “If you have a server close by, yes, your average PC can get 20, 30 or 40Mbps from that. But exactly the same download, even if you had infinite capacity, would still give you only 6 to 10Mbps from an international server because of the round trip delay.”
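Brooks is describing what network engineers call the bandwidth-delay product: a TCP sender can have at most one receive window of data in flight per round trip, so a single connection’s throughput is capped at window size divided by round-trip time, however fat the pipe. A minimal sketch (the window size and latencies are illustrative assumptions, not figures from the article):

    # Single-connection TCP throughput is capped by window / round-trip time.
    def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds / 1e6

    window = 256 * 1024  # assumed 256KB receive window
    print(max_tcp_throughput_mbps(window, 0.020))  # ~105Mbps, server ~20ms away
    print(max_tcp_throughput_mbps(window, 0.180))  # ~12Mbps across the Pacific

Smaller windows push the trans-Pacific figure down into the 6 to 10Mbps range Brooks cites; no amount of spare cable capacity lifts it, only moving the server closer does.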

So, the argument might go, even if we build the NBN we aren’t going to get the promised speeds anyway, so what’s the point in spending the money?

Well, there are a number of reasons why this conclusion is acutely short-sighted – not least that we won’t be using broadband infrastructure just to access internet content from the US – and it shouldn’t be used to preclude building top-class telecommunications infrastructure, even if that isn’t the NBN as we know it under the Labor administration.

Next: The domestic caching trend


Video is driving local caching and hosting

Ever since video emerged as the “killer” application on the internet, there has been a gradual move back to caching content in Australia.

“The nature of video is that it is not dynamic, it is static. So when someone posts a video, that video doesn’t change. So there is no reason why you would haul that video a million times over the pipe in a very short time frame,” says IDC’s Cannon. “This is the type of business model that Akamai and those guys are based on; it’s called a content delivery network. It is really there to manage video, because it is the culprit.

“It can’t be a centrally distributed architecture. You need to have that content pushed out to the edges [places like Australia] and then have people access it from as close as possible. A couple of months ago Telstra announced it was going to build a content delivery network based on Cisco kit, which was helping them deliver the T-Box product. Yes, right now today 70 per cent or more of content comes from international pipes because there is no caching locally. But that is changing to deal with video and if you look at what’s driving the potential hockey stick in terms of data growth it is all about video.”
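The mechanics Cannon describes are simple enough to sketch. The toy snippet below (hypothetical names throughout; real CDNs such as Akamai’s or Telstra’s Cisco-based network are vastly more sophisticated) shows the core idea: a static video crosses the international link once, and every subsequent viewer is served from the local node.

    # Toy edge cache: fetch once over the international link, serve locally after.
    local_cache = {}

    def fetch_from_origin(url):
        # Placeholder for the expensive trans-Pacific fetch.
        return b"video bytes for " + url.encode()

    def serve(url):
        if url not in local_cache:                     # only the first request pays
            local_cache[url] = fetch_from_origin(url)  # the international round trip
        return local_cache[url]                        # later viewers served locally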

Brooks agreed with Cannon that, with some applications and content requiring minimal latency – which can become an issue on international links – there will be an increase in the amount of local caching.

“A lot of the content distribution networks are establishing Australian nodes here,” he said. “The content in many cases does come from an Australian server even if it appears to be coming from overseas. There is a lot more of that happening.

“From a content provider point of view, if you were putting together applications that were taking advantage of 100Mbps available speeds you wouldn’t locate it offshore because they would never be able to use it anyway, not because of any capacity shortfall but just because of the speed of light.”

Australia has seen a well-documented, multi-billion dollar data centre investment rush in recent years, driven partly by this growing trend and partly by other factors such as the replacement of old facilities, cloud computing, and the data flood many organisations are experiencing.

IBRS analyst James Turner notes that while there will always be offshore content, Australia could be well placed to act as a caching centre and disaster recovery site.

“We are one per cent of the internet population of the world and already our saturation rate – the proportion of Australians online – exceeds that of many other regions, which means overall our percentage of the internet population will be decreasing. That means there will always be more stuff elsewhere,” he said.

“There is definitely going to be an investment in bigger data centres in Australia. Part of it is going to be around the geo-location of data to comply with jurisdiction and people’s perceived concerns. But there is always going to be a demand to host stuff locally.”

This is also driven by the fact that Australian ISPs continue to pay up to 17.5 times more for IP transit over international submarine cables than their international counterparts; with video traffic being data heavy, they are looking to reduce costs while simultaneously improving quality of service.

According to statistics provided to Computerworld Australia by analyst firm TeleGeography, the median price paid by ISPs in Sydney for a fully committed gigabit Ethernet port to an upstream service provider for wholesale internet access in Q1 of 2010 was $US148 per Mbps per month.

In stark contrast, the same service in Miami (US) and Bucharest (Romania) was $US8, in Tokyo (Japan) $US33, in Taipei (Taiwan) $US45, in Lima (Peru) $US68, and in Mumbai (India) $US85.
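Those figures square with the “17.5 times more” claim above: $US148 is 18.5 times the $US8 Miami price, or 17.5 times more. The full set of multiples:

    # Multiples implied by the TeleGeography figures above (Sydney vs. each market).
    sydney = 148
    others = {"Miami": 8, "Bucharest": 8, "Tokyo": 33,
              "Taipei": 45, "Lima": 68, "Mumbai": 85}
    for city, price in others.items():
        print(f"Sydney pays {sydney / price:.1f}x the {city} price")
    # -> Miami/Bucharest 18.5x, Tokyo 4.5x, Taipei 3.3x, Lima 2.2x, Mumbai 1.7x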

That would certainly explain why big cloud computing providers like Microsoft and Amazon have traditionally been reluctant to establish local data centres.

But when it comes to internet content – which is only one part of the whole broadband infrastructure debate – the question is whether we will want to consume more than we do now, and what the best infrastructure choice is to support those habits.

The statistics seem to suggest we will want more data – as we have pointed out previously – and the ISPs seem to think so too, with several announcing vastly higher data plans in recent weeks, a trend that has continued unabated.

“Some ISPs are utilising a form of caching for video traffic now and they are getting 20 to 30 per cent improvements and that is what is leading to the consumer broadband market unlimited data push,” Cannon argued. “That is coming out of nowhere. Everyone is saying, international traffic equals bottleneck, but how come all of a sudden we have gone from 100GB of data to 1TB? We’ve gone ten-fold in data download limits and nothing has happened on international connectivity. How has that happened? It’s because of the caching solutions.”

Yes, people can use existing technologies to access a lot of this video. But not everyone can do so with a good level of service quality, particularly when multiple users on the one connection are using multiple devices at the same time. And that is without factoring in the potential for multi-stream HD video or 3D video content.

Additionally, existing technologies are very much a one-way street at the moment – upload speeds are poor and the incentive to create data-heavy content (such as HD video) is conspicuously absent – which one could argue is an inhibiting factor in fostering the digital economy.

Keeping in mind, again, that the internet content discussion is only one part of the whole broadband infrastructure debate, and that many applications will focus on domestic traffic only, it is still reasonable to conclude that a ubiquitous and scalable network with better upload speeds would very likely result in more people having access to – and potentially creating – more internet content, while encouraging more local caching and thus faster delivery and better service levels.

That said, and bringing the discussion back to submarine cables, there is a strong argument for including submarine cable investment in any national broadband plan to help drive down IP transit costs.

Next: Undersea cables aren’t that expensive


International links should be part of the equation, but to what extent?

BuddeComm director Paul Budde has long been a vocal advocate of increasing the number of international links to help drive down the cost of bringing in data from overseas, and argues this should be part of any national broadband infrastructure plan.

“Yes, we get a lot of our content from overseas and we are far away from the other English speaking countries,” Budde said. “So yes these costs are high and therefore we do need more capacity. But what you also increasingly start seeing is that we start hosting or replicating some of that content in Australia as well. On one side we will always have that international link issue as that is our geographical situation but on the other side new technologies are going to assist us as well.”

Like many others, Budde says the prices paid by ISPs for international data traffic are too high (see above).

“If there is indeed enough capacity then the prices are far too high in comparison to other submarine cables,” he said. “If you bring in more competition and capacity then automatically prices should drop.”

While investing in an international link alone would not have much impact on the internet experience in Australia, it could be achieved in a much shorter time frame and far more cheaply than either the Liberal or Labor party plans.

For example, PIPE Networks’ PPC-1 cable, launched late last year, is estimated to have cost $US150 million and taken three years to construct.

According to the Australia-Japan Cable’s Russell, that cable represented an investment of $US500 million. The Southern Cross network cost roughly $US1.2 billion and Telstra’s Endeavour cable $US150 million. In total, the investment across the four key undersea cable links sits at approximately $US2 billion.

While Budde may be on solid ground in suggesting that increased competition would decrease prices and potentially flow on to the prices we pay ISPs, Russell argues that as the cable operators represent only 10 per cent of the ISP value chain, the impact would be minimal: a 10 per cent decrease in prices at best.

On the whole, while the price for IP transit may be a point of contention, it is clear that the existing submarine cable situation in Australia and the fact internet content is based overseas should not be used to argue against building out broadband infrastructure, whatever shape that may take. However, there are arguments for more international links and it would seem prudent to include this as part of any national broadband plan.

PPC-1 launch video: