Open source helps Facebook achieve massive app scalability

People all over the world spend a total of eight billion minutes a day on Facebook. Some 400 billion Web pages are viewed every month, 3.5 billion pieces of content are shared every week and the site logs a staggering 25TB of data every day. David Recordon, senior open programs manager at Facebook, talks about how the social networking giant uses open source tools to achieve its massive app scalability.

Another critical part of Facebook’s infrastructure is Hive, a SQL-like query front end for Hadoop, the open source framework for distributed storage and large-scale data processing that was developed extensively at Yahoo.

Facebook first became interested in Hadoop as a way to process the site's traffic data, which was generating terabytes per day and overtaxing its Oracle database. Although the company was happy with Hadoop, it wanted to simplify its use so that engineers could express frequently used analysis operations in SQL. The resulting Hadoop-based data warehouse application became Hive, which now helps to process more than 10TB of Facebook data daily and is available as an open source subproject of Apache Hadoop.
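To illustrate the kind of simplification Hive provides, the sketch below submits a HiveQL query from Python using the open source PyHive client. The hostname, table and column names (page_views, user_id, ds) are hypothetical; this is a generic illustration of querying a Hive warehouse, not anything Facebook has published.

    # Illustrative only: submit a HiveQL query to a Hive warehouse from Python
    # via the open source PyHive client. Table, column and host names are made up.
    from pyhive import hive

    # Connect to a HiveServer2 endpoint (placeholders, not real infrastructure).
    conn = hive.Connection(host="hive.example.internal", port=10000,
                           username="analyst")
    cursor = conn.cursor()

    # The appeal of Hive: a daily-actives rollup is plain SQL, even though it
    # compiles down to MapReduce jobs that run across the Hadoop cluster.
    cursor.execute("""
        SELECT ds AS day, COUNT(DISTINCT user_id) AS daily_actives
        FROM page_views
        WHERE ds >= '2010-01-01'
        GROUP BY ds
        ORDER BY day
    """)

    for day, actives in cursor.fetchall():
        print(day, actives)

    cursor.close()
    conn.close()

Without Hive, the same rollup would have to be written as custom MapReduce code, which is exactly the barrier the project was created to remove.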

“Many companies are building on each other’s open source technologies and that is what we’re big on. We want to open source the things we are finding useful in Facebook,” Recordon says.

Facebook currently runs a large Hadoop-Hive cluster for data processing and analytics. “We have a lot of information coming into this cluster and we couldn’t have done it without Hadoop coming before us,” Recordon says. “An important point, from an open source perspective, is being able to build on top of what others have released, make it solve your use cases better and then release it again for others to build on [and] innovate together.”

The initial idea behind Hive was to simplify Hadoop so non-engineers could work with it. Now about 250 people per month from different parts of Facebook are running jobs on the Hive-Hadoop infrastructure.

Databases

With so much data manipulation going on at Facebook, the question persists: where is all that data stored? The open source relational database management system MySQL has played a key role at Facebook from the site's inception, and the company currently runs thousands of MySQL servers in multiple data centres.

“It’s a 'share-nothing' architecture,” MacVicar says. “In an army, if one soldier loses his head then nothing happens. At Facebook it doesn’t really matter if a single server disappears -- we have other servers that will back it up.”

MacVicar says Facebook uses MySQL’s InnoDB engine because it’s the only one with full transaction support. “We use databases as a way to keep persistent data. Memcache is a distributed index and database is persistent storage,” MacVicar says. “The Web server will look at the distributed index to see if the data is there and if that fails it will go to persistent storage. We also have other search services that will look for data.”
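In code, that read path is the classic look-aside cache pattern. The sketch below is purely illustrative, using the open source pymemcache and PyMySQL clients with made-up host, table and key names; it is not Facebook's actual code.

    # Illustrative look-aside cache: try the distributed index (memcached) first,
    # fall back to persistent storage (MySQL), then repopulate the cache so the
    # next read is served from memory. All names here are placeholders.
    import json

    import pymysql                                  # generic MySQL client
    from pymemcache.client.base import Client       # memcached client

    cache = Client(("memcache.example.internal", 11211))
    db = pymysql.connect(host="mysql.example.internal", user="app",
                         password="secret", database="users")

    def get_user_profile(user_id):
        key = "profile:%d" % user_id

        # 1. Check the distributed index.
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)

        # 2. Cache miss: go to persistent storage.
        with db.cursor() as cur:
            cur.execute("SELECT name, email FROM profiles WHERE user_id = %s",
                        (user_id,))
            row = cur.fetchone()
        if row is None:
            return None

        profile = {"name": row[0], "email": row[1]}

        # 3. Populate the cache for subsequent reads (expire after an hour).
        cache.set(key, json.dumps(profile), expire=3600)
        return profile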

In keeping with the rest of its distributed architecture, Facebook has independent database clusters rather than one really large cluster. According to its own internal records, Facebook has experienced about 8000 server-years of runtime without any data loss or corruption -- at least due to the software.
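A rough sketch of what "independent clusters" looks like in practice: each user's data lives on one of many self-contained MySQL clusters, chosen deterministically from the user ID. The cluster list and the simple modulo scheme below are assumptions for illustration, not a description of how Facebook actually routes queries.

    # Illustrative shard routing: every MySQL cluster holds a slice of the users,
    # so losing one cluster (or one replica within it) leaves the rest of the
    # site unaffected. Hostnames are placeholders.
    DB_CLUSTERS = [
        {"host": "db-cluster-00.example.internal", "port": 3306},
        {"host": "db-cluster-01.example.internal", "port": 3306},
        {"host": "db-cluster-02.example.internal", "port": 3306},
        {"host": "db-cluster-03.example.internal", "port": 3306},
    ]

    def cluster_for_user(user_id):
        # Simple modulo hashing; larger deployments often use a directory
        # service or consistent hashing so shards can move without remapping
        # every user at once.
        return DB_CLUSTERS[user_id % len(DB_CLUSTERS)]

    # All queries for user 1234567 go to the same, independent cluster.
    print(cluster_for_user(1234567))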

So how does a large site like Facebook prevent data loss? Recordon says the standard approach to running database clusters “applies to us too”.

“Replication is working [and] the lessons for running databases are similar for all types of sites. Those best practices apply to a much larger scale.”
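One concrete example of such a best practice is monitoring replication lag on every replica and alerting when it grows. The sketch below does this with the open source PyMySQL client and placeholder hostnames; it is a generic illustration, not one of Facebook's internal tools.

    # Illustrative replication-lag check: ask each replica for SHOW SLAVE STATUS
    # and warn if it has fallen too far behind the master. Hostnames and
    # credentials are placeholders.
    import pymysql

    REPLICAS = ["db-replica-01.example.internal", "db-replica-02.example.internal"]
    MAX_LAG_SECONDS = 30

    def replication_lag(host):
        conn = pymysql.connect(host=host, user="monitor", password="secret")
        try:
            with conn.cursor(pymysql.cursors.DictCursor) as cur:
                cur.execute("SHOW SLAVE STATUS")
                status = cur.fetchone()
            # Seconds_Behind_Master is NULL (None) if replication has stopped.
            return status["Seconds_Behind_Master"] if status else None
        finally:
            conn.close()

    for host in REPLICAS:
        lag = replication_lag(host)
        if lag is None or lag > MAX_LAG_SECONDS:
            print("WARNING: replication problem on %s (lag=%s)" % (host, lag))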

An open innovation culture

From the outside, Facebook may look like a single, unified "grand design" in social networking, but many of the technologies it now relies on were born out of “hack projects”, Recordon says.

“We’re not necessarily going out to solve a specific problem,” he says. “Rather, a few people will just sit down and work on something. When we want to solve a problem we often try two or three ways of doing things. We prototype them then throw two away and move forward with the third one.”

“We have an engineering culture where small groups of people can have a large impact,” Recordon says.

Part of Facebook's engineering culture is to “move fast”, and open source technologies help it to achieve that speed. Recordon cites the photo serving tool Haystack, which was built by only three employees, as a good example of the kind of innovations produced by Facebook's "engineering culture".

When asked how Facebook scales internally with its developers and how it ensures all the developers are on a consistent frame of reference when developing new features, Recordon says scaling is "more than technology, it’s about scaling an organisation and scaling an engineering team".

“We’ve developed a variety of internal tools to help us do that,” he says, adding that Facebook has layered apps on top of version control repositories to help combat collaboration issues.

Facebook won’t reveal the exact number of servers it has, only that it’s “tens of thousands” overall. With data centres on the east and west coasts of the US, Facebook also uses a content distribution network (Akamai) to serve images, CSS and JavaScript to users around the world.
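The standard pattern behind that is to rewrite static-asset URLs so browsers fetch images, CSS and JavaScript from CDN edge hostnames rather than the origin data centres. The hostnames and hashing scheme below are made up for illustration and are not Facebook's actual configuration.

    # Illustrative CDN URL rewriting: static assets are served from CDN hostnames
    # so requests terminate at an edge node near the user instead of travelling
    # to a US data centre. Hostnames are placeholders.
    import zlib

    STATIC_EXTENSIONS = (".png", ".jpg", ".gif", ".css", ".js")
    CDN_HOSTS = ["static1.cdn.example.net", "static2.cdn.example.net"]

    def asset_url(path):
        if not path.endswith(STATIC_EXTENSIONS):
            return path  # dynamic pages stay on the origin Web servers
        # A deterministic hash keeps each asset on a stable hostname (good for
        # caching) while spreading assets across hosts for parallel downloads.
        host = CDN_HOSTS[zlib.crc32(path.encode()) % len(CDN_HOSTS)]
        return "https://%s%s" % (host, path)

    print(asset_url("/images/profile/1234567.jpg"))   # rewritten to the CDN
    print(asset_url("/home.php"))                      # left on the origin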

For operating system capacity planning and trending, Facebook uses tools including Ganglia and Cacti, as well as some internally developed software. The main source code repository is Subversion, but about half of the developers now use Git, for which Facebook has developed apps that handle code review and documentation.

Another open source project in use at Facebook is Cassandra, a distributed, fault-tolerant database that the company developed and released as open source in 2008. Cassandra is now hosted by the Apache Software Foundation and has attracted some large sites, including Digg and Twitter. At Facebook, Cassandra is used for inbox search, but Recordon admits the company hasn’t done much active development work on it during the past year.

“At this point I think Digg, Twitter and Rackspace are the largest contributors to Cassandra inside of the Apache project,” Recordon says. “From our perspective it’s great to see a community develop it and push it forward.”
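For a sense of how a store like Cassandra fits inbox search, the sketch below models a per-user term index using CQL and the open source DataStax Python driver. Facebook's original implementation predates CQL and used Thrift column families, so this is only a modern approximation of the idea, with invented keyspace, table and host names.

    # Illustrative only: an inbox-search style index in Cassandra, keyed by
    # (user, term) and ordered newest-first, via the DataStax Python driver.
    from cassandra.cluster import Cluster

    session = Cluster(["cassandra.example.internal"]).connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS inbox
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS inbox.search_index (
            user_id   bigint,
            term      text,
            msg_time  timestamp,
            msg_id    uuid,
            PRIMARY KEY ((user_id, term), msg_time)
        ) WITH CLUSTERING ORDER BY (msg_time DESC)
    """)

    # Fetch the most recent messages for one user that contain a given term.
    rows = session.execute(
        "SELECT msg_id, msg_time FROM inbox.search_index "
        "WHERE user_id = %s AND term = %s LIMIT 20",
        (1234567, "holiday"))
    for row in rows:
        print(row.msg_id, row.msg_time)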

Facebook has mastered open source high-performance computing, but the site also uses the commercial Oracle RAC for some of its data management.

For corporate IT managers and CIOs, the Facebook story is a lesson in innovation and pragmatism. If a start-up can achieve the vast scale of data management that Facebook has using a mixture of open source software and in-house code, then perhaps it will prompt others to re-examine established methods built on commercial off-the-shelf software and look farther afield for scalable technology solutions.
