News

Facebook tackles Hadoop Achilles' heel

Running what they believe is the world’s largest Hadoop-based collection of data, Facebook engineers have developed a way to circumvent a core weakness of the data analysis platform: its reliance on a single namenode to coordinate all operations.

Facebook engineer Andrew Ryan discussed the work-around at the Hadoop Summit, being held this week in San Jose, California. He also posted a summary of his talk on Facebook.

Facebook has what it believes is the world’s largest collection of data on the Hadoop Distributed File System (HDFS), more than 100 petabytes’ worth, spread across 100 different clusters in its data centres.

While increasingly popular for large-scale data analysis tasks, Hadoop has what is known in engineering terms as a single point of failure. While a Hadoop deployment may span hundreds or thousands of servers, the entire operation depends on a single server, called the namenode, to coordinate all the traffic among the data nodes. Should that single namenode stop operating, the data nodes could no longer coordinate their work and, in effect, the whole system would cease to function.
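To see why the namenode is a single point of failure, consider that every HDFS metadata operation is answered by that one server. The following minimal Java sketch (the hostname is hypothetical, and it assumes a recent Hadoop client where fs.defaultFS names the namenode) illustrates the dependence: if the namenode is down, even a simple directory listing fails, regardless of how many data nodes are healthy.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NamenodeDependencyDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Every metadata request is resolved by the single namenode named here;
        // if that host is unreachable, calls such as listStatus() or open() fail.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // hypothetical host
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf);

        // Listing a directory requires the namenode to answer; the data nodes
        // holding the actual blocks are never consulted for metadata.
        for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
```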

Facebook has estimated that resolving this weakness would cut the downtime in its data warehouse by almost half.

In order to solve this problem, Facebook created software, called Avatarnode, that can switch to a backup namenode should the primary fail for some reason. In this setup, each data node routinely sends updates to both the primary and backup namenodes. Should the primary namenode stop functioning, the backup namenode takes over operations. The software, named after the James Cameron film “Avatar,” relies on ZooKeeper, the Hadoop ecosystem’s coordination and configuration service.
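Facebook’s own failover code is not reproduced in Ryan’s summary, but the general ZooKeeper pattern it builds on is well known: the active node holds an ephemeral znode, and when its ZooKeeper session dies, the znode disappears and the standby is notified to take over. The sketch below is a simplified, hypothetical illustration of that pattern (the class name and znode path are invented), not Facebook’s Avatarnode implementation.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Illustrative standby monitor: the active node holds an ephemeral znode;
// if its ZooKeeper session expires (e.g. the primary crashes), the znode
// disappears and the standby is woken so it can claim the active role.
public class FailoverMonitor implements Watcher {
    private static final String ACTIVE_ZNODE = "/demo/active-namenode"; // hypothetical path
    private final ZooKeeper zk;
    private final CountDownLatch takeover = new CountDownLatch(1);

    public FailoverMonitor(String zkConnect) throws Exception {
        // Connect to the ZooKeeper ensemble; this object receives watch events.
        this.zk = new ZooKeeper(zkConnect, 5000, this);
    }

    // Block until the active node's ephemeral znode vanishes, then become active.
    public void runStandby() throws Exception {
        if (zk.exists(ACTIVE_ZNODE, true) != null) {
            takeover.await(); // woken by the NodeDeleted watch event
        }
        // Claim the active role by creating the ephemeral znode ourselves.
        zk.create(ACTIVE_ZNODE, "standby-promoted".getBytes(),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        System.out.println("Standby promoted to active");
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDeleted) {
            takeover.countDown();
        }
    }
}
```

Because ephemeral znodes vanish automatically when the owning session expires, a crashed primary is detected without any polling of the failed machine itself, which is why ZooKeeper is a natural fit for this kind of failover coordination.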

The company offers Avatarnode as open source, in the hope that other Hadoop administrators can benefit from it. Facebook released the software in 2010, and it has since been pressed into production duty at the company.

“The Avatarnode is running our most demanding production workloads inside of Facebook today, and will continue to lead to substantial improvements in reliability and administration of HDFS clusters,” Ryan wrote.

“Moving forward, we’re working to improve Avatarnode further and integrate it with a general high-availability framework that will permit unattended, automated, and safe failover,” he added.


