Cloudera expands Hadoop ecosystem

Cloudera has released version 3 of its Hadoop distribution

With the new release of its Hadoop distribution, Cloudera has radically expanded the set of supporting tools for the data processing framework.

"What we saw was that most organizations deploy quite a bit more than just Hadoop. There is this whole ecosystem of other open source components that collectively make up the whole Hadoop ecosystem that people run in production today," said Cloudera vice president of products Charles Zedlewski.

With version 3 of its Hadoop package, CDH3, Cloudera has added and integrated seven additional programs, all of which should help smooth the process of setting up and running Hadoop jobs, Zedlewski asserted.

"People will want to consume a complete system that has all been tested and integrated together," Zedlewski said.

The prior version of Cloudera's package included the core Hadoop programs, the Hive data warehouse software and the Pig data flow scripting language. The core Hadoop package itself contains the MapReduce distributed processing engine, the Hadoop Distributed File System (HDFS), and a set of assorted tools called Hadoop Common.
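To show how those core pieces fit together, the sketch below follows the standard Hadoop MapReduce job structure against files stored in HDFS: a mapper emits intermediate key/value pairs, a reducer aggregates them, and a driver wires the two into a job. This is a minimal, generic example written against the stock Hadoop Java API, not code from Cloudera's distribution; the class name and the input/output paths passed on the command line are placeholders.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** The canonical word-count job: count how often each word appears in the input files. */
public class WordCount {

  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE); // emit (word, 1) for every token
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum)); // total occurrences of this word
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The same mapper/reducer/driver shape carries over to the far larger jobs Zedlewski describes; only the parsing and aggregation logic changes.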

The new package includes additional programs such as Flume, a log and data aggregation tool; Sqoop, a tool for moving data between Hadoop and relational databases; Hue, a graphical user interface for Hadoop; and ZooKeeper, a configuration and coordination service. All the tools are open source, released under the Apache license.
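As an example of the configuration and coordination role ZooKeeper plays for the rest of the stack, the sketch below reads a shared setting that cluster nodes could all consult. It uses the standard ZooKeeper Java client, but the ensemble address (localhost:2181) and the znode path (/cluster/config) are purely illustrative assumptions.

```java
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ConfigReader {
  public static void main(String[] args) throws Exception {
    final CountDownLatch connected = new CountDownLatch(1);

    // Connect to a (hypothetical) ZooKeeper ensemble with a 30-second session timeout.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
      public void process(WatchedEvent event) {
        if (event.getState() == Event.KeeperState.SyncConnected) {
          connected.countDown(); // session established
        }
      }
    });
    connected.await(); // wait until the session is up before reading

    // Read a shared configuration value stored under an illustrative znode path.
    byte[] data = zk.getData("/cluster/config", false, null);
    System.out.println("Shared config: " + new String(data, "UTF-8"));

    zk.close();
  }
}
```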

First developed as an offshoot of the Apache Lucene search engine, Hadoop is a framework for processing large amounts of data scattered across multiple nodes. It is particularly well-suited for processing and analyzing vast amounts of machine-generated data that can't fit into standard relational databases.

The new distribution can streamline a lot of the work required to set up Hadoop jobs, Zedlewski said. He offered an example of how these additional tools could help speed clickstream analysis, which involves building records of how users click through different Web sites.

The source data for clickstream tracking comes from the activity logs of multiple servers. "Collecting clickstream data from 2,000 servers is not trivial," he said. The data must be put into the Hadoop file system, and then reorganized by each individual's session. This "sessionization" process can involve 40 or more steps. After the material is organized, it then must be exported back out to a data warehouse or database in an easily accessible format.
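The heart of that workflow, grouping raw click records by user before ordering them by time, maps naturally onto a single MapReduce pass. The sketch below is a simplified, hypothetical illustration of just that grouping step, not Cloudera's 40-step pipeline: it assumes each log line is tab-separated as user ID, timestamp and URL, keys every record by user, and lets the reducer sort one user's clicks by timestamp to rebuild the trail.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

/** Groups raw click records by user and orders each user's clicks by timestamp. */
public class Sessionize {

  /** Assumed input line format: userId \t epochMillis \t url */
  public static class ClickMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Text userId = new Text();
    private final Text click = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t");
      if (fields.length < 3) {
        return; // skip malformed log lines
      }
      userId.set(fields[0]);
      click.set(fields[1] + "\t" + fields[2]); // timestamp + URL
      context.write(userId, click);            // the shuffle gathers all clicks per user
    }
  }

  /** Receives every click for one user, sorts them by time, emits an ordered click trail. */
  public static class SessionReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text userId, Iterable<Text> clicks, Context context)
        throws IOException, InterruptedException {
      List<String> ordered = new ArrayList<String>();
      for (Text click : clicks) {
        ordered.add(click.toString());
      }
      Collections.sort(ordered); // epoch timestamps of equal width sort lexicographically

      StringBuilder trail = new StringBuilder();
      for (String click : ordered) {
        trail.append(click).append(" -> ");
      }
      context.write(userId, new Text(trail.toString()));
    }
  }
}
```

A production pipeline would also split each user's clicks into separate sessions on inactivity gaps, normalize URLs and handle late-arriving log data, which hints at why the full process runs to dozens of steps.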

The new version eliminates a lot of that scripting work by providing tools to get the data into Hadoop, reorganize it once it is there, and export the resulting data set back out again.

The freely downloadable CDH3 package is compatible with the Red Hat, CentOS, SuSE and Ubuntu Linux distributions. It can also be run on the Amazon and Rackspace cloud services, and has been integrated with business intelligence and ETL (extract, transform and load) vendor tools, such as those offered by Informatica, Jaspersoft, MicroStrategy, Netezza and Teradata.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com
