CDH4 ships with the Mahout library by default, so you don't need to install Mahout unless you want to upgrade to the latest version.
Mahout is a scalable machine learning library that supports many algorithms for clustering, classification, topic modeling, prediction, and recommender systems. It can take terabytes of input data and finish clustering or classification in under an hour, depending on how powerful your Hadoop cluster is. In my case I am using 4 powerful PCs with virtual nodes.
I had a task to model topics from two months of Twitter data. I first tried to run LDA (Latent Dirichlet Allocation, a topic modeling algorithm) with R on one PC, and it took several hours just to build the document matrix for a single day of tweets (around 800,000). On top of that, to improve the results the number of topics K should be larger than 500, which increases the LDA processing time dramatically. That is how I turned to Mahout.
Besides Mahout there are other parallel machine learning libraries; notable ones are MLlib, SparkR, and hiveR. Initially I tried SparkR and MLlib, but since they depend on Apache Spark (an alternative to Hadoop MapReduce) they were not mature enough to run huge datasets. Apache Spark is a very fast and promising in-memory map-reduce framework, which also makes it something of a memory eater. There were many cases where I ran 2 queries in parallel with Apache Shark and Spark simply died by itself; it could not handle the heavy load.
I have not tried hiveR yet; I will post about it later.
The three reasons I chose Mahout are its stability, maturity, and good support.
Let us run the Mahout "k-means" example on a CDH4 node.
Go to the "/opt/cloudera/parcels/CDH/share/doc/mahout-doc-0.7+22/examples/bin" folder and list its contents. There is a "cluster-reuters.sh" file that downloads the input data set (news articles from Reuters), generates sequence files, and converts them into vectors (tf, tf-idf, word count, etc.).
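To give a rough idea of what the script does before you run it, here is a hedged sketch of the four stages it chains together. All paths, flags, and the value of k below are illustrative assumptions, not copied from the script, and the commands are printed rather than executed so you can compare them against the real thing:

```shell
#!/bin/sh
# Rough outline of the stages cluster-reuters.sh chains together;
# paths and parameters are illustrative assumptions.
MAHOUT=${MAHOUT:-mahout}
WORK=/tmp/mahout-work

# 1. Raw Reuters text files -> Hadoop SequenceFiles
STEP1="$MAHOUT seqdirectory -i $WORK/reuters-out -o $WORK/reuters-seqfiles -c UTF-8"
# 2. SequenceFiles -> sparse tf and tf-idf vectors
STEP2="$MAHOUT seq2sparse -i $WORK/reuters-seqfiles -o $WORK/reuters-vectors"
# 3. k-means clustering over the tf-idf vectors
STEP3="$MAHOUT kmeans -i $WORK/reuters-vectors/tfidf-vectors -c $WORK/initial-clusters -o $WORK/kmeans-output -k 20 -x 10"
# 4. Dump the resulting clusters as text for inspection
STEP4="$MAHOUT clusterdump -i $WORK/kmeans-output/clusters-*-final -o $WORK/clusters.txt"

printf '%s\n' "$STEP1" "$STEP2" "$STEP3" "$STEP4"
```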
Note that before running the shell script you need to define HADOOP_HOME, MAHOUT_HOME, and CLASSPATH.
Add this to the beginning of the "cluster-reuters.sh" file:
HADOOP_HOME="/opt/cloudera/parcels/CDH/lib/hadoop"
MAHOUT_HOME="/opt/cloudera/parcels/CDH/lib/mahout"
MAHOUT=$MAHOUT_HOME/bin/mahout
CLASSPATH=${CLASSPATH}:$MAHOUT_HOME/lib/hadoop/hadoop-common.jar
for f in $MAHOUT_HOME/lib/*.jar; do
  CLASSPATH=${CLASSPATH}:$f
done
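As a quick sanity check, here is the same jar-collection loop run against a throwaway directory with dummy jars (the directory and jar names are made up for illustration):

```shell
#!/bin/sh
# Demonstrates the jar-collection loop above on a throwaway directory;
# the jar names are made up for illustration.
LIBDIR=$(mktemp -d)
touch "$LIBDIR/mahout-core.jar" "$LIBDIR/mahout-math.jar"

CP=""
for f in "$LIBDIR"/*.jar; do
  CP=${CP}:$f
done

# CP now holds ":<dir>/mahout-core.jar:<dir>/mahout-math.jar"
echo "$CP"
rm -rf "$LIBDIR"
```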
Make sure to run the script as the "hdfs" user, and all the directories the script accesses must be accessible to hdfs.
>su hdfs
This will make sure that you won't mess up account privileges. After the dataset is downloaded, the sequence files are created in a local directory.
You can change the script to generate the sequence files on HDFS directly: just remove "MAHOUT_LOCAL=true" from the script.
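One hedged way to do that removal with sed, shown here on a stand-in file rather than the real script so you can verify the edit safely first:

```shell
#!/bin/sh
# Delete the MAHOUT_LOCAL=true line; a dummy file stands in for
# cluster-reuters.sh in this sketch.
SCRIPT=$(mktemp)
printf 'MAHOUT_LOCAL=true\nMAHOUT=$MAHOUT_HOME/bin/mahout\n' > "$SCRIPT"

# Drop every line that contains MAHOUT_LOCAL=true (GNU sed in-place edit)
sed -i '/MAHOUT_LOCAL=true/d' "$SCRIPT"

RESULT=$(cat "$SCRIPT")
echo "$RESULT"
rm -f "$SCRIPT"
```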
One last thing: change the "dfs" commands to "fs"; "dfs" is the old, deprecated form of the HDFS shell command.
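A hedged sed one-liner can do that rewrite too, again tried on a stand-in file first (the " dfs " pattern is an assumption about how the script invokes the command):

```shell
#!/bin/sh
# Rewrite deprecated " dfs " invocations to " fs "; a dummy file
# stands in for the real script in this sketch.
SCRIPT=$(mktemp)
printf '$HADOOP dfs -put reuters-seqfiles /user/hdfs/\n' > "$SCRIPT"

# Replace the deprecated subcommand in place (GNU sed)
sed -i 's/ dfs / fs /g' "$SCRIPT"

RESULT=$(cat "$SCRIPT")
echo "$RESULT"
rm -f "$SCRIPT"
```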