
Running Mahout K-Means example

   CDH4 ships with the Mahout library by default, so you don't need to install Mahout unless you want to upgrade to the latest version.
Mahout is a scalable machine learning library that supports many algorithms for clustering, classification, topic modeling, prediction and recommender systems. It can take terabytes of input data and finish clustering or classification in under an hour, depending on how powerful your Hadoop cluster is. In my case I am using 4 powerful PCs running virtual nodes.
   I had a task to model topics from two months of Twitter data. I tried to run LDA (Latent Dirichlet Allocation, a topic modeling algorithm) with R on a single PC, and it took several hours just to build a document matrix for one day's tweets (around 800,000). To improve the results, the number of topics K should be larger than 500, which sharply increases the LDA processing time. That is how I turned to Mahout.
   Besides Mahout there are other parallel machine learning libraries; notable ones are MLlib, SparkR and hiveR. Initially I tried SparkR and MLlib, but since they depend on Apache Spark (an alternative to Hadoop MapReduce), they were not mature enough to run huge datasets. Apache Spark is a very fast and promising in-memory map-reduce framework, which also makes it something of a memory eater. There were many cases where I ran 2 queries in parallel with Apache Shark and Spark simply died by itself; it could not handle the heavy load.
I have not tried hiveR yet; I will post about it later.
The three reasons I chose Mahout are its stability, maturity and good support.

Let us run the Mahout "k-means" example on a CDH4 node.
Go to the "/opt/cloudera/parcels/CDH/share/doc/mahout-doc-0.7+22/examples/bin" folder and list its contents. There is a "cluster-reuters.sh" script that downloads the input data set (news articles from Reuters), generates sequence files and converts them into vectors (tf, tf-idf, word count etc.).
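For example (the other example scripts in that folder are omitted here):

>cd /opt/cloudera/parcels/CDH/share/doc/mahout-doc-0.7+22/examples/bin
>ls
cluster-reuters.sh  ...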
Note that before running the shell script you need to define HADOOP_HOME, MAHOUT_HOME and CLASSPATH.

Add this to the beginning of the "cluster-reuters.sh" file:

# locations of the CDH4 Hadoop and Mahout installations
HADOOP_HOME="/opt/cloudera/parcels/CDH/lib/hadoop"
MAHOUT_HOME="/opt/cloudera/parcels/CDH/lib/mahout"
MAHOUT=$MAHOUT_HOME/bin/mahout
# put hadoop-common and every jar shipped with Mahout on the classpath
CLASSPATH=${CLASSPATH}:$MAHOUT_HOME/lib/hadoop/hadoop-common.jar
for f in $MAHOUT_HOME/lib/*.jar; do
        CLASSPATH=${CLASSPATH}:$f
done
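To sanity-check these variables, you can run the Mahout driver with no arguments; it should print the list of runnable programs (kmeans among them), something like:

>$MAHOUT
An example program must be given as the first argument.
Valid program names are: ...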

Make sure to run the script as the "hdfs" user, and every directory the script accesses must have hdfs privileges; this way you won't mess up account privileges.
>su hdfs
After the data set is downloaded, the sequence files are created in a local directory.
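The run itself looks something like this (the selection menu below is abridged and its exact wording may vary between Mahout versions; k-means is the first choice):

>./cluster-reuters.sh
Please select a number to choose the corresponding clustering algorithm
1. kmeans clustering
...
Enter your choice : 1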
You can also change the script to generate the sequence files on HDFS directly: just remove "MAHOUT_LOCAL=true" from the script.
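Once the run completes on HDFS, you can inspect the resulting clusters with Mahout's clusterdump utility. A minimal sketch, assuming the script's default work directory /tmp/mahout-work-${USER} and the k-means output paths used by the 0.7 script (check WORK_DIR in your copy, as the paths may differ):

>$MAHOUT clusterdump \
    -i /tmp/mahout-work-hdfs/reuters-kmeans/clusters-*-final \
    -d /tmp/mahout-work-hdfs/reuters-out-seqdir-sparse-kmeans/dictionary.file-0 \
    -dt sequencefile -b 100 -n 20 \
    -o /tmp/reuters-kmeans-clusters.txt

This writes the top 20 terms of each cluster into a plain text file you can read locally.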

One last thing: change the "dfs" commands in the script to "fs". "hadoop dfs" is the old, deprecated form of the HDFS command.
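For example, a cleanup line in the script would change like this (the path shown is illustrative):

$HADOOP dfs -rmr /tmp/mahout-work-hdfs/reuters-out     # old, deprecated form
$HADOOP fs -rmr /tmp/mahout-work-hdfs/reuters-out      # what it should become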
