
Building Term Matrix on HBase and Calculating TF-IDF

     Finding a term's related terms in real time is not easy when you have 2 million tweets and 10 million words. Although Hadoop can process huge amounts of data in parallel, it is not real-time.
What if we want to see the top related words for the term "축구" (soccer) in real time?
Top related words can be found by calculating TF (term frequency) values for each word in a certain set of documents. Then we calculate IDF (inverse document frequency) values by looking at all documents. IDF helps to exclude meaningless words (이, 다, 나, 너, 니, 하고, 보다, and so on). Then we calculate TF-IDF for each word and sort all of them by that value.
Check out TF-IDF on Wikipedia: http://en.wikipedia.org/wiki/Tf%E2%80%93idf
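For reference, here is the standard TF-IDF formulation written in my own notation (the worked example later in this post drops the log to keep the arithmetic simple):

TF(t, D)    = number of times term t occurs in the document set D
IDF(t)      = log(N / n_t), where N is the total number of documents and n_t is the number of documents containing t
TFIDF(t, D) = TF(t, D) * IDF(t)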

The process is quite complicated, but I will try to explain it as simply as possible.
Here are the steps that I went through:

1. Extracting the tweet ID and term from Hive into a text file

   Using Hive, I split each tweet into words and wrote each word on its own line together with its corresponding tweet ID. The HiveQL query looks like this:

> select id_str, txt from twitter lateral view explode(split(extractNoun(text),' ')) words as txt where datehour=2014071023 and text not like '%RT %' and source not like '%bot%' and user.lang='ko';

The output looks something like this:

487069487825297408      농구덕
487069487825297408      무슨짓
487069487825297408      한
487069488517365761      오늘
487069488517365761      이상
487069488517365761      버스
487069488517365761      흑흑
487069488517365761      시
487069488517365761      경계
487069488517365761      때

Then I stored this output as a text file.

2. Creating tables in HBase

 We need three tables: "TermMatrix", "Docs", and "Stats".

The "Stats" table stores the total number of tweets for each hour.
In the "TermMatrix" table, the row key is the term, the columns represent each hour, and each cell contains the list of documents (tweets) that contain that term.
These two tables give us everything we need to calculate the TF and IDF values.
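To give an idea of the layout, here is a minimal sketch of how the three tables could be created with the Java HBase client API of that time. The column family names ("d" and "s") and the comment on what "Docs" holds are my own assumptions for illustration; the actual schema details may differ.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTables {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // "TermMatrix": row key = term, one column per hour, cell = list of tweet ids containing the term
        HTableDescriptor termMatrix = new HTableDescriptor("TermMatrix");
        termMatrix.addFamily(new HColumnDescriptor("d"));   // "d" is an assumed column family name
        admin.createTable(termMatrix);

        // "Docs": row key = tweet id (assumed), used to look up the words of each tweet
        HTableDescriptor docs = new HTableDescriptor("Docs");
        docs.addFamily(new HColumnDescriptor("d"));
        admin.createTable(docs);

        // "Stats": row key = hour (e.g. 2014071023), cell = total number of tweets in that hour
        HTableDescriptor stats = new HTableDescriptor("Stats");
        stats.addFamily(new HColumnDescriptor("s"));
        admin.createTable(stats);

        admin.close();
    }
}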

3. Inserting data into HBase tables

    I insert the data into the HBase tables from the text file we created in step 1.
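Here is a rough sketch of that insert loop for the "TermMatrix" table, assuming the column family "d" from the sketch in step 2, a whitespace-separated input file (tweet id, then term) and the hour 2014071023 as the column qualifier. Appending the tweet id to the cell as a comma-separated list is just one possible way to store the document list; the code I actually used may differ.

import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class LoadTermMatrix {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable termMatrix = new HTable(conf, "TermMatrix");
        String hour = "2014071023";    // the hour this extract belongs to, used as the column qualifier

        // hypothetical file name for the step 1 output
        BufferedReader in = new BufferedReader(new FileReader("terms_2014071023.txt"));
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split("\\s+");       // "tweetId<whitespace>term"
            if (parts.length < 2) continue;
            String tweetId = parts[0];
            String term = parts[1];

            // row key = term, column = hour, cell = growing comma-separated list of tweet ids
            Append append = new Append(Bytes.toBytes(term));
            append.add(Bytes.toBytes("d"), Bytes.toBytes(hour), Bytes.toBytes(tweetId + ","));
            termMatrix.append(append);
        }
        in.close();
        termMatrix.close();
    }
}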


4. Calculating TF-IDF

I made a simple calculation example of how to get TF-IDF:

The target term is the searched keyword, in our case "축구".
In the image above I intentionally removed the log function to simplify the explanation.
The actual IDF formula is IDF = log(NumberOfDocs / TermTotalCount).

From the TF-IDF calculation you can see that the TF-IDF value of "is" is drastically lower than that of "people", because "is" appears in many documents.
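The final scoring step boils down to something like the sketch below. The counts are plain parameters here with made-up values for illustration; in the real code they come from the "TermMatrix" and "Stats" tables.

public class TfIdf {

    // tf  = how many times the candidate term occurs in the tweets that contain the target term
    // idf = log(total number of tweets / number of tweets that contain the candidate term)
    static double tfIdf(long termCountInTargetDocs, long numberOfDocs, long termTotalCount) {
        double tf = termCountInTargetDocs;
        double idf = Math.log((double) numberOfDocs / termTotalCount);
        return tf * idf;
    }

    public static void main(String[] args) {
        long numberOfDocs = 1000000;                          // made-up total, for illustration only
        System.out.println(tfIdf(40, numberOfDocs, 2000));    // a rarer, topical word scores high
        System.out.println(tfIdf(60, numberOfDocs, 500000));  // a very common word scores low
    }
}

After computing this score for every candidate term, the list is sorted by the TF-IDF value in descending order, which produces the ranked results below.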

My Final Results:


My second searched keyword was "브라질" (Brazil):

getting tids took:25
building random list took:206
building freq List took :1562
sorting took :4
getting total took :130
final tfidf calcul took :1309

term:브라질      tfidf:10.10
term:독일        tfidf:1.50
term:월드컵      tfidf:1.33
term:수니        tfidf:1.08
term:축구        tfidf:0.98
term:7           tfidf:0.92
term:독          tfidf:0.89
term:1           tfidf:0.87
term:경기        tfidf:0.86
term:네덜란드    tfidf:0.81
term:참패        tfidf:0.79
term:4강전       tfidf:0.60
term:콜롬비아    tfidf:0.59
term:음주가무    tfidf:0.58
term:회식        tfidf:0.57
term:현지        tfidf:0.51
term:우승        tfidf:0.50
term:홍명보      tfidf:0.49
term:응원        tfidf:0.49

14/07/16 13:40:55 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x146ea39ad3fba67
14/07/16 13:40:55 INFO zookeeper.ZooKeeper: Session: 0x146ea39ad3fba67 closed
Done......
Total processing in Secs:5

It takes around 5 seconds to get the terms related to "브라질", including the TF-IDF sorting, over one day of tweets.
I am working on optimizations to reduce this.
