Thursday, June 19, 2014

ThreeJs - 3D data visualization on web


     I have been into 3D since my high school years; my first experience was making 3D models with 3ds Max.
For a few years now, Google Chrome has been trying to bring this technology to every desktop through the web. With the rise of HTML5 it became possible to render 3D on the client side using WebGL via JavaScript. However, using WebGL directly is not easy, so open-source communities have developed various wrapper libraries to simplify it.
   Among them, ThreeJS (http://threejs.org/) is one of the best choices IMHO. It is more mature compared to the others, and there are many examples to follow.

It is better to see it than to read about it, so check out this ThreeJS example: http://www.georgeandjonathan.com/#6
Isn't it amazing?
Dynamic, interactive, open-source 3D visualization right in your Chrome browser.
It runs on the latest Android tablets and phones too.
In a few years, I think everyone will be able to browse 3D sites on their devices.

So, using this library, I am trying to make some impressive data visualizations.
Here is how I visualized a word cloud in 3D:


  Unfortunately, it is just a screenshot from my browser; next time I will upload a live demo.

Friday, June 13, 2014

Korean NLP on Hive

These days I am building a small platform for doing "Topic Modeling" with Apache Mahout.
While working on this project I tried to make the NLP processing as fast as possible. Previously, I was running NLP on a single node after retrieving the tweet messages from Hive. Then I found out about UDFs (User Defined Functions), which make it possible to run custom libraries on the Hadoop nodes during the map phase of MapReduce.
By using a UDF, I plugged in the open-source Korean NLP library Hannanum and built a library that does the NLP and text processing in parallel.
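Before the UDF can be used in a query, the jar has to be added to the Hive session and the function registered. A minimal sketch (the jar path and class name here are only placeholders; the real ones are in the GitHub repo):

-- jar path and class name below are placeholders, see the GitHub repo for the real ones
add jar /path/to/korean-nlp-udf.jar;
create temporary function extractNoun as 'com.example.hive.udf.ExtractNoun';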
Here is what the Hive query looks like:

>select extractNoun(text) from twitter;

I put the source code on GitHub.
Feel free to use it and ask questions.

Here is my K-Means cluster dump output (k=100 topics over one day of tweet data):
Output.txt

Wednesday, June 11, 2014

Hive: Creating External Table from existing one


   Apache Hive is an excellent tool for retrieving data from HDFS using SQL queries. I used a JSON SerDe library to map JSON files into a Hive table.
So initially I created the table like this:

CREATE EXTERNAL TABLE twitter (
  coordinates struct<coordinates:array<double>, type:string>,
  created_at string,
  entities struct<
    hashtags:array<struct<text:string>>,
    urls:array<struct<url:string>>,
    user_mentions:array<struct<name:string, screen_name:string>>>,
  favorite_count int,
  favorited boolean,
  filter_level string,
  geo struct<coordinates:array<double>, type:string>,
  id_str string,
  in_reply_to_screen_name string,
  in_reply_to_status_id_str string,
  in_reply_to_user_id_str string,
  lang string,
  retweet_count int,
  retweeted boolean,
  retweeted_status struct<
    entities:struct<
      hashtags:array<struct<text:string>>,
      urls:array<struct<url:string>>,
      user_mentions:array<struct<name:string, screen_name:string>>>,
    favorite_count:int,
    favorited:boolean,
    geo:struct<coordinates:array<double>, type:string>,
    id_str:string,
    in_reply_to_screen_name:string,
    in_reply_to_status_id_str:string,
    in_reply_to_user_id_str:string,
    lang:string,
    retweet_count:int,
    retweeted:boolean,
    source:string,
    text:string,
    truncated:boolean,
    user:struct<name:string, screen_name:string>>,
  source string,
  text string,
  truncated boolean,
  user struct<
    screen_name:string,
    name:string,
    friends_count:int,
    followers_count:int,
    statuses_count:int,
    verified:boolean,
    utc_offset:int,
    time_zone:string>)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/hive/warehouse/twitter';
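With this schema, nested fields are queried using dot notation. For example, a query along these lines (my own illustration, not from the original post) drills into the retweeted_status struct:

-- illustrative query only
select retweeted_status.text, retweeted_status.user.screen_name
from twitter
where datehour=2014010100
limit 10;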

Everything worked great until I realized that the mapping was not working so well: I was not able to retrieve the original tweets of retweets (retweeted_status) because of a data type mismatch.
So I simplified the table schema to this:

CREATE EXTERNAL TABLE twitternew (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN,
  retweet_count INT,
  retweeted_status STRUCT<
    text:STRING,
    user:STRUCT<screen_name:STRING, name:STRING>>,
  entities STRUCT<
    urls:ARRAY<STRUCT<expanded_url:STRING>>,
    user_mentions:ARRAY<STRUCT<screen_name:STRING, name:STRING>>,
    hashtags:ARRAY<STRUCT<text:STRING>>>,
  text STRING,
  user STRUCT<
    screen_name:STRING,
    name:STRING,
    friends_count:INT,
    followers_count:INT,
    statuses_count:INT,
    verified:BOOLEAN,
    utc_offset:INT,
    time_zone:STRING>,
  in_reply_to_screen_name STRING
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/hive/warehouse/twitter';
Now, how can we move the already partitioned data from the previous table to the new one?
There were several options. The first was to run the HQL "LOAD DATA INPATH" statement again for each hourly accumulated folder, but that would take a lot of time, since it moves all the data from one folder to another.
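For reference, if the new table were created at a different location, that first option would look roughly like this for a single hour (the path here is only illustrative):

-- illustrative path only
load data inpath '/user/hive/warehouse/twitter/datehour=2014010100' into table twitternew partition (datehour=2014010100);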
So I found an easier alternative:

"alter table twitternew add partition (datehour=2014010100);"

Run the statement above for each hour that is already indexed. In my case I wrote a bash script to call it for every hour from "2014010100" to "2014061210". This updates the Hive metastore without moving any files.
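The script simply issues the same statement once per hourly partition, and afterwards the registered partitions can be checked from Hive, roughly like this (only the first hours shown):

alter table twitternew add partition (datehour=2014010100);
alter table twitternew add partition (datehour=2014010101);
-- ... one statement per hour, up to datehour=2014061210
show partitions twitternew;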

Note that the new table "twitternew" has the same LOCATION path as the previous table, and the partition column name (datehour) must be the same as well.
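Once the partitions are registered, a quick sanity check on the new table before retiring the old one might look like this (the partition value is just an example):

select count(*) from twitternew where datehour=2014010100;
select text from twitternew where datehour=2014010100 limit 10;

-- both tables are EXTERNAL, so dropping the old one only removes its metadata;
-- the files under /user/hive/warehouse/twitter stay where they are
drop table twitter;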

You can drop the old table after your test SQL queries run successfully. Good luck!