
Hive: Creating an External Table from an Existing One


Apache Hive is an excellent tool for retrieving data from HDFS using SQL queries. I used a JSON SerDe library to map JSON files into a Hive table.
So initially I created the table like this:

CREATE EXTERNAL TABLE twitter (
  coordinates struct<coordinates:array<float>, type:string>,
  created_at string,
  entities struct<
    hashtags:array<struct<text:string>>,
    urls:array<struct<url:string>>,
    user_mentions:array<struct<name:string, screen_name:string>>>,
  favorite_count int,
  favorited boolean,
  filter_level string,
  geo struct<coordinates:array<float>, type:string>,
  id_str string,
  in_reply_to_screen_name string,
  in_reply_to_status_id_str string,
  in_reply_to_user_id_str string,
  lang string,
  retweet_count int,
  retweeted boolean,
  retweeted_status struct<
    entities:struct<
      hashtags:array<struct<text:string>>,
      urls:array<struct<url:string>>,
      user_mentions:array<struct<name:string, screen_name:string>>>,
    favorite_count:int,
    favorited:boolean,
    geo:struct<coordinates:array<float>, type:string>,
    id_str:string,
    in_reply_to_screen_name:string,
    in_reply_to_status_id_str:string,
    in_reply_to_user_id_str:string,
    lang:string,
    retweet_count:int,
    retweeted:boolean,
    source:string,
    text:string,
    truncated:boolean,
    user:struct<name:string, screen_name:string>>,
  source string,
  text string,
  truncated boolean,
  user struct<name:string, screen_name:string>)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/hive/warehouse/twitter';
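The com.cloudera.hive.serde.JSONSerDe class comes from the hive-serdes jar of Cloudera's cdh-twitter-example project, so unless that jar is already on Hive's auxiliary classpath it has to be added in every session that queries the table. A quick check might look like this (the jar path is only an example, adjust it to your setup):

# Register the SerDe jar for this session (path is an example),
# then read a few rows from one hourly partition.
hive -e "
ADD JAR /usr/lib/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;
SELECT created_at, text FROM twitter WHERE datehour = 2014010100 LIMIT 5;
"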

Everything worked great until I realized that the mapping was not working so well: I could not retrieve the original tweets of retweets (retweeted_status) because of a data type mismatch.
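For example, a query along these lines, which tries to pull the original tweet behind a retweet, ran into that mismatch (the exact query here is only an illustration):

# Read the retweeted tweet's text and author through the struct fields;
# with the first mapping this is where the type mismatch showed up.
# The ADD JAR path is the same example path as above.
hive -e "
ADD JAR /usr/lib/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;
SELECT text, retweeted_status.text, retweeted_status.user.screen_name
FROM twitter
WHERE datehour = 2014010100
LIMIT 10;
"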
So I simplified the table schema to this:

CREATE EXTERNAL TABLE twitternew (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN,
  retweet_count INT,
  retweeted_status STRUCT<
    text:STRING,
    user:STRUCT<screen_name:STRING, name:STRING>>,
  entities STRUCT<
    urls:ARRAY<STRUCT<expanded_url:STRING>>,
    user_mentions:ARRAY<STRUCT<screen_name:STRING, name:STRING>>,
    hashtags:ARRAY<STRUCT<text:STRING>>>,
  text STRING,
  user STRUCT<
    screen_name:STRING,
    name:STRING,
    friends_count:INT,
    followers_count:INT,
    statuses_count:INT,
    verified:BOOLEAN,
    utc_offset:INT,
    time_zone:STRING>,
  in_reply_to_screen_name STRING
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/hive/warehouse/twitter';
Now, how can we move the already partitioned data from the previous table to the new one?
There were several options. The first was to run the HQL "LOAD DATA INPATH" statement again for each hourly accumulated folder, but that would take a lot of time because it moves all the data from one folder to another; see the sketch below.
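For reference, that first option would have meant running something like this once per hourly folder (the source path below is a placeholder):

# LOAD DATA INPATH physically moves the files into the partition
# directory, which is why repeating this for months of hourly folders is slow.
hive -e "
LOAD DATA INPATH '/some/hourly/folder/2014010100'
INTO TABLE twitternew PARTITION (datehour = 2014010100);
"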
So I found an easier alternative:

"alter table twitternew add partition (datehour=2014010100);"

Run the statement above for each already indexed hour. In my case I wrote a bash script to do it for every hour from "2014010100" to "2014061210"; a sketch of it follows below. This updates the Hive metastore without moving any files.
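Here is a sketch of such a script, assuming GNU date is available; instead of starting a new Hive session per hour, it collects all the statements into one file and runs them with a single hive -f call:

#!/bin/bash
# Register every hourly partition from 2014010100 to 2014061210
# in the Hive metastore, without touching any data files.
start="2014-01-01 00:00"
end="2014-06-12 10:00"

current=$(date -d "$start" +%s)
stop=$(date -d "$end" +%s)

hql=add_partitions.hql
: > "$hql"                        # start with an empty script file

while [ "$current" -le "$stop" ]; do
  datehour=$(date -d "@$current" +%Y%m%d%H)
  echo "ALTER TABLE twitternew ADD PARTITION (datehour=$datehour);" >> "$hql"
  current=$((current + 3600))     # step forward one hour
done

hive -f "$hql"                    # one session for all ~3,900 statements

If the hourly folders do not sit under the table's LOCATION with the default datehour=YYYYMMDDHH naming, each ADD PARTITION can point at its folder explicitly with a LOCATION '<path>' clause.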

Note that the new table "twitternew" has the same LOCATION path as the previous table, and the partition column name (datehour) must be the same as well.
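Before dropping anything, a quick check like the one below (only an illustration) confirms that the partitions are registered and that the nested fields now resolve:

# List the registered partitions, then read a few nested fields
# from one known hour of data. Same example SerDe jar path as above.
hive -e "
ADD JAR /usr/lib/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;
SHOW PARTITIONS twitternew;
SELECT text, retweeted_status.text, retweeted_status.user.screen_name
FROM twitternew
WHERE datehour = 2014010100
LIMIT 10;
"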

You can drop the old table after your test SQL queries run successfully against the new one. Since the old table is EXTERNAL, dropping it removes only its metastore entry and leaves the shared data files in place. Good luck!
