
Streaming Twitter tweets to HBase with Apache Flume

Apache HBase is a great NoSQL database for storing enormous amounts of data, scaling along three axes: rows, columns, and versions. HBase is modeled after Google's Bigtable, the system Google built to store the contents of the web. Given a row key and a column identifier, we can retrieve a value in a matter of milliseconds.
HBase runs on top of HDFS and works well with MapReduce jobs, so it scales together with Hadoop.
One apparent disadvantage is that HBase depends on ZooKeeper, while other Bigtable-inspired databases such as Cassandra run without it. Nevertheless, I have not had any problems with it.
Apache HBase is really fast. I am currently using it for TF-IDF based keyword retrieval, and it can fetch results from 2 million tweets in a few seconds.
Anyway, let me get back to the topic.

My plan was to stream Twitter data directly into HBase using Apache Flume. Fortunately, Flume ships with an HBase sink plugin in its lib folder. Two kinds of sinks are available: HBaseSink and AsyncHBaseSink. The latter is highly recommended.
To use this plugin, we need a serializer that takes events from Flume, serializes the data, and puts it into the HBase table.
There are default serializers such as SimpleAsyncHbaseEventSerializer, but they don't suit our needs: our data is not simple, it is Twitter data.
That is why we need to implement the AsyncHbaseEventSerializer interface to create our own serializer.

Our serializer simply takes the Twitter events (statuses) and puts them into HBase.
You can check the source code at: https://github.com/ahikmat85/flume2basesplitter
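The full serializer lives in the repository above. As a rough, self-contained sketch (not the actual repo code), a splitting serializer reads a row key from the event headers and breaks the event body into one value per configured column. The header name `rowKey`, the comma delimiter, and the class name `SplittingSketch` are assumptions chosen here for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of the splitting idea: take the row key from the
// event headers, split the body on a delimiter, and pair each token
// with a configured column. Each result is {rowKey, column, value}.
public class SplittingSketch {
    static List<String[]> serialize(Map<String, String> headers,
                                    byte[] body, String[] columns) {
        String rowKey = headers.get("rowKey");
        if (rowKey == null) {
            // A missing header is exactly what produces the
            // "No row key found in headers!" error reported in the comments.
            throw new IllegalStateException("No row key found in headers!");
        }
        String[] parts = new String(body, StandardCharsets.UTF_8).split(",");
        List<String[]> puts = new ArrayList<>();
        for (int i = 0; i < Math.min(parts.length, columns.length); i++) {
            puts.add(new String[] { rowKey, columns[i], parts[i] });
        }
        return puts;
    }
}
```

Note that the serializer only builds the puts; the source (or an interceptor) must set the row-key header, otherwise the sink fails on every event.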

Flume conf:

TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = hbaseSink

TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey =
TwitterAgent.sources.Twitter.consumerSecret =
TwitterAgent.sources.Twitter.accessToken =
TwitterAgent.sources.Twitter.accessTokenSecret =
TwitterAgent.sources.Twitter.keywords =

TwitterAgent.sinks.hbaseSink.type = org.apache.flume.sink.hbase.AsyncHBaseSink
TwitterAgent.sinks.hbaseSink.channel = MemChannel
TwitterAgent.sinks.hbaseSink.table = hbtweet
TwitterAgent.sinks.hbaseSink.columnFamily = tweet
TwitterAgent.sinks.hbaseSink.serializer = org.apache.flume.sink.hbase.SplittingSerializer
TwitterAgent.sinks.hbaseSink.serializer.columns = tweet:nothing

TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 1000
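Before starting the agent, the table and column family referenced in the sink configuration must already exist in HBase; the sink does not create them. Assuming the configuration above is saved as twitter-hbase.conf (a file name chosen here for illustration), the steps look roughly like this:

```shell
# Create the table and column family the sink config points at
echo "create 'hbtweet', 'tweet'" | hbase shell

# Start the Flume agent; --name must match the agent prefix in the config
flume-ng agent --conf conf \
  --conf-file twitter-hbase.conf \
  --name TwitterAgent \
  -Dflume.root.logger=INFO,console
```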


Reference:  https://blogs.apache.org/flume/entry/streaming_data_into_apache_hbase

Comments

  2. Hi,

    I am getting the following error while running the Flume.

    16/03/13 17:16:55 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
    org.apache.flume.FlumeException: No row key found in headers!
    at org.apache.flume.sink.hbase.SplittingSerializer.setEvent(SplittingSerializer.java:36)
    at org.apache.flume.sink.hbase.AsyncHBaseSink.process(AsyncHBaseSink.java:223)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
    16/03/13 17:16:57 INFO twitter.TwitterSource: Processed 100 docs
    16/03/13 17:16:59 INFO twitter.TwitterSource: Processed 200 docs
    16/03/13 17:17:00 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
    org.apache.flume.FlumeException: No row key found in headers!
    at org.apache.flume.sink.hbase.SplittingSerializer.setEvent(SplittingSerializer.java:36)
    at org.apache.flume.sink.hbase.AsyncHBaseSink.process(AsyncHBaseSink.java:223)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)




    I am new to both Java and Flume. Can you help me figure out what I should change to make it run?

