
Extending CentOS disk space (adding more LVM storage)


How to extend filesystem capacity on a Hadoop data node?

First, attach the new hard drive to your machine (or add a virtual hard drive if the node is running on VirtualBox).

Mount the GParted Live ISO on your CentOS node and boot from it.

In GParted, create a partition table on the new hard drive:
Device > Create Partition Table > GPT is recommended

Then create a new partition on the drive; the "lvm2 pv" filesystem type is recommended, since it initializes the partition as an LVM physical volume.
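If you would rather skip booting GParted, the same partitioning steps can be sketched from the command line. This is a dry-run sketch, not a tested recipe: it assumes the new disk shows up as /dev/sdb (verify with fdisk -l first, since pointing these commands at the wrong disk destroys its data). With DRY_RUN=1 (the default) it only prints the commands.

```shell
#!/bin/sh
# Dry-run sketch of the GParted steps, assuming the new disk is /dev/sdb.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run parted /dev/sdb mklabel gpt                      # GPT partition table
run parted -a optimal /dev/sdb mkpart primary 0% 100%  # one partition, whole disk
run parted /dev/sdb set 1 lvm on                     # flag partition 1 as LVM
run pvcreate /dev/sdb1                               # initialize it as an LVM PV
```

Note that unlike GParted's "lvm2 pv" option, parted does not create the physical volume itself, hence the explicit pvcreate at the end.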

Boot back into CentOS and stop the Hadoop services on the node.

Verify that the new partition (e.g. /dev/sdb1) is visible:

fdisk -l

Add the new physical volume to the existing volume group (vg_hadoop2 in this setup):

vgextend vg_hadoop2 /dev/sdb1

Check the logical volumes and the free space now available:

lvdisplay

Then extend the root logical volume by the newly added space (8.48 GB in this example):

lvextend -L+8.48G /dev/vg_hadoop2/lv_root

Grow the filesystem so it fills the extended logical volume:

resize2fs -F /dev/vg_hadoop2/lv_root

Finally, confirm the new capacity:

df -h

That's it
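For reference, the whole LVM sequence above can be sketched as one script. It is dry-run by default (DRY_RUN=1 only prints the commands); vg_hadoop2, lv_root, and /dev/sdb1 are the names from this post, so substitute your own from the lvdisplay/fdisk output. It also uses `-l +100%FREE` instead of a hand-computed size like +8.48G, which tells lvextend to claim all of the newly added free space.

```shell
#!/bin/sh
# Dry-run sketch of the resize sequence; names assume this post's setup.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run vgextend vg_hadoop2 /dev/sdb1                  # add the new PV to the VG
run lvextend -l +100%FREE /dev/vg_hadoop2/lv_root  # grow LV by all free space
run resize2fs /dev/vg_hadoop2/lv_root              # grow ext3/ext4 to match the LV
run df -h                                          # confirm the new capacity
```

Set DRY_RUN=0 (as root, with Hadoop services stopped) to actually run the commands.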
