HADOOP: “How to integrate LVM with Hadoop and provide Elasticity to DataNode Storage?”

Shobhit Sharma
14 min read · Nov 10, 2020

In a Hadoop distributed storage cluster, the DataNodes are responsible for serving read and write requests from the file system’s clients. In simple terms, a DataNode acts as a storage node. However, a DataNode’s storage is static: the filesystem backing it cannot grow on its own once its storage limit is reached. This is the challenge that LVM solves.

LVM, or Logical Volume Manager, is well-known Linux system software. “LVM is a tool for logical volume management which includes allocating disks, striping, mirroring, and resizing logical volumes. With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can be placed on other block devices which might span two or more disks.” ~ Red Hat

Let’s implement LVM on a Hadoop DataNode to provide elasticity

Attaching External Disk(s) to the DataNode (in my case, attaching EBS Volumes to an EC2 Instance)

For this operation, I am using an EC2 instance on AWS and attaching EBS volumes to it. To create and attach the EBS volumes, we need to follow the steps below. (In my case, I am using AWS CLI v2 instead of the AWS web console.)

Step 1) First, according to the requirement, we need to create two EBS volumes using the following command.

For Disk#1

aws ec2 create-volume --size 1 --availability-zone us-east-2a

In this command, --size 1 requests an EBS volume of 1 GiB in availability zone us-east-2a (this must be the same availability zone in which your EC2 instance is running; in my case it is us-east-2a).

The output will be

{
"AvailabilityZone": "us-east-2a",
"CreateTime": "2020-11-08T08:04:09+00:00",
"Encrypted": false,
"Size": 1,
"SnapshotId": "",
"State": "creating",
"VolumeId": "vol-01893a518b349ad7f",
"Iops": 100,
"Tags": [],
"VolumeType": "gp2"
}

Same for Disk#2

aws ec2 create-volume --size 1 --availability-zone us-east-2a

The output will be

{
"AvailabilityZone": "us-east-2a",
"CreateTime": "2020-11-08T08:05:39+00:00",
"Encrypted": false,
"Size": 1,
"SnapshotId": "",
"State": "creating",
"VolumeId": "vol-076497ac979630249",
"Iops": 100,
"Tags": [],
"VolumeType": "gp2"
}
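
(Optional) If you are not sure which availability zone your instance is running in, it can be looked up with the AWS CLI. A small sketch, assuming the instance ID that is used later in this article:

aws ec2 describe-instances --instance-ids i-0b9406ce63f3342d8 --query "Reservations[0].Instances[0].Placement.AvailabilityZone" --output text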

After that, there is an optional but recommended step: creating a Name tag for each volume (it gives the EBS volume a recognizable name, which is good practice).

To create the tag, we need the VolumeId of the volume we want to name. To do this, we run the following command

For Disk#1 with volume ID : vol-01893a518b349ad7f

aws ec2 create-tags --resources vol-01893a518b349ad7f --tags Key=Name,Value="shobhitFirstNewDisk1GB"

For Disk#2 with volume ID : vol-076497ac979630249

aws ec2 create-tags --resources vol-076497ac979630249 --tags Key=Name,Value="shobhitSecondNewDisk1GB"

Done! If the command prints no output, the name tag was created successfully.
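
(Optional) We can also confirm that the tags were applied; a quick sketch using describe-volumes:

aws ec2 describe-volumes --volume-ids vol-01893a518b349ad7f vol-076497ac979630249 --query "Volumes[].Tags" --output json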

Attaching EBS Volume to the EC2 Instance

To attach an EBS volume to the EC2 instance, we need to run the following command

For Disk#1

aws ec2 attach-volume --volume-id vol-01893a518b349ad7f --instance-id i-0b9406ce63f3342d8 --device /dev/sdg

In this command, volume-id is the ID of the first EBS volume created in the previous steps, and instance-id is the ID of the EC2 instance where the Hadoop DataNode is running. “/dev/sdg” is the device name under which the volume is exposed to the instance. (It must be unique per volume: this volume uses /dev/sdg, and the second EBS volume will use /dev/sdh. Inside the instance, the kernel may present these devices under different names, e.g. /dev/xvdg and /dev/xvdh, as we will see later.)

The output will be

{
"AttachTime": "2020-11-08T08:22:46.971000+00:00",
"Device": "/dev/sdg",
"InstanceId": "i-0b9406ce63f3342d8",
"State": "attaching",
"VolumeId": "vol-01893a518b349ad7f"
}

For Disk#2 we need to follow the same step, but with a different device name, /dev/sdh.

aws ec2 attach-volume --volume-id vol-076497ac979630249 --instance-id i-0b9406ce63f3342d8 --device /dev/sdh

The output will be

{
"AttachTime": "2020-11-08T08:23:09.549000+00:00",
"Device": "/dev/sdh",
"InstanceId": "i-0b9406ce63f3342d8",
"State": "attaching",
"VolumeId": "vol-076497ac979630249"
}
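
(Optional) Before moving on, we can confirm that both volumes have finished attaching; a small sketch that prints the attachment state of each volume:

aws ec2 describe-volumes --volume-ids vol-01893a518b349ad7f vol-076497ac979630249 --query "Volumes[].Attachments[].State" --output text

Both volumes should report “attached”.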

That’s all

Implementation of LVM

To provide elasticity to the DataNode, we need to understand the following commands before using them

  1. pvcreate — This command initializes an attached or external disk as an LVM physical volume.
  2. vgcreate — This command creates a volume group from the specified physical volumes.
  3. vgdisplay — This command displays the available volume groups.
  4. lvcreate — This command creates a logical volume from a volume group.
  5. lvdisplay — This command displays the available logical volumes and their volume groups.
  6. lvextend — This command extends a logical volume using free space available in its volume group.
  7. resize2fs — This command grows an ext2/3/4 filesystem to fill its resized volume, without reformatting it.

IMPORTANT: Make sure that your Linux OS already has the “lvm2” package installed. If not, we need to run the following command to install “lvm2”.
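
A quick way to check whether it is already present (on RPM-based distributions such as RHEL) is

rpm -q lvm2

If this reports that the package is not installed, proceed with the installation.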

In my case, I am using RHEL 8 on AWS

yum install lvm2

The output will be

Last metadata expiration check: 0:00:12 ago on Sun 08 Nov 2020 08:45:12 AM UTC.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
lvm2 x86_64 8:2.03.09-5.el8 rhel-8-baseos-rhui-rpms 1.6 M
Upgrading:
device-mapper x86_64 8:1.02.171-5.el8 rhel-8-baseos-rhui-rpms 373 k
device-mapper-libs x86_64 8:1.02.171-5.el8 rhel-8-baseos-rhui-rpms 406 k
Installing dependencies:
device-mapper-event x86_64 8:1.02.171-5.el8 rhel-8-baseos-rhui-rpms 268 k
device-mapper-event-libs x86_64 8:1.02.171-5.el8 rhel-8-baseos-rhui-rpms 267 k
device-mapper-persistent-data
x86_64 0.8.5-4.el8 rhel-8-baseos-rhui-rpms 468 k
libaio x86_64 0.3.112-1.el8 rhel-8-baseos-rhui-rpms 33 k
lvm2-libs x86_64 8:2.03.09-5.el8 rhel-8-baseos-rhui-rpms 1.1 M
Transaction Summary
================================================================================
Install 6 Packages
Upgrade 2 Packages
Total download size: 4.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/8): libaio-0.3.112-1.el8.x86_64.rpm 257 kB/s | 33 kB 00:00
(2/8): device-mapper-event-1.02.171-5.el8.x86_6 1.9 MB/s | 268 kB 00:00
(3/8): lvm2-2.03.09-5.el8.x86_64.rpm 9.7 MB/s | 1.6 MB 00:00
(4/8): device-mapper-persistent-data-0.8.5-4.el 4.6 MB/s | 468 kB 00:00
(5/8): device-mapper-event-libs-1.02.171-5.el8. 2.7 MB/s | 267 kB 00:00
(6/8): lvm2-libs-2.03.09-5.el8.x86_64.rpm 11 MB/s | 1.1 MB 00:00
(7/8): device-mapper-libs-1.02.171-5.el8.x86_64 4.1 MB/s | 406 kB 00:00
(8/8): device-mapper-1.02.171-5.el8.x86_64.rpm 3.8 MB/s | 373 kB 00:00
--------------------------------------------------------------------------------
Total 12 MB/s | 4.5 MB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Upgrading : device-mapper-8:1.02.171-5.el8.x86_64 1/10
Upgrading : device-mapper-libs-8:1.02.171-5.el8.x86_64 2/10
Installing : device-mapper-event-libs-8:1.02.171-5.el8.x86_64 3/10
Installing : libaio-0.3.112-1.el8.x86_64 4/10
Installing : device-mapper-persistent-data-0.8.5-4.el8.x86_64 5/10
Installing : device-mapper-event-8:1.02.171-5.el8.x86_64 6/10
Running scriptlet: device-mapper-event-8:1.02.171-5.el8.x86_64 6/10
Installing : lvm2-libs-8:2.03.09-5.el8.x86_64 7/10
Installing : lvm2-8:2.03.09-5.el8.x86_64 8/10
Running scriptlet: lvm2-8:2.03.09-5.el8.x86_64 8/10
Cleanup : device-mapper-libs-8:1.02.169-3.el8.x86_64 9/10
Cleanup : device-mapper-8:1.02.169-3.el8.x86_64 10/10
Running scriptlet: device-mapper-8:1.02.169-3.el8.x86_64 10/10
/sbin/ldconfig: /lib64/libhdfs.so.0 is not a symbolic link
/sbin/ldconfig: /lib64/libhadoop.so.1 is not a symbolic link
/sbin/ldconfig: /lib64/libhdfs.so.0 is not a symbolic link/sbin/ldconfig: /lib64/libhadoop.so.1 is not a symbolic link
Verifying : libaio-0.3.112-1.el8.x86_64 1/10
Verifying : lvm2-8:2.03.09-5.el8.x86_64 2/10
Verifying : device-mapper-event-8:1.02.171-5.el8.x86_64 3/10
Verifying : device-mapper-persistent-data-0.8.5-4.el8.x86_64 4/10
Verifying : device-mapper-event-libs-8:1.02.171-5.el8.x86_64 5/10
Verifying : lvm2-libs-8:2.03.09-5.el8.x86_64 6/10
Verifying : device-mapper-libs-8:1.02.171-5.el8.x86_64 7/10
Verifying : device-mapper-libs-8:1.02.169-3.el8.x86_64 8/10
Verifying : device-mapper-8:1.02.171-5.el8.x86_64 9/10
Verifying : device-mapper-8:1.02.169-3.el8.x86_64 10/10
Upgraded:
device-mapper-8:1.02.171-5.el8.x86_64
device-mapper-libs-8:1.02.171-5.el8.x86_64
Installed:
device-mapper-event-8:1.02.171-5.el8.x86_64
device-mapper-event-libs-8:1.02.171-5.el8.x86_64
device-mapper-persistent-data-0.8.5-4.el8.x86_64
libaio-0.3.112-1.el8.x86_64
lvm2-8:2.03.09-5.el8.x86_64
lvm2-libs-8:2.03.09-5.el8.x86_64
Complete!

Great! It is now installed…

To perform this operation, we need to follow the steps below

Converting Disk (EBS Volume) into Physical Volume

Step 1) First of all, we need to check the device names of the attached disks. To list the disks in Linux, we need to run the following command

fdisk -l

The output will be

Disk /dev/xvda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd00a186f
Device Boot Start End Sectors Size Id Type
/dev/xvda1 2048 4095 2048 1M 83 Linux
/dev/xvda2 * 4096 20971486 20967391 10G 83 Linux
Disk /dev/xvdg: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/xvdh: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

In this output, we can see the two disks that were created and attached in the previous steps, now visible as /dev/xvdg and /dev/xvdh.
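
(Optional) As an alternative to fdisk, lsblk gives a more compact view of the attached block devices; the two new 1 GiB disks should show up as xvdg and xvdh with no mount point:

lsblk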

Step 2) Then, we need to initialize them as physical volumes using the “pvcreate” command, with the following syntax

pvcreate /dev/xvdg /dev/xvdh

In this command, /dev/xvdg and /dev/xvdh are disk 1 and disk 2 (1 GiB each), respectively.

The output will be

Physical volume "/dev/xvdg" successfully created.

Physical volume "/dev/xvdh" successfully created.

Step 3) Now, run the following command to confirm whether the physical volumes were created

pvdisplay

The output will be

"/dev/xvdg" is a new physical volume of "1.00 GiB"
--- NEW Physical volume ---
PV Name /dev/xvdg
VG Name
PV Size 1.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID iL6S03-ORy2-3m09-gCEo-nqcI-dXQ5-L2IoKq
"/dev/xvdh" is a new physical volume of "1.00 GiB"
--- NEW Physical volume ---
PV Name /dev/xvdh
VG Name
PV Size 1.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0

PV UUID wruZ9o-NSnn-1jqQ-b4N6-0H4M-T8OY-OwTy0d

Now the two physical volumes have been created from the attached EBS volumes, each with its own PV UUID.

Creating Volume Group of Physical Volumes

To create a volume group from the physical volumes, we need to run the following command

vgcreate hadoopElasticDisk /dev/xvdg /dev/xvdh

In this command, “hadoopElasticDisk” is the desired volume group name, and /dev/xvdg and /dev/xvdh are the physical volumes.

The output will be

Volume group "hadoopElasticDisk" successfully created

(Optional) We can verify that it was created by running the following command

vgdisplay hadoopElasticDisk

The output will be

--- Volume group ---
VG Name hadoopElasticDisk
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 1.99 GiB
PE Size 4.00 MiB
Total PE 510
Alloc PE / Size 0 / 0
Free PE / Size 510 / 1.99 GiB

VG UUID Rv11tg-EpyE-Ctbp-M5Ax-8z6y-1bfq-KLrHeD

That’s all! The volume group “hadoopElasticDisk” now has a size of 1.99 GiB (roughly 2 GB), and the active PV count is 2, which means it was built from the two physical volumes created in the previous steps.

Creating Logical Volume from Volume Group

Now, the next step is to create a logical volume from the volume group created in the previous steps. To do this, we need to run the following command

lvcreate --size 1.5G --name lvDisk1 hadoopElasticDisk

In this command, I’ve requested a size of 1.5 GiB, the desired name for the logical volume is “lvDisk1”, and “hadoopElasticDisk” is the volume group from which the logical volume will be created.

The output will be

Logical volume "lvDisk1" created.

We can verify it by running the following command

lvdisplay

This command will show all the information about created and available Logical Volumes

The output will be

--- Logical volume ---
LV Path /dev/hadoopElasticDisk/lvDisk1
LV Name lvDisk1
VG Name hadoopElasticDisk
LV UUID ukBMIO-rSG8-Whr8-08FV-HtOh-XY3U-upTJQ6
LV Write Access read/write
LV Creation host, time ip-172-31-3-145.us-east-2.compute.internal, 2020-11-08 09:10:45 +0000
LV Status available
# open 0
LV Size 1.50 GiB
Current LE 384
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192

Block device 253:0

That’s all! It shows the logical volume named “lvDisk1” with a size of 1.50 GiB. Note the LV Path, as it is needed in the next step.

Formatting the Logical Volume

To format the logical volume, we need to run the following command

mkfs.ext4 /dev/hadoopElasticDisk/lvDisk1

In this command, the device path “/dev/hadoopElasticDisk/lvDisk1” means: from hadoopElasticDisk (the volume group, acting like a disk), select lvDisk1 (the logical volume, acting like a partition) and format it with an ext4 filesystem.

The output will be

mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 393216 4k blocks and 98304 inodes
Filesystem UUID: 34449fee-36cc-4764-a86b-b270fa7f788b
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

When you see this output, the logical volume has been formatted successfully.

Mounting the Logical Volume to the Hadoop DataNode Directory

This is a very important section. Before doing anything else, keep in mind that every volume or disk is exposed through a path or directory, much like a folder, and the Hadoop DataNode also serves its file system from a folder. Consequently, if we mount the logical volume onto the Hadoop DataNode directory, the DataNode will use the logical volume as the storage backing its read/write operations.

To mount the formatted logical volume to the Hadoop DataNode Folder, we need to run the following command

mount /dev/hadoopElasticDisk/lvDisk1 /data/

Here, /data is the directory the DataNode uses for its read/write operations in the Hadoop distributed storage cluster.
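
This assumes that the DataNode’s data directory in hdfs-site.xml points to /data. A minimal, hypothetical snippet (the property is named dfs.datanode.data.dir on Hadoop 2.x and later, and dfs.data.dir on Hadoop 1.x) would look like:

<property>
    <name>dfs.datanode.data.dir</name>
    <value>/data</value>
</property>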

To verify whether it is mounted successfully or not, we need to run this command

df -h

The output will be

Filesystem                             Size  Used Avail Use% Mounted on
devtmpfs 386M 0 386M 0% /dev
tmpfs 408M 0 408M 0% /dev/shm
tmpfs 408M 11M 398M 3% /run
tmpfs 408M 0 408M 0% /sys/fs/cgroup
/dev/xvda2 10G 2.0G 8.1G 20% /
tmpfs 82M 0 82M 0% /run/user/1000
/dev/mapper/hadoopElasticDisk-lvDisk1 1.5G 4.5M 1.4G 1% /data

Great! The device “/dev/mapper/hadoopElasticDisk-lvDisk1” is successfully mounted to the Hadoop DataNode folder “/data”.

To verify whether the Hadoop cluster is now offering the 1.5 GB of storage, we run the Hadoop report command as follows

hadoop dfsadmin -report

This command will show the Hadoop cluster report. In my case, I have only a single Hadoop DataNode active for this experiment.

The output will be

Configured Capacity: 1551745024 (1.45 GB)
Present Capacity: 1449730048 (1.35 GB)
DFS Remaining: 1449721856 (1.35 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 3.133.131.103:50010
Decommission Status : Normal
Configured Capacity: 1551745024 (1.45 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 102014976 (97.29 MB)
DFS Remaining: 1449721856(1.35 GB)
DFS Used%: 0%
DFS Remaining%: 93.43%
Last contact: Sun Nov 08 09:17:30 UTC 2020

In this output, it can be seen that one active DataNode is connected to the Hadoop distributed storage cluster, with about 1.45 GB of configured capacity (from the 1.5 GB logical volume) and roughly 1.35 GB remaining for DFS.

We can also check the Hadoop Web UI using the NameNode’s public IP

http://<namenode_public_ip>:50070

The Web UI should show the same DataNode capacity as the report above.

Providing Elasticity to Hadoop DataNode using LVM “on the fly”

Now suppose the DataNode is about to run out of space; in this experiment, that means the 1.5 GB logical volume is close to full. In that situation, one LVM command plus one filesystem command can increase the size on the fly. Here, only about 500 MB of the 2 GB volume group is still free, because the other 1.5 GB is already allocated to the logical volume used as the Hadoop DataNode folder. We can extend that logical volume up to the remaining volume group capacity without stopping Hadoop, without formatting any partition, and without stopping any service; on the fly, the size grows from 1.5 GB to 2 GB by running the following commands.
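
Before extending, it is worth confirming how much free space the volume group still has:

vgdisplay hadoopElasticDisk

The “Free PE / Size” field should now report roughly 500 MiB, which is exactly the space we are about to add to the logical volume.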

Command#1

To do this, we need to first run the following command

lvextend --size +500M /dev/hadoopElasticDisk/lvDisk1

This extends the “lvDisk1” logical volume by adding 500 MiB of the unallocated space in the “hadoopElasticDisk” volume group, growing it from 1.5 GB to roughly 2 GB.

The output will be

Size of logical volume hadoopElasticDisk/lvDisk1 changed from 1.50 GiB (384 extents) to <1.99 GiB (509 extents).
Logical volume hadoopElasticDisk/lvDisk1 successfully resized.

Great! According to this output, the logical volume has grown from 1.50 GiB to 1.99 GiB (about 2 GB) on the fly.

To verify this, we need to run the following command

df -h

The result will be

Filesystem                             Size  Used Avail Use% Mounted on
devtmpfs 386M 0 386M 0% /dev
tmpfs 408M 0 408M 0% /dev/shm
tmpfs 408M 11M 398M 3% /run
tmpfs 408M 0 408M 0% /sys/fs/cgroup
/dev/xvda2 10G 2.0G 8.1G 20% /
tmpfs 82M 0 82M 0% /run/user/1000
/dev/mapper/hadoopElasticDisk-lvDisk1 1.5G 4.5M 1.4G 1% /data

Oops! According to df, the size has not changed; it still shows 1.5 GB after the extend. Here is another important point: extending the logical volume grows the underlying block device, but the filesystem on it still only covers the original 1.5 GB. One option would be to reformat the partition to get the full 2 GB, but that would delete all the data, which is not acceptable. Instead, we need to run one final command that grows the filesystem from 1.5 GB to 2 GB in place, updating its metadata without losing any data and without reformatting the partition.

Command#2

To do this operation, we need to run the following command

resize2fs /dev/hadoopElasticDisk/lvDisk1

This command grows the ext4 filesystem online so that it occupies the newly added space in the logical volume.

The output will be

resize2fs 1.45.4 (23-Sep-2019)
Filesystem at /dev/hadoopElasticDisk/lvDisk1 is mounted on /data; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/hadoopElasticDisk/lvDisk1 is now 521216 (4k) blocks long.

Great! Now we can again verify it by using the command as follows

df -h

The output will be

Filesystem                             Size  Used Avail Use% Mounted on
devtmpfs 386M 0 386M 0% /dev
tmpfs 408M 0 408M 0% /dev/shm
tmpfs 408M 11M 398M 3% /run
tmpfs 408M 0 408M 0% /sys/fs/cgroup
/dev/xvda2 10G 2.0G 8.1G 20% /
tmpfs 82M 0 82M 0% /run/user/1000
/dev/mapper/hadoopElasticDisk-lvDisk1 2.0G 4.5M 1.9G 1% /data

Finally, the mount has grown from 1.5 GB to 2 GB on the fly, without stopping Hadoop or any other service and without reformatting the partition.
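
As a side note, the extend and the filesystem resize can also be combined into a single step: lvextend accepts a -r (--resizefs) flag that resizes the filesystem right after growing the logical volume. Shown only as an illustration, since in this walkthrough the space has already been added:

lvextend -r --size +500M /dev/hadoopElasticDisk/lvDisk1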

Now, we can again check the Hadoop Admin Report by running the following command

hadoop dfsadmin -report

The output will be

Configured Capacity: 2067611648 (1.93 GB)
Present Capacity: 1944625152 (1.81 GB)
DFS Remaining: 1944616960 (1.81 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 3.133.131.103:50010
Decommission Status : Normal
Configured Capacity: 2067611648 (1.93 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 122986496 (117.29 MB)
DFS Remaining: 1944616960(1.81 GB)
DFS Used%: 0%
DFS Remaining%: 94.05%
Last contact: Sun Nov 08 09:22:45 UTC 2020

Great! The Hadoop distributed storage cluster now automatically reflects the new capacity, which has grown from 1.5 GB to 2 GB on the fly without stopping Hadoop or any other service.

We can also check the Hadoop Web UI using the NameNode’s public IP

http://<namenode_public_ip>:50070

The Web UI should now show the increased DataNode capacity.

That’s all!

The conclusion is that the challenge of Hadoop DataNode elasticity can be solved using LVM in Linux.

***

This article is written, edited, and published by Shobhit Sharma
