Getting Started with Big Data & Hadoop

Harsh | Last Updated: 27 Apr, 2022
5 min read

This article was published as a part of the Data Science Blogathon.

Introduction to Big Data & Hadoop

The amount of data in our world is growing exponentially. It is estimated that at least 2.5 quintillion bytes of data are generated every day. No wonder Big Data is a fast-growing field with great opportunities all around the globe.

In this article, we’ll focus on the basics of Big Data, why it is used, and a Big Data framework known as Hadoop. This is a beginner-friendly article that will give you a quick introduction to the world of Big Data.

Let’s talk about Big Data first…

Scenario

In an alternate universe, you (Zark Muckerburg) have a brilliant idea to create a social media platform called “FaceCook”. Users can share their photos, videos, articles, etc. on this platform with their friends. The first problem you’ll encounter is data storage. The data will include structured as well as unstructured content: text, images, videos, etc. So we need a robust system to store it safely.

Big Data

A precise way to define Big Data is through the 5 V’s.


1) Volume

The data we need to store is very large in volume. It may run into multiple petabytes and beyond.

2) Velocity

It refers to the high-speed accumulation of data. Imagine millions of users posting on FaceCook every minute.

3) Variety

Big Data may consist of structured (tables), unstructured (pictures, videos, etc.) & semi-structured (log files) data. There’s a vast variety of data types in Big Data.

4) Veracity

It refers to the uncertainty & inconsistencies in data. The reasons for such inconsistencies may be a multitude of data sources, different data types, etc. This may cause data to be very messy.

5) Value

Data in itself is of no value unless it is extracted, cleaned & analyzed thoroughly to gain insights from it. This is how you generate value out of Big Data.

Now let’s study the different approaches to storing Big Data.

Traditional Single Database

The first naive approach is to set up a single centralized database & use it to store and retrieve the content FaceCook users post. So you set up a database server in your backyard & start using it. At first, this approach may seem viable, but as the user base increases, our database size will increase as well.
This leads to a high workload on the server, putting it at risk of data loss due to overloading.
A single database isn’t very scalable either, so this approach won’t work.

Why not Just Divide the Work?

The issue with a single database is that all the storage and processing load lands on one machine. So we can simply divide the data into parts & store one part per computer. This is called a Distributed File System.
Here, a cluster of file servers is formed, interconnected via a computer network.

Let’s say we want to store 4 GB of data in a distributed file system. We can divide the data into 4 parts of 1 GB each and store one part on each of 4 file servers.
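To make the idea concrete, here is a toy sketch in plain Java (not Hadoop code) that splits a local file into fixed-size parts, much like a distributed file system spreads data across servers. The file name and the 1 GB part size are made up for illustration.

```java
import java.io.*;
import java.nio.file.*;

// Toy illustration only: split a local file into ~1 GB parts, the way a
// distributed file system conceptually divides data across servers.
// The input file name and part size are hypothetical.
public class SplitIntoParts {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get("facecook_posts.dat");   // hypothetical input file
        long partSize = 1L * 1024 * 1024 * 1024;          // 1 GB per part, as in the example above

        try (InputStream in = Files.newInputStream(source)) {
            byte[] buffer = new byte[8 * 1024];
            int part = 0;
            int read;
            long writtenInPart = 0;
            OutputStream out = Files.newOutputStream(Paths.get("part-" + part));
            while ((read = in.read(buffer)) != -1) {
                // Start a new part file once the current one would exceed partSize.
                if (writtenInPart + read > partSize) {
                    out.close();
                    part++;
                    writtenInPart = 0;
                    out = Files.newOutputStream(Paths.get("part-" + part));
                }
                out.write(buffer, 0, read);
                writtenInPart += read;
            }
            out.close();
        }
    }
}
```

Each resulting "part-N" file would then live on a different server in the cluster.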

But wait...
What if, due to some technical failure, server 1 goes down? Some parts of the data would become unavailable. Imagine losing half of your posts because of such a failure; users would probably sue the company.
So we need a utility to store & manage Big Data that also provides fault tolerance and good performance.
Here Hadoop comes into the picture.

Hadoop

Hadoop is an open-source framework used to efficiently store & process large datasets ranging in size from gigabytes to petabytes. Instead of relying on a single centralized database server, Hadoop clusters multiple commodity computers together for fault tolerance & parallel processing.

Hadoop consists of four main modules:

  • Hadoop Distributed File System (HDFS)
  • Yet Another Resource Negotiator (YARN)
  • MapReduce
  • Hadoop Common

We will focus on HDFS in this article.

Hadoop Distributed File System (HDFS)

The Apache Hadoop Distributed File System (HDFS) follows a master-slave architecture, where a cluster comprises a single NameNode (master) & all other nodes are DataNodes (slaves).

The main activities of the NameNode are:
1. Maintains & manages the DataNodes.
2. Records metadata of the actual data, like filename, path, block locations, etc.
3. Receives heartbeat reports from all DataNodes. If the NameNode stops receiving heartbeats from a DataNode, that DataNode is considered dead.

The main activities of the DataNodes are:
1. Store the actual data blocks.
2. Serve read & write requests from clients.
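To see this split of responsibilities from a client's point of view, here is a minimal sketch using the standard HDFS Java client (`org.apache.hadoop.fs.FileSystem`). It only asks the NameNode for metadata; no data blocks are read from the DataNodes. The NameNode address and directory path are placeholders, and a Hadoop 2.x client library is assumed to be on the classpath.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: connect to the NameNode and list file metadata for a directory.
// The NameNode host and directory are hypothetical.
public class ListMetadata {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The client talks to the NameNode for metadata; the actual data lives on DataNodes.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);

        for (FileStatus status : fs.listStatus(new Path("/user/zark"))) {   // hypothetical path
            System.out.println(status.getPath()
                    + "  size=" + status.getLen()
                    + "  replication=" + status.getReplication());
        }
        fs.close();
    }
}
```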

Blocks

In Hadoop 2.x, data is first divided into blocks of 128 MB by default. Every block except the last one is exactly 128 MB; the last block holds whatever data remains.
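A small sketch of the arithmetic, assuming the Hadoop 2.x default of 128 MB: a hypothetical 300 MB file would be split into two full 128 MB blocks plus a 44 MB last block.

```java
// Sketch: how a file is cut into 128 MB blocks. The file size is hypothetical.
public class BlockMath {
    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;   // 128 MB, the Hadoop 2.x default block size
        long fileSize  = 300L * 1024 * 1024;   // a hypothetical 300 MB file

        long fullBlocks = fileSize / blockSize;   // 2 full 128 MB blocks
        long lastBlock  = fileSize % blockSize;   // 44 MB left over for the last block
        System.out.println(fullBlocks + " full blocks + a last block of "
                + (lastBlock / (1024 * 1024)) + " MB");
    }
}
```

If needed, the block size used for newly written files can be changed through the `dfs.blocksize` property in the Hadoop configuration.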

How is Data Stored in HDFS?

The data storage process in HDFS is as follows:

  1. The client asks the NameNode for the addresses of available DataNodes to store the blocks.
  2. The NameNode, which holds metadata like filename, path & block locations, returns the addresses of available DataNodes to the client.
  3. The client writes the data directly to those DataNodes (a client-side sketch follows below).
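From the client's side, all of this is hidden behind a single `create()` call on the HDFS Java API; the NameNode lookup and the direct writes to the DataNodes happen under the hood. A minimal sketch, assuming a reachable NameNode at a placeholder address:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the write path from the client's point of view.
// The NameNode host and file path are hypothetical.
public class WriteToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);

        Path target = new Path("/user/zark/new-post.txt");   // hypothetical file
        try (FSDataOutputStream out = fs.create(target)) {
            // Under the hood: the NameNode returns DataNode addresses,
            // and the data is streamed directly to those DataNodes in blocks.
            out.write("Hello FaceCook!".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```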

Block Replication & Rack Awareness

We saw earlier that even a single node failure in a cluster can lead to partial or total data loss. To prevent this, Hadoop has a feature called Block Replication.

In HDFS, data blocks are replicated & stored on other DataNodes for better fault tolerance. By default, the replication factor (how many copies of a block are kept in the cluster) is 3, but it can be tweaked as per user requirements.
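For example, the replication factor can be adjusted per client or per file. A minimal sketch using the HDFS Java API; the host, path, and the factor of 2 are placeholders:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: two common ways to tweak the replication factor.
// Host, path, and the chosen factor of 2 are hypothetical.
public class TweakReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.replication", "2");   // applies to files this client creates from now on

        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);

        // Change the replication factor of an already existing file.
        fs.setReplication(new Path("/user/zark/posts.dat"), (short) 2);
        fs.close();
    }
}
```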

A rack is a collection of 30-40 DataNodes in a cluster. Rack Awareness is the concept by which the NameNode chooses the closest possible DataNode for serving read/write requests.
Rack Awareness is used to reduce network traffic during read/write operations, achieve fault tolerance, reduce latency, etc.

The NameNode maintains Block Replication with the following Rack Awareness policies:

  • No more than one replica is stored in a single DataNode.
  • No more than two replicas are stored in the same rack.
  • Number of racks for block replication < Number of replicas

So, after the client gets the DataNode addresses from the NameNode, it writes the data directly to a DataNode.
For the usual replication factor of 3, the client first writes a block to one DataNode. That block is then replicated to another DataNode within the same rack. Finally, it is replicated to a DataNode in a different rack.
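One way to see where the replicas of each block actually landed is to ask HDFS for a file's block locations. A small sketch with a placeholder host and path; on a rack-aware cluster, the hosts reported for a block will span more than one rack:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: list the DataNodes holding each block's replicas.
// The NameNode host and file path are hypothetical.
public class ShowReplicaPlacement {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode-host:9000"), new Configuration());

        FileStatus status = fs.getFileStatus(new Path("/user/zark/posts.dat"));
        for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.println("Block at offset " + block.getOffset()
                    + " has replicas on: " + String.join(", ", block.getHosts()));
        }
        fs.close();
    }
}
```

On a live cluster, the `hdfs fsck <path> -files -blocks -locations` command reports similar placement information from the command line.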


Conclusion

So, we have covered everything from the basics of Big Data to the fundamentals of the Hadoop framework. This is not the end, as Hadoop & Big Data in general include many more advanced topics. Nevertheless, this is a step in the right direction.

Key Takeaways

    • Big Data means data with high Volume, Velocity, Variety, Veracity & Value.
    • A Distributed File System provides better performance than a traditional single-server setup.
    • HDFS follows a master-slave architecture.
    • HDFS provides fault tolerance through Block Replication.
    • HDFS provides better read/write speeds due to Rack Awareness.

 

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

I'm an Engineering student, Mobile Dev Head at GDSC RAIT and an avid Data Science Enthusiast. I love to document everything I learn and thus I'm a technical writer as well. Technical writing helps me question my in-depth knowledge about any topic.
