Introduction to Big Data and Hadoop

What is Big data?

Data represents useful information, and over the past decade the amount of data being generated has grown enormously. Traditionally we store data in relational databases, which are by far the most efficient way to store and retrieve small amounts of data (generally megabytes or gigabytes).

But what if the data is huge? (By huge I mean really huge: terabytes, petabytes, exabytes, or even zettabytes of data.) Such large amounts of data are considered big data. There is no single definition of Big Data; it can be defined in many ways. One simple definition is:

“Big data is data that cannot be processed on a single machine.”

So we need a way to store and process such data efficiently. This is where “Hadoop” comes in.

What is Hadoop?

Hadoop is an open-source software framework for the distributed storage and processing of big data, using the MapReduce programming model.
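To give a feel for the MapReduce model mentioned above, here is a minimal sketch of its three phases (map, shuffle, reduce) as a word count in plain Python. This runs on a single machine purely for illustration; in Hadoop the same phases run in parallel across a cluster, and the input lines here are a made-up example:

```python
from collections import defaultdict

# Hypothetical input: a few lines of text, standing in for file
# blocks spread across a cluster.
lines = ["big data needs hadoop", "hadoop stores big data"]

# Map phase: process each line independently, emitting (word, 1) pairs.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle phase: group all the emitted values by key (the word).
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: combine the values for each key, here by summing counts.
result = {word: sum(counts) for word, counts in grouped.items()}
print(result)  # {'big': 2, 'data': 2, 'needs': 1, 'hadoop': 2, 'stores': 1}
```

The key idea is that the map and reduce steps only ever look at one line or one key at a time, which is what lets Hadoop spread the work over many machines.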

How Hadoop evolved:

Hadoop was created by Doug Cutting and Mike Cafarella in 2005. Back in 2003 and 2004, Google published two papers describing how it stored and processed data internally: one on its distributed file system (the Google File System) and one on MapReduce. At the time, Cutting and Cafarella were working on an open-source search engine called Nutch. They read the papers and decided to start an open-source project, Hadoop, which could store and process data efficiently using the MapReduce model invented at Google. Thus Hadoop came into existence; today it has become the operating system for Big Data.

Secret of the name Hadoop:

Doug Cutting, one of the co-founders of Hadoop and the current Chief Architect of Cloudera, named it after a toy elephant owned by his son.

Challenges in Big Data:

The 3 V’s Formula:

We often hear data scientists and data engineers talk about the 3 V's formula. The 3 V's stand for Volume, Variety and Velocity.

Volume:

Data is being generated in very large amounts, in sizes of terabytes or even petabytes, and as a result it is extremely difficult to handle efficiently. So we have to find a way to store such huge volumes of data; Hadoop uses a distributed file system to do this.
A point to remember is that even a single piece of information is essential, as it may be useful in the future, so you cannot simply throw data away.

Variety:

Data is being generated not only in huge volumes but also in different formats. For example, suppose you want to store the call data of a particular user. The calls may be converted into text and stored, or the company might want to keep the raw audio so that, in the future, it can develop software that analyses the audio files and converts them into text. Hence it is very important to handle data coming in different formats.

Velocity:

So far we have seen how important it is to store huge volumes of data in different formats. Another important consideration is handling data that arrives at great speed. Data is generated very fast, and there must be a mechanism to store and process it quickly.

The 3 V’s formula has to be kept in mind when dealing with Big Data.

In the next post, let us understand how Hadoop works: HDFS, which is a way to store data, and MapReduce, which is a way to process data efficiently.

