
Introduction to Big Data and Hadoop

What is Big data?

Data represents useful information, and over the past decade the amount of data being generated has grown enormously. Traditionally, we store data in relational databases. This is by far the most efficient way to store and retrieve small amounts of data (generally in megabytes or gigabytes).

But what if the data is huge? (By huge, I mean really huge: terabytes, petabytes, exabytes, or even zettabytes of data.) Such large amounts of data are considered big data. There is no single formal definition of Big Data; it can be defined in many ways, but one practical definition is:

“Big data is the data which cannot be processed on a single machine”

So we need a way to store and process such data efficiently. This can be achieved using Hadoop.

What is Hadoop?

Hadoop is an open-source software framework for distributed storage and distributed processing of big data using the MapReduce programming model.
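To make the MapReduce model concrete, here is a minimal single-machine sketch in Python of the classic word-count job. The function names `map_phase` and `reduce_phase` are illustrative, not part of any Hadoop API; real Hadoop jobs are typically written in Java against the Hadoop framework.

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group the pairs by key and sum the counts per word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data needs big tools", "hadoop processes big data"]
pairs = [pair for line in lines for pair in map_phase(line)]
print(reduce_phase(pairs))  # e.g. "big" appears 3 times across both lines
```

In a real cluster, the map calls run in parallel on the nodes that hold each piece of the input, and the framework handles the shuffle between the two phases.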

How Hadoop evolved:

Hadoop was created by Doug Cutting and Mike Cafarella in 2005. In 2003 and 2004, engineers at Google published two papers: one describing the Google File System, the distributed file system Google used to store its data internally, and another describing MapReduce, their model for processing data across a cluster. At the time, Cutting and Cafarella were working on an open-source search engine called Nutch. After reading the papers, they started an open-source project called Hadoop that could store and process data efficiently using the MapReduce model invented at Google. Thus Hadoop came into existence. Today, Hadoop has become the operating system for Big Data.

Secret of the name Hadoop:

Doug Cutting, one of the co-founders of Hadoop and the current Chief Architect of Cloudera, named it after a toy elephant owned by his son.

Challenges in Big Data:

The 3 V’s Formula:

We often hear data scientists and data engineers talk about the 3 V's formula. The 3 V's stand for Volume, Variety, and Velocity.


Volume:

Data is being generated in large amounts, by which we mean sizes of terabytes or even petabytes, and as a result it is extremely difficult to handle efficiently. So we have to find a way to store such huge volumes of data; Hadoop uses a distributed file system to do this.
A point to remember is that even a single piece of information is essential, as it may be useful in the future, so you cannot throw any data away.
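To illustrate the idea behind a distributed file system, here is a small Python sketch that splits a file into fixed-size blocks and assigns each block to a node. The node names and the round-robin placement are simplifications for illustration, not how HDFS actually chooses nodes; a real HDFS cluster also replicates each block (three copies by default) for fault tolerance.

```python
BLOCK_SIZE = 8  # bytes, tiny for demonstration; the HDFS default block size is 128 MB

def split_into_blocks(data, block_size=BLOCK_SIZE):
    # Cut the byte string into fixed-size chunks; the last chunk may be shorter.
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def assign_blocks(blocks, nodes):
    # Simplified round-robin placement: block i goes to node (i mod number_of_nodes).
    return {i: nodes[i % len(nodes)] for i in range(len(blocks))}

data = b"terabytes of data spread across a cluster"
blocks = split_into_blocks(data)
placement = assign_blocks(blocks, ["node-1", "node-2", "node-3"])
print(len(blocks), placement)
```

The key point is that no single machine holds the whole file: each node stores only some blocks, so storage capacity grows simply by adding nodes.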


Variety:

Data is being generated not only in huge volumes but also in many different formats. For example, suppose you want to store the call data of a particular user. The call may be transcribed into text and stored, or the company might keep the raw audio so that in the future it can develop software that analyses the audio files and converts them into text. Hence it is very important to handle data arriving in different formats.


Velocity:

So far we have seen how important it is to store huge volumes of data in different formats. Another important thing to consider is how to handle data arriving at great speed. Data is generated very fast, and there must be a mechanism to store and process it quickly.

The 3 V’s formula has to be kept in mind when dealing with Big Data.

In the next post, let us understand the working of Hadoop: HDFS, which is a way to store data, and MapReduce, which is a way to process data efficiently.

