According to Forbes, about 2.5 quintillion bytes of data are generated every day, and this number is only projected to keep increasing in the coming years. (Around 90% of the data stored today was produced within the last two years.)

But what is Big Data?

Big data is a blanket term for the non-traditional strategies and technologies needed to collect, organize, and process large datasets, and to gather insights from them.

What makes Big Data different from other large collections of data stored in relational databases is its heterogeneity: the data comes from many different sources and is recorded in many different formats.

While the problem of working with data that exceeds the computing power or storage of a single computer is not new, the pervasiveness, scale, and value of this type of computing have greatly expanded in recent years.

In this article, we will talk about big data technology on a fundamental level and define common concepts you might come across while researching the subject. We will also take a high-level look at some of the processes and technologies currently being used in this space.

Types of Big Data

Three different ways of formatting data are commonly employed:

  1. Structured Data: Data that can be processed, sorted, analyzed, stored, and retrieved in a fixed format. Structured data can be accessed easily by a computer with the help of search algorithms.
  2. Unstructured Data: Unlike structured data, unstructured data does not follow any particular format. It is a mix of different types of data such as text files, images, and videos, and is typically larger in size.
  3. Semi-Structured Data: Data that contains both structured and unstructured information. It may not follow a fixed schema overall, but it can have segments that are properly formatted, or vice versa (JSON and XML files are common examples). A short sketch of all three shapes follows this list.
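
To make the distinction concrete, here is a minimal sketch in Python. The field names and values are invented purely for illustration:

```python
# Illustrative examples of the three data shapes (values are made up).
import json

# Structured: a fixed schema, like a row in a relational table
structured_row = (1001, "Alice", "2024-01-15")  # (id, name, signup_date)

# Semi-structured: self-describing but flexible, e.g. JSON
semi_structured = json.loads('{"id": 1001, "name": "Alice", "tags": ["new", "trial"]}')

# Unstructured: no predefined schema at all, e.g. free text, images, video
unstructured = "Alice wrote: loving the product so far, but setup took a while..."
```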

Characteristics Of Big Data

  1. Volume - The name Big Data itself refers to enormous size. The volume of data plays a crucial role in determining its value, and whether a particular dataset can be considered Big Data at all depends on that volume. Hence, 'Volume' is one characteristic that always needs to be considered when dealing with Big Data.
  2. Variety - The next aspect of Big Data is its variety: the heterogeneous sources and nature of the data, both structured and unstructured. In earlier days, spreadsheets and databases were the only data sources most applications considered. Nowadays, data in the form of emails, photos, videos, monitoring devices, PDFs, audio, and more is also analyzed, and this variety of unstructured data poses real challenges for storing, mining, and analyzing it.
  3. Velocity - 'Velocity' refers to the speed at which data is generated. How fast data is generated and processed to meet demand determines its real potential. Big Data velocity deals with the speed at which data flows in from sources such as business processes, application logs, networks, social media sites, sensors, and mobile devices. The flow of data is massive and continuous.
  4. Variability - This refers to the inconsistency the data can show at times, which hampers the ability to handle and manage it effectively. A small sketch of this problem follows the list.
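
As a concrete illustration of variety and variability, consider a single logical field, a timestamp, arriving from different sources in different shapes. The formats below are assumed for illustration; a real pipeline would have to normalize them before any analysis could happen:

```python
# Normalizing inconsistent timestamp formats to UTC (formats are assumed).
from datetime import datetime, timezone

raw_events = [
    {"ts": "2024-01-15T09:30:00Z"},  # ISO 8601, e.g. from a web API
    {"ts": "15/01/2024 09:30"},      # day/month/year, e.g. from a CSV export
    {"ts": 1705311000},              # Unix epoch seconds, e.g. from a mobile device
]

def normalize_ts(value):
    """Coerce the differing timestamp representations to a UTC datetime."""
    if isinstance(value, int):
        return datetime.fromtimestamp(value, tz=timezone.utc)
    try:
        return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    except ValueError:
        return datetime.strptime(value, "%d/%m/%Y %H:%M").replace(tzinfo=timezone.utc)

for event in raw_events:
    print(normalize_ts(event["ts"]))  # all three print as 2024-01-15 09:30:00+00:00
```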

What Does a Big Data Life Cycle Look Like?

So how is data actually processed when dealing with a big data system?

While approaches to implementation differ, there are commonalities in the strategies and software that we can talk about generally. The steps presented below might not apply in every case, but they are widely used.

The general categories of activity involved in big data processing are listed below (a minimal end-to-end sketch follows the list):

  1. Ingesting data into the system
  2. Persisting the data in storage
  3. Computing and analyzing the data
  4. Visualizing the results
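
In production, each of these stages maps to distributed tooling (for example, Kafka for ingestion, HDFS or S3 for storage, Spark for compute), but the shape of the life cycle can be sketched in a few lines of plain Python. The file names and field names here are hypothetical:

```python
# A toy, single-process walk through the four life-cycle stages.
import csv
import json
from collections import Counter

def ingest(path):
    """Ingest: read raw events from a source (here, a local JSON-lines file)."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def persist(events, path):
    """Persist: write the events to durable storage (here, a CSV file)."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["user", "action"], extrasaction="ignore")
        writer.writeheader()
        writer.writerows(events)

def compute(events):
    """Compute and analyze: count how often each action occurs."""
    return Counter(event["action"] for event in events)

def visualize(stats):
    """Visualize: print a crude text bar chart of the results."""
    for action, n in stats.most_common():
        print(f"{action:12} {'#' * n}")

events = ingest("events.jsonl")   # hypothetical input file
persist(events, "events.csv")
visualize(compute(events))
```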

Common Tools for Big Data

In the case of big data, these steps usually involve Hadoop, MapReduce, and Spark, three offerings from the Apache Software Foundation.

Hadoop is an open-source software framework designed for working with big data. Its tools help distribute the processing load required to handle massive data sets across a few, or a few thousand, separate computing nodes.

MapReduce, as the name implies, performs two functions: compiling and organizing (mapping) data sets, then refining them (reducing) into smaller, organized sets used to respond to tasks or queries. A toy sketch of the idea follows.
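
The classic introductory example is a word count. The sketch below runs the map, shuffle, and reduce phases in a single process purely to show the shape of the model; a real MapReduce job would distribute these phases across a Hadoop cluster:

```python
# A toy, single-process illustration of the MapReduce model.
from collections import defaultdict

documents = ["big data is big", "data about data"]

# Map phase: emit a (word, 1) pair for every word in every document
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the emitted values by key
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts for each word
reduced = {word: sum(counts) for word, counts in grouped.items()}
print(reduced)  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```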

Spark, another open-source project from the Apache Foundation, is a fast, general-purpose distributed framework for large-scale data processing and machine learning.
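
Here is the same word count expressed in Spark's Python API. This sketch assumes pyspark is installed and that input.txt is a hypothetical local text file:

```python
# Word count with PySpark, running on a local in-process Spark cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").master("local[*]").getOrCreate()

lines = spark.sparkContext.textFile("input.txt")  # hypothetical input file
counts = (
    lines.flatMap(lambda line: line.split())   # map: split lines into words
         .map(lambda word: (word, 1))          # map: emit (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)      # reduce: sum counts per word
)
for word, count in counts.take(10):
    print(word, count)

spark.stop()
```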


This article should give you a foundation in what Big Data is and how it's being used today. The field's scope is only going to widen, and the market for big data skills will keep growing!

If you want to skip the learning curve and learn Big Data directly from industry experts, you should sign up for our new microlearning course (at just Rs. 50).

This course is specifically designed for people who have no prior experience with Big Data but want to learn more about it.

A few perks of taking this 7-day course:

  • Mentoring Directly from Industry Experts
  • 10+ Hours of Online Learning
  • Get Certified
  • Learn From the Comfort of Your Home

You can check out the course details here: Big Data Certification Course