MyCustomer.com

The big question: What really is Big Data?

by
27th Jun 2012

As part of an ongoing series on analytics and big data, Michael Wu, principal scientist of analytics at Lithium Technologies, shares his thoughts on the explosion of data due to the social media revolution.

Although I’ve been talking about Big Data for a while, I realised that I have never really defined it. How big is big? What are the precise criteria for a data set to be considered Big Data?
If you ask around, most big data practitioners would probably say that Big Data is any data that is too big to be stored, managed and analysed via conventional database technologies. So the “data” in Big Data can really be anything. It doesn’t have to be social media data, and it is certainly not limited to user-generated content. It can be genomic, financial, environmental, or even astronomical. Although this definition is very simple and easy to understand, I didn’t like it, because its meaning actually changes over time.
According to Moore’s law, the speed and storage capacity of computing devices are increasing at an exponential rate. Many data sets that were once too big can now be stored and analysed easily. So what was once considered Big Data isn’t big anymore. Likewise, Big Data today may not be big in the future as computing power continues to increase.
As you can see, it is difficult to pinpoint precisely how big the data needs to be for it to be considered Big Data; this criterion is a moving target. Rather than trying to define Big Data, we will take a different approach and try to identify some of its common traits. But keep in mind that these traits are not strict definitions, and they do change over time.
The data capturing devices
One of the most obvious characteristics of Big Data is that the devices for capturing those data are either already ubiquitous or becoming ubiquitous. Examples are cell phones, digital cameras, digital video recorders, etc. When any data capturing device becomes ubiquitous, there is a high probability that whatever data those devices are capturing will eventually become Big Data. This is pretty obvious, because more data capturing devices translate directly into a proportional increase in data production rate.
Besides the increase in capturing units, there is also an increase in the variety of data sensors and input devices. The GPS and accelerometer on your smart phone capture very different types of information, even though they are really just a bunch of numbers. There is also an increase in the variety of input devices (i.e. different ways for a device to capture the same type of information). For example, search queries used to be captured strictly via a keyboard; now they can also be captured via any camera equipped with OCR, virtual keyboards on your smart phone or tablet, voice recognition, etc.
The variety of data sensors and input devices not only increases the data production rate, it also produces an explosion of metadata for segmentation. Using the search function as an example, what used to be just queries can now be segmented into queries from computers vs. queries from mobile devices. Those from mobile devices can be further segmented into those input via a virtual keyboard vs. camera vs. voice. Likewise, queries can also be segmented according to their geo-location using GPS data. All of this is valuable information that tells us how users are using the search function, and it certainly contributes to the size of Big Data.
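As a rough illustration of what this kind of segmentation looks like in practice, here is a minimal sketch in Python. The query records, field names, and grouping dimensions are purely hypothetical examples, not taken from any real search system:

from collections import Counter

# Hypothetical search-query records: the same kind of query arrives with
# different metadata depending on the capturing device and input method.
queries = [
    {"text": "pizza near me", "device": "mobile",  "input": "voice",            "geo": "SF"},
    {"text": "pizza near me", "device": "desktop", "input": "keyboard",         "geo": "NY"},
    {"text": "weather",       "device": "mobile",  "input": "virtual_keyboard", "geo": "SF"},
    {"text": "weather",       "device": "mobile",  "input": "camera_ocr",       "geo": "LA"},
]

# Segment the same stream of queries along different metadata dimensions.
by_device = Counter(q["device"] for q in queries)
by_input  = Counter(q["input"] for q in queries if q["device"] == "mobile")
by_geo    = Counter(q["geo"] for q in queries)

print(by_device)  # Counter({'mobile': 3, 'desktop': 1})
print(by_input)   # Counter({'voice': 1, 'virtual_keyboard': 1, 'camera_ocr': 1})
print(by_geo)     # Counter({'SF': 2, 'NY': 1, 'LA': 1})

Each extra metadata field multiplies the number of segments you can report on, which is one concrete way the same underlying queries become a much bigger data set.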
Increased data resolution
Another major contributor to the bigness of Big Data is that data resolution is increasing rapidly. This is largely a consequence of Moore’s Law, which says that the density of integrated circuits (ICs) doubles approximately every two years. This means higher density CCDs in cameras and recorders, or equivalently, higher image resolution. As a result, images and videos will take up more of your storage volume and make your data even bigger.
Many scientific instruments, medical diagnostics, satellite imaging systems, and telescopes benefit tremendously from this increase in spatial resolution. What used to be a blur due to a lack of resolution is now crystal clear. This can mean the difference between finding a star or a planet in a distant galaxy vs. not. And if it is a tumor we are looking for, this could mean the difference between life and death.
Higher density ICs also mean faster CPUs, which allow you to capture data at a higher sampling rate. This increases the data resolution in a different dimension: time. Increased temporal resolution means that instead of storing 1,800 frames of data for a minute of video (30 fps), you now have to store 3,600 frames for that same minute of video (60 fps). This will certainly make your data bigger, but the benefit can also be huge, especially for time-sensitive data, for example, financial data, market reaction data, and audience measurements. The difference of a few seconds can mean the difference between making and losing millions of dollars.
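A quick back-of-the-envelope sketch shows how spatial and temporal resolution multiply into data volume. The resolutions and frame rates below are just illustrative assumptions, and the calculation ignores compression entirely:

def raw_video_bytes(width, height, fps, seconds, bytes_per_pixel=3):
    """Uncompressed video size: spatial resolution x temporal resolution x duration."""
    return width * height * bytes_per_pixel * fps * seconds

# One minute of video at two different resolutions and frame rates.
sd_30fps = raw_video_bytes(640, 480, 30, 60)      # ~1.7 GB, 1,800 frames
hd_60fps = raw_video_bytes(1920, 1080, 60, 60)    # ~22.4 GB, 3,600 frames

print(f"SD @ 30 fps: {sd_30fps / 1e9:.1f} GB")
print(f"HD @ 60 fps: {hd_60fps / 1e9:.1f} GB")
print(f"Growth factor: {hd_60fps / sd_30fps:.1f}x")   # ~13.5x

Doubling the frame rate only doubles the frame count, but because the spatial resolution grows at the same time, the combined effect on storage is much larger than either factor alone.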
Therefore, any data that is experiencing a rapid increase in data resolution (whether it is spatial, temporal or any other dimension) is likely to evolve into big data.
Super-linear scaling of data production rate
Although there are a few more common traits among Big Data, I will talk about just one more here in the interest of time. I call this property “super-linear scaling of data production rate.”
When the rate of data production scales super-linearly with the number of data producers, the data they create will likely grow rapidly into big data. The key concept here is super-linearity. That means for every incremental addition of a data producer, there will be a disproportionately greater increment in the rate of data production.
Super-linear scaling is basically the network effect of data production. This property is particularly relevant to social data, because nearly all social media interactions scale super-linearly with the number of users. For example, if you have four users, the number of possible interactions among them is six (see figure 1a). But if the number of users doubles to eight, then the number of potential interactions among them more than doubles; in fact, it more than quadruples to 28 potential interactions (see figure 1b). This is the power of super-linear scaling (a.k.a. the network effect).
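The arithmetic behind those figures is simply the count of unordered pairs among n users, n(n-1)/2, which grows quadratically. A minimal sketch:

def potential_interactions(n_users):
    """Number of possible pairwise interactions among n users: n * (n - 1) / 2."""
    return n_users * (n_users - 1) // 2

for n in (4, 8, 16, 100):
    print(n, "users ->", potential_interactions(n), "potential interactions")
# 4 users -> 6 potential interactions
# 8 users -> 28 potential interactions  (doubling the users more than quadruples the interactions)
# 16 users -> 120 potential interactions
# 100 users -> 4950 potential interactions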
Because the majority of social media data is generated through interactions between users, as more users adopt social media, the data production rate will increase super-linearly. That is why, if you start capturing any social media data now, it is very likely to grow into big data very soon.
Conclusion
Since the precise criterion for “Big” Data is a moving target, it is useful to examine how “Big” Data were generated and try to identify the common traits that contribute to their “bigness.” There are at least three major factors that contribute to the bigness of Big Data.
  1. Ubiquity and variety of data capturing devices for different types of information
  2. Increased data resolution
  3. Super-linear scaling of data production rate with data producers
Michael Wu, Ph.D. is the principal scientist of analytics at Lithium Technologies. Michael was voted a 2010 Influential Leader by CRM Magazine for his work on predictive social analytics and its application to social CRM. You can follow him on Twitter at mich8elwu.

Replies (1)


By Mark Tamis
28th Jun 2012 08:05

Hi Michael,

Nice post. The question that springs to my mind is, are analytics capabilities scaling super-linearly as well?

Best

Mark

-- @MarkTamis http://marktamis.wordpress.com
