What Is Big Data? Hint: You’re a Part of It Every Day

Where should we start a book on Big Data? How about with a definition? The term “Big Data” is a bit of a misnomer, because it implies that pre-existing data is somehow small (it isn’t) or that the only challenge is its sheer size (size is one challenge, but there are often more). In short, the term Big Data applies to information that can’t be processed or analyzed using traditional processes or tools. Increasingly, organizations are facing Big Data challenges: they have access to a wealth of information, but they don’t know how to get value out of it because it is sitting in its most raw form or in a semistructured or unstructured format; as a result, they don’t even know whether it’s worth keeping (or whether they’re even able to keep it, for that matter). An IBM survey found that over half of business leaders today realize they don’t have access to the insights they need to do their jobs. Companies are facing these challenges in a climate where they have the ability to store anything and are generating data like never before in history; combined, this presents a real information challenge. It’s a conundrum: today’s business has more access to potential insight than ever before, yet as this potential goldmine of data piles up, the percentage of data the business can process is going down—fast. Before we can talk about all the great things you can do with Big Data, and how IBM has a unique end-to-end platform that we believe will make you more successful, we need to talk about the characteristics of Big Data and how it fits into the current information management landscape.

Quite simply, the Big Data era is in full force today because the world is changing. Through instrumentation, we’re able to sense more things, and if we can sense it, we tend to try and store it (or at least some of it). Through advances in communications technology, people and things are becoming increasingly interconnected—and not just some of the time, but all of the time. This interconnectivity rate is a runaway train. Generally referred to as machine-to-machine (M2M), this interconnectivity is responsible for double-digit year-over-year (YoY) data growth rates. Finally, because small integrated circuits are now so inexpensive, we’re able to add intelligence to almost everything.

Even something as mundane as a railway car has hundreds of sensors. On a railway car, these sensors track such things as the conditions experienced by the car, the state of individual parts, and GPS-based data for shipment tracking and logistics. After train derailments that claimed extensive losses of life, governments introduced regulations requiring that this kind of data be stored and analyzed to prevent future disasters. Rail cars are also becoming more intelligent: processors have been added to interpret sensor data on parts prone to wear, such as bearings, to identify parts that need repair before they fail and cause further damage—or worse, disaster. But it’s not just the rail cars that are intelligent—the actual rails have sensors every few feet. What’s more, the data storage requirements are for the whole ecosystem: cars, rails, railroad crossing sensors, weather patterns that cause rail movements, and so on. Now add to this the tracking of a rail car’s cargo load and its arrival and departure times, and you can very quickly see you’ve got a Big Data problem on your hands. Even if every bit of this data were relational (and it’s not), it is all raw and in very different formats, which makes processing it in a traditional relational system impractical or impossible. Rail cars are just one example, but everywhere we look, we see domains with velocity, volume, and variety combining to create the Big Data problem.

IBM has created a whole model around helping businesses embrace this change via its Smart Planet platform. It’s a different way of thinking that truly recognizes that the world is now instrumented, interconnected, and intelligent. The Smart Planet technology and techniques promote the understanding and harvesting of the world’s data reality to provide unprecedented insight and the opportunity to change the way things are done. To build a Smart Planet, it’s critical to harvest all the data, and the IBM Big Data platform is designed to do just that; in fact, it is a key architectural pillar of the Smart Planet initiative.

Three characteristics define Big Data: volume, variety, and velocity (as shown in Figure 1-1). Together, these characteristics define what we at IBM refer to as “Big Data.” They have created the need for a new class of capabilities to augment the way things are done today, to provide a better line of sight and control over our existing knowledge domains and the ability to act on them. The IBM Big Data platform gives you the unique opportunity to extract insight from an immense volume, variety, and velocity of data, in context, beyond what was previously possible. Let’s spend some time explicitly defining these terms.

The sheer volume of data being stored today is exploding. In the year 2000, 800,000 petabytes (PB) of data were stored in the world; we expect this number to reach 35 zettabytes (ZB) by 2020. Of course, a lot of the data that’s being created today isn’t analyzed at all, and that’s another problem we’re trying to address with BigInsights. Twitter alone generates more than 7 terabytes (TB) of data every day, Facebook 10 TB, and some enterprises generate terabytes of data every hour of every day of the year. It’s no longer unheard of for individual enterprises to have storage clusters holding petabytes of data. We’re going to stop right there with the factoids: the truth is, these estimates will be out of date by the time you read this book, and they’ll be further out of date by the time you bestow your great knowledge of data growth rates on your friends and families when you’re done reading it.

When you stop and think about it, it’s little wonder we’re drowning in data. If we can track and record something, we typically do. (And notice we didn’t mention the analysis of this stored data, which is going to become a theme of Big Data—the newfound utilization of data we track but don’t use for decision making.) We store everything: environmental data, financial data, medical data, surveillance data, and the list goes on and on. For example, taking your smartphone out of your holster generates an event; when your commuter train’s door opens for boarding, that’s an event; check in for a plane, badge into work, buy a song on iTunes, change the TV channel, take an electronic toll route—every one of these actions generates data. Need more? The St. Anthony Falls Bridge in Minneapolis (which replaced the I-35W Mississippi River Bridge that collapsed in 2007) has more than 200 embedded sensors positioned at strategic points to provide a fully comprehensive monitoring system, where all sorts of detailed data is collected and even a shift in temperature and the concrete’s reaction to that change is available for analysis. Okay, you get the point: there’s more data than ever before, and all you have to do is look at the terabyte penetration rate for personal home computers as the telltale sign. Almost a decade ago, we used to keep a list of all the data warehouses we knew that surpassed a terabyte—suffice to say, things have changed when it comes to volume.

As implied by the term “Big Data,” organizations are facing massive volumes of data. Organizations that don’t know how to manage this data are overwhelmed by it. But the opportunity exists, with the right technology platform, to analyze almost all of the data (or at least more of it, by identifying the data that’s useful to you) to gain a better understanding of your business, your customers, and the marketplace. And this leads to the current conundrum facing today’s businesses across all industries. As the amount of data available to the enterprise is on the rise, the percent of data it can process, understand, and analyze is on the decline, thereby creating the blind zone you see in Figure 1-2. What’s in that blind zone? You don’t know: it might be something great, or maybe nothing at all, but the “don’t know” is the problem (or the opportunity, depending on how you look at it).

The conversation about data volumes has changed from terabytes to petabytes with an inevitable shift to zettabytes, and all this data can’t be stored in your traditional systems for reasons that we’ll discuss in this chapter and others.

The volume associated with the Big Data phenomenon brings along a new challenge for data centers trying to deal with it: its variety. With the explosion of sensors and smart devices, as well as social collaboration technologies, data in an enterprise has become complex, because it includes not only traditional relational data, but also raw, semistructured, and unstructured data from web pages, web log files (including click-stream data), search indexes, social media forums, e-mail, documents, sensor data from active and passive systems, and so on. What’s more, traditional systems can struggle to store and perform the required analytics to gain understanding from the contents of these logs, because much of the information being generated doesn’t lend itself to traditional database technologies. In our experience, although some companies are moving down the path, by and large, most are just beginning to understand the opportunities of Big Data (and what’s at stake if it’s not considered).
Quite simply, variety represents all types of data—a fundamental shift in analysis requirements from traditional structured data to include raw, semistructured, and unstructured data as part of the decision-making and insight process. Traditional analytic platforms can’t handle variety. However, an organization’s success will rely on its ability to draw insights from the various kinds of data available to it, which includes both traditional and nontraditional sources.

When we look back at our database careers, sometimes it’s humbling to see that we spent most of our time on just 20 percent of the data: the relational kind that’s neatly formatted and fits ever so nicely into our strict schemas. But the truth of the matter is that 80 percent of the world’s data (and more and more of this data is responsible for setting new velocity and volume records) is unstructured, or semistructured at best. If you look at a Twitter feed, you’ll see structure in its JSON format—but the actual text is not structured, and understanding that text can be rewarding. Video and picture images aren’t easily or efficiently stored in a relational database, certain event information (such as weather patterns) can dynamically change and isn’t well suited to strict schemas, and so on. To capitalize on the Big Data opportunity, enterprises must be able to analyze all types of data, both relational and nonrelational: text, sensor data, audio, video, transactional, and more.
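To make the structured-versus-unstructured point concrete, here is a minimal Python sketch built around a simplified, hypothetical tweet-like JSON payload (it is not the actual Twitter API schema). The metadata fields map cleanly onto relational columns; the free-text field does not, and the naive keyword check below merely stands in for the much richer text analytics that field really needs.

    import json

    # A simplified, hypothetical tweet-like payload (not the real Twitter API schema):
    # the metadata fields are neatly structured, but the "text" field is free-form.
    payload = '''{
      "id": 123456789,
      "created_at": "2012-02-01T14:32:10Z",
      "user": {"id": 42, "followers_count": 318},
      "text": "stuck on the platform AGAIN, third delay this week #fail"
    }'''

    tweet = json.loads(payload)

    # The structured parts map cleanly onto relational columns.
    row = (tweet["id"], tweet["created_at"], tweet["user"]["id"])

    # The free text does not: extracting meaning (sentiment, topics, entities)
    # calls for text analytics rather than a fixed schema. A naive keyword
    # check stands in here for that much richer analysis.
    negative_words = {"fail", "stuck", "delay", "delayed"}
    tokens = {word.strip("#.,!?").lower() for word in tweet["text"].split()}
    print(row, bool(tokens & negative_words))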

Just as the sheer volume and variety of the data we collect and store have changed, so, too, has the velocity at which it is generated and needs to be handled. A conventional understanding of velocity typically considers how quickly the data is arriving and stored, and its associated rates of retrieval. While managing all of that quickly is good—and the volumes of data that we are looking at are a consequence of how quickly the data arrives—we believe the idea of velocity is actually something far more compelling than these conventional definitions.

To accommodate velocity, a new way of thinking about a problem must start at the inception point of the data. Rather than confining the idea of velocity to the growth rates associated with your data repositories, we suggest you apply this definition to data in motion: the speed at which the data is flowing. After all, we’re in agreement that today’s enterprises are dealing with petabytes of data instead of terabytes, and the increase in RFID sensors and other information streams has led to a constant flow of data at a pace that has made it impossible for traditional systems to handle.
Sometimes, getting an edge over your competition can mean identifying a trend, problem, or opportunity only seconds, or even microseconds, before someone else. In addition, more and more of the data being produced today has a very short shelf-life, so organizations must be able to analyze this data in near real time if they hope to find insights in it. Streams computing at Big Data scale is a concept that IBM has been delivering on for some time and serves as a new paradigm for the Big Data problem. In traditional processing, you can think of running queries against relatively static data: for example, the query “Show me all people living in the New Jersey flood zone” would produce a single result set to be used as a warning list for an incoming weather pattern. With streams computing, you can execute a process similar to a continuous query that identifies people who are currently “in the New Jersey flood zones,” but you get continuously updated results, because location information from GPS data is refreshed in real time.
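To illustrate the continuous-query idea, here is a toy sketch in plain Python rather than InfoSphere Streams; the flood-zone bounding box and the GPS update feed are hypothetical. Instead of answering the question once, the monitor keeps an always-current answer set as location updates stream past.

    # A toy sketch of the continuous-query idea behind streams computing.
    # Plain Python for illustration only (not InfoSphere Streams); the flood-zone
    # bounding box and the GPS update feed are hypothetical.

    FLOOD_ZONE = {"lat": (40.0, 40.4), "lon": (-74.3, -73.9)}  # assumed bounding box

    def in_flood_zone(lat, lon):
        (lat_lo, lat_hi), (lon_lo, lon_hi) = FLOOD_ZONE["lat"], FLOOD_ZONE["lon"]
        return lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi

    def monitor(gps_updates):
        """Consume a stream of (person_id, lat, lon) updates and keep a
        continuously refreshed answer set instead of a one-time query result."""
        currently_in_zone = set()
        for person_id, lat, lon in gps_updates:
            if in_flood_zone(lat, lon):
                if person_id not in currently_in_zone:
                    currently_in_zone.add(person_id)
                    print(f"ALERT: {person_id} entered the flood zone")
            else:
                currently_in_zone.discard(person_id)
        return currently_in_zone

    # A short, simulated burst of location updates; a real deployment would
    # read from an endless feed rather than a list.
    print(monitor([("anna", 40.1, -74.1), ("raj", 41.0, -74.5), ("anna", 39.5, -74.1)]))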

Dealing effectively with Big Data requires that you perform analytics against the volume and variety of data while it is still in motion, not just after it is at rest. Consider examples ranging from tracking neonatal health to financial markets; in every case, they require handling the volume and variety of data in new ways. The velocity characteristic of Big Data is one key differentiator that makes IBM the best choice for your Big Data platform. We define it as an inclusive shift from solely batch insight (Hadoop style) to batch insight combined with streaming-on-the-wire insight, and IBM seems to be the only vendor talking about velocity being more than how fast data is generated (which is really part of the volume characteristic).

Now imagine a cohesive Big Data platform that can leverage the best of both worlds and take streaming real-time insight to spawn further research based on emerging data. As you think about this, we’re sure you’ll start to share the same excitement we have around the unique proposition available with an IBM Big Data platform.

Data in the Warehouse and Data in Hadoop (It’s Not a Versus Thing)

In our experience, traditional warehouses are mostly ideal for analyzing structured data from various systems and producing insights with known and relatively stable measurements. On the other hand, we feel a Hadoop-based platform is well suited to deal with semistructured and unstructured data, as well as when a data discovery process is needed. That isn’t to say that Hadoop can’t be used for structured data that is readily available in a raw format; it can, and we talk about that in Chapter 2.

In addition, when you consider where data should be stored, you need to understand how data is stored today and what features characterize your persistence options. Consider your experience with storing data in a traditional data warehouse. Typically, this data goes through a lot of rigor to make it into the warehouse. Builders and consumers of warehouses have it etched in their minds that the data they are looking at in their warehouses must shine with respect to quality; subsequently, it’s cleaned up via cleansing, enrichment, matching, glossary, metadata, master data management, modeling, and other services before it’s ready for analysis. Obviously, this can be an expensive process. Because of that expense, it’s clear that the data that lands in the warehouse is deemed to be not just of high value, but to have a broad purpose: it’s going to go places and will be used in reports and dashboards where the accuracy of that data is key. For example, Sarbanes-Oxley (SOX) compliance, introduced in 2002, requires the CEO and CFO of publicly traded companies on U.S.-based exchanges to certify the accuracy of their financial statements (Section 302, “Corporate Responsibility for Financial Reports”). There are serious penalties (we’re talking the potential for jail time here) associated with reporting data that isn’t accurate or “true.” Do you think these folks are going to look at reports built on data that isn’t pristine?

In contrast, Big Data repositories rarely undergo (at least initially) the full quality-control rigor of data being injected into a warehouse, because not only is prepping data for some of the newer analytic methods characterized by Hadoop use cases cost-prohibitive (which we talk about in the next chapter), but the data isn’t likely to be distributed in the way data warehouse data is. We could say that data warehouse data is trusted enough to be “public,” while Hadoop data isn’t as trusted (“public” can mean vastly distributed within the company and not for external consumption), and although this will likely change in the future, today this is something that experience suggests characterizes these repositories.

Our experiences also suggest that in today’s IT landscape, specific pieces of data have been stored based on their perceived value, and therefore any information beyond those preselected pieces is unavailable. This is in contrast to a Hadoop-based repository scheme, where the entire business entity is likely to be stored and the fidelity of the Tweet, transaction, Facebook post, and more is kept intact. Data in Hadoop might seem of low value today, or its value might be nonquantified, but it can, in fact, be the key to questions yet unasked. IT departments pick and choose high-valued data and put it through rigorous cleansing and transformation processes because they know that data has a high known value per byte (a relative phrase, of course). Why else would a company put that data through so many quality control processes? Of course, because the value per byte is high, the business is willing to store it on relatively higher-cost infrastructure to enable that interactive, often public, navigation with end user communities, and the CIO is willing to invest in cleansing the data to increase its value per byte.

With Big Data, you should consider looking at this problem from the opposite view: With all the volume and velocity of today’s data, there’s just no way that you can afford to spend the time and resources required to cleanse and document every piece of data properly, because it’s just not going to be economical. What’s more, how do you know if this Big Data is even valuable? Are you going to go to your CIO and ask her to increase her capital expenditure (CAPEX) and operational expenditure (OPEX) costs by fourfold to quadruple the size of your warehouse on a hunch? For this reason, we like to characterize the initial nonanalyzed raw Big Data as having a low value per byte, and, therefore, until it’s proven otherwise, you can’t afford to take the path to the warehouse; however, given the vast amount of data, the potential for great insight (and therefore greater competitive advantage in your own market) is quite high if you can analyze all of that data.

At this point, it’s pertinent to introduce the idea of cost per compute, which follows the same pattern as the value per byte ratio. If you consider the focus on quality data in traditional systems that we outlined earlier, you can conclude that the cost per compute in a traditional data warehouse is relatively high (which is fine, because it’s a proven and known higher value per byte), whereas the cost per compute in Hadoop is low.

Of course, other factors can indicate that certain data might be of high value yet never make its way into the warehouse, or there’s a desire for it to make its way out of the warehouse onto a lower-cost platform; either way, you might need to cleanse some of that data in Hadoop, and IBM can do that (a key differentiator). For example, unstructured data can’t be easily stored in a warehouse.

Indeed, some warehouses are built with a predefined corpus of questions in mind. Although such a warehouse provides some degree of freedom for query and mining, it could be constrained by what is in the schema (most unstructured data isn’t found there) and often by a performance envelope that can be a functional/operational hard limit. Again, as we’ll reiterate often in this book, we are not saying that a Hadoop platform such as IBM InfoSphere BigInsights is a replacement for your warehouse; instead, it’s a complement.

A Big Data platform lets you store all of the data in its native business object format and get value out of it through massive parallelism on readily available components. For your interactive navigational needs, you’ll continue to pick and choose sources, cleanse that data, and keep it in warehouses. But you can get more value out of analyzing more data (data that may even initially seem unrelated) in order to paint a more robust picture of the issue at hand. Indeed, data might sit in Hadoop for a while, and then migrate its way into the warehouse when its value is proven and sustainable.
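As a rough, single-machine illustration of that “store the raw data, fan the work out, merge the results” pattern, here is a minimal Python sketch; the log format and error codes are invented for the example, and a real Hadoop job would distribute the same map and reduce steps across a cluster rather than across local worker processes.

    # A minimal, single-machine sketch of the divide-and-conquer pattern that a
    # Hadoop cluster applies at far larger scale: keep the raw records as-is,
    # fan the work out across workers, then merge the partial results.
    # The log format and error codes below are hypothetical.
    from collections import Counter
    from multiprocessing import Pool

    def count_errors(chunk_of_lines):
        """Map step: scan raw, unmodeled log lines and tally error codes."""
        counts = Counter()
        for line in chunk_of_lines:
            if " ERROR " in line:
                counts[line.split(" ERROR ")[1].split()[0]] += 1
        return counts

    def parallel_error_counts(lines, workers=4):
        chunks = [lines[i::workers] for i in range(workers)]
        with Pool(workers) as pool:
            partials = pool.map(count_errors, chunks)  # map runs in parallel
        return sum(partials, Counter())                # reduce step: merge tallies

    if __name__ == "__main__":
        raw_log = [
            "2012-02-01 12:00:01 ERROR E1001 sensor timeout",
            "2012-02-01 12:00:02 INFO heartbeat ok",
            "2012-02-01 12:00:03 ERROR E2040 bearing overheat",
        ]
        print(parallel_error_counts(raw_log, workers=2))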

Wrapping It Up

We’ll conclude this chapter with a gold mining analogy to articulate the points from the previous section and the Big Data opportunity that lies before you. In the “olden days” (which, for some reason, our kids think is a time when we were their age), miners could actually see nuggets or veins of gold; they clearly appreciated the value and would dig and sift near previous gold finds hoping to strike it rich. That said, although there was more gold out there—it could have been in the hill next to them or miles away—it just wasn’t visible to the naked eye, and it became a gambling game. You dug like crazy near where gold was found, but you had no idea whether more gold would be found. And although history has its stories of gold rush fevers, nobody mobilized millions of people to dig everywhere and anywhere.

In contrast, today’s gold rush works quite differently. Gold mining is executed with massive capital equipment that can process millions of tons of dirt that is worth nothing. Ore grades of 30 mg/kg (30 ppm) are usually needed before gold is visible to the naked eye—that is, most gold in gold mines today is invisible. Although there is all this gold (high-valued data) in all this dirt (low-valued data), by using the right equipment you can economically process lots of dirt and keep the flakes of gold you find. The flakes of gold are then taken for integration and put together to make a bar of gold, which is stored and logged in a place that’s safe, governed, valued, and trusted.

This really is what Big Data is about. You can’t afford to sift through all the data that’s available to you in your traditional processes; it’s just too much data, with too little known value and too much of a gambled cost. The IBM Big Data platform gives you a way to economically store and process all that data and find out what’s valuable and worth exploiting. What’s more, since we talk about analytics for data at rest and data in motion, the actual data from which you can find value is not only broader with the IBM Big Data platform, but you’re also able to use and analyze it more quickly, in real time.

