Despite all of the hype surrounding constant advancements in Big Data, the current mindset guiding data architecture is outdated. The landscape has changed considerably in the recent past. The rapid pace of technological development has allowed businesses to capture and store vast amounts of data in far less time and at a much lower cost than ever before. Technology will soon reach the point at which Big Data analytics becomes "as easy to use as Excel," says Alok Prasad, President of Cambridge Semantics. Alongside the flood of other new technologies in data science, the most exciting of which include real-time direct consumer connections and the Internet of Things, it is difficult not to get caught up in all the fuss. According to Neil Jarvis, Chief Information Officer of Fujitsu Americas, "businesses are finding it increasingly easier to collect and store the vast quantity of 1s and 0s that their businesses and the world at large generate. Where companies often get stuck is figuring out how to use all that data – determining what's relevant, what to discard and, most importantly, what can be used to drive and grow their business."
A shift in thinking needs to occur in the way that data is viewed. Data is no longer a static, disposable resource that loses usefulness once it has served its singular purpose. Its life may be extended through multi-use, multi-purpose data processing. As a renewable resource, its value should be assessed not against the bottom line alone, but as an asset that grows in value and that generates further opportunities for value creation. Data is the raw material of business, and as with other raw materials, its ability to be used for a variety of applications makes it profoundly more valuable than the original product itself. Consider IBM's recent application of data gathered from American Honda Motor Co., Inc. and Pacific Gas and Electric Company (PG&E). Originally, PG&E's power grid data was collected to manage stability, and the data from Honda's electric cars was collected to address operational efficiencies. IBM integrated both data sets into a system that now guides Honda electric car owners on when and where to charge their vehicles, while energy providers adjust the power load accordingly.
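The kind of cross-domain integration IBM performed can be pictured in miniature. The sketch below is purely illustrative, with invented field names and numbers, assuming grid-load readings and vehicle-charging records keyed by region and hour; joining the two reveals charging windows neither data set could surface on its own.

```python
# Hypothetical sketch of cross-domain data integration: joining utility
# grid-load readings with EV charging records to recommend charging windows.
# All data, field names, and thresholds here are invented for illustration.

grid_load = [  # utility data, originally collected for grid stability
    {"region": "bay_area", "hour": 18, "load_pct": 92},
    {"region": "bay_area", "hour": 2,  "load_pct": 41},
    {"region": "bay_area", "hour": 23, "load_pct": 55},
]

charging_demand = [  # automaker data, originally collected for efficiency
    {"region": "bay_area", "hour": 18, "sessions": 340},
    {"region": "bay_area", "hour": 2,  "sessions": 25},
    {"region": "bay_area", "hour": 23, "sessions": 120},
]

def recommend_windows(grid, demand, max_load_pct=60):
    """Join the two data sets on (region, hour), keep off-peak hours."""
    load_by_key = {(g["region"], g["hour"]): g["load_pct"] for g in grid}
    return sorted(
        d["hour"]
        for d in demand
        if load_by_key.get((d["region"], d["hour"]), 100) <= max_load_pct
    )

print(recommend_windows(grid_load, charging_demand))  # prints [2, 23]
```

Neither data set was collected with this use in mind; the value appears only once the two are joined on a shared key.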
Raj Narayanaswamy, CEO and Co-Founder at Replicon states:
Every business and industry today faces the daunting task of tying data to clear outcomes. The exponential growth means it’s absolutely necessary for organizations to set up the right architecture to maximize the vast data landscape. Rather than continuing to deploy the traditional application-centric model that leads to silos and inefficiencies, a comprehensive data value chain that includes data discovery, integration and evaluation is critical to overall success.
Businesses that understand the importance of data integration have the potential to gain beneficial insight and create new value. The mindset in which data has a defined function and is utilized for a single purpose limits its application. It creates inflexibility, leads to inadequate exploitation of data and leaves organizations poorly positioned to seize future opportunities. The most successful data-driven organizations, such as Amazon and Salesforce, strategically manage data and scale it for growth over time.
The data lifecycle may be broken down into seven steps: discover, ingest, process, persist, integrate, analyze and expose. Each of these steps is vital to gathering quality data, and each shapes the strategies, tools and architectures upon which an effective approach may be built. Bob Renner, CEO of Liaison Technologies, summarizes the current thinking adequately:
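The seven steps above can be sketched as a simple pipeline. The stage names follow the article; the stage bodies are placeholder assumptions, since real implementations of each step are far richer.

```python
# Illustrative skeleton of the seven-step data lifecycle described above.
# Stage names come from the article; each handler is a deliberately
# simplified stand-in for what a real system would do at that step.

PIPELINE = ["discover", "ingest", "process", "persist",
            "integrate", "analyze", "expose"]

STAGES = {
    "discover":  lambda d: d,                       # locate candidate sources
    "ingest":    lambda d: [r.strip() for r in d],  # pull data in, normalize
    "process":   lambda d: [r.lower() for r in d],  # clean / transform
    "persist":   lambda d: list(d),                 # write to storage (stubbed)
    "integrate": lambda d: sorted(set(d)),          # merge, de-duplicate
    "analyze":   lambda d: d,                       # apply analytics (stubbed)
    "expose":    lambda d: d,                       # publish for consumers
}

def run_pipeline(raw_records):
    """Pass data through every lifecycle stage, in order."""
    data = raw_records
    for stage in PIPELINE:
        data = STAGES[stage](data)
    return data

print(run_pipeline(["  Alpha", "beta ", "ALPHA"]))  # prints ['alpha', 'beta']
```

The point of the skeleton is the ordering: skipping or shortchanging an early stage (say, integrate) silently degrades everything downstream, which is exactly the argument the article makes.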
Most of the attention (and market value) is spent on the last stage of analytics and visualization where insights are delivered for business decisions. This is indeed where the value is finally experienced. However, the final results are not possible without the other steps. In fact, data scientists spend an inordinate amount of time cleaning or qualifying the data before applying analytics algorithms.
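Renner's point about cleaning is easy to see in practice. A toy example, with invented sensor records, of the qualification work that must precede any analytics:

```python
# Toy illustration of data qualification before analysis.
# The records and the -999 sentinel value are invented for this example.
raw_readings = ["72.5", "", "68.1", "n/a", "70.0", "-999"]

def clean(readings):
    """Drop blanks, non-numeric values, and sentinel error codes."""
    out = []
    for r in readings:
        try:
            value = float(r)
        except ValueError:
            continue  # blank or non-numeric entry, e.g. "" or "n/a"
        if value == -999:
            continue  # known sentinel for a failed reading
        out.append(value)
    return out

values = clean(raw_readings)
print(sum(values) / len(values))  # mean of the usable readings: 70.2
```

Half the input rows are unusable; computing the mean on the raw list would either crash or be badly skewed by the sentinel value, which is why the qualification step cannot be skipped.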
Good data science is simply impossible without good data and the initial steps that ensure quality control. In particular, integration, an often-undervalued step, is where much of the value in Big Data may be found. Organizations will be able to control costs and improve efficiency if they approach data management with a different mindset from the beginning. Big Data requires a unique foundation, as described by Cortney Thompson, CTO of Green House Data: "Big Data might mean you need serious changes to your IT infrastructure, which can negate some of the savings gained from additional efficiencies. Traditional IT is not configured for Big Data." Some companies even make "the leap to appointing Chief Digital Business Officers whose primary focus is the synthesis of IT assets and business opportunities," according to Tom Fountain, CTO of Pneuron. A good digital business manager will have the knowledge to implement platforms that ensure unstructured data can become actionable information.
So how do you overcome the hype? By fully understanding Big Data's lifecycle without undervaluing any of its steps, moving away from traditional application-centric thinking, and framing data as a flexible, evolving raw material. "Data-driven discovery is fundamentally changing the way our lives work and those who master its manipulation will have an inherent competitive advantage over their peers" (Peter Pham, The Big Trade: Simple Strategies for Maximum Market Returns. New York: Wiley, 2013). Those best positioned to capture the most value from the explosion of Big Data are the groups that look past the hype surrounding the functional components and put thought into what they want to accomplish in terms of revenue, profitability and other business outcomes.
Original Source: Forbes