Imagine a boy growing up on a farm in Texas. Both his grandfathers kept gardens and were particularly proud of what they grew. Helping to plant, weed and pick those vegetables, he gained an appreciation for the work behind the meal. He thought he did pretty well. That was until he drove across Iowa one day.
The Iowa corn fields grew taller than you could reach. They stretched as far as you could see – on both sides of the road – and went on for miles. He was introduced to Big Agriculture. He remembered the size of the combines, the huge machines used to pick and process the ears of corn. Those machines made that scale of production possible (and help explain how our nation feeds not only itself but huge parts of the rest of the world). That boy was my cousin, by the way.
When I hear people talk about Big Data, I think of those corn fields. Just like the wealth produced by that corn, there is information ready for harvest – if you have the right equipment to economically transform it into something useful. In-memory processing has changed the economics, making it possible to harvest that information in ways planners could only have dreamed of.
About 10 years ago I went with my dad to meet with executives of Dallas-based Pizza Hut (now part of Yum Brands). I was amazed at the data they were collecting. They could tell you how many pepperoni pizzas had sold the previous Thursday between 6 and 7 pm at all their restaurants in Dallas County. If you wanted it, they could drill down to the specific customers. They were talking about generating terabytes of data (which still sounds like a lot today). So just like corn, Big Data has been around for quite a while.
The problem they faced was finding cost-effective ways to profitably use that data. The information could answer structured questions, but the structure of the question was critical because most of the data was stored on tapes in large file rooms. Running queries was difficult because it required a lot of loading, reading and unloading of tapes. In short, the processing equipment was slow, cumbersome and expensive.
Fast forward to today. Developments such as SAP’s in-memory processing have compressed and consolidated all that work. Time-consuming tape loads are now in-memory reads. The need to structure questions around the logistics of a physical data room has disappeared. Leveraging the continued dramatic advances in computing power and storage, business questions can be readily asked, evaluated and answered. And with that kind of speed, analysts can explore and experiment with the data to a far greater degree.
Big Data isn’t just about volume. It’s also about variety and velocity. Variety largely means unstructured data (e.g., social feeds), which puts primary research at your fingertips. This information can directly affect planning. For instance, if your organization identifies negative Twitter sentiment around battery life on a mobile phone, you can immediately signal your product development teams to incorporate changes into product planning. That feedback can adjust your plans, but more importantly, it can improve your results.
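To make the battery-life example concrete, here is a minimal sketch of how such a sentiment signal might be raised. The keyword list, threshold, and sample messages are all hypothetical illustrations – a real pipeline would use the Twitter API and a proper sentiment model, not keyword matching.

```python
# Hypothetical sketch: flag a product topic when the share of negative
# social mentions crosses a threshold. Not a real Twitter integration.

NEGATIVE_WORDS = {"dies", "drains", "terrible", "short"}  # illustrative only
ALERT_THRESHOLD = 0.30  # flag the topic if 30%+ of mentions read negative

def battery_sentiment_alert(posts):
    """Return True if negative battery mentions exceed the threshold."""
    mentions = [p.lower() for p in posts if "battery" in p.lower()]
    if not mentions:
        return False
    negative = sum(1 for m in mentions
                   if any(word in m for word in NEGATIVE_WORDS))
    return negative / len(mentions) >= ALERT_THRESHOLD

sample = [
    "Battery dies by noon, terrible",
    "Love the camera on this phone",
    "Battery life is short",
    "Battery lasts all day for me",
]
print(battery_sentiment_alert(sample))  # 2 of 3 battery mentions are negative
```

The point isn’t the keyword matching – it’s that the signal can reach product planning while the conversation is still happening, rather than a quarter later.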
In-memory also lets you run computations and scenarios that would have bogged down your old systems. In the past, you didn’t ask certain questions because you were afraid you would crash the system. In-memory computing is taming Big Data. Another example is real-time variance analysis, which gives you an exact pulse on where your business is. Are you on track to achieve your target? If not, adjust (i.e., change direction before you hit the iceberg or run aground on the rocks).
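At its core, variance analysis is simple arithmetic – compare actuals to plan and flag the lines that drift beyond a tolerance. The sketch below shows the idea; the line items, figures, and 5% tolerance are hypothetical, and the in-memory value lies in running this continuously across millions of records rather than in the calculation itself.

```python
# Minimal sketch of variance analysis: flag plan-vs-actual drift
# beyond a tolerance. All figures below are hypothetical.

TOLERANCE = 0.05  # flag anything more than 5% off plan

def variance_report(plan, actuals):
    """Return {line_item: variance_pct} for items outside tolerance."""
    flagged = {}
    for item, planned in plan.items():
        actual = actuals.get(item, 0.0)
        variance = (actual - planned) / planned
        if abs(variance) > TOLERANCE:
            flagged[item] = round(variance, 3)
    return flagged

plan = {"revenue": 1_000_000, "cogs": 600_000, "marketing": 80_000}
actuals = {"revenue": 940_000, "cogs": 610_000, "marketing": 95_000}
print(variance_report(plan, actuals))  # revenue and marketing are off plan
```

Run in real time against live actuals, a report like this is the early-warning system: you see the revenue shortfall mid-month, not after the books close.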
At times Big Data means going small. When necessary, you can get granular in ways old systems could not. The even better news is that these in-memory storage and compression advances are making all of this possible at lower and lower price points (Moore’s law is working here as well).
For planners, this has opened a bonanza of opportunities to better understand where your business is heading. Usage sensors allow utilities to operate a smarter grid that better manages power consumption, allowing the deferral of costly new plants. Retailers get faster knowledge of customer behavior to adjust supply chains and reduce excess goods. Even sports groups (down to the high school level) are developing better understanding of how their teams can match up to the competition. It is truly harvest time!
Finance is a prime beneficiary of these planning improvements. Whether the gains come directly from the lines of business, flow out of finance’s own improved cash flow tracking and account monitoring, or some combination of the two, organizations have abundant opportunities – if they have the right equipment.
Let me know what your organization is doing to harvest your data. Next week we’ll examine how in-memory computing can help finance use Big Data to create more powerful planning simulations to “see all and know all.”