
December 20, 2013
Think outside the logfile
By: Paul Mahler

Little known fact: Two of the biggest retail food services businesses in history grew to global size essentially through the smart use of anomaly detection.

Ray Kroc was selling milkshake machines in the 1950s when he noticed that one client had bought eight machines for a single restaurant; most restaurants bought only one per location. Curious why one store would need so many machines, he booked a flight to California. There he found the McDonald brothers’ rapid-service restaurant, which applied assembly-line efficiencies to the preparation of burger-stand fare. Inspired, Kroc convinced the brothers to let him franchise the restaurant outside of California, and he opened his first location in Illinois in 1955. In 2012, McDonald’s generated $27 billion in revenue.

In 1981, Howard Schultz was a VP at the US subsidiary of a Swedish housewares company. He noticed that one customer in Seattle, with just a handful of retail locations, ordered more plastic cone filters than giants like the Macy’s department store chain. The cone filter was a relatively obscure product at the time, used to brew coffee by the cup by pouring hot water over ground beans. After visiting the store, Starbucks, he had his first cup of specialty coffee and realized how different it was from the coffee he had drunk his whole life. He knew others would feel the same way. He entered the specialty coffee business, expanding the once-niche retailer into the global leader we know today and creating an entirely new market segment.1

The spark of inspiration for both of these entrepreneurs was noticing an anomaly in a log. It may seem strange to think about it that way, but a “log” at its most basic is a list of events. Both Schultz and Kroc noticed something unusual in their data and wanted to know why. Sure, it’s a much easier task to eyeball a list of milkshake-machine orders than terabytes of online customer history, but there are now a few products on the market that help manage the big data of logs. Most of these tools primarily make the data tractable to manage and offer only light analysis, such as averages and distributions. Accenture’s Technology Labs have developed a new tool that can work with any log data to spot anomalies in the terabytes of data that businesses are generating today. It works by analyzing the patterns that already exist in the data, and then flagging unusual ones using a new statistical algorithm we’ve developed.
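The Labs’ algorithm itself isn’t described here, but the general idea, learning what “normal” looks like from the data and then flagging events that deviate from it, can be sketched with a toy example. Everything below (the store names, the order sizes, and the two-standard-deviation threshold) is hypothetical, not the tool’s actual method:

```python
# Hypothetical log: one (store, machines_ordered) event per entry,
# loosely modeled on the Ray Kroc story above.
orders = [("store_a", 1), ("store_b", 1), ("store_c", 2),
          ("store_d", 1), ("store_e", 8), ("store_f", 1)]

# Learn the "normal" pattern: mean and spread of order sizes.
values = [n for _, n in orders]
mean = sum(values) / len(values)
variance = sum((v - mean) ** 2 for v in values) / len(values)
std = variance ** 0.5

# Flag any event more than 2 standard deviations from the mean.
anomalies = [(s, n) for s, n in orders if abs(n - mean) > 2 * std]
print(anomalies)  # the eight-machine order stands out
```

A real log-mining system would work on event sequences and far richer features than a single count, but the workflow is the same: model the patterns already present in the data, then surface what doesn’t fit.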

To spot the major opportunities around us, the key is to think outside the log. When we talk with Client Account Leads at Accenture about our tool, they naturally think about the log files generated by IT systems. The Labs’ tool can and does analyze data like this. But the same kind of thinking can also be applied to logistics, social media, and consumer behavior of all kinds. Log data can help managers maximize efficiency by better understanding workflows. In fact, if a process creates an ordered list of events, mining those logs can tell you how a system of any kind behaves and when something unusual is happening. It’s time to rethink what a log can be, the value trapped inside, and how to unlock it. If you have ideas, if you’re interested in getting started, or if you’d just like to talk about mining insights from the anomalies in your data, get in touch with the Accenture Tech Labs Data Insights Team.

1 http://skellogg.sdsmt.edu/IE354/Supplement/howars_s.pdf
