Accenture Technology Labs Blog
Bold thinking, commentary and application of new technologies that address many of the key business challenges facing organizations today.
Marjan Baghaie, Ph.D.
In this final post in the series about social data, we focus on customer segmentation. In today’s market, retailers need to know their customers well—at the segment level and ideally also the individual level—in order to know how best to serve them.
Social data enables retailers to go beyond the superficial layer of demographic data and get a better understanding of who their customers really are. By analyzing the social profiles of their customers and developing more meaningful segments, retailers could fine-tune their offerings to better address the needs of each segment.
Using social data and interest graphs, they could also home in on the most profitable customer segments, devising new ways to serve them or executing tactics to recruit similar customers. Retailers could even experiment with the efficacy of promotional offers to various customer segments, then analyze those customers based on social data to further supplement their understanding.
How is your company using social data to know your customer segments or individual customers more deeply? In what ways is your company translating this knowledge into better service offerings?
For more information about how customer segmentation and other areas of value can be enhanced through social data, see Accenture’s point of view, “Unlocking Value from Social Data.”
Marjan Baghaie, Ph.D.
It’s time for our sixth post in a series dedicated to the effective use of social data. This one revolves around the value area of customer service, which for our purposes we’ll define as the provision of service to customers, before, during and after a purchase, designed to enhance customer satisfaction.
A customer’s entire perception of a retailer could change based on the level and quality (or lack thereof) of customer service. That’s one reason it’s imperative to provide the best possible level of customer service—but that goal comes at a high operational cost. In many cases, as retailers expand, the quality and level of customer service can falter.
Fortunately, the intelligent use of social data can help retailers improve customer service without prohibitively increasing the cost of service. As one example, retailers could use social media channels to engage their customers in an open dialogue, logging customer suggestions or enabling peer-to-peer help.
Are you maximizing how your company uses social data to deliver customer service and improve the customer experience?
For more information about how customer service and other areas of value can be enhanced through social data, see Accenture’s point of view, “Unlocking Value from Social Data.”
Marjan Baghaie, Ph.D.
Today’s topic in this series on using social data in the retail industry is related to price and efficiency. Everyone likes lower prices and many retailers offer competitive pricing as a way to differentiate. To make this strategy work, however, retailers must pay attention to how efficiently they run their business. If they can save money through smarter operations, then it’s possible to pass these savings to customers and become more competitive as a result.
One way retailers can achieve operational efficiency is through supply chain management, such as accurately predicting demand for specific products. Case in point: many retailers already prepare for well-known peak demands, such as stockpiling turkey at Thanksgiving.
But social data can amplify this ability to predict demand. Specifically, retailers could use social data to learn more about their customers in each local branch and gain insights into trends that might impact sales. For example, a fashion store with shops located in malls throughout the US could track that its West Coast customers have a higher propensity to follow certain celebrity figures, and thus order merchandise endorsed by those celebrities for all California, Oregon and Washington stores.
What is your company doing to improve price and efficiency through the use of social data?
For more information about how price and efficiency, as well as other areas of value, can be enhanced through social data, see Accenture’s point of view, “Unlocking Value from Social Data.”
Marjan Baghaie, Ph.D.
The fourth post in this series is about information and expertise. How can retailers use social data to augment the value they provide to customers? Stores already train staff members to answer questions or introduce customers to new products; they also guide customers to new products through information displays, demonstrations, online reviews and more.
However, what’s missing for mass multinational retailers is the personal touch—the local shopkeeper mentality of yesterday where the store owner knew his customers’ names, as well as their family histories and hobbies, and could provide customized, one-to-one service. Big retailers have had difficulty replicating this experience at scale and a degradation of loyalty has been the result.
The smart use of social data can change this. Retailers can harness the power of advanced tools and technologies to provide customized information and expertise to each of their millions of customers.
The variety of customized services that could be provided, if the retailer is aware of a specific customer’s social life and taste graph, is only limited by imagination. Cross-channel service offerings could include: equipping staff members with customer profiles so they can treat customers accordingly; customizing online channels, recipes or recommendations based on the shopper buying profile; or creating personalized cross-merchandising suggestions based on analysis of what other people with the same taste profile have purchased.
How is your company capitalizing on social data to provide highly customized information and expertise to your customer base?
For more information about how information and expertise, as well as other areas of value, can be enhanced through social data, see Accenture’s point of view, “Unlocking Value from Social Data.”
Marjan Baghaie, Ph.D.
With this post, we continue the series about how companies, and specifically retailers, can effectively use social data. The third area that can differentiate retailers is product selection. Stores carry an assortment of brands and differing qualities of product to match their target customers’ needs and tastes, whether the category is organic foods, high-end European furniture or low-priced household items.
Depending on their mission and strategy, retailers typically create value by either adding variety to cover a wider range of products (being extensive), or focusing on a certain quality of products or a niche product group (being selective). In either case, the retailer could use social data to source more relevant products that more closely match the needs and tastes of their target customers. This could be done through either observance or inference. Observance could be done by analyzing existing social media data from their customers, such as Pinterest data for a clothing retailer. Inference could be done by using machine-learning tools and data analytics to draw insight about the products customers might be interested in, by looking at social graphs and combining various datasets.
Retailers could also use social web sites like Twitter to track trends, helping to ensure that trending fashions are available on the racks at just the right time.
In what ways could you use your company’s social data to fine-tune product selection?
For more information about how product selection and other areas of value can be enhanced through social data, see Accenture’s point of view, “Unlocking Value from Social Data.”
In the previous installment I talked a little bit about how we can do anomaly detection and gave some background on the framework we use to perform anomaly detection on log files. Now it’s time for a working example of how we can detect an anomalous set of behaviors. There are many ways to perform anomaly detection; what I am going to show you is one such method. It works on events and the likelihood that events occur one after another (an event sequence), and compares each sequence against a similar set of previously learned behaviors, or event sequences.
Suppose we have a log file and contained therein are trace entries that relate to each other. The first step is to discover relationships and learn, when one event occurs, the measurable likelihood that a certain event will follow it. For simplicity’s sake, let’s say that our log file has multiple trace entries, and that each trace entry contains one event within it. In this context, a trace entry is a line within a log file that has been written out by some application and is whole unto itself (see Figure 1 for an example).
Figure 1. An example excerpt of traces from a log file, where each trace entry contains a timestamp, an identifier, an event name or type, and a descriptive status message
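A trace entry like this can be split into its fields with a small parser. The sketch below assumes a hypothetical pipe-delimited layout (the real format will vary by application), and the field names are illustrative only:

```python
import re

# Hypothetical trace layout: "<timestamp> | <identifier> | <event> | <status message>"
TRACE_RE = re.compile(
    r"(?P<timestamp>\S+ \S+) \| (?P<identifier>\S+) \| (?P<event>\S+) \| (?P<message>.*)"
)

def parse_trace(line):
    """Parse one trace entry into timestamp, identifier, event, and message fields."""
    match = TRACE_RE.match(line.strip())
    return match.groupdict() if match else None

entry = parse_trace("2015-03-01 12:00:00 | id_42 | Event_1 | request received")
print(entry["identifier"], entry["event"])  # id_42 Event_1
```

In practice each application's log format needs its own pattern; the point is just to reduce every line to the (timestamp, identifier, event) triple the mining step works on.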
Within our sample log, events are labelled as Event_1, Event_2, Event_3, Event_4, Event_5, and Event_6. Each of these events has a probability of occurrence within our sample log file, and they are associated with each other by some feature, such as an identifier (though not necessarily limited to just one feature that relates one trace entry to another). Within this log file we have multiple traces occurring at different times, having different identifier features, and of course each trace may or may not have a different event type recorded within it. In Figure 1 you can see a small excerpt, but going forward imagine that there are many more traces that have been similarly recorded.
The goal is to mine and learn application behaviors against which we can run comparisons. Typically we would want to mine particular periods of time, but for the sake of this example we will mine the entire log file. If we mine the sequence of events for particular identifiers within our example log file, we can pull out a learned model with probabilities. We can aggregate all of the sets of behaviors, or events that relate to each other, using an identifier to create an event sequence that is tied together by that identifier feature. The identifier uniquely maps one event to a chain of behaviors and distinguishes it from another chain of behaviors.
An example would be a session ID for a web browsing session that separates one user’s set of events from another user’s. When aggregated, this model represents the likelihood that, given the occurrence of one event, there is a measurable probability that a certain other event will follow it in time (see Figure 2 for an example aggregate model of all behavior or event sequences learned from a log file). This means that if we see Event_1 (represented as the number one on the graph in Figure 2), then there is a 100% chance that the next event in a behavior sequence will be Event_2, and so on. You can also see that there are start and stop nodes; these nodes represent the aggregate view of where a set of behaviors begins or ends.
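The aggregation described above can be sketched in a few lines of Python: group events by identifier, bracket each sequence with START/STOP pseudo-events, count transitions, and normalize the counts into probabilities. The traces below are toy data for illustration, not the model from the figures:

```python
from collections import defaultdict

def learn_model(traces):
    """Learn transition probabilities from (identifier, event) pairs in time order.

    Events sharing an identifier form one behavior sequence, bracketed by
    START/STOP pseudo-events so the model also captures where sequences begin and end.
    """
    sequences = defaultdict(list)
    for identifier, event in traces:
        sequences[identifier].append(event)

    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences.values():
        chain = ["START"] + seq + ["STOP"]
        for a, b in zip(chain, chain[1:]):
            counts[a][b] += 1

    # Normalize the transition counts out of each event into probabilities.
    return {
        a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
        for a, nexts in counts.items()
    }

# Toy traces: two identifiers, each carrying one event sequence.
traces = [("s1", "Event_1"), ("s2", "Event_1"), ("s1", "Event_2"),
          ("s2", "Event_2"), ("s2", "Event_4"), ("s1", "Event_5")]
model = learn_model(traces)
print(model["Event_1"])  # {'Event_2': 1.0}
```

With more traces, the probabilities out of each node converge toward the kind of aggregate graph shown in Figure 2.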
Furthermore, from the information in the model, we can see that there really are only two event sequence types. They either begin with event 1 or event 2 (labelled as “1” and “2” in Figure 2, each with an associated probability), and a set of behaviors starting with event 1 will end with either event 4 or event 5 occurring. Likewise, if we are looking at event 3 in a series, then there is a 5% chance that the next event in the sequence will be event 5, a 15% chance it will be event 4, and an 80% chance it will be event 3. What we have done is create an aggregate view of the behavior sequences present in a log file and represent them as a graph.
Figure 2. Directed graph with labels, weights, and start-stop nodes, where each of the nodes (circles) represents a respective event. The edges have transition probabilities that represent the likelihood of a given event occurring after another given event in time.
Once we have a learned model of previous behaviors, we can then test newly logged behaviors against that model and determine the degree to which any one event or series of events deviates from it. This means that if we have an event sequence beginning with event 1, event 2, and then event 4, we have an event sequence that is somewhat anomalous compared to what we have seen before. Essentially, without going into the mathematics involved, we can take a set of behaviors and map them against this model to produce an intuitive metric between 0 and 1, along with a significance measure of how important a finding of anomalousness may be. As an example, take the top sequence of events, or “walk,” from Figure 3 (top of graphic) as a newly discovered trace sequence that is mined from a log file in-flight. To determine how anomalous this newly discovered sequence of events is compared to our past event sequences, we can decompose it into a graph with transition probabilities (Figure 3, bottom of graphic).
Figure 3. An example incoming sequence of events (top) and its decomposed graph representation (bottom)
We can then compare the chain of events (also known as a Markov chain) to all possible event sequences in our model and determine the degree of match. In this particular case, based on the information we have about the in-flight behavior pattern, we can say that it is approximately 28% anomalous. Some mathematics is involved in arriving at this number; it measures the overlap, distance, and correlation of this in-flight chain against all other known chains of events within the learned model. Essentially, we score the anomalous metric against the probability distribution of all of the event chains contained in the learned model of previously known behavior sequences. Providing a value between 0 and 100% greatly aids an end user, algorithm, or analyst with a grade value.
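As a rough illustration of this kind of grading (not the exact overlap/distance/correlation mathematics used in our framework), the sketch below scores an in-flight walk by the geometric mean of its transition probabilities under a toy learned model; any transition the model has never observed scores as fully anomalous:

```python
# Toy learned model: transition probabilities out of each event,
# including START/STOP pseudo-events. Values are illustrative only.
model = {
    "START":   {"Event_1": 1.0},
    "Event_1": {"Event_2": 1.0},
    "Event_2": {"Event_4": 0.5, "Event_5": 0.5},
    "Event_4": {"STOP": 1.0},
    "Event_5": {"STOP": 1.0},
}

def anomaly_score(model, events):
    """Grade an in-flight event sequence against a learned model, 0.0 to 1.0.

    0.0 means a perfectly familiar walk through the model; 1.0 means the walk
    contains a transition the model has never seen.
    """
    chain = ["START"] + events + ["STOP"]
    probs = [model.get(a, {}).get(b, 0.0) for a, b in zip(chain, chain[1:])]
    if 0.0 in probs:
        return 1.0  # an unseen transition makes the whole walk fully anomalous
    product = 1.0
    for p in probs:
        product *= p
    return 1.0 - product ** (1.0 / len(probs))

print(anomaly_score(model, ["Event_1", "Event_2", "Event_4"]))  # low: a familiar walk
print(anomaly_score(model, ["Event_1", "Event_4"]))             # 1.0: unseen transition
```

The real framework additionally weighs the score against the distribution of all known chains to produce the significance measure discussed below; this sketch captures only the grading intuition.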
Additional statistical methodologies can be applied to determine the significance of the provided anomalousness grade as well. Combining the anomalousness metrics with a significance factor for our findings provides a confidence measure that is paramount in determining the degree to which a series of events and activities can be considered a threat, or at the very least an interesting lead to follow up on or to be ignored.
We have implemented a method just like this in our log content analytics framework and it can be used for discovering typical behaviors as well as for proactive monitoring of emerging in-flight event sequences that aid adjudication. If you are intrigued by anomaly detection and log content analytics then give us a call over in the Data Insights group in the TechLabs and we can show you how it is done!
Marjan Baghaie, Ph.D.
As promised, we are delving into six possible areas of traditional value delivery in retail—and how social data can be used to enhance those values. The first pillar we discuss is convenience, which is a major differentiator for some retailers. For example, customers might choose to shop at a store that has a more limited range of items at more expensive prices, simply because of the convenience of that store being open 24/7 and nearby. Vending machines are examples of one such business model.
Knowing customers and becoming familiar with their social backgrounds and activities can enable retailers to create a more convenient shopping experience by addressing the needs of the individual customers or appropriate segments of customers. For example, a sports retailer could learn which hockey team a specific customer favors or whether the same customer is planning an upcoming ski trip. Then the retailer could use this information to create a more convenient shopping experience—say by recommending team-branded sports gear or scheduling delivery of a new pair of skis to the resort where the customer is staying.
How are you using your company’s social data to increase customer convenience?
For more information about how convenience and other areas of value can be enhanced through social data, see Accenture’s point of view, “Unlocking Value from Social Data.”
Read other blogs in this series.
Marjan Baghaie, Ph.D.
A central problem facing many businesses today is not a lack of data, but the lack of a clear, actionable plan for what to do with the data. This is particularly the case for social data. During the past few years, the attitude towards social data—especially among consumer-facing companies—has gone from “Social data is just hype,” to “Ok, we have it. Now, what do we do with it?” In a series of six blog posts, we highlight an approach to help answer this question, with a focus on social data in the retail industry. The approach highlighted is applicable and can be extended to other industries as well.
One smart way to begin building an approach is to go back to the core of the industry being examined. In retail, there are three main stakeholders--retailers, customers and suppliers. The only way for all to benefit is if additional value is added so that one entity doesn’t have to lose for the other to win. As you may have guessed, when used effectively, social data can provide this win-win situation.
To evaluate how to leverage social data to create additional value, we can start by looking at how retailers have traditionally been delivering value, and then examine how social data can be used to enhance those values.
Some dimensions along which retailers have traditionally added value and/or sought to differentiate themselves include:
- Convenience
- Product selection
- Information and expertise
- Price and efficiency
- Customer service
- Customer segmentation
Each retailer might be more focused on one or a few of these to create value and differentiate itself from others.
While there are certainly other ways to deliver value, for illustrative purposes, in this series we will cover how retailers can use social data to enhance these six “traditional pillars of value”.
Many enterprise environments are awash in copious amounts of log files. Sifting through the numerous log file data sources to find errors and anomalies can be a daunting task. However, this rigor is critical to application debugging, anomaly detection, compliance, investigation, error tracking, operational intelligence, and root cause analysis, to name a few (see link for more information concerning these activities). Anomalies are those interesting tidbits in data that, when found, provide the electricity to the proverbial light bulb that hovers in our heads as we hunch over and sift through a deluge of data. In short, they help facilitate the insights.
Some time ago a colleague of mine blogged about anomaly detection and why it is important (see link). Continuing along that thread, this blog entry will provide insight into how anomaly detection works in this first installment, and show how it can actually be performed with a working example in the second installment.
First, let me provide our motivation and background. Log content analytics (LCA) is the application of analytics and semantic technologies to (semi-) automatically consume and analyze heterogeneous computer-generated log files to discover and extract relevant insights in a rationalized and structured form that can enable a wide-range of enterprise activities. Often data present in the contents of log files is characterized by log traces with unique identifiers, timestamps, events, and actions. These unique attributes can be indicative of underlying behaviors of applications. Through mining and correlation, relevant information contained within log files can be modeled using learning techniques.
Our goal when creating our log content analytics framework (introduced at this link) was to provide analytics that extend beyond the capabilities of existing technologies by utilizing machine learning techniques to increase data literacy. To that end, it is important to provide a contextual and intuitive metric for the anomalous behaviors and patterns that exist within many application logs. With the framework we introduced in a previous blog, we sought to extend its capabilities by providing a methodology that can detect abnormal behaviors and patterns in-flight as they emerge, delivering information that can be used proactively as a metric for the contextual anomalousness of a sequence of events. We do this by comparing newly discovered information against the probability distribution of patterns present within an overall learned model of behaviors seen in the past. Simply put, if a pattern of events doesn’t look like what we’ve seen before, then it is probably anomalous.
Within our framework we do this by utilizing machine learning to understand the behaviors of applications, creating a model represented as a graph, and then utilizing concepts of graph theory to measure what is currently being logged against that same model we learned previously. Numerous models can be learned for multiple times of day, days of the week, weeks of the month, or times of the year. Adjusting for different time frames allows us to dynamically adjust the sensitivity of our approach.
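The idea of time-framed models can be sketched as a simple lookup: an incoming trace's timestamp selects which learned model to compare against. The model registry, bucket names, and hour thresholds below are hypothetical illustrations, not our framework's actual API:

```python
from datetime import datetime

# Hypothetical registry of learned models keyed by (day type, hour bucket).
# Each value stands in for a full transition-probability model learned
# from traces recorded during that time frame.
models = {
    ("weekday", "business_hours"): {"Event_1": {"Event_2": 1.0}},
    ("weekday", "off_hours"):      {"Event_1": {"Event_3": 1.0}},
    ("weekend", "off_hours"):      {"Event_2": {"Event_5": 1.0}},
}

def select_model(timestamp):
    """Pick the learned model matching the time frame of an incoming trace."""
    day = "weekday" if timestamp.weekday() < 5 else "weekend"
    hours = "business_hours" if 9 <= timestamp.hour < 17 else "off_hours"
    return models.get((day, hours))

print(select_model(datetime(2015, 3, 2, 10)))  # Monday 10:00 -> weekday business-hours model
```

Scoring the same event sequence against a Monday-morning model versus a weekend-night model is what lets the sensitivity adapt: behavior that is routine during business hours can still surface as anomalous at 3 a.m.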
Now you have a basis for how we can perform anomaly detection by mining log files for processes. In the next installment I’ll go over how we treat log files as graph structures over which we can apply concepts of graph theory to pull out interesting insights. Additionally, I’ll go over a brief walkthrough of how we can utilize one such graph theory method to detect anomalies by looking at complex behavior chains. Don’t worry, I’ll save you from the complicated math details!
At Accenture Technology Labs, we’ve kicked off an initiative to create a Digital Workforce Platform. Why? Although many companies are beginning to look more digital on the outside, most are still not truly digital on the inside. For instance, they may have begun using social media to deepen relationships with customers, but have not adopted work processes that leverage digital technologies to change the way work gets done.
This is beginning to change. As vendors continue to mature key digital workforce technologies, we see those technologies coalescing into what we call intelligent digital processes (IDPs), which can support a digital workforce that is smarter and more connected, efficient, utilized and engaged. Furthermore, our recent joint research with Accenture’s Institute for High Performance shows that in addition to supporting more effective execution of existing work processes, IDPs can enable a deeper transformation of work processes, making a range of new work design options feasible.
As just one example, IDPs can push awareness, and thus some central decision-making, to the edge through the combination of social technologies, real-time collaboration and mobile. In another example, enterprise social, combined with crowdsourcing technologies, can make it feasible to operate with what we call a liquid workforce, in which expertise and effort are sourced on a task-by-task basis, from anywhere inside or outside the enterprise.
Figure 1 provides an overview of the journey we see leaders in the digital workforce pursuing.
Figure 1: Digital work design transformations are carried out with a combination of key digital technologies supporting intelligent digital processes (IDPs), which enable a more capable digital workforce, in turn enabling a range of more complex digital work design options.
IDPs are designed from the ground up to be information rich, and to make that information useful to the workforce. Digital technologies, operating on rich digital models, are used to make smarter decisions, better coordinate with colleagues, react more quickly to changing conditions, and more effectively manage complexity.
We’ve identified seven digital technologies that we see playing key roles in creating IDPs:
Enterprise social collaboration technologies, which include enterprise social networking, as well as tools for collaborative authoring, content sharing and coordination of distributed teams.
Digital process management tools (such as BPM, CRM and other workflow and task-management tools) to automate processes, making them more systematic while eliminating paper and manual emails.
Real-time collaboration technologies, such as audio and video conferencing at various bandwidths, as well as telepresence robots for a richer remote presence.
Analytics and intelligent assistants that turn data into insights about internal processes and the external ecosystem. We’re beginning to see digital assistants take on physical form, such as robots that take part in the physical workflow and information flow.
Crowdsourcing platforms that provide alternate ways to source labor, either from within or outside the enterprise.
Gamification and digital behavior-shaping tools that engage employees, helping them track progress toward objectives, build new skills and create a portfolio of achievements.
Mobile and wearable interfaces that work in combination with all of the tools to make information and applications available when and where needed.
Building intelligent processes that operate on digital models, with support from some of these tools, can amplify the cognitive, collaborative and even physical capabilities of the workforce, thereby enabling new work designs that can have significant cost and quality benefits. Companies can advance their digital workforce journey now, by mastering the individual technologies listed above and determining how to effectively weave them into day-to-day applications and processes. However, doing so is still not straightforward. While vendors are rapidly maturing the point technologies, there are still gaps in the technology stack, and companies also struggle to put the available technologies together to improve their processes. Existing products are not generally designed to explicitly support the process re-design options, which leaves companies on their own to grope toward transformational change. As a result, organizations may shy away from investing in this area, or limit their effort to incremental digitization of existing processes rather than deeper transformation.
We’re developing the Digital Workforce Platform to address these issues, and help our clients lead in this area. This platform will provide four distinct layers of functionality:
A plug-and-play architecture that allows companies to mix and match the point technologies from different vendors.
Innovative technology add-ins for addressing some of the issues – such as role-based information routing and filtering – that are not fully addressed by existing technologies.
Reference solutions that bring the underlying technologies together to support key digital workforce functions, such as seamless social process support or digital talent sourcing.
Digital workforce strategy guidance, for instance, in the form of a diagnostic tool that can help organizations determine which work design transformation options make sense for them.
We’ll describe the platform we’re building in more detail in sequels to this blog. To find out more about our view on the digital workforce and the future of work, look here. We also have an in-depth report and several case studies of digital workforce transformation due out soon, as well as follow-up blogs that will describe issues, approaches and platform components. So check back for updates and let us know if you’d like to exchange ideas about the coming digital workforce!