RESEARCH REPORT

In brief

  • While many organizations have made strides to improve their data and analytics capabilities, internal adoption rates have remained low.
  • The slow adoption is due to internal systems that lack the flexibility necessary to support enterprise-wide access.
  • But a new approach to data consumption models is powering the evolution of business using dynamic, AI-powered enterprise intelligence.
  • Learn more about this model, one designed with user needs and consumption patterns in mind.


Despite the high number of competing priorities that today's businesses must grapple with, there's no denying that real-time access to contextually relevant data is critical for effective decision-making. Supported by the right data insights, these decisions improve business performance across the board—from driving growth to improving employee productivity to delivering better customer experiences.

While many businesses have invested in data and analytics solutions to improve intelligence at scale, our experience—based on assessments and strategy projects, combined with external research—shows that overall adoption rates among employees remain low. One explanation is that traditional data and analytics solutions were designed with a single user group in mind, and data was processed, analyzed and delivered based on the needs and consumption patterns of that group: the data scientist.

Today, data consumption and analysis are no longer done by a select few, but across the enterprise from business analysts to senior executives. This means a "one size fits all" approach is no longer enough.

So, how do you meet the analytical needs of teams that sit outside the IT department? What about the needs of the financial analyst? Or the HR executive? Or even the virtual agent in your customer call center? They're unlikely to integrate data and analytics into their day-to-day work without a solution that fits their own data consumption patterns.

The answer lies in understanding who your business users are and creating a flexible solution that is supported by an intelligent and automated data foundation (IDF). This foundation will be adaptable to different use cases to facilitate effective data consumption across the enterprise.

Businesses must therefore start with the user needs in mind and work backwards from there.

Build with the user in mind

Different types of users consume data in different ways to do their jobs most effectively. While there is some overlap between use cases, the following examples give an idea of how needs and data consumption patterns differ from one scenario to the next.

The viewer
These users need to access corporate KPIs and curated datasets to understand what is going on at a glance without the need for too much supporting detail. Responsive dashboards and intelligent alerts, which are typically prepared by IT for large-scale use, enable senior executives to use data to improve business reporting and decision-making.

In one case, a multinational energy and utility company wanted to improve the data visualization experience for both management and operational users and enable them to make data-based decisions faster. The organization used a data visualization and analytics tool to connect its data source systems, providing near real-time visualization and data mash-ups across different systems and significantly accelerating the record-to-report process. The result was a single platform for data visualizations, analysis of KPIs from different source systems, and an interactive dashboard view of its otherwise static table reports.
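To make the viewer pattern a little more concrete, here is a minimal sketch of an intelligent KPI alert of the kind such a dashboard might surface. The KPI names, targets and tolerances are invented for illustration and do not reflect the utility company's actual solution.

```python
from dataclasses import dataclass

@dataclass
class KpiReading:
    name: str          # e.g. "plant_availability_pct" (hypothetical KPI)
    value: float
    target: float
    tolerance: float   # acceptable deviation before an alert fires

def evaluate_kpis(readings: list[KpiReading]) -> list[str]:
    """Return human-readable alerts for KPIs that drift beyond tolerance."""
    alerts = []
    for r in readings:
        deviation = r.value - r.target
        if abs(deviation) > r.tolerance:
            direction = "below" if deviation < 0 else "above"
            alerts.append(f"{r.name} is {abs(deviation):.1f} {direction} target "
                          f"({r.value:.1f} vs {r.target:.1f})")
    return alerts

if __name__ == "__main__":
    today = [
        KpiReading("plant_availability_pct", 91.2, 95.0, 2.0),
        KpiReading("unplanned_outage_hours", 6.0, 5.0, 3.0),
    ]
    for alert in evaluate_kpis(today):
        print("ALERT:", alert)   # in practice this would feed a dashboard or notification
```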

The navigator
Business analysts, given the nature of their role, often have atypical questions that are operational and granular, and whose answers cannot be found in standardized reports. Traditionally, IT played a role in acquiring and integrating the data analysts needed, but this slowed down the process. Now business analysts can perform self-service data analysis, blending and reporting on top of a foundation layer (which we detail below) made available by IT for an agile outcome.

In one real-life case, a company wanted to enable its business analysts to make faster and better decisions about well planning, well design and incident reporting. Previously, extremely valuable information, such as daily drilling reports and mud-loss records, took the form of unstructured text, and processing it meant hundreds of hours of manual work. To speed up this process, the company developed an AI-powered knowledge platform that could analyze this data much faster. Now, the platform provides analysts with years of historical insights instantly, cutting out the manual review process when exploring, developing and constructing wells.
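As a simplified illustration of what such a knowledge platform does under the hood, the sketch below pulls structured values out of free-text report lines using plain regular expressions. The sample report text and field patterns are invented, and a production platform would rely on much richer NLP models than this.

```python
import re

# Invented example of an unstructured daily drilling report (not real data).
SAMPLE_REPORT = """
Daily Drilling Report - Well A-12
Depth reached 3,450 m. Mud loss of 18 bbl observed at 06:40.
Minor incident: stuck pipe cleared after 2 hours.
"""

def extract_mud_loss(report: str) -> list[float]:
    """Pull mud-loss volumes (in barrels) out of free-text report lines."""
    return [float(m) for m in re.findall(r"[Mm]ud loss of ([\d.]+) bbl", report)]

def extract_depth(report: str) -> list[float]:
    """Pull reported depths (in metres), tolerating thousands separators."""
    return [float(m.replace(",", ""))
            for m in re.findall(r"[Dd]epth reached ([\d,.]+) m", report)]

if __name__ == "__main__":
    print("Mud loss (bbl):", extract_mud_loss(SAMPLE_REPORT))
    print("Depth (m):", extract_depth(SAMPLE_REPORT))
```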

The technical analyst
Technical analysts sit halfway between business analysts and data scientists, performing more sophisticated analyses than the former, such as predictive analytics. To do so, they require access to operational and new data, as well as metadata.

For example, in an insurance company the technical analyst might be tasked with analyzing and aggregating data to help the business analyst create tailored products for customers. To do so, the technical analyst needs to create an aggregate dataset using data from various sources over a long period of time, then make this dataset available for the business analyst to report on. To speed up this process, technical analysts can use a data munging tool, eliminating much of the manual work involved in combining and cleaning different data sources to create one final dataset.
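Here is a minimal sketch of that aggregation step, using pandas in place of a dedicated data munging tool. The policy and claims tables, column names and loss-ratio metric are invented for illustration.

```python
import pandas as pd

# Invented sample data standing in for two source systems.
policies = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "segment": ["retail", "retail", "commercial"],
    "annual_premium": [1200.0, 950.0, 8800.0],
})
claims = pd.DataFrame({
    "customer_id": [1, 1, 3],
    "claim_amount": [300.0, 450.0, 2100.0],
})

# Clean and combine: one row per customer with premium and total claims.
claims_per_customer = claims.groupby("customer_id", as_index=False)["claim_amount"].sum()
aggregate = (
    policies.merge(claims_per_customer, on="customer_id", how="left")
            .fillna({"claim_amount": 0.0})
            .rename(columns={"claim_amount": "total_claims"})
)
aggregate["loss_ratio"] = aggregate["total_claims"] / aggregate["annual_premium"]

# This aggregate dataset is what the technical analyst would hand to the
# business analyst for reporting, e.g. loss ratio by customer segment.
print(aggregate.groupby("segment")["loss_ratio"].mean())
```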

The data scientist
As enterprises garner intelligence, many are making the move towards democratized access to data through data distribution mechanisms and services. For data scientists, this opens new opportunities to apply advanced tools and coding skills to develop predictive and statistical models and derive meaning from multiple and complex sets of data. To truly harness this capability, data scientists require access to all data, including raw unstructured datasets and those involving large queries.

For instance, a large European city undertook an ambitious experiment in municipal governance: it wanted to create a platform for predictive insights into the displacement of low-income residents as a result of gentrification. To do that, a team of data scientists had to ingest data from various data stores into a data science sandbox, use data preparation tools to standardize and harmonize the dataset, apply advanced statistical methods for data discovery and ultimately use machine learning algorithms to generate insights. With the help of intuitive visualizations, the team was able to present its results and help the city tackle some of the big challenges of urban gentrification.
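That end-to-end workflow can be sketched in a few lines. The features, synthetic data and model choice below are purely illustrative and are not the city's actual platform.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for harmonized neighbourhood data: rent growth,
# median income and eviction filings per 1,000 households (all invented).
X = rng.normal(size=(500, 3))
# Invented label: neighbourhoods with high rent growth and high filings
# are marked as at risk of displacement.
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize (the "harmonization" step), then fit a predictive model.
model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
model.fit(X_train, y_train)
print("Held-out accuracy:", round(model.score(X_test, y_test), 2))
```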

Machines
Looking for the most relevant and unique insights from an organization’s data is often like trying to find a needle buried deep in a haystack. Thanks to machine learning and AI-powered tools, machines are able to do just that—comb through data ingested from internal and external sources to spot hidden trends and anomalies. This means the days of sending every data request through the IT team are gone, as decisions that don’t require human intervention are increasingly automated.

In one potential scenario, for example, a railway company could use an automated decision engine to make real-time decisions about trains on the tracks. If a train is at risk of going rogue, for instance, the engine needs to be able to activate the sensors that can decelerate the train and stop it. To do that, the decision engine needs access to real-time data from the train and the wider network, as well as historical data, so it can accurately predict how likely the train is to go rogue. The automated engine also performs machine learning analysis on incoming data to improve future decisions. Flexible access to data is vital here, as the decisions the engine makes can have significant consequences.
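Since this scenario is hypothetical, the sketch below is too: it shows how a decision engine might combine live telemetry with a simple risk score to decide whether to trigger deceleration. The threshold, feature weights and telemetry fields are invented; a real engine would score risk with a model trained on historical data.

```python
from dataclasses import dataclass

@dataclass
class TrainTelemetry:
    train_id: str
    speed_kmh: float
    speed_limit_kmh: float
    brake_response_ms: float   # hypothetical real-time sensor reading

RISK_THRESHOLD = 0.8  # invented cut-off for triggering deceleration

def risk_score(t: TrainTelemetry) -> float:
    """Toy risk score: overspeed and slow brake response raise the score.
    A real engine would use a model trained on historical incident data."""
    overspeed = max(0.0, t.speed_kmh / t.speed_limit_kmh - 1.0)
    brake_lag = min(1.0, t.brake_response_ms / 1000.0)
    return min(1.0, 0.7 * overspeed * 5 + 0.3 * brake_lag)

def decide(t: TrainTelemetry) -> str:
    if risk_score(t) >= RISK_THRESHOLD:
        return f"DECELERATE {t.train_id}"   # would activate trackside braking sensors
    return f"NO ACTION {t.train_id}"

if __name__ == "__main__":
    print(decide(TrainTelemetry("IC-204", speed_kmh=148,
                                speed_limit_kmh=120, brake_response_ms=600)))
```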

As each of these use cases is different, we are seeing a phenomenal shift in the technologies that support flexible modes of data consumption.

Enhance data consumption from the foundation up

A robust, intelligent and automated foundation layer is key to unlocking flexible data consumption and can enable multiple consumption patterns for its various business users. It's also a core element in future-proofing the whole data engineering pipeline: it enables easy upgrades to pre-existing technology, allows new consumption patterns to be added on top of existing ones, and makes the system as a whole more tolerant of change.

This intelligent foundation layer ensures there are separate layers in the data lake to provide security and user access, and to control and execute data management policies. The data flowing into the data lake is also kept organized and managed. As part of this, the data profile and catalog are maintained to make data easy to find, use and re-use when building consumption models, avoiding duplication and ensuring the trustworthiness of the data.
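To illustrate the cataloging behavior described here, the following is a small sketch of a metadata catalog that registers dataset profiles and lets consumers find and reuse them. The zone names, attributes and lookup API are assumptions, not a reference to any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetProfile:
    name: str
    zone: str                 # e.g. "raw", "curated", "consumption" (assumed zones)
    owner: str
    pii: bool                 # drives access and masking policies
    tags: list[str] = field(default_factory=list)

class DataCatalog:
    """Tiny in-memory catalog: register datasets once, find and reuse them later."""
    def __init__(self) -> None:
        self._entries: dict[str, DatasetProfile] = {}

    def register(self, profile: DatasetProfile) -> None:
        if profile.name in self._entries:
            raise ValueError(f"{profile.name} already cataloged - reuse it instead")
        self._entries[profile.name] = profile

    def find(self, tag: str) -> list[DatasetProfile]:
        return [p for p in self._entries.values() if tag in p.tags]

catalog = DataCatalog()
catalog.register(DatasetProfile("sales_daily", "curated", "finance", pii=False,
                                tags=["kpi", "sales"]))
catalog.register(DatasetProfile("customer_master", "curated", "crm", pii=True,
                                tags=["customer"]))
print([p.name for p in catalog.find("kpi")])   # -> ['sales_daily']
```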

This robust foundation also creates an ideal base for the data marketplace. The marketplace enables businesses to find and access the right datasets, which are already processed and compliant within the data lake, to build specific use cases and solutions faster.

In a data-led world, where businesses need to design, develop and deploy thousands of data pipelines, manually working on these is not realistic. This is what makes the foundation layer so invaluable to businesses—all of these functionalities are repeatable, reusable and automated.

Put simply, the foundation layer streamlines access to data and insights across the whole enterprise now and into the future. There are seven key components to the foundation layer that enable it to do all of this:

  1. Governance process to provide the right workflow management, onboarding of data and new users, and pre-ingestion capabilities.
  2. Rules engine to provide an intelligent and automated way to implement data transformation, data standardization, compliance and regulation, and data augmentation rules.
  3. Compliance engine to execute policies and apply actionable rules, ensuring data is protected and compliant before it is shared with external parties.
  4. Data trust engine to ensure that all data quality, veracity and trust attributes are captured and recorded as KPIs, producing more meaningful insights over time.
  5. Centralized metadata to build profiles and catalog data, powering the rules engine, compliance engine and data trust engine to connect the dots in the pipeline. This is central to the data lake solution.
  6. Policy management to intelligently centralize, manage and execute the policies needed for the data lake concerning, for example, data classification, security and management.
  7. Automated feature engineering to help find the right model for the contextual use case, accelerating development of different consumption patterns.
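To show how these components interact, here is a heavily simplified sketch of a compliance engine driven by centralized metadata: per-field rules determine whether values are kept, masked or dropped before data is shared. The rule set and record fields are invented for illustration.

```python
# Hypothetical metadata-driven rules: each rule names a field and the action
# the compliance engine should apply before data is published or shared.
METADATA_RULES = {
    "email":       {"classification": "pii",      "action": "mask"},
    "national_id": {"classification": "pii",      "action": "drop"},
    "revenue":     {"classification": "internal", "action": "keep"},
}

def apply_compliance(record: dict) -> dict:
    """Apply per-field actions so the output is safe to share onward."""
    cleaned = {}
    for field_name, value in record.items():
        rule = METADATA_RULES.get(field_name, {"action": "keep"})
        if rule["action"] == "drop":
            continue
        if rule["action"] == "mask":
            cleaned[field_name] = "***"
        else:
            cleaned[field_name] = value
    return cleaned

print(apply_compliance({"email": "a@b.com", "national_id": "12345", "revenue": 10_000}))
# -> {'email': '***', 'revenue': 10000}
```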

Design a flexible data solution

Before deciding to build their own flexible data consumption solution, enterprises need to consider several things. First, they need to identify who in their organization is using data frequently and what needs they might have to make their data consumption easier. Answering the following questions can help with that:

  • What are the business use cases, and which individual consumption cases take priority? Aligning to existing solutions and prioritizing opportunities will help develop a roadmap for implementing the data pipeline at scale.
  • What is the most frequently accessed and time-relevant data in your business? The data that the business turns to most often is usually valuable data that could hold important insights.
  • What are the current data repositories, and do they meet business needs? Understanding this will help to identify where needs are being met, and which gaps need to be filled.

Having decided where the data solution can be applied, businesses must then identify the right machine intelligence models, design and build the access layer, choose the technology stack for their data and research the optimal deployment methods.
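One way to think about the access layer is as a mapping from consumption pattern to the datasets and interfaces exposed to that user type, as in the illustrative sketch below. The pattern names follow the user types discussed earlier, while the dataset and interface names are placeholders rather than a prescribed stack.

```python
# Illustrative mapping from consumption pattern to what the access layer exposes.
ACCESS_PATTERNS = {
    "viewer":            {"datasets": ["curated_kpis"],                   "interface": "dashboard"},
    "navigator":         {"datasets": ["curated_kpis", "operational"],    "interface": "self_service_bi"},
    "technical_analyst": {"datasets": ["operational", "metadata"],        "interface": "notebook"},
    "data_scientist":    {"datasets": ["raw", "operational", "metadata"], "interface": "sandbox"},
    "machine":           {"datasets": ["streaming", "historical"],        "interface": "api"},
}

def resolve_access(user_type: str) -> dict:
    """Look up what a given consumption pattern should be able to reach."""
    try:
        return ACCESS_PATTERNS[user_type]
    except KeyError as exc:
        raise ValueError(f"No consumption pattern defined for {user_type!r}") from exc

print(resolve_access("navigator"))
```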

Embracing this approach to flexible data consumption gives businesses a unique competitive advantage when it comes to generating insights and unlocking new growth opportunities. This, in turn, empowers users across the entire business, marking the shift from an era of BI reporting and analytics to a realm of AI-powered enterprise intelligence.

Atish Ray

Managing Director – Applied Intelligence


Ekpe Okorafor, PhD

Senior Principal – Applied Intelligence


Prasanna Padihari

Senior Manager – Applied Intelligence
