Data science and analytics have become a key competitive advantage—providing new insights to support company growth and operational efficiency. Yet many communications, media and technology companies struggle to balance data quality with scalability and speed.
Eighty percent of the organizations Accenture talked to in a 2017 study are concerned with digital disruption1—and are starting to note that their back office is not quite keeping up with their front office. The front office demands data to fuel decision-making, but the back office is not prepared for the onslaught. When speed and deadlines trump the quality of data, the result is usually inaccurate insights.
Paying the price for bad data
“Inaccurate insights” probably sounds more benign in theory than it is in practice. Poor data quality cost organizations an average of $15 million in 2017.2 Let’s paint a picture of how that plays out in a real-world scenario. A communications and technology company’s low-quality data led to financial reporting errors—and to decision-making based on flawed numbers. Ultimately, not all transactions could be cleaned and reconciled, leading to millions of dollars in inventory write-offs.
A flaw like this one at an early point in the data life cycle causes a cascading effect. What starts as a “small” data problem becomes a large one because of its potential to misdirect important decisions. Errors in underlying data integrity will impact every dashboard and model built on top of that incorrect information. It is critical to implement controls at the point of data input to avoid this scenario.
Had this company prioritized a reliable data life cycle governance structure, the story would have had a far happier and more productive ending. But it’s not alone in needing to get data governance right. Approximately 85 percent of telecommunications executives report that their organizations are increasingly using data to drive critical and automated decision-making.3
We commonly see key problem areas such as:
- Ingestion and content: Bad data collection, inadequate quality checks, and lack of system integration.
- Architecture and storage: Errors in database setup and storage processes result in unusable or mismatched data, such as missing customer IDs or unreliable provisioning and billing records.
- Model & reporting risk: Analytic research and reporting conducted on suspect data will lead to untrustworthy operational and strategic decisions.
The risks of not addressing these areas are high, moving beyond monetary cost to adverse impacts on customers, reduced operational efficiency, delays and rework.
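To make the ingestion problem area concrete, here is a minimal sketch of what a quality gate at the point of data input might look like. This is an illustrative example, not the company's actual controls; the field names (customer_id, amount, billing_area) and rules are hypothetical:

```python
# Minimal sketch of an ingestion-time data quality gate.
# Field names and validation rules are hypothetical examples.

def validate_record(record):
    """Return a list of quality issues found in one inbound record."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if record.get("amount") is not None and record["amount"] < 0:
        issues.append("negative amount")
    if record.get("billing_area") is None:
        issues.append("missing billing_area")
    return issues

def quality_gate(records):
    """Split inbound records into clean rows and rejects with reasons."""
    clean, rejects = [], []
    for r in records:
        problems = validate_record(r)
        if problems:
            rejects.append({"record": r, "issues": problems})
        else:
            clean.append(r)
    return clean, rejects
```

Rejected records are quarantined with their reasons rather than silently dropped, so the back office can remediate them before they distort downstream dashboards and models.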
Perfection versus reality
While it is rare to find the perfect data scenario, insisting on certain minimum criteria will help your organization on its data journey. At a bare minimum, companies need:
- Reliable customer and financial data
- Accurate inventory transaction history (e.g., shipments, provisioning, activations), ensuring inventory is properly tracked and managed
To achieve this, data must be transparent and sufficiently accurate to avoid material distortion of analytic models. It also should not have biases that could lead to incorrect decision-making or skewed reporting (e.g., incomplete information on a particular billing area).