Why a data veracity catastrophe could make the Facebook scandal seem tame

Revelations that data belonging to 87 million Facebook users was improperly shared with Cambridge Analytica have focused the corporate world on data protection and privacy compliance. Initial reports of the scandal on 19 March triggered a damaging media frenzy for the tech giant. The news went viral, culminating in the #DeleteFacebook protest on Twitter.

As a result, everyone – from consumers to regulators – is paying attention to the issue. No wonder data privacy has leapt to the top of the board’s agenda.

Tip of the iceberg
Yet, while avoiding a data use scandal is clearly a priority, it is just one of a plethora of new, mission-critical risks associated with the data on which companies now depend.

Today, business is more data-driven than ever, with companies increasingly relying on insights from data to make strategic and often automated decisions, identify new business opportunities and predict future trends. Done well, the competitive advantages can be game-changing, but data-driven businesses are also vulnerable to a new kind of risk: inaccurate, manipulated or biased data. Poor-quality data leads to inaccurate business insights, which in turn lead to potentially catastrophic decisions.

The cost of poor data veracity can run into the billions. Recently, United Airlines realised that inaccurate data was contributing to $1 billion a year in missed revenue. Its seating demand forecasts were based on decades-old assumptions about flying habits, resulting in inaccurate pricing models.

I believe this insidious issue is more widespread than many executives imagine. A recent study estimated that 97% of business decisions are made using data that the company’s own managers consider of unacceptable quality.

Garbage in, garbage out
This old adage still holds true. Businesses are spending heavily to determine what they can get out of data-driven insights and technologies, but they also need to invest in the quality of the inputs to these tools. Even the most advanced analytics and forecasting systems are only as good as the data they are given to crunch.

Left unchecked, the potential harm from bad data becomes an enterprise-level existential threat. Already, 82% of executives say their organisations are increasingly relying on data to drive critical and automated decision-making on an unprecedented scale. As organisations push toward fully autonomous decision-making and leverage technologies like artificial intelligence to front their brand, the risks around poor data veracity will only grow.

Three steps to data veracity
As a matter of urgency, organisations need to address this new vulnerability by building confidence in three areas:

  1. Provenance – It’s vital to verify the history of data from its origin throughout its life cycle, starting with the critical question: “Where does this data come from – and can we trust it?”

    Amazon faced this issue when product reviews on its website became subject to data manipulation. Third-party sellers were paying people to submit fake reviews to artificially inflate their product and seller ratings. In response, the retail giant now gives more weight to verified reviews from customers who have actually purchased the item from Amazon. The company also established an invitation-only incentivised review programme, banning reviews from people who received free or discounted products outside its curated process.

  2. Context – Many companies are filling up enormous lakes of data for a single purpose, without any consideration of how else they can use this information.

    For example, every single retail company has terabytes of security video footage. Yet only a small fraction of that imagery is ever viewed or analysed. The only time anyone looks at the footage is after an incident occurs. However, security cameras don’t just capture crimes; they track hours of customer behaviour. Retailers should be mining that data to improve the customer experience, store layout and product placement.

    Organisations also inadvertently introduce bias into their decisions by failing to consider the context of their data. One US company collected data on its best performers so that recruiters could target similar employees. But the algorithm created to find these people narrowed the search to a particular zip code, largely populated by white, middle-class residents – introducing unwanted bias into the company’s hiring policy.

  3. Integrity – Once data has been verified and collected for the right reasons, it must be secured and maintained using data science and cyber-security capabilities to protect it from manipulation.

Build a data intelligence practice
Four in five executives (79%) agree that organisations are basing their most critical systems and strategies on data, yet many have not invested in the capabilities to verify the truth within it. To mitigate this extraordinary risk, organisations need a “data intelligence” practice whose job it is to grade the accuracy of the data by establishing, implementing, and enforcing standards for data provenance, context, and integrity. By investing in this capability, companies will generate more value from their data, and build a strong foundation for the success of other digital transformation initiatives.
