Many insurance companies are accustomed to dealing with data quality issues, but the larger concept of data management can be more difficult to address, due to the amount of data and the number of processes to assess and monitor.
Accenture has developed a four-step methodology to help insurers improve their data reliability. The report outlines these steps and considers elements from three major Solvency II consultation papers (CP 33, 43 and 60), cross-referencing them to derive directives on data quality, especially for technical provisions, model design and computations.
With Solvency II, many insurers view data quality primarily as a regulatory issue, especially in light of the explicit link between data quality and capital requirements. However, investing in data management can also enhance decision making: more detailed and accurate data improves the organization's ability to understand the company's "health" and to identify the levers for managing essential activities and, ultimately, financial results.
Many insurers are seeking ways to address the Solvency II requirements related to data quality while obtaining real business value from their investments. The Solvency II directive is generic and offers little in the way of specific recommendations. The question, therefore, is how to transform the data required for Solvency II into a reliable source of information for decision making.
Three advice papers from the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS) for Level 2 Implementing Measures on Solvency II (System of Governance, Standard for Data Quality and Group Solvency Assessment) were used as the sources for compiling this document.
Uncertainty over data quality can have real costs for insurers. Such uncertainty can lead to over-coverage of risks, resulting in the additional consumption of capital. In many cases, insurers have added new applications to legacy systems, creating complex, multi-layered and redundant IT architectures. This can lead to data duplication and less-than-optimal processes, raising concerns around data quality and making it difficult for the organization to get a broad view of data.
Specific problems include:
The use of semi-automated solutions, which creates the risk of erroneous processing and incomplete traceability from sources to results, and can force users to capture the same data more than once within the systems.
The lack of unique, shared data definitions and of strong quality controls on data flows and data sources, covering not only internal data but also external data provided by third parties such as brokers or other business partners, including asset managers and reinsurers.
The lack of common repositories for master data, which can make data matching difficult when performing consistency controls (for example, between accounting and actuarial data); a sketch of such a control follows this list.
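As a minimal illustration of the kind of consistency control mentioned above, the Python sketch below reconciles premium figures between a hypothetical accounting extract and an actuarial extract, flagging unmatched policies and diverging amounts. The record layout, field names and tolerance threshold are illustrative assumptions, not prescriptions from the Solvency II texts.

```python
# Minimal sketch of a consistency control between accounting and actuarial
# figures, matched on a shared policy identifier. All values, field names
# and the tolerance threshold are illustrative assumptions.

TOLERANCE = 0.01  # maximum accepted relative difference (1%)

accounting = {  # policy_id -> premium booked in the accounting system
    "POL-001": 1200.00,
    "POL-002": 850.00,
    "POL-003": 430.00,
}

actuarial = {  # policy_id -> premium used in the actuarial models
    "POL-001": 1200.00,
    "POL-002": 910.00,  # diverges from the accounting figure
    "POL-004": 300.00,  # missing from the accounting extract
}

def reconcile(book, model, tolerance=TOLERANCE):
    """Return identifiers present on only one side, plus matched pairs
    whose values differ by more than the relative tolerance."""
    unmatched = sorted(set(book) ^ set(model))
    diverging = []
    for pid in set(book) & set(model):
        a, b = book[pid], model[pid]
        if abs(a - b) > tolerance * max(abs(a), abs(b)):
            diverging.append((pid, a, b))
    return unmatched, diverging

unmatched, diverging = reconcile(accounting, actuarial)
print("Unmatched policies:", unmatched)   # ['POL-003', 'POL-004']
print("Diverging premiums:", diverging)   # [('POL-002', 850.0, 910.0)]
```

In practice such controls would run on full data extracts and feed exception reports, but the matching-plus-tolerance pattern is the core of the check.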
Due to these and other problems, insurance companies are under considerable pressure to address the data quality issues arising from their non-integrated systems and processes.
In the area of risk and finance, insurers often are called upon to coordinate data initiatives related to projects such as local GAAP (Generally Accepted Accounting Principles), International Financial Reporting Standards (IFRS), and MCEV (Market Consistent Embedded Value), as well as Solvency II. Accenture has developed a four-step methodology to help insurers improve their data reliability:
Identify the data necessary for Solvency II: the starting point is identifying the needs in terms of data, producing a detailed description of the items to be collected and mapping any relations between the different items.
Locate the data throughout the system and identify gaps: the insurer locates the data throughout its functional architecture, identifying source systems, their links (for example, the types of interfaces) and dependencies, data feed frequencies and the existing controls on the data.
Define data quality standards through controls and key performance indicators (KPIs): the Solvency II consultation papers define data quality management as a continuous process designed to assess the accuracy, completeness and appropriateness of the data; a sketch of such indicators follows this list.
Define a data governance structure: it can be beneficial to plan and sponsor the implementation of data governance at specified levels of the company, with involvement from both business and IT resources.
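As a sketch of the third step, the example below computes simple completeness and appropriateness indicators over a handful of hypothetical policy records; accuracy against source systems would resemble the reconciliation sketch shown earlier. The field names, validation rules and target thresholds are assumptions for illustration only, not regulatory definitions.

```python
from datetime import date

# Hypothetical policy records; field names and rules are illustrative
# assumptions, not regulatory definitions.
records = [
    {"policy_id": "POL-001", "premium": 1200.00, "inception": date(2011, 3, 1)},
    {"policy_id": "POL-002", "premium": None,    "inception": date(2010, 7, 15)},
    {"policy_id": "POL-003", "premium": -50.00,  "inception": date(2012, 1, 9)},
]

def completeness(records, field):
    """Share of records in which the field is populated."""
    return sum(1 for r in records if r.get(field) is not None) / len(records)

def appropriateness(records):
    """Share of records passing basic business-rule checks
    (positive premium, inception date not in the future)."""
    def valid(r):
        return (r.get("premium") is not None and r["premium"] > 0
                and r.get("inception") is not None
                and r["inception"] <= date.today())
    return sum(1 for r in records if valid(r)) / len(records)

# KPIs reported against illustrative target thresholds.
print(f"Premium completeness: {completeness(records, 'premium'):.0%} (target 100%)")
print(f"Appropriateness:      {appropriateness(records):.0%} (target 95%)")
```

Indicators like these can be tracked per data source and per feed, turning the "continuous process" the consultation papers describe into measurable, reportable figures.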
E. Jeanne is an executive director, Risk Management, based in Paris. Jeanne specializes in risk management and finance for the insurance industry, with a focus on enterprise risk management frameworks, Solvency II, and risk and finance architecture. Jeanne has been with Accenture for more than 15 years. Leading large transformation projects at major insurance and reinsurance companies, Jeanne helps clients transform their risk capabilities and finance processes.
P. Commets is a senior manager, Risk Management, based in Paris. Commets specializes in business intelligence and data management. For the last 20 years he has worked across industries, helping leading firms define and implement their data management architectures. Recently, Commets has focused on guiding, preparing and supporting insurance companies in their efforts to meet Solvency II requirements and become high-performance businesses.
F. Dhinaun is a manager, Risk Management, based in Paris. Specializing in Solvency II data matters, Dhinaun works with forward-thinking multinational insurance companies to assess and improve the quality of their Solvency II data, with a focus on defining a functional architecture and implementing effective data governance within the organization for enhanced risk management capabilities.
November 22, 2012