The Act will impose substantial fines for non-compliance. For prohibited AI practices, such as real-time remote biometric identification in public spaces, fines could reach as high as €35 million or 7% of global annual turnover, whichever is higher.
It will have critical implications for all multinational organizations. Why? Because the Act's reach extends beyond Europe's borders. Any organization that develops or deploys AI systems in the EU, or whose AI outputs are used there, will need to comply, wherever it is headquartered.
We also know that numerous critical AI systems currently being used in industries like banking, insurance, healthcare, life sciences, and many more will be affected. These systems will be classified as "high risk" under the Act and will attract a raft of new legal obligations, as well as hefty penalties for non-compliance.
What are the new obligations?
The Act categorizes AI systems into four tiers—minimal/no risk, limited risk, high risk and unacceptable risk—with intensifying obligations and penalties as you move up the scale.
For example, systems that perform narrow procedural tasks, such as email spam filters, will typically fall into the minimal-risk tier. But practices like social scoring and real-time remote biometric identification (including live facial recognition in public spaces) will be considered an unacceptable risk and banned outright.
Companies developing high-risk AI systems will need to proactively assess and prepare to comply with their new obligations. These may include fundamental rights impact assessments, conformity assessments, operational monitoring, risk and quality management systems, public registration, and other transparency requirements.
Companies deploying AI systems developed by external vendors will need to monitor those systems in line with developer instructions. They'll also have additional obligations if they're deploying AI in critical areas like public services or finance.
Penalties for breaching these obligations are also substantial: failing to conduct adequate risk assessments, for example, could carry a fine of up to €15 million or 3% of global annual turnover.
While 96% of organizations support some level of government regulation of AI, only 2% of companies say they have fully operationalized Responsible AI across their organization.
Multinational organizations have no time to lose in getting ready. Unacceptable-risk systems will be banned just six months after the Act enters into force in August 2024, a ban that could catch many important systems that companies are currently using.
An example? Take emotion recognition. Emotion recognition in workplaces and educational settings is banned under the Act as an unacceptable risk, unless it's necessary for safety reasons (such as systems that help airline pilots stay awake).
If your business is using an AI-based emotion recognition system to help employees do their work, you have only until the end of January 2025 to ensure compliance.
We are helping clients look at the Act through the broader lens of their reinvention agenda, with a particular focus on the industry-specific implications of the Act.
Arnab Chakraborty / Chief Responsible AI Officer
What to do
Do you have a set of principles to govern your AI in line with the obligations the EU AI Act puts in place?
Have you assessed where AI is being developed and deployed within your company? And the potential risk of those various use cases?
Have you established capabilities for testing your AI systems—for fairness, explainability, transparency, accuracy, safety and so on? (A minimal sketch of one such test follows this list.)
Do you have the right monitoring and oversight in place for your AI systems and Responsible AI initiatives, with the ability to execute mitigation and compliance actions?
Have you engaged leadership across your business functions to establish collective responsibility, and to address factors such as workforce impact, legal compliance, sustainability, and privacy/security programs?
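To make the testing question above more concrete, here is a minimal sketch of one automated fairness check: the demographic parity gap between groups' positive-prediction rates. The choice of metric, the tolerance threshold, and the data are illustrative assumptions on our part; the Act itself does not prescribe any particular test.

```python
# Minimal sketch of one automated fairness check: the demographic
# parity gap between groups' positive-prediction rates.
# The metric, tolerance, and data below are illustrative assumptions,
# not tests prescribed by the EU AI Act.

def demographic_parity_gap(predictions, groups, positive=1):
    """Largest gap in positive-prediction rates across groups."""
    rates = []
    for group in set(groups):
        group_preds = [p for p, g in zip(predictions, groups) if g == group]
        rates.append(sum(1 for p in group_preds if p == positive) / len(group_preds))
    return max(rates) - min(rates)

# Hypothetical outputs from a loan-approval model, split by group.
preds = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

TOLERANCE = 0.2  # illustrative internal threshold, not a legal standard
gap = demographic_parity_gap(preds, groups)
status = "within" if gap <= TOLERANCE else "outside"
print(f"Demographic parity gap: {gap:.2f} ({status} tolerance of {TOLERANCE})")
```

In practice, a check like this would run continuously as part of a broader monitoring pipeline, alongside tests for accuracy, robustness, and explainability.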