The EU AI Act, published in the Official Journal of the EU in July 2024, introduces the first comprehensive rules on AI development and use across sectors to protect citizens’ rights. The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can help solve many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes.
The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) provides AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).
The AI Act is the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.
The AI Act entered into force on August 1, 2024, and will be fully applicable two years later, with some exceptions: prohibitions take effect after six months, the governance rules and the obligations for general-purpose AI models become applicable after 12 months, and the rules for AI systems embedded into regulated products apply after 36 months.
Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.
The Act’s prohibitions apply from February 2025, restricting certain practices outright, with conditions for authorised applications phased in afterwards. Non-compliance can result in substantial fines of up to 7% of global annual turnover; for a group with €10 billion in annual turnover, for example, that ceiling would be €700 million.
The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. Together, these measures guarantee the safety and fundamental rights of people and businesses when it comes to AI. They also strengthen uptake, investment and innovation in AI across the EU.
The AI Act is the first-ever comprehensive legal framework on AI worldwide. The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.
The EU AI Act and its new rules
- address risks specifically created by AI applications
- prohibit AI practices that pose unacceptable risks
- determine a list of high-risk applications
- set clear requirements for AI systems for high-risk applications
- define specific obligations for deployers and providers of high-risk AI applications
- require a conformity assessment before a given AI system is put into service or placed on the market
- put enforcement in place after a given AI system is placed on the market
- establish a governance structure at European and national level
Limited risk refers to the risks associated with lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust.
For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.
Providers also have to ensure that AI-generated content is identifiable (see How Generative AI Impacts the Cyber Insurance Landscape). In addition, AI-generated text published to inform the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes.
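As a minimal sketch of what this can look like in practice (the schema and names below are hypothetical, not prescribed by the Act), a deployer might persist an AI-origin flag with every chatbot reply and render a visible notice:

```python
from dataclasses import dataclass

@dataclass
class BotMessage:
    """A chatbot reply carrying an explicit AI-origin flag (hypothetical schema)."""
    text: str
    ai_generated: bool = True  # persisted with the content so it stays identifiable

def render(msg: BotMessage) -> str:
    # Prepend a human-readable notice so users know they are talking to a machine
    notice = "[AI assistant] " if msg.ai_generated else ""
    return notice + msg.text

print(render(BotMessage("Your claim has been received and is under review.")))
```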
The Regulatory Framework defines 4 levels of risk for AI systems
All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour, according to Global Emerging Risks, Insurance & Macro Trends Outlook.
Impact on Insurance Use Cases
The AI Act affects the insurance industry through four risk-based classifications, made concrete in the sketch after this list:
- Minimal risk: AI with a limited role, such as document classification or search, faces no special requirements.
- Limited risk: AI used in customer chatbots or fraud detection requires documentation and user awareness of AI usage, ensuring decisions are explainable for users and regulators.
- High risk: Core insurance applications, such as underwriting, pricing, claims processing, training, and recruitment, must adapt risk and quality management, ensure human oversight, document AI processes for compliance, and establish AI robustness, security, and transparency.
- Unacceptable risk: Certain social scoring and biometric data uses are outright banned.
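One way to make these tiers operational is a simple ordered taxonomy with a lookup from use case to tier. A minimal sketch in Python; the mapping merely restates the examples above and is illustrative, not a legal determination:

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The AI Act's four tiers, ordered so a higher value means stricter obligations."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

# Illustrative mapping of the insurance examples above, not a legal determination
USE_CASE_TIER = {
    "document_classification": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "fraud_detection": RiskTier.LIMITED,
    "underwriting": RiskTier.HIGH,
    "claims_processing": RiskTier.HIGH,
    "recruitment": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

print(USE_CASE_TIER["underwriting"].name)  # HIGH
```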
The AI Act introduces further complexities for insurance leaders
Insurance use cases that cross into multiple markets, functions, or data types must adhere to the highest applicable risk category outlined in the EU AI Act. For example, applying voice recognition for fraud detection is classified as high risk due to the biometric data involved.
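In other words, the rule is a max over every classification a use case triggers. A minimal sketch, reusing the ordered RiskTier from the previous example:

```python
from enum import IntEnum

class RiskTier(IntEnum):  # as in the previous sketch
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

def applicable_tier(triggered: list[RiskTier]) -> RiskTier:
    """Return the strictest tier among all classifications a use case triggers."""
    return max(triggered)

# Voice recognition for fraud detection: fraud detection alone is limited risk,
# but the biometric (voice) data involved triggers the high-risk classification.
print(applicable_tier([RiskTier.LIMITED, RiskTier.HIGH]).name)  # HIGH
```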
Most insurers will rely on external large language models, such as OpenAI’s ChatGPT, with responsibility for model-level compliance resting on the developers of these models.
This arrangement influences insurers’ AI procurement and partner evaluation processes.
The EU AI Act also overlaps with existing regulations, including Solvency II, DORA, CPC, IAF, and GDPR, which may cover some of the Act’s requirements. Further, the Act’s reach extends beyond Europe, applying to any non-European insurers whose AI impacts EU citizens.
Insurers should regularly review their AI use cases
To comply, insurers should regularly review their AI use cases to identify and close compliance gaps.
Key steps include:
- Establish a Multidisciplinary AI Governance Team
- Upskill Data Science and Engineering Teams
- Use Standards as Compliance Guides
To ensure compliance with the EU AI Act, insurers should create a multidisciplinary AI governance team. This team should bring together specialists from business, compliance, data, AI, IT, and legal fields to address ambiguities in the Act, such as complex conditions on AI-assisted underwriting. With such expertise, the team can guide the organization through unclear areas, particularly those linked to data usage restrictions.
Equipping data science and engineering teams with advanced skills is also crucial. By developing expertise in tools and techniques necessary for building compliant AI, these teams can ensure systems meet legal standards.
They also need to stay vigilant about new security threats, such as model inversion and evasion attacks, which require specific measures to protect sensitive data and maintain model integrity.
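Concrete countermeasures depend on the attack; as one hypothetical illustration (the endpoint and thresholds below are assumptions, and this is not a complete defence), rate-limiting callers and coarsening model outputs raises the cost of inversion and extraction attempts:

```python
from collections import defaultdict

MAX_QUERIES_PER_DAY = 1000                       # illustrative per-client budget
_query_counts: defaultdict[str, int] = defaultdict(int)

def model_score(features: list[float]) -> float:
    # Stand-in for a real underwriting or fraud model
    return sum(features) / (len(features) or 1)

def guarded_predict(client_id: str, features: list[float]) -> str:
    """Wrap the model: rate-limit callers and return a coarse label instead of
    raw confidence scores, two common low-cost hardening steps."""
    _query_counts[client_id] += 1
    if _query_counts[client_id] > MAX_QUERIES_PER_DAY:
        raise PermissionError("query budget exceeded for " + client_id)
    return "refer" if model_score(features) > 0.5 else "accept"

print(guarded_predict("broker-42", [0.2, 0.9, 0.7]))  # refer
```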
Lastly, following standards, especially ISO/IEC 42001 on AI management systems, will provide insurers with a clear pathway to comply with the Act’s requirements in risk and quality management.
Adopting these standards simplifies the alignment process, offering a framework that supports reliable and transparent AI operations.
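A recurring review of this kind can be partly automated as a gap check: compare the controls a use case has implemented against those its risk tier requires. The control names below paraphrase the high-risk obligations discussed above and are illustrative:

```python
# Required controls per tier, paraphrasing the obligations above (illustrative)
REQUIRED_CONTROLS: dict[str, set[str]] = {
    "high": {"risk_management", "quality_management", "human_oversight",
             "technical_documentation", "robustness_and_security"},
    "limited": {"transparency_notice", "technical_documentation"},
    "minimal": set(),
}

def compliance_gaps(tier: str, implemented: set[str]) -> set[str]:
    """Return the required controls this use case has not yet implemented."""
    return REQUIRED_CONTROLS[tier] - implemented

# Example: an underwriting model with only documentation in place
print(sorted(compliance_gaps("high", {"technical_documentation"})))
```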
The AI Act offers insurers guidelines
The AI Act offers insurers guidelines that enhance trust and market adoption. It emphasizes transparency, reliability, and performance, making it a potential framework for responsibly and profitably integrating AI into the insurance industry.
All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.
Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.
Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.
FAQ
The EU AI Act, published in July 2024, establishes the first comprehensive legal framework regulating AI development and usage across sectors to protect citizens’ rights. It entered into force on August 1, 2024; prohibitions apply from February 2025, and the remaining provisions are phased in over 12 to 36 months, with full applicability in August 2026.
The Act classifies AI use cases by risk level, requiring insurers to follow specific guidelines for each. Minimal-risk applications face no extra requirements, while limited-risk cases, like chatbots, need transparency. High-risk uses, such as underwriting and claims, require documented compliance, human oversight, and robust risk management. Unacceptable-risk applications, such as social scoring, are banned.
Insurers must regularly review their AI use cases to identify and fix any compliance gaps. This involves forming multidisciplinary AI governance teams, upskilling data science and engineering staff, and adhering to standards like ISO 42001 for a structured compliance approach.
AI applications that span multiple markets, functions, or data types face stricter standards, with the highest applicable risk rating taking precedence. Voice recognition for fraud detection, for example, is high-risk due to biometric data usage.
The Act aligns with, and sometimes overlaps with, existing EU frameworks, including Solvency II, DORA, CPC, IAF, and GDPR, which may already cover some of the Act’s requirements. Non-European insurers must comply if their AI impacts EU citizens.
The AI Act mandates transparency to build trust. For example, people must be informed when they interact with AI, such as in chatbots. AI-generated content used in public interest must be labeled, and deep fakes in audio or video must also be clearly identified.
AUTHORS: Vlad Flamind – Lead Data Consultant at Zühlke, Peter Sonner – Lead Editor at Beinsure