AI Liability Directive: 8 Key Highlights and Impact on the European Insurance Market

    Insurance Europe has urged the European Commission (EC) to withdraw its proposed Artificial Intelligence Liability Directive (AILD), arguing that it could lead to legal uncertainty and hinder innovation.

    The insurance industry body has expressed concerns that the directive, as currently drafted, may increase compliance burdens for businesses and leave consumers confused about their rights (see How Does AI Technology Help Insurers?).

    While recognizing the EC’s intention to ensure fair compensation for AI-related damages, Insurance Europe believes the AILD introduces more problems than it solves.

    Insurance Europe notes that the directive’s overlap with other existing and upcoming regulations, such as the EU’s AI Act and the revised Product Liability Directive (PLD), will create a complicated and uncertain legal environment. Beinsure analyzed the report and highlighted the key points.

    8 Key Highlights of the Artificial Intelligence Liability Directive (AILD)
    1. Purpose: The AILD aims to harmonize liability rules across the EU to facilitate compensation for damages caused by AI systems. It introduces measures to make it easier for victims to claim compensation, particularly for damages related to high-risk AI systems.
    2. Legal Uncertainty: Concerns have been raised that the directive’s unclear scope and overlap with the EU’s AI Act and the revised Product Liability Directive (PLD) could create significant legal uncertainty for businesses and consumers.
    3. Scope and Overlap: The AILD applies to non-contractual, fault-based liability claims related to AI systems. However, it overlaps with the PLD, which covers product liability and has a no-fault approach, potentially creating confusion over which rules apply in different scenarios.
    4. Impact on Innovation: The directive places a higher evidentiary burden on AI providers and users. This could increase the risk of litigation, discourage innovation, and deter insurers from providing coverage, as the requirements may make AI-related operations more costly and risky.
    5. Mandatory Insurance: There is a provision for a review five years after transposition to assess the need for mandatory insurance. Concerns exist that mandatory insurance could be problematic for the diverse and evolving AI market, as it may stifle innovation and lead to higher premiums without necessary preconditions like market maturity.
    6. Disclosure of Evidence: The directive introduces rules for disclosing evidence in claims involving high-risk AI systems. Critics argue that vague evidence thresholds may lead to unnecessary litigation and discourage AI development.
    7. Rebuttable Presumption of Causality: AILD includes a provision to help victims establish a causal link between AI system failures and damages. This could make it harder for AI providers to defend themselves in court and increase legal costs.
    8. Concerns Over Litigation: The directive, combined with the revised PLD and the Representative Actions Directive, is expected to lead to a rise in litigation, including potential mass consumer claims, which may further increase costs and risks for AI companies and insurers.

    These highlights reflect both the intended goals of the AILD and the concerns raised by industry stakeholders about its potential impact on legal certainty, innovation, and the insurance landscape.

    Key Arguments Against the AILD

    1. Potential Legal Uncertainty

    The AILD framework may generate legal uncertainty rather than enhance consumer protection. The directive’s unclear scope, especially in relation to the AI Act and the revised PLD, could lead to a tangled regulatory landscape (see Global Landscape of Insurance Digital Transformation).

    Businesses may face overlapping and conflicting requirements, which would increase their compliance costs. At the same time, consumers might be unsure about how to pursue claims, leaving them confused about their rights in cases of AI-related damage.

    2. Negative Impact on Innovation

    The directive’s enhanced evidentiary requirements could stifle innovation. The proposed rules introduce vague thresholds for evidence, which could lead to a surge in litigation (see 10 Key Risks of the European Insurance Sector).

    This uncertainty might discourage AI providers and users from developing new technologies. Insurers, faced with a higher risk of lawsuits, may also become reluctant to provide coverage, or they may raise premiums significantly to compensate for the increased risk. This could further slow technological advancements in AI.

    3. Concerns Over Mandatory Insurance

If the AILD remains in place and its five-year review leans toward mandating insurance, Insurance Europe argues that it is crucial to preserve contractual freedom.

Insurance Europe argues that mandatory insurance only works in mature, homogeneous markets, which is not the case for AI.

    The AI sector’s varied and evolving nature makes it unsuitable for one-size-fits-all insurance mandates. Without the necessary preconditions—such as sufficient market data, a competitive insurance environment, and ample reinsurance capacity—mandatory insurance could do more harm than good.

    Issues with the Directive’s Scope

    The AILD aims to harmonize certain national, non-contractual, fault-based liability rules to aid compensation claims for AI-caused damages.

    However, Insurance Europe highlights several problems:

    • The revised PLD already extends product liability laws to software, AI, and digital processes. It broadens consumer rights and makes litigation easier, even allowing collective lawsuits. This shift will likely result in more claims, higher claim values, and greater legal costs.
    • The AILD’s scope is particularly problematic when dealing with non-high-risk AI systems. It’s unclear whether claims should fall under the no-fault regime of the PLD or the fault-based AILD, adding complexity for both producers and victims (see How Can AI Technology Change Insurance Claims Management?). The differing liability approaches could create an uneven playing field and confusion over compensation methods.
    • The directive’s obligations on AI providers and users seem to translate the AI Act’s requirements into a form of strict liability, even in cases where national laws do not define fault. This could lead to even stricter compliance demands, increasing operational burdens.

    Disclosure of Evidence and Causality Rules

    The AILD proposes stringent rules for disclosing evidence and a rebuttable presumption of causality. These measures could backfire:

    • The directive’s vague disclosure thresholds might set up a quasi-discovery process, spurring unnecessary or abusive litigation. AI providers, faced with higher risks, might become less innovative. Insurers may respond by either withdrawing coverage or significantly raising premiums, creating a chilling effect on AI development.
    • Presuming a causal link between AI system failures and damages could complicate liability cases. Courts may struggle with assessing causality, further exacerbating legal uncertainties. The increased risk of litigation could negatively impact the EU’s competitiveness and discourage investment in AI technologies.

    Future Evaluation and Recommendations

    The proposal includes a future review to determine whether stricter measures, such as mandatory insurance, are necessary. Insurance Europe stresses that mandatory schemes require a stable and predictable environment, which AI currently lacks. Without sufficient data, adequate competition, and interested insurers, such mandates could lead to higher premiums and reduced innovation.

    The insurance market would also struggle to accommodate the diverse range of AI applications under uniform terms.

    In summary, Insurance Europe calls for the AILD’s withdrawal or significant revision. If the directive is maintained, its scope should be restricted to high-risk AI systems and only to actual system failures.

    The focus should be on ensuring legal certainty, reducing red tape, and fostering an environment where AI innovation can thrive.

    Insurance Europe is the European insurance and reinsurance federation. Through its 37 member bodies — the national insurance associations — it represents all types and sizes of insurance and reinsurance undertakings.

    Insurance Europe, which is based in Brussels, represents undertakings that account for around 95% of total European premium income. Insurance makes a major contribution to Europe’s economic growth and development.

    European insurers pay out over €1 000bn annually — or €2.8bn a day — in claims, directly employ more than 920 000 people and invest over €10.6trn in the economy.

    FAQ

    Why is Insurance Europe urging the European Commission to withdraw the AI Liability Directive?

    Insurance Europe believes the AI Liability Directive would create significant legal uncertainty and hinder innovation. The insurance industry body argues that the directive, as currently drafted, may increase compliance burdens for businesses and confuse consumers about their rights.

    What are the main concerns raised by Insurance Europe regarding the Artificial Intelligence Liability Directive?

    The main concerns include the directive’s unclear scope, potential overlap with existing regulations (like the EU’s AI Act and the revised Product Liability Directive), and the increased litigation risk for AI providers. These issues could discourage innovation and deter insurers from offering coverage.

    How could the Artificial Intelligence Liability Directive affect AI innovation?

    The directive could stifle innovation by imposing higher evidentiary requirements on AI providers and users. Vague thresholds for evidence could lead to increased litigation, making AI development riskier and more expensive. This could, in turn, discourage companies from investing in new AI technologies.

    What is the issue with the directive’s approach to liability?

    The AILD overlaps with the Product Liability Directive (PLD), which follows a no-fault liability approach. In contrast, the AILD uses a fault-based system. This discrepancy could create confusion over which rules apply in different scenarios and lead to inconsistent levels of consumer protection.

    What are the concerns about mandatory insurance requirements?

    Insurance Europe argues that mandatory insurance should not be imposed on the AI sector, as the market is not mature or homogeneous enough for such measures. Mandatory insurance could lead to higher premiums, limit contractual freedom, and negatively impact innovation.

    What does Insurance Europe recommend for the AI Liability Directive?

    Insurance Europe calls for the AILD to be either withdrawn or significantly revised. If the directive is maintained, it should focus on high-risk AI systems and provide clear criteria to avoid legal uncertainty. The organization stresses the importance of fostering innovation and reducing regulatory complexity.