
Armilla AI expands standalone AI liability cover to $25mn limits


Armilla AI, a managing general agent (MGA) and Lloyd’s of London coverholder focused exclusively on AI liability insurance, has expanded capacity on its standalone AI Liability Policy, increasing limits to $25mn per insured and extending coverage across an all-risks framework that explicitly addresses generative AI and autonomous agent exposure.

The move comes as insurers across global markets tighten language or introduce exclusions for AI-related risk across cyber, E&O, and general liability programmes.

As generative AI adoption accelerates, enterprise risk managers face widening protection gaps within existing insurance towers. Traditional policies increasingly narrow coverage or defer interpretation when AI systems misadvise customers, generate incorrect outputs, or disclose sensitive information.

According to Beinsure, this shift places growing pressure on brokers to identify affirmative AI-specific solutions to preserve programme completeness, claims clarity, and governance defensibility.

Armilla’s policy is structured as primary coverage rather than a retrofit. It is not adapted from cyber, professional liability, or general liability wordings.

Instead, it is built around observed AI failure modes and is designed to respond ahead of conventional policies when losses arise from AI behaviour rather than human error.

Karthik Ramakrishnan, chief executive of Armilla AI, said most legacy insurance products were not designed for generative AI or autonomous agents, despite those systems now operating at scale across industries.

He said the expanded policy reflects two years of focused underwriting development aimed at giving risk managers a clearer path through a rapidly changing liability landscape.

“Most insurance policies weren’t designed for generative AI or AI agents. But companies are already deploying these systems at scale. After two years of focused underwriting development, we believe our expanded policy gives risk managers a clear path forward.”

Karthik Ramakrishnan, CEO of Armilla AI

The revised policy broadens scope beyond legacy structures by incorporating traditional general liability elements while introducing explicit coverage for risks now excluded, sub-limited, or ambiguously treated elsewhere.

These include financial loss tied to AI model error, hallucination, drift, or measurable underperformance, as well as claims arising from harmful or misleading outputs such as defamation, trade secret exposure, and confidentiality breaches.

Coverage also extends to failures of AI agents, including incorrect decision-making, improper tool use, and escalation errors.

Non-breach privacy incidents and unintended data leakage through AI outputs are addressed, alongside AI-driven third-party property damage caused by automated or generative systems.

The policy further includes defence costs and insurable fines linked to investigations under emerging AI regulation, including the EU AI Act and Colorado AI Act.

Armilla became the first Lloyd’s coverholder dedicated exclusively to AI liability in 2024 following participation in Lloyd’s Lab.

The company launched its standalone AI liability product in April 2025 and now serves clients ranging from AI scale-ups to large technology platforms and Fortune 1000 enterprises embedding generative or agentic AI into core operations.

Each policy includes independent AI system certification and risk reporting informed by more than 500 AI evaluations conducted across regulated sectors.

Priority industries include financial services, healthcare, human resources, telecommunications, retail, professional services, and customer support, where AI-driven decision-making increasingly carries direct financial and regulatory exposure.

Armilla AI operates as a managing general agent backed by A-rated global insurers, providing AI performance warranties and standalone AI liability insurance designed to support secure deployment as regulatory scrutiny and policy exclusions around AI continue to expand.