Insurers push into AI-agent failures as demand surges for new coverage

The insurance sector, a trade that’s been around long enough to watch entire technologies rise and crash, wants to shape the next wave.

A handful of carriers – some small, some global – now sell coverage built for AI agents, those autonomous systems that answer support tickets, screen job applicants, plan travel or, occasionally, go off the rails.

They’re chasing a young market that feels wide open, and they’re also hoping to force some structure onto tech that still behaves more like a moving experiment than a finished tool, NBC reports.

Michael von Gablenz runs the AI insurance arm at Munich Re and likes to draw a line back to the safety belt. Insurance pressure helped push its adoption, he says, and there’s no reason insurers can’t influence AI in the same way.

Maybe that sounds ambitious, but the failures of generative AI are already messy enough to make his argument credible.

The headlines bounce between courtroom fights, personal tragedies and corporate fallout, and users get stuck choosing between trusting vendors or waiting for governments to catch up.

Some insurers want in. Plenty still avoid AI exposures altogether, and others fence them off with exclusions, but a few now offer policies that pay out when an AI agent misbehaves.

They’re betting history repeats itself: when insurance enters a new space, it often nudges companies toward safer behaviour because nobody wants to pay claims on preventable disasters.

Rajiv Dattani, who co-founded the Artificial Intelligence Underwriting Company, says voluntary promises won’t cut it. Insurance, in his view, works as a third-party check that doesn’t rely on regulators to set every rule.

He argues that carriers, because they’re the ones paying when things break, have a built-in reason to track how failures happen, how often, how severe and what practices actually reduce damage.

We think he’s right that insurers, maybe reluctantly, will end up funding a chunk of the research.

The risks run all over the map: data leaks, jailbreaks, disinformation, hallucinations, discriminatory decisioning, reputational hits and, in the worst cases, self-harm prompts, Beinsure noted.

More than 90% of companies want insurance that covers generative AI failures, according to the Geneva Association. The snag is that insurers can’t price what they can’t audit.

That’s why the AIUC launched AIUC-1, the first certification designed specifically for AI agents. It grades systems across security, safety, reliability, data and privacy, accountability and societal risks.

Companies can volunteer to get tested, and insurers can use the results to decide whether a model is even insurable.

Losses are happening now, Dattani says, and many carriers have already started excluding AI from existing policies. Someone needs skin in the game, so the pitch goes, and that’s where insurance steps in.

AIUC brought together a consortium of 50 backers from tech, industry groups and universities – everyone from Google to Meta to Anthropic – to help refine the standard.

The company is also building a self-harm assessment to gauge whether an AI system is likely to push dangerous content.

That work could later feed into underwriting models because pricing risk correctly is the only way an insurer stays solvent. Cristian Trout at AIUC says carriers want clarity, and right now the fog is thick.

AIUC sells policies that cover up to $50 mn in losses from hallucinations, IP infringement and data leakage. Dattani looks back to Benjamin Franklin’s fire-insurance experiments in the 1700s for inspiration, arguing that clear standards followed naturally once insurers demanded them.

Armilla, a Toronto-based AI risk firm, entered the insurance side of the business in April. Its coverage spans performance failures, legal exposure and financial losses tied to enterprise-scale AI usage.

AI adoption cuts across retail, manufacturing, banking – pretty much everywhere – so the calls for protection come from all directions.

Many traditional carriers exclude AI because they’re spooked by how unpredictable it still feels. Armilla hopes its technical work and model-assessment data can fill that void.

Ramakrishnan, Armilla's chief executive, expects AI insurance to split off into its own booming market. Deloitte pegs the potential at $4.8 bn by 2032, though he says that's conservative.

Munich Re saw this coming early, launching its AI insurance offering back in 2018. Demand accelerated once generative AI went mainstream.

Von Gablenz says their aiSure product mostly covers hallucinations right now, with more work under way to handle IP claims and other exposures. Because historical loss data barely exists, Munich Re leans heavily on direct model testing instead of traditional actuarial history.

Both he and Ramakrishnan compare the moment to the early days of cyber insurance. A tiny niche back then; a multibillion-dollar engine now. AI insurance, they say, feels like it’s walking the same road – faster, maybe, and with a lot more noise.