AIG, WR Berkley, and Great American have each asked regulators to approve new exclusions letting them deny claims tied to the use or integration of AI systems, whether that’s a chatbot, an agent, or something buried deep in a workflow, according to The Financial Times.
Major insurers are scrambling to wall off their exposure to AI failures as a string of expensive, very public mistakes pushes fears of systemic losses straight to the top of risk models.
Companies everywhere have rushed into generative tools, and the fallout has already been ugly. Google is staring at a $110 mn defamation suit after its AI Overview falsely claimed a solar company was facing action from a state attorney general.
Air Canada had to honor a discount its customer-service chatbot invented out of thin air. UK engineering firm Arup lost £20 mn after staff were fooled by a digitally cloned executive on a video call. Cases like these make it harder for insurers to draw clean lines around liability.
Mosaic Insurance told the FT that LLM outputs remain too unpredictable for traditional underwriting, calling the models a black box.
Even though Mosaic sells specialist coverage for AI-enhanced software, it still refuses to underwrite LLM-driven risks, including systems like ChatGPT.
Some of the proposed exclusions are sweeping. One version from WR Berkley would block claims tied to "any actual or alleged use" of AI, even if the technology is only a tiny part of a product. AIG told regulators it doesn't plan to apply its exclusions right away, but wants them ready as claim frequency and severity keep rising.
The worry isn’t just big single-company losses. It’s the nightmare scenario: one upstream model or vendor misfires, and suddenly a thousand insureds take hits at the same time.
Kevin Kalinich, who leads cyber at Aon, said the market can handle a $400 mn or $500 mn loss tied to one company’s agent. What it can’t take is a wave of correlated failures rolling through the entire system.
Some carriers have tried partial fixes with endorsements. QBE added one offering capped protection for fines under the EU AI Act, limited to 2.5% of the insured limit.
Chubb agreed to cover certain AI incidents while excluding anything capable of sparking widespread simultaneous damage.
Brokers say these endorsements need close reading: several look like added protection but actually narrow coverage.
As carriers and regulators redraw the boundaries, companies may discover the risk of deploying AI sits far more heavily on their own balance sheets than they expected.