The insurance sector is staring at a fast-moving cyber threat tied to Model Context Protocol (MCP) technology, the connective plumbing that lets AI models interact with an organization’s internal systems in real time, according to KYND.
MCP adoption keeps rising as firms bolt generative-AI tools onto everyday workflows, and KYND’s data shows the exposure spreading quietly through digital supply chains long before underwriters notice anything is off.
Andy Thomas, KYND’s CEO and founder, said the shift is happening faster than security frameworks can adjust. MCP gives AI engines direct hooks into tools, data and applications, which sounds great for productivity but also builds an attack surface that sits between systems.
As generative AI changes the way companies do business, it is creating new risks and new causes of loss that impact not only the companies themselves but also their business partners, such as third-party vendors, and their digital supply chains.
The analysis highlights AI’s potential to amplify systemic risks, such as through polymorphic malware or AI-targeted data breaches, while also providing a framework to quantify these emerging threats.
Leveraging the cyber kill chain model, the report underscores the urgency for insurers to adapt to AI-driven threats, balancing innovation with robust risk mitigation strategies.
Generative artificial intelligence is considered one of the most important technological breakthroughs of the last few decades. Munich Re Group sees great opportunities for insurers – if they explore the possibilities of the new technology and understand its risks.
There are many ways organizations could revolutionize their respective industries by applying Gen AI to routine business functions. For example, in insurance, Gen AI can assist underwriters in evaluating risks by analyzing vast amounts of data, including historical claims, customer information and internal/external cybersecurity factors.
A single crack in that MCP layer can hit multiple insureds at once, turning one misconfiguration into a portfolio-wide headache.
“The AI boom is happening fast, and security frameworks are still catching up. As MCP usage accelerates, with more companies adopting generative-AI solutions, MCP exposure is spreading quietly through digital supply chains.”

Andy Thomas, CEO and founder of KYND
Researchers have already logged MCP-related incidents where AI models were manipulated through overly broad permissions or sloppy access controls. Attackers can slip in queries that look legitimate and siphon sensitive data or alter records.
And when MCP infrastructure isn’t hardened, intruders can ride that link straight into connected systems, triggering leaks that insurers may not catch until the breach has spread.
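The mitigation implied here, scoping exactly which resources an AI agent’s tool connectors may touch, can be sketched in a few lines. Everything below is illustrative: the tool names, resource labels and policy layout are invented for this example and are not part of any real MCP implementation.

```python
# Minimal sketch of deny-by-default permission scoping for an AI agent's
# tool calls. All names (tools, resources, policy layout) are hypothetical.

ALLOWED = {
    # tool name      -> resources the agent may reach through that tool
    "read_claims_db": {"claims.summary", "claims.status"},
    "send_email":     set(),  # callable, but granted no data-source access
}

def authorize(tool: str, resource: str) -> bool:
    """Deny by default: a query that merely *looks* legitimate still fails
    unless the exact tool/resource pair has been allowlisted."""
    return resource in ALLOWED.get(tool, set())

# An overly broad grant is what enables data siphoning; scoping blocks it.
assert authorize("read_claims_db", "claims.status")          # in scope
assert not authorize("read_claims_db", "customers.pii")      # not allowlisted
assert not authorize("unknown_tool", "claims.status")        # unknown tool denied
```

The design point is the default: an unrecognized tool or resource fails closed, so a manipulated query cannot widen its own access.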
For underwriters, the challenge plays out at two levels. Individual organizations carry shifting MCP-driven exposures because the technology evolves faster than standard security reviews can track it.
At the same time, shared dependencies across vendors, cloud setups and AI integrations mean a single vulnerability can compromise multiple policyholders.
That’s the scenario that keeps cyber insurers uneasy: systemic exposure that mutates quicker than traditional models can price.
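That overlap can be made concrete with a toy dependency map. The insureds and vendors below are invented; the point is only that one compromised shared component fans out across a portfolio.

```python
# Toy portfolio: which policyholders are exposed if one shared vendor,
# cloud service or MCP integration is compromised? (All names invented.)

portfolio = {
    "InsuredA": {"CloudHostX", "MCPGatewayY"},
    "InsuredB": {"CloudHostX", "AnalyticsZ"},
    "InsuredC": {"MCPGatewayY"},
    "InsuredD": {"AnalyticsZ"},
}

def blast_radius(compromised: str) -> set[str]:
    """Every insured that depends on the compromised component."""
    return {client for client, deps in portfolio.items()
            if compromised in deps}

print(sorted(blast_radius("MCPGatewayY")))  # ['InsuredA', 'InsuredC']
```

Inverting the per-client view like this, from “what does each insured run?” to “who shares each component?”, is the dependency-spotting exercise the article describes.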
KYND suggests continuous portfolio monitoring, more granular data in risk selection, and sharper policy wording around AI-driven incidents.
Thomas said insurers need to watch not only the security posture of a single client but the overlapping technical links that bind portfolios together.
Spotting those dependencies early, he argued, may be the only way to keep MCP-related risks from turning into the market’s next blindside event.
Recognizing the potential exposure accumulation risk arising from AI, it is important for the (re)insurance industry to look ahead and forge an analytical pathway to measure the risk, while embracing the positive side of AI (see how Artificial Intelligence Promises to Revolutionize P&C Insurance Industry).
In partnership with CyberCube, a leading cyber risk modeling vendor, our study discusses a framework for systemic risk quantification, then investigates two counterfactual examples as blueprints for an AI-empowered cyber attack.
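To show mechanically what “systemic risk quantification” means, here is a generic frequency-severity Monte Carlo sketch with a common-shock term. This is not the Munich Re/CyberCube framework; the parameters and unit severity are arbitrary illustrations of how a rare shared-dependency failure fattens the tail of portfolio losses.

```python
import random

# Generic sketch (not the actual study framework): independent per-insured
# incidents, plus a rare systemic event that hits every insured at once,
# the way a shared MCP-style dependency failure would.

def simulate_annual_loss(n_insureds=100, p_independent=0.05,
                         p_systemic=0.01, severity=1.0, rng=random):
    loss = sum(severity for _ in range(n_insureds)
               if rng.random() < p_independent)
    if rng.random() < p_systemic:   # one event, portfolio-wide impact
        loss += severity * n_insureds
    return loss

random.seed(0)
trials = [simulate_annual_loss() for _ in range(10_000)]
mean = sum(trials) / len(trials)
tail = sorted(trials)[int(0.99 * len(trials))]  # rough 99th percentile
print(f"expected annual loss ~{mean:.1f}, 1-in-100-year loss ~{tail:.1f}")
```

Without the systemic term the tail stays close to the mean; with it, the 1-in-100-year outcome jumps toward the full portfolio limit, which is exactly the accumulation effect insurers struggle to price.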