Skip to content

Generative AI, Insurance Risks & Cyber Catastrophe Events

    Generative artificial intelligence (Gen AI) is no longer a futuristic concept. It is here, transforming systems to generate unique content, including text, images and even music. According to the report, Outlook on AI-driven Systemic Risks and Opportunities, Gen AI is undoubtedly transforming the way we work.

    In this report, Guy Carpenter and CyberCube explore the transformative impact of AI on cyber risk, detailing its dual role in enhancing both defensive and offensive cyber capabilities. Beinsure Media selected the most important points from the report.

    As generative AI changes the way companies do business, it is creating new risks and new causes of loss that impact not only the companies themselves but also their business partners such as third-party vendors and digital supply chains.

    The analysis highlights AI’s potential to amplify systemic risks, such as through polymorphic malware or AI-targeted data breaches, while also providing a framework to quantify these emerging threats.

    Leveraging the cyber kill chain model, the report underscores the urgency for insurers to adapt to AI-driven threats, balancing innovation with robust risk mitigation strategies.

    Generative artificial intelligence is considered one of the most important technological breakthroughs of the last few decades. Munich Re Group sees great opportunities for insurers – if they explore the possibilities of the new technology and understand its risks.

    There are many ways organizations could revolutionize their respective industries by applying Gen AI to routine business functions. For example, in insurance, Gen AI can assist underwriters evaluating risks by analyzing vast amounts of data, including historical claims, customer information and internal/external cybersecurity factors.

    By summarizing risk profiles, Gen AI can help underwriters develop the appropriate coverages and make more informed decisions quickly. However, artificial intelligence technology in insurance also presents new cybersecurity risks.

    While Gen AI can be used to improve operational efficiency, it also opens doors for malicious actors to exploit its capabilities for cyberattacks.

    AI deployment can lead to cyber aggregation risk

    AI deployment can lead to cyber aggregation risk

    4 new dynamics by which AI deployment can lead to cyber aggregation risk:

    1. AI as a software supply-chain threat. Organizations that deploy AI may seek third-party solutions such as ChatGPT, in which the compromise of the vendor model can become a single point of failure for all customers using the model (see How AI Technology Can Help Insurers).
    2. AI presents a new attack surface. Once AI is deployed, users can interact with the model. Whether it is a chatbot, a claims processing tool or a customized image analysis model, the model receives input and sends outputs. This process is subject to malicious and sometimes accidental manipulation.
    3. AI presents a data privacy threat. A model is only as good as the data on which it is trained. To train these models, they must be given access to relevant datasets–often large, sensitive datasets. A compromise to the centralized storage for these datasets can have dramatic downstream effects.
    4. AI in security roles. One of the highly touted use cases for AI is in cyber security operations, the type of procedures that require high-level privileges, such as those present in CrowdStrike’s recent faulty software update. With such critical response decisions given to AI, the potential for errors or misconfigurations may increase, resulting in additional risks.

    While that paper explores these risks from a conceptual, forward-looking perspective, this paper serves as a complement, focusing on the evolving technical and analytical aspects of AI impacts.

    Recognizing the potential exposure accumulation risk arising from AI, it is important for the (re)insurance industry to look ahead and forge an analytical pathway to measure the risk, while embracing the positive side of AI (see how Artificial Intelligence Promises to Revolutionize P&C Insurance Industry). Partnering with leading cyber risk modeling vendor CyberCube, our study discusses a framework for systemic risk quantification, then investigates 2 counterfactual examples as blueprints for an AI-empowered cyber attack.

    Implications for Cyber Catastrophe Events

    Implications for Cyber Catastrophe Events

    AI technologies are expected to significantly impact cyber catastrophe events. This analysis uses the kill chain model to align areas of AI research with components of CyberCube’s catastrophe models. Initial findings highlight AI’s potential role in various stages of the kill chain. This discussion focuses on areas with proven proof of concept to emphasize relevance.

    Frequency and Footprint Impacts

    The frequency and scope of cyber events are influenced at both pre-intrusion and post-intrusion stages of the kill chain. Research shows AI’s ability to enhance threat actors’ speed and capabilities, increasing the likelihood of large-scale attacks.

    Large language models (LLMs) enable higher-quality, scalable social engineering, including phishing and deep fakes. They also allow quicker vulnerability identification, potentially expanding the initial attack footprint.

    These capabilities could accelerate attack escalation, raising global cyber event frequency through smaller incidents reaching material thresholds. Existing material events may also impact more companies due to AI-driven expansion of attack footprints.

    Recorded Future reports demonstrate LLMs improving efficiency in reconnaissance, weaponization, and delivery stages of attacks. Additionally, adversaries manipulate LLMs through prompt injection to execute post-intrusion activities, exposing companies deploying customer-facing LLMs to insider threats (see about Generative AI — Emerging Risks & Insurance Market Trends).

    Frequency Impacts

    Generative AI (Gen AI) enhances various attack vectors, potentially increasing attack frequency. AI’s automation capabilities and adaptive learning during attacks present a significant edge over traditional tools. However, its net effect on attack frequency depends on defenders’ ability to deploy AI-based defense strategies.

    Defenders with access to advanced AI technologies and robust training data should have an advantage. However, organizations with fewer resources or less inclination to invest in these technologies may face higher risks.

    This disparity could result in increased variation in cyber event impacts across firms of similar size and industry.

    Data indicates cyber event frequencies follow a “wave” pattern, where novel attack methods drive initial increases, followed by declines as defensive measures counteract them. For instance, ransomware attacks were mitigated through improved port security and backup protocols. AI-driven advancements may accelerate this cycle, reducing the time between peaks as attackers and defenders adapt at faster rates.

    While AI’s rapid development mirrors trends like Moore’s Law, it may face physical and technological limitations over time. These constraints could result in periods of slower innovation, reducing both offensive and defensive advancements.

    Such periods may bring temporary stability to the cyber threat landscape as both sides reach temporary equilibrium.

    This evolving dynamic underscores the need for continuous investment in defensive strategies, particularly for smaller organizations. As AI technologies mature, the cyber risk landscape will likely reflect a balance of heightened threats and enhanced mitigation capabilities.

    Malware Threats and AI Advancements

    Malware Threats and AI Advancements

    Novel malware has been developed using large language models (LLMs) to exploit known vulnerabilities more efficiently. These advancements can impact various types of cyberattacks, including outages, data breaches, malware infections, and ransomware.

    A significant application of AI in cyberattacks involves polymorphic malware—malware that evolves throughout the attack lifecycle to evade pattern recognition or heuristic defenses.

    Research since 2019 has showcased polymorphic malware concepts capable of rewriting themselves to avoid heuristic-based anti-malware tools. LLMs enhance this process, enabling malicious operations at scale. Threat actors can automate malware mutations to:

    • Prolong their presence in systems, increasing potential damage.
    • Evolve frequently enough to bypass signature-based detection.
    • Streamline learning, command, and control (C2) operations, accelerating propagation within and across networks.

    These advancements improve current mutation algorithms, making them more dynamic. LLMs at the core of these processes adapt similarly to defensive systems, maintaining an advantage in the arms race of cyber offense and defense.

    Impact on Ransomware Campaigns

    Enhanced lateral movement and propagation capabilities are particularly relevant for ransomware. AI-driven automation in C2 operations and negotiation processes can expand the scale and speed of attacks, increasing profitability.

    For instance, HYAS Labs demonstrated a proof of concept named “BlackMamba,” which leverages LLMs to create polymorphic keylogger functionality dynamically.

    Such innovations in malware design are expected to boost effectiveness and propagation rates significantly.

    Defensive Applications of AI

    Cybersecurity vendors and threat intelligence services are also leveraging generative AI to counteract these threats. Enhanced AI tools help differentiate malware from normal operations and detect malicious activities faster and more accurately. Ongoing testing against real-world scenarios is essential to refine these defensive capabilities and measure their effectiveness.

    Data Breaches and AI-Driven Exfiltration

    Data exfiltration remains a key challenge for attackers seeking extortion opportunities or resale profits. Machine learning can enable faster, stealthier data theft by reducing file sizes and automating data analysis to identify valuable information amid irrelevant data.

    This increases breach efficiency, allowing attackers to locate and extract critical assets more rapidly. Consequently, these advancements could lead to higher ransom demands, more frequent legal liabilities, and greater financial impact.

    Implications for Cybersecurity

    The use of AI by threat actors underscores the need for equally advanced defensive measures. While LLMs and machine learning bolster attack strategies, they also empower cybersecurity teams to counter these evolving threats. Maintaining a balance between innovation and defense is vital to minimizing risks in an increasingly AI-driven cyber landscape.

    Examining AI Implications on Historical Events

    Having examined the theoretical ways in which AI can alter the frequency, footprint and impact of a cyber attack, we now investigate 2 counterfactual examples as blueprints for an AI-empowered cyber attack.

    Ryuk Ransomware

    From 2018 to 2019, Ryuk was a type of ransomware used in many campaigns12 targeting large, public entities with the goal of financial gain through encryption and ransom payments.

    During that period, Ryuk accounted for 3 of the top 10 largest ransom demands: $5.3 mn, $9.9 mn and $12.5 mn.

    Ryuk spread via very targeted means, which included using tailored spear-phishing emails and exploiting compromised credentials to remotely access systems via the Remote Desktop Protocol (RDP).

    The delivery method for Ryuk was through spam emails, often sent through spoofed addresses, to avoid raising suspicion. Emotet malware, a banking Trojan Horse, was typically used in combination with Ryuk. With RDP, a cybercriminal could
    install and execute Ryuk directly on the target machine or leverage their access to reach and infect other, more
    valuable systems on the network.

    The Emotet loader contained a lot of benign code as part of its evasion techniques and could manipulate security systems to avoid security detection.

    With machine learning capabilities, a polymorphic malware can be designed to recursively generate new code variants without human intervention as it calls out to a Gen AI model such as ChatGPT or some more purpose-built utility.

    The malware itself can periodically create an evolved version of its own malicious code that is more evasive and difficult to detect, utilizing techniques that security tools often are not equipped to handle.

    Equifax Data Breach and AI Implications

    In 2017, the Equifax data breach ranked as the second-largest in history, compromising 163 mn records globally, including nearly half of the U.S. population. It trailed only the 2016 Yahoo breach, the most significant data breach to date. However, the nature of the data stolen from Equifax—sensitive and complete—underscored its severity.

    Larger breaches (e.g., Microsoft Exchange, Facebook) have emerged, demonstrating the persistent threat of data exfiltration, particularly as attackers increasingly leverage AI and large language models (LLMs).

    A critical factor in the Equifax incident was the hackers’ ability to exploit unsecured credentials, accessing 48 databases. They executed 9,000 queries, but only 265 yielded personally identifiable information (PII). With AI, the attackers could have significantly increased efficiency by pinpointing PII more accurately, extracting more valuable data in less time.

    AI’s Role in Cyber Risks

    This report examines how AI can intensify traditional cyber risks, particularly through its use in attack campaigns. An emerging concern is AI itself becoming a target, functioning as a single point of failure (SPoF).

    AI’s complexity, unpredictability, and dependence on data amplify these risks, especially given its critical role in business operations.

    Addressing these issues requires further research and detailed analysis to quantify the potential impact of AI as a SPoF effectively.

    CyberCube has integrated generative AI into its Attritional Loss Model (ALM) updates. However, assessing AI-driven SPoF events as cyber catastrophes requires continuous evaluation. The evolving nature of AI integration demands updates to cyber catastrophe risk models.

    AI’s Dual Role in Cybersecurity

    While concerns about AI’s misuse dominate discussions, its defensive potential is equally significant. Advances in malware detection, data tagging, and loss prevention show promise.

    AI-driven models, such as those in endpoint detection and response (EDR/XDR) platforms, benefit from training on defenders’ environments and threat intelligence, creating robust security frameworks.

    Attackers, in contrast, rely on limited, outward-facing information.

    Incorporating AI’s defensive benefits into future cyber modeling frameworks is crucial to avoid overstating risks. The (re)insurance industry plays a key role in helping policyholders prepare for AI-related threats. Using CyberCube’s kill chain methodology, Guy Carpenter and CyberCube are working to refine risk assessment models. Ongoing research will further address AI’s implications for cybersecurity and insurance.

    FAQ

    What is the role of Generative AI in cybersecurity?

    Generative AI (Gen AI) enhances cybersecurity by enabling advanced threat detection, data analysis, and risk management. However, it also poses risks, such as facilitating sophisticated cyberattacks, including polymorphic malware and AI-targeted breaches.

    How does Gen AI impact systemic cyber risks?

    AI amplifies systemic risks through capabilities like scalable social engineering, faster vulnerability detection, and enhanced attack footprints. These risks necessitate robust mitigation strategies from insurers and cybersecurity professionals.

    What are the key dynamics of AI-induced cyber aggregation risks?

    Four critical dynamics include: AI as a supply-chain vulnerability, New attack surfaces created by interactive AI models, Data privacy threats from sensitive training datasets, Security risks due to AI misconfigurations in privileged operations.

    How does Gen AI affect the frequency of cyberattacks?

    Gen AI increases attack frequency by automating and refining various vectors, including phishing, reconnaissance, and malware propagation. While defenders can counteract with AI-driven tools, resource-limited organizations face heightened risks.

    What defensive applications of AI are proving effective?

    AI aids in malware detection, real-time threat analysis, and proactive response mechanisms. Tools like endpoint detection and response (EDR) platforms use AI to adapt to evolving threats, creating a stronger defense against cyberattacks.

    How does the insurance industry address AI-related risks?

    The (re)insurance sector is partnering with cyber risk modeling experts, like CyberCube, to quantify and mitigate AI risks. Analytical frameworks and kill chain methodologies help insurers adapt to the evolving cyber threat landscape.

    What future trends are expected in AI-driven cybersecurity?

    AI will continue to influence both offensive and defensive cybersecurity strategies. Periods of rapid innovation may alternate with slower advancements, reflecting equilibrium between attackers and defenders. Continuous investment in AI defenses is crucial for long-term security.

    …………………..

    AUTHORS: Jess Fung – Managing Director and North American Cyber Analytics Lead, Guy Carpenter, Joshua Knapp – Cyber Risk Modeling Team Lead, CyberCube, MJ Teo – Vice President and Senior Cyber Actuary, Guy Carpenter, Richard McCauley – Vice President and Senior Cyber Catastrophe Advisor, Guy Carpenter, Andrew Kao – Director of Product Marketing, CyberCube, Richard DeKorte – Cyber Security Consultant, CyberCube