
Generative AI’s Impact on Cyber Threat & Insurance Landscape

    Amid rising tensions and shifting priorities between global powers in cyberspace, the main threat to businesses is the growing professionalism of cybercriminals who carry out sophisticated attacks with impunity from hostile nations, according to Howden’s Cyber Report.

    Whilst ransomware and systemic risk continue to dominate the cyber threat landscape, another major and relatively new development has been the explosion of Gen AI.

    Gen AI offers opportunities for both cyber attackers and defenders, poised to significantly impact the threat landscape by enabling more advanced attacks from skilled actors and lowering entry barriers for novice hackers.

    Fortunately, new AI-driven defenses are rapidly developing, and effective use of traditional risk controls can enhance resilience.

    Despite the highly fluid threat environment, the foundations for a mature cyber (re)insurance market are now in place with the prospect of steady exposure-led growth, ongoing profitability and innovation.

    Competition is returning to the market as improved cyber hygiene has mitigated losses and delivered strong underwriting performance.

    Consensus within the cyber security community and insurers


    Despite broad consensus within the cyber security community and insurance market about the transformative potential of this new technology, for both offensive and defensive capabilities, there is far less clarity on which use cases will prove the most important and when they are likely to gain traction.

    Two emerging conclusions on how Gen AI will reshape the threat landscape over the next few years are becoming increasingly clear, according to Cyber Insurance & Risk Management report.

    Gen AI will push up the potential aggregation, severity and frequency of claims in predictable areas.

    • First, given the geopolitical and financial incentives, sophisticated, state-backed threat actors will use Gen AI to sharpen their tactics, techniques and procedures (TTPs) with increasing effectiveness and scale. In February 2024, Microsoft and OpenAI disclosed that nation-state threat actors have been using ChatGPT to make established hacking activities easier, with one Russian group, for example, conducting reconnaissance on satellites.
    • Second, and more importantly for the insurance market, Gen AI will push up the potential aggregation, severity and frequency of claims in predictable areas by enhancing the capabilities of commercial hackers.

    Assessing the degree to which AI will improve threat actors’ capabilities between 2024 and 2026 draws out the implications for claims.

    All types of attackers, ranging from highly capable state actors to organised crime groups and less skilled hackers, will see AI enhance their capabilities. Beyond this, the impact will be highly nuanced.

    Cyber global gross written premium

    Source: Howden

    Cyber insurance has been one of the fastest-growing areas of insurance, if not the fastest, for the best part of a decade.

    Annualised growth of 30% during this time compares to the single-digit percentage range of the broader P&C commercial insurance sector.
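To make that comparison concrete, the sketch below compounds the two annualised rates over a decade. The 5% figure is an assumed stand-in for the "single-digit percentage" P&C growth mentioned in the text, used purely for illustration.

```python
# Illustrative only: compound the annualised growth rates cited in the
# text over ten years. The 5% P&C figure is an assumed stand-in for
# "single-digit percentage" growth.
def compound(base: float, annual_rate: float, years: int) -> float:
    """Grow `base` at `annual_rate` (0.30 = 30% p.a.) for `years` years."""
    return base * (1 + annual_rate) ** years

cyber_multiple = compound(1.0, 0.30, 10)  # ~13.8x over a decade
pc_multiple = compound(1.0, 0.05, 10)     # ~1.6x over a decade
print(f"{cyber_multiple:.1f}x vs {pc_multiple:.1f}x")
```

The gap compounds dramatically: 30% a year multiplies the premium base almost fourteen-fold over ten years, against roughly 1.6x at 5%.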

    Premium volumes are driven by a blend of exposure and pricing. Whilst both helped to drive growth up to 2020, the pricing environment precipitated a notable shift in 2021/22, when high double-digit annualised price increases more than offset underwriting actions and the ensuing reduction in overall exposures.

    2023 marked the slowest rate of growth since the market’s inception (+5%). Underwriting targets were not met last year, with several major players missing income goals.

    Absent any shocks, pricing from here is unlikely to drive market expansion to the extent it did during the 2020-2022 correction, requiring ambitious plans for exposure growth.

    Loss ratio and premium for U.S. standalone cyber insurance


    Marginally reduced premium flow into U.S. domestic admitted and surplus lines last year (down 3% year-on-year, the first decline on record) had a limited bearing on results, as the quantum of losses and defence costs remained relatively stable.

    Shifting growth dynamics

    Improved cyber hygiene and a more stable underwriting environment put the cyber market in a strong position to restore its growth trajectory after the premium base stabilised in 2023.

    The latest projection is down on the USD 50 billion estimate made last year, due in large part to flat growth in the United States.

    Global cyber gross written premium projections up to 2030

    Source: Howden, MarketUs, Munich Re, S&P, Swiss Re

    Even accounting for the slowdown in 2023, the market could still be on course to achieve a premium base of close to USD 40 billion by the end of the decade.

    The realisation of this potential will inevitably be tied to external factors such as macroeconomics and geopolitics, but by focusing on key issues within its control – including SME penetration, geographic expansion and continued model development – the market can secure long-term relevance.

      Enabling Cybercriminals

      The cyber threat from AI systems remains largely unknown. Amid speculation and sensationalism about how Gen AI will affect the cyber risk landscape, it’s crucial to understand the actual threat. At this early stage of development, we identify three primary use cases businesses need to prepare for:

      Enabling Social Engineering

      LLMs allow criminals to quickly and easily create plausible content. They correct the language, tone, spelling, and grammar of phishing emails, making them more targeted and credible.

      Deepfakes and Voice Cloning

      Criminals and hacktivist groups clone genuine voices and images for online fraud, social engineering, and spreading disinformation on social media, according to the Global Cyber Risk Insurance Report. High-profile cases of deepfake videos deceiving targets and extracting large payments demonstrate a significant increase in sophistication among threat actors.

      AI as an Enabler

      Entry-level cybercriminals use services like OpenAI to quickly learn attack execution, while experienced individuals focus on maximizing efficiency. This includes debugging code, translating documents, generating scripts, retrieving and collating publicly available information about targets, and researching ways to compromise systems.

      Less skilled hackers


      This group will see the biggest uplift to their capabilities. Most importantly, many novice threat actors will gain access to tools, code and intelligence that will enable them to start hacking.

      This will be driven by sophisticated hackers with deep expertise in AI monetising their skills by selling capabilities online, a relatively low-risk business model.

      AI will accelerate the recent trend towards the democratisation of hacking, already visible in the rise of outsourced models such as ransomware-as-a-service (RaaS).

      The main implication of this AI-driven democratisation of hacking will be a rise in the frequency of low-level claims. Novice threat actors will find it easier to carry out phishing, which was the vector used in 84% of UK business attacks in 2023.

      They will also have access to chatbots to help draft high-quality phishing content, akin to ChatGPT without guardrails, AI-generated reconnaissance on which businesses to target, and even AI-generated ransomware code.

      Howden’s Global Cyber Insurance Pricing Index

      Source: Howden

      After a period of upheaval marked by a deteriorating loss environment, constrained insurance capacity, rising global demand, and a significant pricing correction, market conditions have improved over the last 12 months (see Cyber Risk Insurance Market Global Trends 2024).

      Pricing is now falling, and competitive forces are leading to more tailored underwriting that reflects companies’ risk profiles.

      Howden’s Global Cyber Insurance Pricing Index shows a rapid transition from triple-digit rate increases in 2021/2022 to double-digit reductions in 2023/2024.

      The index is now down 15% from its peak in mid-2022. Besides price decreases, which vary by sector, region, and risk profile, capacity is up, and insurers are willing to increase limits, remove ransomware-related cover restrictions, and lower retention levels.
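The mechanics of such an index are simple to sketch: chain period-on-period rate changes onto a rebased starting value, then read the latest level against the peak. The rate figures below are hypothetical placeholders, not Howden's actual index inputs.

```python
# A minimal sketch of how a cumulative pricing index is derived from
# period-on-period rate changes. The rate figures are hypothetical
# placeholders, not Howden's actual index inputs.
rate_changes = [1.00, 0.20, -0.05, -0.10]  # +100%, +20%, then -5%, -10%

index = [100.0]  # rebased to 100
for change in rate_changes:
    index.append(index[-1] * (1 + change))

peak = max(index)
fall_from_peak = index[-1] / peak - 1  # latest reading relative to the peak
print([round(v, 1) for v in index], round(fall_from_peak * 100, 1))
```

Note how two years of moderate double-digit reductions only partially unwind the earlier correction: in this invented series the index falls about 14.5% from its peak yet remains more than double its starting level.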

      The post-hard market performance of cyber insurance, along with underwriting targets for 2024/2025, indicates carriers’ satisfaction that the cost of cover matches attritional loss costs.

      This suggests that favorable conditions will persist into the second half of 2024, although specific tightening in the healthcare sector could lead to broader adjustments if the elevated threat environment results in higher claims activity.

      Organised crime groups

      Organised and technologically advanced cybercrime groups will see their capabilities enhanced in ways that point to a significant increase in the severity of a small number of claims. These groups will increasingly focus on the most lucrative hacking, i.e. targeting companies most likely to pay big ransoms with sophisticated attacks.

      One such vector is social engineering via deepfakes, where AI generates convincing fake voice and even video calls to dupe employees into transferring funds or sharing login details.

      Furthermore, Gen AI will improve exfiltration capabilities by enhancing the speed and accuracy with which high value data can be identified and stolen.

      Over time, large language models (LLMs) will be trained on stolen datasets to learn what to look for. This will in turn lead to more extortion, with companies forced to pay to maintain the confidentiality of exfiltrated data.

      Highly capable state actors

      The most sophisticated hackers are backed by nation states, and they remain closely focused on conflict and geopolitical goals rather than on making money.

      Gen AI could nevertheless be used by these actors to enhance malware capabilities should priorities shift, which would in turn present increased risk of spillover and loss aggregation.

      Cyber threat at both ends of the sophistication spectrum


      As alluded to above, we are already seeing the sophisticated use of audio and video deepfakes in highly targeted authorised push payment fraud. These are currently low likelihood, high impact attacks. As capabilities continue to increase, we expect the barriers to entry for this type of sophisticated and targeted fraud to be lowered.

      They will nevertheless still require a degree of sophistication (and effort) that should prevent their deployment on a mass scale, but it is important for businesses to understand that the realism of fakes is ever increasing, as is the likelihood of successful attacks.

      At the other end of the sophistication scale, cybercriminals are using LLMs to improve the quality of written communications in phishing attacks and other financially motivated scams.

      This is likely to increase the chances of success of high-volume (rather than targeted) attacks. AI also lowers the barriers to cyber criminality, hacking-for-hire and hacktivism. This easier access will likely result in an increased volume of financially motivated cyber activity such as ransomware, and in broader and more impactful hacktivist activity.

      From what has been recorded so far, we expect to see an increased impact and volume of attacks, mostly because of an uplift in capability across research, reconnaissance and social engineering.

      Attacking AI

      NCC Group research has previously highlighted the potential for threat actors to attack AI systems in order to deny service or incur heavy costs for victims. These sorts of attacks are unlikely to be motivated by financial gain; instead, attackers seek notoriety or act on ideological motivations.

      Trained models represent a significant investment in intellectual property (IP) by AI developers.

      Unscrupulous groups could launch attacks to extract training data and model weights, thereby securing access to IP to gain a competitive advantage. NCC Group recently published an advisory for a Domain Name System rebinding vulnerability in the Ollama LLM framework, showing that traditional application security is still just as relevant, even when that application is AI.
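DNS rebinding works by getting a victim's browser to resolve an attacker-controlled domain to a local address such as 127.0.0.1, so that cross-origin requests reach a local API such as an LLM server. One standard application-layer mitigation is sketched below: reject requests whose Host header is not an expected local origin. This is illustrative only, not the actual Ollama fix; the function name and allow-list are assumptions.

```python
# Sketch of a standard DNS-rebinding mitigation: a local HTTP service
# rejecting requests whose Host header is not an expected local origin.
# Illustrative only; not the actual Ollama fix.
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "[::1]"}

def host_is_allowed(host_header: str) -> bool:
    """Strip any port and check the host against the allow-list.

    A rebinding attack arrives with Host: attacker.example, because the
    victim's browser resolved the attacker's domain to 127.0.0.1.
    """
    if host_header.startswith("["):           # bracketed IPv6 literal
        host = host_header.split("]")[0] + "]"
    else:
        host = host_header.rsplit(":", 1)[0]  # drop a trailing :port
    return host.lower() in ALLOWED_HOSTS

print(host_is_allowed("localhost:11434"))   # True
print(host_is_allowed("attacker.example"))  # False
```

Because the check runs in the application itself, it holds even when DNS has been manipulated, which is why Host-header validation is a common defence for services that bind to loopback.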

      AI defences developing at pace


      Machine learning models have been built into cyber defence tools for many years now, providing automation capabilities that allow organisations to amplify their cyber defence efforts.
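As a toy illustration of the statistical automation such tooling builds on (not any specific vendor's method), the sketch below flags hours whose failed-login count sits several standard deviations above the mean. The data and threshold are invented.

```python
# Toy illustration of the kind of statistical anomaly detection that
# cyber defence tooling has long automated: flag hours whose failed-login
# count sits far above the mean. Data and threshold are invented.
from statistics import mean, stdev

def anomalous_hours(failures_per_hour: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return indices of hours whose count exceeds mean + z_threshold * stdev."""
    mu, sigma = mean(failures_per_hour), stdev(failures_per_hour)
    if sigma == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, n in enumerate(failures_per_hour)
            if (n - mu) / sigma > z_threshold]

counts = [4, 6, 5, 3, 5, 4, 97, 5]  # a burst of failed logins in hour 6
print(anomalous_hours(counts, z_threshold=2.0))  # [6]
```

Production tools use far richer features and models, but the shape is the same: learn a baseline, then surface the deviations for an analyst rather than have them sift raw logs.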

      Natural language interfaces have also been added for search, and now the availability of LLMs means that tool developers are able to integrate conversational interfaces providing access to data and functionality.

      Using AI to detect and defend against AI attacks is in its early stages but rapidly developing.

      For instance, the UK Home Office Accelerated Capability Environment recently launched a deepfake detection competition. Large tech companies and AI developers are also exploring watermarking and provenance techniques to identify AI-generated media or verify its authenticity.

      Despite the rapid pace of change, companies should focus on employee awareness and practical tips. These include avoiding actions taken under pressure, being cautious of scenarios that seem too good to be true, verifying information with an independent colleague or via another medium, and asking video callers to prove they are genuine, for example by performing actions that deepfake models often struggle to render convincingly, such as putting on sunglasses or a hat.

      ……………

      AUTHORS: Matt Hull – Director Global Threat Intelligence at NCC Group, Jon Renshaw – Deputy Director of Commercial Research at NCC Group