The increasing use of artificial intelligence could trigger claims across many lines of business. Insurers will need to develop an understanding of intended and unintended effects, and design products that mitigate the risks, according to Swiss Re Institute’s SONAR report.
Insurers must embrace artificial intelligence to successfully navigate the emerging transformative trends that are shaping today's insurance landscape. Adopting available AI, and preparing for future iterations, is critical for insurers to address these trends proactively and with the greatest possible impact.
Developing a comprehensive claims AI strategy, one that reimagines an organization's plan for people, process, technology and risk, is critical to capturing a share of the estimated USD 100 billion in gross written premium, as well as the associated expense savings.
SONAR fosters better understanding of new or changing risks, their interactions and dependencies. The aim is to stimulate engagement among all relevant stakeholders, including re/insurers, to help best prepare society for tomorrow’s risks.
Artificial intelligence transforms the insurance industry
AI has transformed insurance company operations and will further revolutionize global business practices. Generative AI, which produces text, images, videos, and other outputs, offers numerous benefits but also presents risks. As AI incidents rise, C-suite awareness of these dangers increases.
AI will significantly impact the insurance industry. Enhanced underwriting, AI-supported customer services, claims processing automation, and predictive analytics for fraud detection are some potential benefits.
However, the primary function of insurance remains risk transfer, protecting clients from various risks associated with their increased AI usage.
Potential impacts of artificial intelligence on the insurance sector
AI-related events could trigger claims across many lines of business:
- AI system malfunctions causing operational shutdowns could lead to business interruption insurance claims.
- Professionals may face claims for incorrect advice or misinterpretations, and unexplainable AI-driven decisions that negatively impact end users.
- AI-enhanced product manufacturers could face property damage and bodily injury claims due to AI failures or malfunction, or violations of product liability regulations.
- Corporate leaders might be accused of failing to oversee or mitigate risks from AI processes, leading to financial losses or reputational damage, potentially triggering D&O and liability claims.
- AI-driven hiring practices introducing bias could result in discrimination lawsuits and claims of unfair employment practices against employers.
- Copyright and patent infringement claims may arise from the use of training data or AI models, impacting liability coverages.
- Increased AI use in healthcare diagnostics could alter insurance demand and create potential coverage gaps.
- Insurers may face claims for incorrect advice or misinterpretations from AI-driven underwriting tools, with biased models potentially leading to discrimination lawsuits and claims against insurance companies.
According to Overview of Catastrophic AI Risks, the immense potential of AIs has created competitive pressures among global players contending for power and influence. This “AI race” is driven by nations and corporations who feel they must rapidly build and deploy AIs to secure their positions and survive.
By failing to properly prioritize global risks, this dynamic makes it more likely that AI development will produce dangerous outcomes.
Importantly, these risks stem not only from the intrinsic nature of AI technology, but from the competitive pressures that encourage insidious choices in AI development.
AI risks: use traditional or new insurance policies?
Harm caused by AI can be “material or immaterial, including physical, psychological, societal or economic”, according to a proposal by the European Parliament and the Council of the EU. It is unlikely that a single insurance policy will cover all the potential risks that AI presents.
Generative AI and other foundation models are changing the AI game, taking assistive technology to a new level, reducing application development time, and bringing powerful capabilities to nontechnical users.
The latest class of generative AI systems has emerged from foundation models—large-scale, deep learning models trained on massive, broad, unstructured data sets (such as text and images) that cover many topics (see How Will Generative AI Change the Cyber Insurance Landscape?).
AI risks are generally neither explicitly mentioned, limited, nor excluded in policy language, and different exposures may already be covered by different existing policies.
Insurers need to analyse how AI risks are dealt with in existing policies, and how best to deal with them in the future.
Which risks are already covered, and which perhaps only silently so, that is, unintentionally, due to ambiguous policy language? The latter may require costing and risk assessment approaches different from those inherent in existing policies, or even completely new insurance products.
Lessons learned from “silent cyber”
The industry has already learned lessons from silent cyber: there have been instances where risks were covered by non-cyber insurance policies, even though that was not the intention.
With silent AI, it is time to prevent repetition of the same mistakes by understanding which risks traditional policies already (silently) cover.
With the fast development of AI and associated regulations, some of today’s assumptions may turn out to be wrong or incomplete. However, that should not prevent discussions around silent AI from starting now.
AI technology has revolutionized the way organizations do business; now, with proper guardrails in place, generative AI promises to not only unlock novel use cases for businesses but also speed up, scale, or otherwise improve existing ones.
Companies across sectors, from pharmaceuticals to banking to retail, are already standing up a range of use cases to capture value-creation potential, according to a McKinsey report.
Much of the work involved in managing the insurance claims process requires extensive human resources, along with manual, often repetitive tasks that are prone to duplication and error.
Impact of AI events on traditional insurance policies
A scenario-based approach can help build understanding of AI risk use-cases. The scenarios and their impacts can be assessed following a structured process and set of questions, to understand how current policy wording/coverages would apply to specific risk cases.
With these insights, insurers will be better placed to devise AI-risk transfer solutions that match customers’ future protection needs.
For what purpose is AI used and how do policy wordings respond?
- Enhanced decision-making support, Predictive analytics, Customer service, Intelligent health diagnostics, Asset management and Content creation
What kind of data is processed?
- Personal data, Payment data and Machine data
What regulation needs to be considered?
- EU AI Act, EU Product Liability Directive, other upcoming regulations globally
Which risks need to be considered?
- Hallucinations, Malicious use, Data breach, Copyright infringement and Software error
What are the consequences?
- Data is misinterpreted, Violation of regulation, Breach of contracts
What is the impact on the insured and/or on third parties?
- Manufacturing systems shut down due to data misinterpretation, Biased treatment of employees
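The questions above lend themselves to a simple checklist structure for working through AI-risk use cases consistently. The sketch below is illustrative only: the field names and the example scenario are assumptions for demonstration, not an official SONAR schema.

```python
from dataclasses import dataclass

@dataclass
class AIRiskScenario:
    """One AI-risk use case, assessed along the questions above.

    Field names are illustrative placeholders, not a SONAR schema.
    """
    ai_purpose: str            # e.g. predictive analytics, content creation
    data_processed: list[str]  # e.g. personal, payment, machine data
    regulations: list[str]     # e.g. EU AI Act, Product Liability Directive
    risks: list[str]           # e.g. hallucinations, data breach
    consequences: list[str]    # e.g. violation of regulation
    impacts: list[str]         # e.g. operational shutdown

    def summary(self) -> str:
        # Compact one-line view for scenario triage
        return (f"Purpose: {self.ai_purpose}; "
                f"risks: {', '.join(self.risks)}; "
                f"impacts: {', '.join(self.impacts)}")

# Hypothetical example scenario for illustration
scenario = AIRiskScenario(
    ai_purpose="Intelligent health diagnostics",
    data_processed=["Personal data"],
    regulations=["EU AI Act"],
    risks=["Hallucinations", "Data breach"],
    consequences=["Data is misinterpreted"],
    impacts=["Potential coverage gaps"],
)
print(scenario.summary())
```

Walking each use case through the same set of fields makes it easier to compare how current policy wordings would respond across scenarios.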
As the insurance market recognizes the potential of artificial intelligence, there remains uncertainty about how to effectively apply the technology to enhance customer engagement and drive sales.
Swiss Re examined how insurers can optimize the use of AI-powered tools to retain customers and improve interaction quality. The study highlighted the importance of leveraging multiple AI models to achieve a higher return on investment.
The use of behavioral approaches, rather than demographic-based ones, yields superior results. Responsible AI usage can help insurers attract and retain customers.
Most insurers use AI primarily to identify the customers most likely to let their policies lapse. Single-purpose propensity models are highly effective when it comes to identifying a specific subset of customers at risk of being lost.
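A single-purpose lapse-propensity model of the kind described above is, at its core, a binary classifier that scores each policyholder's probability of lapsing. The sketch below is a minimal illustration with a hand-picked logistic scoring function; the coefficients and feature names are assumptions, where a real model would be fitted to historical lapse data.

```python
import math

def lapse_propensity(tenure_years: float, missed_payments: int,
                     contacts_last_year: int) -> float:
    """Score a policyholder's probability of lapsing.

    Coefficients are illustrative placeholders, not fitted values.
    """
    # Linear score: longer tenure and more contact lower lapse risk,
    # missed payments raise it
    z = -0.5 - 0.2 * tenure_years + 0.8 * missed_payments - 0.1 * contacts_last_year
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: probability in (0, 1)

# Flag the customers most at risk, for targeted retention outreach
customers = {
    "A": lapse_propensity(tenure_years=8, missed_payments=0, contacts_last_year=3),
    "B": lapse_propensity(tenure_years=1, missed_payments=2, contacts_last_year=0),
}
at_risk = [cid for cid, p in customers.items() if p > 0.5]
print(at_risk)  # prints ['B']
```

Because such a model answers only one question, who is likely to lapse, combining it with other models (for example, for cross-sell propensity or contact-channel preference) is what the multi-model approach above aims at.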
………………
AUTHORS: Patrick Raaflaub – Chief Risk Officer at Swiss Re, Rainer Egloff – Senior Risk Manager at Swiss Re Institute