With artificial intelligence impacting practically all aspects of everyday life, the number of insurance gaps arising from the use of AI has grown sharply in recent years. Munich Re highlighted this in a recent whitepaper showing how AI exposures within traditional insurance policies can become a significant unexpected risk to insurers’ portfolios.
There are two key gaps that insureds need to be aware of when using AI. The first is pure economic losses, for example when a company uses AI in its internal operations.
Let’s say a bank uses AI to extract information from documents. If the AI produces too many errors, much of the extracted information will be incorrect.
Staff would then need to redo the work, causing significant extra expense.
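To make the scale of such a loss concrete, a minimal back-of-the-envelope sketch (all figures below are hypothetical, not taken from the whitepaper) simply multiplies document volume, observed error rate and per-document rework cost:

```python
# Rough illustration of a pure economic loss from an error-prone
# document-extraction model. All figures are hypothetical.

documents_processed = 500_000   # documents handled by the AI in a period
observed_error_rate = 0.04      # share of documents extracted incorrectly
rework_cost_per_doc = 12.50     # cost (e.g. EUR) for a person to redo one document

expected_rework_cost = documents_processed * observed_error_rate * rework_cost_per_doc
print(f"Expected rework cost: {expected_rework_cost:,.0f}")  # 250,000
```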
The second area of coverage gaps is AI discrimination. An example would be credit card applications and credit card limits.
The AI might be used to determine the appropriate credit limit for an applicant, and in doing so discrimination could occur, which would not be covered under other insurance policies.
How can AI exposures affect traditional insurance policies?
AI exposures within traditional insurance policies possess the ability to become a significant and unexpected risk to an insurer’s portfolio.
The insurance market is undergoing a remarkable transformation, thanks to the exponential growth of artificial intelligence in insurance.
With AI comes this kind of systemic accumulation risk potential, especially if one model is being used for similar use cases across different companies.
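A minimal simulation sketch (with hypothetical parameters) illustrates the accumulation effect: the expected loss is the same whether insureds use a shared model or independent ones, but a shared model concentrates failures into single extreme years for the whole portfolio.

```python
import random

# Minimal sketch of accumulation risk when many insureds share one AI model.
# All parameters are illustrative assumptions.

random.seed(0)
n_companies = 100       # insureds in the portfolio
p_model_defect = 0.02   # probability a given model causes a loss in a year
loss_per_company = 1.0  # loss per affected insured (arbitrary units)
n_years = 10_000        # simulated years

def simulate(shared_model: bool) -> list[float]:
    """Return simulated annual portfolio losses."""
    losses = []
    for _ in range(n_years):
        if shared_model:
            # One model for everyone: it fails for all insureds or for none.
            affected = n_companies if random.random() < p_model_defect else 0
        else:
            # Independent models: failures are spread across insureds.
            affected = sum(random.random() < p_model_defect for _ in range(n_companies))
        losses.append(affected * loss_per_company)
    return losses

for label, shared in [("shared model", True), ("independent models", False)]:
    losses = simulate(shared)
    mean = sum(losses) / n_years
    print(f"{label}: mean annual loss {mean:.1f}, worst simulated year {max(losses):.0f}")
```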
Another area to consider is in the domain of copyright infringement risks with generative AI models.
Users might make use of a generative AI (GenAI) model to produce texts or images, but the model could potentially produce texts or images that are very similar to copyrighted texts or copyrighted images.
If the user decides to use this content, they may face copyright infringement claims and lawsuits.
- Artificial intelligence (AI) can help insurers assess risk, detect fraud and reduce human error in the application process. The result is insurers who are better equipped to sell customers the plans most suited for them.
- Customers benefit from the streamlined service and claims processing that AI affords.
- Some insurers think that, as machine learning progresses, the need for human underwriters could become a thing of the past – but that day might be years away.
This rapid change means big things for insurers and applicants alike. Here’s how AI is on the frontier of the insurance industry and where it might be heading in years to come.
According to Beinsure, the global generative AI in insurance market will be worth $5.5 bn by 2032, up from its current size of $346.3 mn, growing at a CAGR of 32.9% over the next decade.
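As a rough sanity check of that arithmetic (assuming a 2022 base year, which is not stated in the quote), compounding the current market size at the quoted CAGR gives a figure of the same order as the 2032 projection:

```python
# Quick check of the quoted figures; the base year is an assumption.
base_market_mn = 346.3    # current market size, USD millions
cagr = 0.329              # quoted compound annual growth rate
years = 2032 - 2022       # assumed growth horizon

projected_mn = base_market_mn * (1 + cagr) ** years
print(f"Projected 2032 market size: ${projected_mn / 1000:.1f} bn")
# ≈ $5.9 bn, the same order of magnitude as the quoted $5.5 bn
```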
Insurers are harnessing the power of artificial intelligence to optimise their operations, improve risk assessment models, and deliver personalised customer experiences.
The revolutionary capabilities of generative AI, which generates new and valuable information, are poised to reshape this industry sector.
Generative AI risks for Insurers: copyright and intellectual property
Many companies may choose to build their own AI models, not from scratch, but by building on big GenAI models and taking them further.
They might use these models as foundational models. But if the foundational model carries a certain risk of producing copyright-infringing assets, that risk carries through to the derived application, even though the model is only being used as a basis for training.
This kind of foundational model use raises the potential for systemic accumulation of risk in the copyright infringement area.
With AI technology making a major impact on many aspects of life, existing insurance policies often provide only partial coverage, which ultimately makes it difficult for both insurer and insured to have full confidence in the extent of the coverage.
Coverage gaps already exist for pure economic losses and AI discrimination, and from a protection perspective there is a need to design suitable insurance coverage for those gaps.
But then there are also concerns surrounding silent AI exposure: a policy might provide partial coverage, or it might be entirely silent on AI-related losses.
AI can also determine an individualized price based on consumer behavior and historical data (see Using AI, Analytics & Cloud to Reimagine the Insurance Value Chain).
As insurers modernize their legacy core systems, freeing siloed data, they’re able to automate their underwriting workflows to provide a faster digital buying experience, while connecting to additional data sources that help them apply the appropriate level of risk management.
The expansion of the generative AI market in the insurance industry can be largely attributed to its significant impact on operational efficiency. Insurers are increasingly adopting AI algorithms to streamline critical processes such as claims processing, underwriting, and policy administration.
The insurance industry has been slow to adopt new technologies, but that’s starting to change. In particular, insurers are beginning to explore the potential of artificial intelligence.
AI can be used in a number of ways, from pricing insurance products and winning clients for the insurance business to streamlining claims processing. By harnessing the power of AI, insurers can gain a competitive edge and improve the customer experience.
Insurance coverage for liabilities with usage of AI
As an industry, it might make sense to structure one bundled insurance product which provides clarity that there is insurance coverage for certain liabilities which emerge out of the usage of AI. This would address the problem in a really proactive way.
There are, however, limitations to the guarantees that Munich Re offers when insuring and addressing the risks inherent in AI.
There are technical limitations because there are different forms of AI risks. Because of this, for certain AI risks we can only offer coverage if certain technical preconditions are met.
Munich Re can cover the risk of copyright infringement if certain statistical techniques are used that modify the generative AI model such that the insurer can estimate, with a high degree of confidence, the probability that it will produce a similar output.
It’s not possible to avoid the fact that a generative AI model will produce outputs which might be copyright infringing. However, there are certain tools that at least mitigate the probability that something like this could happen.
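One hypothetical example of such a tool (not necessarily the technique Munich Re refers to) is an output-side filter that screens generated text against a reference corpus of protected works before release; a minimal n-gram overlap sketch:

```python
# Minimal sketch of an output-side mitigation: block generated text that
# overlaps too heavily with a reference corpus of protected works.
# The corpus, n-gram size and threshold are all illustrative assumptions.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, protected: str, n: int = 8) -> float:
    """Share of the generated text's n-grams that also appear in the protected text."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(protected, n)) / len(gen)

def release_allowed(generated: str, corpus: list[str], threshold: float = 0.2) -> bool:
    """Reject outputs whose overlap with any protected work exceeds the threshold."""
    return all(overlap_ratio(generated, doc) <= threshold for doc in corpus)
```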
It’s the same on the error side. Even if a company has the most well-built AI model, it will never be error-free.
Any AI model will produce errors with a certain probability, and the question comes down to the testing process.
Are the testing procedures statistically robust enough to allow us to estimate this probability? If they are not, the risks will not be insurable.
We require certain technical preconditions in order to really estimate the risk with confidence and insure it. If those are not given, then the insurer will not be able to provide insurance for these kinds of risks.
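A minimal sketch of what statistically robust testing could mean in practice (test sizes and error counts below are hypothetical): a binomial confidence interval around the observed error rate narrows as the test set grows, and only a sufficiently narrow interval lets an insurer estimate the error probability with confidence.

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an error probability from n test cases."""
    p_hat = errors / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical test results: the same observed 2% error rate, different test sizes.
for n, errors in [(500, 10), (50_000, 1_000)]:
    low, high = wilson_interval(errors, n)
    print(f"n={n}: observed {errors / n:.1%}, 95% CI [{low:.2%}, {high:.2%}]")
```

With the small test set the interval is wide relative to the observed rate, while the larger test set pins the error probability down much more tightly.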
FAQ
Using infringing assets as the foundation for AI models can carry inherent copyright risks. Even if these models are only a basis for developing new applications, the initial copyright infringement risk may still be present, potentially impacting the final AI product.
Foundational models bring systemic risks, as their widespread use can lead to accumulated copyright infringements across applications. This systemic accumulation makes it essential for insurers to understand and address the associated risks in AI coverage.
With AI evolving rapidly, traditional policies often provide only partial coverage, leading to gaps. These include coverage for economic losses and AI discrimination risks. Insurers need tailored policies to cover these emerging and complex risks fully.
Silent AI exposure refers to risks that may be covered in part or not explicitly addressed in existing policies. This ambiguity leaves both insurers and insureds unsure about coverage extent, highlighting the need for more clarity in AI-related policies.
AI-driven tools streamline underwriting and claims processes, enabling insurers to modernize workflows, access additional data sources, and assess risk more accurately. This results in faster, more efficient services for customers and better risk management.
AI can personalize pricing based on consumer behavior and historical data. While this improves accuracy, it also raises questions about fairness and transparency, which insurers must consider to maintain customer trust.
Insuring AI involves technical limitations, such as the need for robust testing procedures to ensure reliability. Some AI risks are insurable only if specific technical preconditions are met. Insurers may offer limited guarantees, especially for complex AI risks, to manage exposure effectively.
……………………
AUTHOR: Michael Berger – Head of Insure AI at Munich Re. Source: Reinsurance News