As generative AI changes the way companies do business, it is creating new risks and new causes of loss that impact not only the companies themselves but also their business partners such as third-party vendors and digital supply chains.

According to an Aon report, recent events and court cases highlight the developing forms of risk associated with generative AI, including copyright, trademark and patent infringement, discrimination, and defamation. We expect the impact of generative AI on the insurance industry to be gradual at first, then sudden.

The generative AI market could grow to a value of $1.3 trillion over the next 10 years, up from $50 billion in 2023.

The excitement about the potential impact of Generative AI in insurance should be balanced with practicality.

Generative AI Changes the Insurance Business Landscape

There are several considerations insurers (and investors) should take into account when exploring LLMs in insurance:

  • Safety and data security – especially with personally identifiable information (PII) and claims data (a minimal redaction sketch follows this list),
  • Misplaced confidence – LLMs are very capable of providing authoritative-sounding but inaccurate answers. These hallucinations can be especially dangerous when answering policy- and coverage-related questions that have a definitive answer (and expose the carrier or broker to liability if answered incorrectly),
  • The pace of change – do organizations have the ability to continuously maintain and upgrade to the latest technology,
  • Compute cost – the cost and availability of GPUs will remain a gating factor for adoption across industries,
  • Data interoperability – limited interoperability across legacy systems and between different stakeholders in the value chain will keep the most ambitious use cases from coming to fruition until more modern underlying infrastructure is in place.
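
As an illustration of the first point, below is a minimal sketch of redacting obvious PII from a claims note before it is sent to an external LLM. The patterns, the policy-number format and the sample note are illustrative assumptions, not a production-grade anonymizer (names, for example, would require entity recognition rather than regular expressions).

    import re

    # Illustrative patterns only; a real deployment would use a vetted
    # PII-detection library and insurer-specific rules.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "POLICY_NO": re.compile(r"\bPOL-\d{6,}\b"),  # hypothetical policy-number format
    }

    def redact(text: str) -> str:
        """Replace detected PII with typed placeholders before any LLM call."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    note = "Insured John Doe (POL-123456, SSN 123-45-6789, jdoe@example.com) reports water damage."
    print(redact(note))
    # Insured John Doe ([POLICY_NO], SSN [SSN], [EMAIL]) reports water damage.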

Generative AI will transform the insurance industry

Recent technological breakthroughs in Generative AI have the potential to transform the insurance industry. At its core, insurance is the ultimate data-driven business. Every day, insurers and brokers receive and generate massive quantities of data via email, call centers, PDFs and spreadsheets, according to MTech Capital research.

This data, often granular and proprietary to the insurer, has never been comprehensively analyzed. But the emergence of large language models (LLMs) presents an opportunity for the industry to finally change that.

Widespread commercial access to foundational LLMs like OpenAI’s GPT-4 is still very new and innovation is accelerating, with breakthroughs appearing on an almost weekly basis.

Up to now, training machine learning algorithms required enormous amounts of data to perform a specific task. With the recent breakthroughs in foundational LLMs, the additional training, or ‘fine-tuning’, of LLMs to perform the same tasks can be accomplished with much less data, speeding up deployment.
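
As a rough illustration of what such fine-tuning can look like in practice, the sketch below further trains a small open model on a handful of in-domain examples using the Hugging Face transformers and datasets libraries. The model choice, the tiny example dataset and the hyperparameters are assumptions made for illustration only.

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # Assumed small open model; any causal LM with a compatible tokenizer would do.
    model_name = "distilgpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Tiny, made-up in-domain dataset; real fine-tuning would use far more examples,
    # though still far less data than training a model from scratch.
    examples = [
        "Q: Does a standard homeowners policy cover flood damage? "
        "A: Typically no; flood cover is usually a separate policy.",
        "Q: What is a deductible? "
        "A: The amount the policyholder pays before the insurer pays a claim.",
    ]
    dataset = Dataset.from_dict({"text": examples}).map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True, remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ft-insurance-demo",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()  # adapts the pre-trained model rather than learning from zero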

The insurance market is undergoing a remarkable transformation, thanks to the exponential growth of generative artificial intelligence (see How AI Technology Can Help Insurers).

AI is likely to become the next big issue increasing earnings volatility for companies across the globe, and is expected to become a top 20 risk within the next three years, according to Aon's Global Risk Management Survey.

Adopting the artificial intelligence available today, and preparing for future iterations, is critical for insurers to address the emerging transformative trends shaping the industry proactively and with the greatest possible impact.

The generative AI in insurance market will be worth $5.5 billion by 2032, up from its current size of $346.3 million, growing at a CAGR of 32.9% through the next decade.
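
For reference, such projections follow standard compound annual growth rate (CAGR) arithmetic. The sketch below computes the CAGR implied by the figures quoted above; the exact forecast horizon is an assumption, and published forecasts rarely match the formula to the decimal.

    def cagr(start: float, end: float, years: int) -> float:
        """Compound annual growth rate implied by a start value, an end value and a horizon."""
        return (end / start) ** (1 / years) - 1

    start_mn, end_mn = 346.3, 5_500.0   # figures quoted above, in $ millions
    for years in (9, 10):               # 2023 to 2032, or a full decade (assumed horizons)
        print(f"{years} years: implied CAGR = {cagr(start_mn, end_mn, years):.1%}")
    # Roughly 32-36% depending on the assumed horizon, in line with the ~32.9% quoted.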

Insurers must embrace artificial intelligence to successfully navigate today’s emerging transformative trends shaping the insurance landscape. An aging population, reliance on AI, and new technological, environmental, financial and social risks are top-of-mind issues for many claims leaders.

Generative AI is creating new risks & loss

Bearing in mind that there is an important difference between the risks – and risk management approaches – associated with model creation and those associated with model usage, some examples in this emerging risk field include:

  • Data Privacy and Confidential Information
  • Unreliable Model Training
  • Unintended AI Actions
  • IP/Confidential Information/Trade Secrets

Data Privacy and Confidential Information

The training of large language models (LLMs) like ChatGPT and Bard requires the digestion of vast amounts of data, which may – depending on how the model is trained – include sensitive data such as personal data or proprietary client data (see How AI Technology Helps Insurers Enhance the Customer Experience?).

Generally, LLMs do not have the ability to “unlearn”, meaning that, if sensitive information is input into these models, it is very difficult to remove or correct this information.

For example, an anonymous group of plaintiffs filed a suit in the United States against a leading AI company and a leading technology firm alleging that the AI company misappropriated private and personal information belonging to millions of people by using publicly-available data from the Internet to develop and train its generative AI tools.

The plaintiffs allege that this use case constituted theft, misappropriation, and a violation of their privacy and property rights. The complaint also includes claims for violations of the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act (CFAA), various state consumer protection statutes, and a number of common law claims.

Unreliable Generative AI Model Training

Generative AI is only as good as the information on which it was trained. Data used in AI training may come from questionable sources or be of questionable quality, which can lead to inaccurate or otherwise unreliable outputs (see how AI can optimize insurance claims management).

Plausible-sounding but entirely fictitious outputs can be generated. For example, it was reported that a New York lawyer asked an AI tool to write a brief for a dispute his firm was handling.

The AI model invented authoritative sounding – but actually non-existent – case law to support the brief. Ultimately, the court sanctioned the lawyer for the error-riddled brief.

Unintended AI Actions

From hiring decisions to healthcare and loan-application vetting, AI may reach incorrect conclusions or decisions and, where human oversight is ineffective, create risk for organizations either directly or through their subcontractors. For example, the US Equal Employment Opportunity Commission recently settled a case in which the firm it pursued elected to pay $365,000 to more than 200 job applicants who alleged age discrimination after being rejected by AI hiring software.

IP, Confidential Information and Trade Secrets

As generative AI models bring forward new concepts, ideas and designs, they may have borrowed heavily from other sources without permission, and may infringe patents or bury others’ protected work product in what they have learned.

For example, a leading image-generating media company filed a complaint against an AI firm alleging that the firm illegally used images from the media company’s library to train its own model, which would compete with the plaintiff’s.

The media company claimed copyright infringement, trademark infringement, trademark dilution, and unfair competition, amongst other assertions. The case, which remains pending as of this writing, seeks damages and an order to destroy models related to the allegations. The media company has since launched a competing image generating model.

The insurance market's understanding of generative AI-related risk

The insurance market’s understanding of generative AI-related risk is in a nascent stage. This developing form of AI will impact many lines of insurance, including Technology Errors and Omissions/Cyber, Professional Liability, Media Liability, and Employment Practices Liability, among others, depending on the AI’s use case.

Insurance policies can potentially address artificial intelligence risk through affirmative coverage, specific exclusions, or by remaining silent, which creates ambiguity.

Insurers are defining their strategies around this rapidly changing risk landscape, including:

  • Clarifying coverage intent/addressing “silent AI coverage” through revised policy language related to AI risk.
  • Building out their underwriting requirements, which are already very robust. While underwriters are just beginning to ask questions, the process has the potential to become burdensome and prolonged given the many potential applications that could be created and deployed.
  • Developing creative AI products and solutions (e.g., a leading insurer has developed a product that provides a performance guarantee based on an AI risk assessment).
  • Expanding their technology-based talent competencies – either organically or through partnerships and/or acquisitions – to support underwriting and pricing through technical assessments and monitoring.

Managing AI Risk

While the productivity gains of generative AI are easily recognizable, organizations should take great care and conduct regular risk assessments as they embrace this new world.

Aon suggests that organizations work with their internal teams as well as technology experts, attorneys and consultants to set policies and establish a governance framework that aligns with regulatory requirements and industry standards.

With respect to an organization’s use of AI, some components of that framework may include:

  • Routine audits of your AI models to ensure that algorithms or data sets do not propagate unwanted bias (a minimal audit sketch follows this list).
  • Ensuring an appropriate understanding of copyright ownership of AI-generated materials.
  • Developing and implementing this same framework into a mergers and acquisitions checklist.
  • Mitigating risk through the implementation of B2B contractual limitation of liability, as well as vendor risk management.
  • Insertion of human control points to validate that the governance model used in the AI’s development aligns with legal and regulatory frameworks.
  • Conducting a legal, claims and insurance review, and considering alternative risk transfer mechanisms in the event the insurance market begins to avoid these risks.
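
To make the first bullet concrete, below is a minimal sketch of an adverse-impact check on model decisions using the common four-fifths rule of thumb. The sample decisions, group labels and threshold are illustrative assumptions; a real audit programme would also cover data lineage, feature review and ongoing monitoring.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def adverse_impact_ratios(rates):
        """Each group's rate relative to the most-favored group; below 0.8 flags review."""
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Hypothetical audit sample: (applicant group, model approved?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(group, f"rate={rates[group]:.2f}", f"impact_ratio={ratio:.2f}", flag)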

Bloomberg Research forecasts the generative AI market will grow to $1.3 trillion over the next 10 years. As firms race to share in that growth, they would do well to stay focused on the potential risks and issues that will arise along the journey.

As the insurance industry continues to navigate the pace of change, complexity and uncertainty in our world, consumers are responding by expecting companies to be more attuned to their needs. This year’s underwriting predictions offer guidance on how carriers can respond faster.

With data-driven AI models, insurance companies can make more personalized recommendations to consumers and build appropriate products for client segments, optimizing both earnings and customer satisfaction.

AI can also determine an individualized price based on consumer behavior and historical data (see how Using AI, Analytics & Cloud to Reimagine the Insurance Value Chain).
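
As a rough illustration of the pricing idea, the sketch below fits a simple claim-frequency model to a few hypothetical policy records and converts the prediction into an indicated premium. The features, the severity assumption and the loading are invented for illustration; real rating plans involve far more data, actuarial judgment and regulatory constraints.

    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    # Hypothetical policyholder features: [driver_age, annual_mileage_000s, prior_claims]
    X = np.array([[25, 15, 1], [40, 10, 0], [55, 8, 0], [30, 20, 2], [45, 12, 1]])
    claims = np.array([1, 0, 0, 2, 1])   # historical claim counts per policy year

    freq_model = PoissonRegressor(alpha=1e-3).fit(X, claims)

    avg_severity = 4_000.0               # assumed average cost per claim
    loading = 1.25                       # assumed expense and profit loading

    new_risk = np.array([[35, 14, 0]])
    expected_claims = freq_model.predict(new_risk)[0]
    premium = expected_claims * avg_severity * loading
    print(f"expected frequency: {expected_claims:.2f}, indicated premium: ${premium:,.0f}")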

As insurers modernize their legacy core systems, freeing siloed data, they’re able to automate their underwriting workflows to provide a faster digital buying experience, while connecting to additional data sources that help them apply the appropriate level of risk management.

The expansion of the generative AI market in the insurance industry can be largely attributed to its significant impact on operational efficiency. Insurers are increasingly adopting AI algorithms to streamline critical processes such as claims processing, underwriting, and policy administration.

Edited by Peter Sonner
