Artificial intelligence promises to revolutionize how property and casualty insurance business gets done. The exact ways AI will influence business are unknown. Disruption and regulation will certainly result, but it remains to be seen whether property and casualty insurers will follow or use AI as a means to lead. Will the industry simply comply with regulations and operate at the floor, or will it reach for the ceiling and promote the ethical use of artificial intelligence?
P&C insurers have experience working with AI technologies
According to the Insurance Information Institute (Triple-I) report “Pioneering Ethical AI: The Crucial Role of Property and Casualty Insurers”, P&C insurers have centuries of experience with disruptive technologies.
The internet, for example, makes research and communication swift and inexpensive, but hackers leave people and businesses vulnerable to million-dollar ransom demands.
Through such technological advances, P&C insurers have helped people and businesses maximize opportunity while managing the attendant risks.
P&C insurers are uniquely positioned to advance the conversation for ethical AI – not just for their own businesses, but for all businesses; not just in a single country, but worldwide.
Doing so will help AI tools evolve in a manner that lets them deliver on their promises while standing on the shoulders of a mature industry the world has trusted to safeguard all sectors of society.
Business leaders and insurers quickly learned that the best technology is the most responsibly developed technology (see How Insurers Can Get Best Results from Artificial Intelligence Tools?). Internet companies know customers need an environment that protects them from ransomware. And the risk-based pricing behind insurance has enabled businesses to manage the risks each technology presented.
Artificial Intelligence regulation
AI regulation has already started and will continue to evolve. Current actions show a fragmented, region-specific approach, highlighting the global scope of the issue.
This will likely lead to diverse regulatory frameworks, with some countries and regions imposing stricter rules than others. Leaders with a solid grasp of risk and regulation are in high demand.
Use of Artificial Intelligence Systems by Insurers
Insurers Are Uniquely Qualified
The largest P&C insurers understand risk. They assess risks and help their customers manage them. Plus, they must manage the risks of their own operations.
Together, these give the industry several core competencies:
- Imagining the Unimaginable: Insurers leverage data but also understand the importance of imagination in navigating a hard-to-predict world. Insurers anticipate risks that have yet to materialize. Their forward-looking approach is crucial in the emerging AI landscape. AI’s implications are vast and not fully understood. Developing methods to predict AI risk requires this kind of creativity.
- Historical Data Mastery: AI depends on data. Data feeds AI models, and the output – whether it is an image, a chat, or an essay – is data, too. Insurers understand the power of data (see Generative AI Provides Great Opportunities for Insurers). For centuries, leading insurers have grown proficient in collecting, maintaining, and deriving insights from data. They also know data has limitations, and they have strategies for when those limits are reached.
- Regulatory Acumen: Insurers navigate a complex web of regulatory environments – the USA’s 50 states and six other regulatory jurisdictions, plus approximately 200 countries and territories across the globe. They have a nuanced understanding of how different regulatory frameworks affect technological advancement. This knowledge is vital in shaping AI regulations that are effective, adaptable, and implementable.
A 2024 study commissioned by SAS found that nearly 90% of insurance decision-makers worldwide indicated they had a moderate to complete understanding of the potential impacts of generative AI on business processes.
This reflects insurers’ confidence that they understand the value of generative AI to their own and their customers’ businesses.
Risks of Inaction
Failure to participate actively in the evolution of AI in insurance could leave insurers—and, by extension, their clients—at a disadvantage. The risks include a reactive posture and biased regulations.
Without proactive engagement, insurers will likely find themselves adapting to practices that may not fully consider the unique needs of their industry or their clients. Failing to be proactive can also provoke a backlash.
P&C insurers have seen that with innovations like predictive models and insurance credit scores.
An ethical AI approach developed without insurer input could favor other parties or lack the necessary context for the complexity of the insurance domain, potentially leading to guidelines that are less effective or equitable.
Involvement of insurers in developing ethical AI
The involvement of insurers in developing ethical AI would be multifaceted, including:
- Regulatory Collaboration
- Advisory Capacity
- Advocacy for Ethical AI
- Provide Insurance Products
Insurers should collaborate closely with regulators to ensure emerging rules are robust, flexible, and inclusive.
They must proactively engage with regulators at all levels, both domestically and internationally, offering innovative ideas and challenging overly restrictive or unproductive proposals.
By acting as advisors, insurers can leverage their risk assessment expertise to support other industries in developing safer, more efficient practices. Existing risk management surveys and processes can serve as tools to promote ethical AI practices.
Insurers should advocate for AI systems that prioritize human welfare, aiming to prevent harm, avoid unfair discrimination, and close protection gaps, particularly for underserved groups.
As AI grows, individuals and businesses will require insurance products that offer risk transfer solutions for AI-related losses. The insurance industry’s role in the development of cybersecurity provides a relevant example of how it can contribute to underwriting ethical AI.
Next steps for insurers to begin developing an ethical AI strategy
Insurers should start by creating detailed plans to implement ethical AI within their own operations. This establishes them as credible leaders who can guide the broader business and regulatory sectors in ethical AI practices.
A workable governance model focuses on Oversight, Operations, Compliance, and Culture within organizations. This collaborative governance approach aims to foresee and prevent unintentional harm, especially to vulnerable groups.
Oversight
Oversight actions develop an implementation strategy that aligns with business objectives. These activities establish a vision and ethical framework for AI initiatives and set the foundation for subsequent tasks. Key actions in oversight include crafting a vision for AI in products and services and identifying and educating executive sponsors for AI governance within the organization.
Operations
Operations integrates strategy into technology and workflows. This includes developing AI technologies with ethical considerations in mind and implementing robust testing mechanisms that ensure market readiness before deployment. Standard operating procedures should be reviewed and adjusted as necessary to ensure consistency across all related infrastructure and vendors. New procedures, such as employee training on ethical AI, should be introduced.
Compliance
Compliance is a familiar concept for insurers, and much of this work will be about adjusting the current approach for ethical AI considerations. For example, market conduct preparation and internal audit procedures will need to consider external compliance requirements. Special consideration must be given to compliance of AI systems sourced from third parties. Contractual agreements, especially regarding ownership and protection of intellectual property, require careful review.
Culture
Culture embeds the ethical AI strategy across the organization’s ecosystem of creators, contributors, and consumers. This is achieved by adjusting incentives to promote AI governance participation, clearly communicating objectives and frameworks, and evaluating the talent pool for necessary skills or gaps.
Note that AI governance is not always a sequential exercise. Some of these steps may overlap and depend on each other – for example, new employee training on ethical AI contributes both to the goal of an AI-literate culture in the organization and to operational readiness.
Each insurer should adjust the order of operations to best fit their needs and current context.
AUTHORS: Sean Kevelighan – President and CEO at Insurance Information Institute (Triple-I), Peter Miller, CPCU – President/CEO of The Institutes / Chair of the Board at RiskStream Collaborative, Mike (Fitz) Fitzgerald – Advisory Industry Consultant (Insurance), SAS