According to a panel of experts convened by Reuters, the insurance industry is grappling with four primary ethical considerations in the adoption of artificial intelligence: built-in bias, the need for human talent, the importance of transparency, and the evolving landscape of regulation. Panelists emphasized the urgency of addressing these concerns, noting that AI is likely to become “table stakes” in the industry within the next 12 to 24 months.
“I would say put your principles in place, set the guardrails, and then go build a thing. Get out there, build something, make sure you evaluate it deeply and understand exactly how it’s going to perform in the real world, but get out there and build something,” stated Patrick Miller, head of data & AI at Newfront. Richard Wiedenbeck, chief AI officer at Ameritas, echoed this sentiment, adding, “Look, you don’t want to wake up three years from now and not move the needle. That is absolutely the wrong answer. You have to keep that top of mind and keep pushing forward.”
Challenge 1: Built-in Bias
A significant ethical dilemma stems from the potential for existing biases to be embedded in AI tools and the data or language models that feed them. As Suzanne Grover, VP of underwriting at Coastal Wealth, pointed out, historical biases, such as the disproportionate targeting of people of color in law enforcement or medical studies conducted primarily on men, can be inadvertently replicated by AI systems.
Wiedenbeck cited the example of a financial services AI system trained on 70 years of data that exhibited bias against women, who in the United States could not obtain credit cards in their own names until 1974. He urged organizations to adopt a responsible AI framework. “There are so many responsible AI framework models. Everybody should go get one and look at one. They’re very comprehensive,” said Wiedenbeck. “You should have an overarching frame of how you are looking at your responsible use of AI. You should have that as your backdrop, and then…dig into which parts of that you really want to lean into.”
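To make the idea of auditing for built-in bias concrete, here is a minimal sketch in Python of one check a responsible AI framework might call for: comparing approval rates across demographic groups. The groups, decisions, and data below are entirely hypothetical.

```python
# Minimal sketch of a demographic-parity check on model decisions.
# All groups and outcomes below are hypothetical; a real audit would
# run against the insurer's own decision logs under its AI framework.
import pandas as pd

# Hypothetical underwriting outcomes: 1 = approved, 0 = declined
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group; a large gap is a signal to investigate,
# not proof of discrimination, but it tells you where to look.
rates = decisions.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")  # 0.50 in this toy data
```

A check like this is only a first pass; responsible AI frameworks typically pair it with subtler measures, such as equalized odds or calibration by group, before a model goes into production.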
Challenge 2: Human Talent
Another key concern revolves around the role of human talent in an AI-driven insurance landscape. Panelists agreed that the goal is not to replace humans with AI, but to train or recruit the right talent to work alongside AI systems. The industry, they noted, has struggled to train underwriters, managers, and other professionals to understand the technologies underlying AI, and insurers must proactively prepare their workforces for the transition.
Grover emphasized the need for immediate action in this regard: “We’ve been talking about looking for that talent and finding that talent, but I think it’s going to start with training the talent we already have. It’s going to be different skill sets within these same jobs. That’s a massive leap, and that training really needs to start now.”
Challenge 3: Transparency and Explainability
Transparency and explainability emerged as crucial factors in gaining consumer trust and establishing a competitive advantage. The ability to communicate clearly when, why, and how AI is being used will differentiate companies in the market. “That’s going to be where we’re seeing the competitive advantages [and] early wins in this technology as it kind of continues to evolve,” stated Grover. Miller added, “You have people who don’t understand the systems and there’s a lot of fear and uncertainty with AI. Explainability is one of our core AI principles [and] I think it is one of the clearest paths towards actually building trust with your users.”
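As one illustration of what explainability can look like in practice, the sketch below uses an interpretable linear model whose per-feature contributions can be read off directly. The features, data, and applicant are hypothetical; the point is that the model’s reasoning can be stated in terms a customer or regulator can follow.

```python
# Minimal sketch of per-decision explainability with an interpretable
# model. Features and data are hypothetical stand-ins for underwriting
# inputs; no real scoring logic is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [age, claims in the last five years]
X = np.array([[25, 0], [40, 1], [55, 3], [35, 0], [60, 4], [30, 1]])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# In a linear model, coefficient * feature value is that feature's
# contribution to the log-odds of approval -- a direct explanation.
applicant = np.array([50, 2])
contributions = model.coef_[0] * applicant
for name, value in zip(["age", "claims_last_5yr"], contributions):
    print(f"{name}: {value:+.2f} toward approval")
```

More complex models need attribution tools (for example, Shapley-value methods) to produce comparable per-decision explanations, which is part of why explainability is a design decision rather than an afterthought.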
Challenge 4: Regulation and Governance
In the absence of comprehensive federal AI regulation, companies are advised to establish robust internal structures to govern the ethical use of AI. Frank Neugebauer, VP of generative AI at the Cincinnati Insurance Companies, stated, “We govern it daily… We have to be ready because we don’t want to implement AI that is going to be either illegal at some point, prohibited or otherwise heavily changed.”
Panelists agreed that no one-size-fits-all solution exists, but they recommended starting with a centralized approach that can be federated over time. Wiedenbeck commented, “You’ve got to have the rules that you’re going to play by as a company. That’s step one, and that isn’t something that you decentralize and say everybody plays by a different set of rules.” He suggested that once the foundation is set, companies might expand the framework to include co-leadership, co-sponsorship, or partnerships with IT.
Getting it Right
The panelists agreed on the urgency of responsible AI adoption in the insurance sector, which has historically been slow to embrace new technologies given its risk-averse nature. “I think that this type of disruptor really only comes along maybe generationally. So, the people who do this well are not only just going to have a significant competitive advantage, but the people that do not do this well, or choose not to do it at all, lose the game completely,” said Grover.
The Reuters panel, “AI and Ethics: A Collision Course for Insurance?” was moderated by Jennifer Kyung, president of the consulting firm NextGen Underwriting. Newfront is a global business insurance company founded in 2017 and headquartered in San Francisco, California, with offices throughout the United States. Ameritas Life Insurance Corporation, founded in 1887 and headquartered in Lincoln, Nebraska, offers financial services with a focus on life insurance and annuities. Coastal Wealth is a financial services firm founded in 2016 and based in Fort Lauderdale, Florida. The Cincinnati Insurance Company, founded in 1950 and headquartered in Fairfield, Ohio, specializes in property and casualty (P&C) and life insurance, among other services.
