American insurers are being urged not to drag their feet on ensuring their use of AI is “explainable,” as regulators and consumers alike begin to demand it.
“It’s not like this is a future issue. The data that we see, and then also the qualitative aspect of that, indicate that the investments going on now, which are largely in things that are being tested and tried out, are going to move into production, and we need to be ready for that,” said Mike Fitzgerald, insurance industry advisor at SAS.
Fitzgerald said explainable AI, in which the decisions or output of an artificial intelligence tool, model or solution can be readily understood by humans, is key to preparing for this.
“The need exists now, but it’s coming because we’re really at the front end of that investment curve and the production is coming. So, the regulator is going to be looking for things; the consumers are going to be expecting to not see bad results from the use of these tools; and insurers overall are going to want to have more control than they’ve ever had around their analytic models — including AI, including predictive models, including advanced analytics,” he said.
What is explainable AI?
Explainable AI (XAI) concerns the clarity of the reasoning by which an AI tool reaches a decision. It can refer either to a specific AI system or to the broader concept of AI technology designed so that its logic is easily understood by humans.
For example, in an insurance context, if an AI tool used in claims handling resulted in a denial, XAI would provide clear and transparent reasoning behind that denial.
“Explainable AI is why an analytic model reached a result that it did. In other words, if it’s using weights, certain data or certain pre-scripted rules or values that it’s been directed to, then all of those things go into why it came up with a result,” Fitzgerald explained.
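The mechanism Fitzgerald describes can be illustrated with a toy example. In the sketch below, the claim features, weights, and review threshold are all invented for illustration; real insurer models are far more complex. The principle is the same, though: when a score comes from explicit weights applied to known data, each factor's contribution to the result can be reported alongside the outcome.

```python
# A minimal sketch of explainability for a weighted-score model.
# All feature names, weights, and the threshold below are hypothetical.

FEATURES = {"claim_amount_vs_policy_avg": 2.4,
            "days_to_report": 1.8,
            "prior_claims_count": 3.0}
WEIGHTS = {"claim_amount_vs_policy_avg": 0.9,
           "days_to_report": 0.4,
           "prior_claims_count": 1.1}
THRESHOLD = 6.0  # scores above this flag the claim for manual review

def score_claim(features):
    """Score a claim and return the per-feature contributions."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, contributions = score_claim(FEATURES)
flagged = total > THRESHOLD
print(f"score={total:.2f}, flagged={flagged}")
# The "explanation": each factor's share of the score, largest first.
for name, share in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {share:+.2f}")
```

Because every contribution is explicit, an adjuster or regulator can see exactly which factors drove a flag, rather than receiving only a black-box yes/no.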
Why XAI must not be overlooked
According to Peter McMurtrie, partner and insurance practice lead, West Monroe, many insurers have somewhat overlooked the explainability of AI in the race to roll out the new technology.
“A lot of AI is generating outcomes, but there’s a bit of a black box methodology in terms of how the outcomes are being created. That can be through a combination of complex algorithms [or] the myriad or the variety of data sources that it’s being trained on. Companies that are using those solutions [can be] really focused on the reliability of the outcome but not the transparency around showing your work or showing the math,” McMurtrie said.
However, Fitzgerald noted that the need for XAI increases as AI becomes more prominent in insurance and use cases expand.
“In the ‘early days’ of AI, this wasn’t really necessary because there weren’t that many uses. But it’s kind of been democratized to where the usage is ubiquitous, and that has opened up a number of different issues and concerns,” he said.
To this end, Fitzgerald suggested explainable AI can be thought of as an “exercise in change management.”
“A lot of things are still in the laboratory, but as they move into production, as they move into general use, as insurers look to get value out of their investments and they move into the everyday workflow of an insurer, then you have to have different tools in order to make sure that it’s being used and that you understand what it’s doing appropriately,” he said.
Demand for explainable AI
McMurtrie noted that the two largest demands for explainable AI come from insurance regulators and everyday consumers, both of whom want transparency into how AI is being used and assurance that it is being used ethically and without bias.
“Not always, but in some instances, customers are more comfortable when they can understand or have that transparency of how AI is coming up with the recommendations. And then the regulators clearly want visibility into what is the AI being trained on, what are the models being used [and] how are the outcomes being derived,” McMurtrie said.
“Insurers not only want to comply with the law, but they want to make sure that they’re doing the right thing by their customers. So, it’s not only just the regulation but it’s also because it protects the customers, it protects the brand of the insurer to understand and be able to explain and know why — that’s the AI explainability — their analytic models reach the conclusions that they do,” Fitzgerald added.
Additionally, Fitzgerald said explainability builds trust among end-users in the company who will have to use the model, helping them feel more comfortable adopting it and using it with clients.
“That adoption piece is tied directly to trust, and trust is tied to being able to explain to that technical underwriter, claims adjuster, loss control expert, you name it, why a model did what it did, and that’s why it’s so important,” he said.
SAS is a software development company founded in 1976 and based in Cary, NC. With over 400 offices around the world, SAS provides business analytics and AI solutions to clients.
West Monroe Partners is a business and technology consultancy based in Chicago, Illinois. Founded in 2002, the firm has more than 2,000 employees in 10 offices around the world.
© Entire contents copyright 2025 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.