Policy discussions on the use of artificial intelligence in insurance are “unfounded” and “detrimental to policyholders,” according to an analysis from the National Association of Mutual Insurance Companies.
The use of AI in insurance underwriting and rate making has drawn concern from some regulators, advocates, and policymakers over whether AI could lead to proxy discrimination, algorithmic bias, and eventual changes to the affordability and availability of insurance products in certain areas or for certain classes.
NAMIC said 18 states are currently debating “flawed” AI-related legislation. Guidance from the National Association of Insurance Commissioners (NAIC) has added to the “nebulous concept of algorithmic bias,” NAMIC said.
“Contrary to what may be perceived as well-intentioned social efforts by regulators, policyholders will be harmed by growing efforts to elevate concepts of ‘fairness’ divorced from actuarial science,” wrote Lindsey Klarkowski, NAMIC’s policy vice president in data science, AI/[machine learning], and cybersecurity. This will result in “an inevitable break of the insurance product at its core,” she added.
Klarkowski authored the report, which is meant to dispel five myths about the use of AI and Big Data in the insurance industry.
“In setting rules of the road, policymakers must recognize that insurance is distinct in function and pricing from many other consumer products,” Klarkowski added in a statement. “Insurance classifies based on risk, and insurance law requires those risk classifications to be actuarially sound and not unfairly discriminatory.”
Any regulation aimed at the industry’s use of AI in pricing has to be unique to the industry, and any restriction on an insurer’s ability to price a policyholder’s risk will lead to greater availability and affordability problems, NAMIC concluded, adding that the notion that AI will lead to bias or disparate impact conflicts with the risk-based foundation of insurance.
“The data insurers use for risk-based pricing is data that is actuarially sound and correlated with risk and does not include nor use certain protected class attributes,” Klarkowski wrote. “To argue that insurer use of data, algorithms, or AI in risk-based pricing is biased or skewed would be to say that the actuarially sound data is not representative of the risk the policyholder represents, which insurance laws already prohibit.”
Separately, if a disparate impact standard were applied to insurance, the industry’s pricing approach would no longer be based on underlying insurance costs and would result in rates that are unfairly discriminatory. The industry, NAMIC said, already adheres to the legal standard of unfair discrimination.