Testing for AI Bias
Colorado Division of Insurance proposes first-of-its-kind regulation requiring life insurers to test their underwriting process for racial and ethnic bias.
The Geneva Association recently published a comprehensive report on the use and regulation of AI in the insurance industry. AI is a key part of the digital transformation of insurance, the report says, and its associated risks are already familiar to insurers.
The Colorado Division of Insurance has approved a closely watched regulation that could affect how life insurers use outside vendors’ artificial intelligence systems, and any other outside sources of data or analytical technology, in life insurance underwriting.
The introduction of artificial intelligence (AI) into the insurance industry is transformative, heralding a new era marked by improved efficiency, accuracy and personalization. However, with this technological shift comes a significant yet often overlooked concern—AI bias.
Unfair discrimination occurs when insurers consider factors unrelated to actuarial risk in determining whether to provide insurance to particular individuals or groups and, if so, at what price and on what terms.
Colorado is leading the way on regulating insurers’ use of data, with its first rules requiring life carriers to ensure that the use of third-party data in life insurance underwriting does not discriminate against protected groups.
State insurance regulators are finishing up an industry survey to learn how home insurers are using artificial intelligence and machine learning.
Recent announcements in Colorado, Louisiana and New York addressing the use of not only big data but also more traditional “scoring” models signal a continued focus on this area by state insurance regulators.
The emergence of consumer-focused regulations regarding data privacy, the use of big data/artificial intelligence (AI), the use of genetic test results, and the Right to be Forgotten (RTBF), among other factors, challenges the ability of insurers to profile risk and provide differentiated offerings. This article explores this evolving landscape, including trends and outcomes by jurisdiction, and offers suggestions for continued monitoring and engagement.
Our previous blog post examined Disparate Impact Testing and provided background on the concepts of algorithmic accountability and proxy discrimination; this blog describes the current, evolving regulatory environment and makes some projections about the future.
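For readers new to the mechanics of disparate impact testing mentioned above, the sketch below shows one common illustrative approach: comparing approval rates across demographic groups using an adverse impact ratio against a four-fifths rule of thumb. The column names, reference group, threshold, and data are assumptions made for illustration only; they are not drawn from the Colorado regulation or any of the articles discussed here, and actual regulatory testing methodologies may differ substantially.

```python
# Illustrative sketch of a disparate impact check on underwriting outcomes.
# Assumes a pandas DataFrame with hypothetical columns "group" (a protected
# class attribute) and "approved" (1 = offered coverage, 0 = declined).
# The 0.8 cutoff reflects the common "four-fifths" rule of thumb, not any
# specific regulatory standard.
import pandas as pd


def adverse_impact_ratio(df: pd.DataFrame,
                         group_col: str = "group",
                         outcome_col: str = "approved",
                         reference_group: str = "reference") -> pd.Series:
    """Return each group's approval rate divided by the reference group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]


if __name__ == "__main__":
    # Hypothetical underwriting decisions for illustration only.
    data = pd.DataFrame({
        "group": ["reference"] * 100 + ["group_b"] * 100,
        "approved": [1] * 80 + [0] * 20 + [1] * 60 + [0] * 40,
    })
    for group, ratio in adverse_impact_ratio(data).items():
        flag = "potential disparate impact" if ratio < 0.8 else "within threshold"
        print(f"{group}: ratio={ratio:.2f} ({flag})")
```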