Insurance Industry Corporate Governance Newsletter

22 February 2022
To Our Clients and Friends,

Last month, the inaugural edition of our Insurance Industry Corporate Governance Newsletter focused on the emerging role of private capital in the insurance sector.

This month, we discuss how developments in artificial intelligence (“AI”) are affecting insurers, with a particular focus on the concept of “proxy discrimination” and how companies can deal with the emerging patchwork of AI-related laws and regulations across the United States.

The Emerging Concept of Proxy Discrimination

AI has evolved rapidly in recent years and is changing the way companies make decisions, underwrite policies, handle claims, market and advertise, and strategically invest in emerging products. As companies increasingly use AI for everything from back-office operations to core business lines, insurance regulators have kept a sharp eye on the use of these models. Chief among their concerns is the risk that the novel and expansive data sources used to train or operate these AI models might result in unfair discrimination, including through the use of “proxies” for protected class characteristics. In this context, “proxy discrimination” generally refers to the use of an otherwise non-prohibited, facially neutral variable as a “proxy”—or stand-in—for a protected class characteristic.

For example, both credit scores and credit-based insurance scores have drawn particular scrutiny as potential proxies for race, with several states restricting the use of these scores, particularly in automobile and homeowners insurance. Additionally, beginning next month, the Washington State Insurance Commissioner will temporarily prohibit the use of credit history to determine premiums and eligibility for coverage in private passenger auto, homeowners and renters insurance, with the prohibition running for three years after the conclusion of the COVID-19 public health emergency. The Commissioner reasoned that the pandemic and the CARES Act had resulted in the collection of “objectively inaccurate” credit histories for some consumers, particularly people of color, thereby leading to unfair discrimination. In 2017, the New York Department of Financial Services (“NY DFS”) prohibited the use of education and occupation criteria by private passenger auto insurers in setting rates. Regulators have also begun scrutinizing apparently neutral attributes more broadly, including social media data that might reveal where applicants buy coffee or which type of laptop they use.

Regulators are also underscoring the importance of testing input variables in AI models to determine whether they serve as proxies for protected class characteristics, even shifting the burden onto insurers to prove they are not unfairly discriminating. This raises a critical question: how should companies go about disproving unfair discrimination? Increasingly, regulators expect companies to back-test their models to ascertain whether they are using impermissible proxies. Insurers generally do not collect data on protected class characteristics, but the regulators we have spoken with, or whom we have heard speak publicly on these issues, are saying that this lack of data is not a defense against the need to back-test AI models. Some have suggested that insurers identify protected classes for their back-testing by leveraging certain data, like zip codes, whose use might previously have appeared precariously close to prohibited redlining practices, and they have pointed to the Consumer Financial Protection Bureau’s Bayesian Improved Surname Geocoding (“BISG”) approach, which uses census data, including consumers’ surnames and addresses, to test for proxy discrimination. We are happy to discuss this further and to use our regulatory knowledge and instinct to help companies create a defensible testing and compliance framework.
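For readers who want a concrete sense of how a BISG-style calculation works, the sketch below illustrates the core Bayesian step: a surname-based prior is combined with a geography-based likelihood to produce estimated group probabilities for each consumer. It is a minimal illustration only, not the CFPB’s actual code or a compliance tool; the probability tables are hypothetical placeholders, and a real implementation would draw on the Census Bureau’s surname list and block-group demographics as described in the CFPB’s published methodology.

```python
# Minimal, illustrative sketch of a BISG-style proxy calculation.
# The probability tables below are hypothetical placeholders.

SURNAME_PRIORS = {
    # P(group | surname), e.g. derived from the Census surname file (hypothetical values)
    "GARCIA": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "asian": 0.01, "other": 0.01},
    "SMITH":  {"white": 0.70, "black": 0.23, "hispanic": 0.03, "asian": 0.01, "other": 0.03},
}

GEO_LIKELIHOODS = {
    # P(geography | group): share of each group's national population living
    # in the consumer's census block group (hypothetical values)
    "blockgroup_123": {"white": 0.00002, "black": 0.00001, "hispanic": 0.00008,
                       "asian": 0.00003, "other": 0.00002},
}

def bisg_posterior(surname: str, block_group: str) -> dict[str, float]:
    """Combine surname-based priors with geography likelihoods via Bayes' rule,
    assuming surname and geography are independent conditional on group."""
    prior = SURNAME_PRIORS[surname.upper()]
    likelihood = GEO_LIKELIHOODS[block_group]
    unnormalized = {group: prior[group] * likelihood[group] for group in prior}
    total = sum(unnormalized.values())
    return {group: p / total for group, p in unnormalized.items()}

if __name__ == "__main__":
    probs = bisg_posterior("Garcia", "blockgroup_123")
    print(max(probs, key=probs.get), probs)
```

The resulting probabilities are then typically used in the aggregate to compare model outcomes across groups, rather than to label any individual consumer.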

Recent AI Regulatory and Legislative Developments

Predictably, over the past three years, a patchwork of laws and regulations has emerged that focuses on the risks of AI-driven unfair discrimination across a diverse range of insurance practices, including underwriting, rating, claims handling and marketing. While no single, uniform rule has yet emerged, we believe a clear trend is developing: the burden will be on insurers to demonstrate that any given attribute used in the underwriting or claims process (especially a novel one) is not a proxy for an otherwise prohibited characteristic.

For example, in 2019, NY DFS issued its “Circular Letter No. 1,” which prohibits life insurers from using external data sources, algorithms or predictive models for underwriting or rating purposes unless they can establish that the data source does not use and is not based in any way on a protected class characteristic, noting that the burden remains on the insurer at all times to make this showing. In 2021, Connecticut notified insurers that their technology must fully comply with antidiscrimination laws. Colorado’s legislature followed, prohibiting insurers from using external consumer data, algorithms or predictive models that unfairly discriminate based on protected class characteristics. The new Colorado law also mandated the adoption of implementing rules, still forthcoming, that will require insurers to attest to their adoption of risk mitigation measures for AI, provide detailed disclosures concerning their use of consumer data, and promptly remedy any unfair discrimination.

Looking ahead, Oklahoma and Rhode Island both have pending bills patterned on Colorado’s law, while a Washington, D.C. bill would not only prohibit companies from using algorithms in a discriminatory manner when making decisions that impact “important life opportunities,” including insurance, but also require detailed audit results to be reported to the D.C. Office of the Attorney General. We will keep our clients and readers apprised of developments, but we believe these bills herald an emerging de facto approach that would shift the burden to insurers to demonstrate up front that their AI models are not unfairly discriminatory.

What Companies Should Do Now

The question is not whether prescriptive AI regulation is coming, but when. Thinking through AI compliance now will help insurers future-proof their AI models against forthcoming regulatory and reputational risks. The writing on the wall is that insurers will have to implement a defensible compliance program that includes policies concerning when and how AI models and external data should be tested for unfair discrimination.
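To make the testing element of such a program more concrete, the sketch below shows one way a back-test might be structured once proxy-assigned group probabilities (for example, from a BISG-style method) are available: outcome rates are estimated on a probability-weighted basis and compared across groups. The data structures, the reference-group comparison, and the 0.80 screening threshold are illustrative assumptions on our part, borrowed from employment testing practice, and are not a prescribed insurance regulatory methodology.

```python
# Illustrative sketch of a probability-weighted disparate impact back-test.
# Assumes proxy-assigned group probabilities and a binary model outcome,
# such as an underwriting approval. The 0.80 flag threshold is shown only
# as an example screening heuristic, not a regulatory standard.

from dataclasses import dataclass

@dataclass
class Applicant:
    approved: bool                 # model outcome being tested
    group_probs: dict[str, float]  # proxy-assigned probabilities per group

def probability_weighted_rates(applicants: list[Applicant]) -> dict[str, float]:
    """Estimate each group's approval rate, weighting every applicant by the
    probability of belonging to the group (avoids hard single-group labels)."""
    totals: dict[str, float] = {}
    approvals: dict[str, float] = {}
    for a in applicants:
        for group, p in a.group_probs.items():
            totals[group] = totals.get(group, 0.0) + p
            approvals[group] = approvals.get(group, 0.0) + p * a.approved
    return {g: approvals[g] / totals[g] for g in totals if totals[g] > 0}

def flag_disparities(rates: dict[str, float], reference: str,
                     threshold: float = 0.80) -> dict[str, float]:
    """Compare each group's approval rate to a reference group and return
    the ratios that fall below the screening threshold for further review."""
    ref_rate = rates[reference]
    return {g: r / ref_rate for g, r in rates.items()
            if g != reference and r / ref_rate < threshold}
```

A flagged ratio would not itself establish unfair discrimination, but documenting this kind of routine screening, and the follow-up analysis it triggers, is the sort of evidence regulators are increasingly likely to expect.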

Finally, boards of insurers or their parent companies should take note of how their oversight obligations for technology and risk models may extend to their companies’ uses of AI. The UK Financial Conduct Authority and the Bank for International Settlements have both recently underscored that “[b]oardrooms are going to have to learn to tackle some major issues emerging from AI,” and that existing corporate governance standards assign ultimate responsibility for AI risk to the board and senior management. Just as NY DFS has required each New York domestic insurer to designate both “a member or committee(s) of its board” and one or more members of its senior management to be responsible for oversight of the insurer’s management of climate risks, we could see a world in which a similar responsibility is assigned to boards of directors with respect to discrimination and the other regulatory, operational and reputational risks of AI.

Conclusion

As we’ve said on our recent webcasts concerning AI and discrimination in the insurance industry (available on demand), an ounce of prevention on AI is worth a pound of cure. By taking appropriate measures now, insurance companies can avoid investing significant resources in data and AI models that might later draw regulatory scrutiny or run afoul of emerging laws or regulations.