Developing Regulatory Scrutiny of Underwriting Factors

5 August 2020
Key takeaways:
  • With ongoing national attention on the movement for racial justice, there are signs that insurance regulators have accelerated their focus on potentially discriminatory underwriting practices and the industry’s use of big data and artificial intelligence.
  • As the national conversation surrounding potential discrimination and implicit bias in financial services and, more specifically, the insurance industry continues to grow, the risk of federal intervention looms.
  • While there is growing consensus among regulators that external data and algorithms should not be used in ways that have a disparate impact, another observable trend is the narrowing of traditionally acceptable underwriting factors.

With increased regulatory scrutiny of both prohibited and permissible insurance underwriting factors, are reports of underwriting’s demise greatly exaggerated?

NAIC activity. For some time now, the National Association of Insurance Commissioners (the “NAIC”) and various state insurance regulators have been looking into the issue of discrimination in insurance underwriting, particularly as an increasing amount of underwriting is conducted by artificial intelligence and algorithms, frequently provided by third-party vendors. With ongoing national attention on the movement for racial justice, there are signs that insurance regulators have accelerated their focus on potentially discriminatory underwriting practices and the industry’s use of big data and artificial intelligence (“AI”). As a result, we expect that action by the NAIC and state regulators on these issues may come sooner than previously expected.

Within the last year, the NAIC created two new working groups—the Artificial Intelligence Working Group and the Accelerated Underwriting Working Group—and charged them with examining insurers’ use of external data, data analytics and AI in underwriting. During the proceedings of these working groups, consumer groups have argued that algorithmic techniques like data mining do not eliminate human biases from the underwriting process; on the contrary, data mining can inherit the prejudices of prior decision-makers or reflect widespread biases that persist in society at large. More recently, in June 2020, the AI working group introduced the term “proxy discrimination” into its draft AI principles, which, in the current draft being considered by the working group’s NAIC parent task force, caution insurers to take proactive steps to avoid using such proxies to the detriment of protected classes. Trade groups, and even a few regulators, have questioned whether including the term “proxy discrimination” would preclude the use of proxy variables for legitimate and acceptable business purposes, and one regulator (North Dakota) has proposed language to explicitly carve out such legitimate uses.

State activity. In addition to the NAIC, state regulatory actions to limit various underwriting standards have already begun. A prime example is the New York Department of Financial Services (“DFS”) Circular Letter No. 1 (2019), which prohibited life insurers from using external data and algorithms in underwriting that have a disparate impact on protected classes and required that life insurers be able to demonstrate that their underwriting models do not produce a disparate impact. However, based on a longstanding DFS legal opinion, insurers in New York are not legally permitted to collect data on certain protected classes, including race, which has led to questions about how insurers can demonstrate the absence of such a disparate impact. The DFS has, however, indicated in informal communications that the use of proxy variables (e.g., ZIP code as a proxy for race) might help insurers make these types of evaluations.

Federal activity. As the national conversation surrounding potential discrimination and implicit bias in financial services and, more specifically, the insurance industry continues to grow—as evidenced by ongoing coverage in the mainstream media, including a recent Wall Street Journal article on discrimination in auto insurance—the risk of federal intervention looms. A bill recently introduced by Senator Sherrod Brown, the Data Accountability and Transparency Act of 2020, bears watching. The bill would establish a new federal agency, the Data Accountability and Transparency Agency, to protect individuals’ privacy; the agency would have rulemaking, supervisory and enforcement authority and would prohibit discriminatory outcomes in various sectors, including insurance. Notably, the bill currently contains a private right of action and provides that a court may award damages of $100 to $1,000 per violation per day, or actual damages, whichever is greater, as well as punitive damages and other forms of relief. Additionally, insurers (and other companies) using algorithms would have to provide accountability reports, and the bill would ban facial recognition technology. While Senator Brown’s proposed legislation is unlikely to pass under the current administration, its chances of passage may increase with a change in administration and in the composition of the Senate, and the bill is likely indicative of increased interest in these issues at the federal level.

Narrowing of permissible factors. While there is growing consensus among regulators that external data and algorithms should not be used in ways that have a disparate impact, another observable trend is the narrowing of traditionally acceptable underwriting factors.

Several states have recently increased prohibitions on the use of certain factors in underwriting automobile insurance. For example, a number of states, including California, have prohibited the use of gender as an underwriting factor. Other states, including New Jersey, have prohibited the use of credit history as an underwriting factor, and Michigan legislators in 2019 voted to prohibit a number of factors in the underwriting of auto insurance, including gender, education and credit score. Other states are considering further bans. Maryland and New Jersey have considered prohibiting the use of education and occupation as underwriting factors, which New York already bans. Illinois and Maryland are even considering prohibiting the use of ZIP codes in underwriting.

As this movement gains momentum, and especially in light of DFS Circular Letter No. 1, there is increasing speculation that these restrictions will spread further across the insurance industry, including into the underwriting of life, health and disability insurance. For example, Florida’s legislature recently banned the use of genetic data in life insurance underwriting, on the policy justification that such use would result in improper premium hikes for insureds with certain genetic markers; insurers in Florida are now restricted to using only genetic information present in the insured’s medical record. Likewise, Massachusetts in 2019 banned disability insurers from charging women more than men. In addition, the Connecticut Insurance Department has prohibited life insurance applications from including medical or other questions related to COVID-19, including questions about being quarantined.

Future developments. These trends may portend future challenges for insurers in demonstrating that their underwriting formulas do not result in a disparate impact on protected classes, and they leave the industry uncertain as to which heretofore permissible underwriting factors will be banned on shifting public policy grounds, regardless of actuarial justification and established correlation, or even causation. Insurers may also face external pressure from competitors, activist shareholders and other stakeholders to change their underwriting standards, although we have yet to see any significant actions in this regard. In this environment, insurers will need to consider their approach to traditional underwriting holistically.