The Increasing Risks of AI Washing and Securities Fraud Class Actions

4 April 2024
Key Takeaways:
  • Public companies face an increasing risk of securities class action litigation based on statements about their use of AI that are alleged to be false or misleading. Such AI-related securities class actions are likely to become more frequent as public companies increasingly disclose how they use AI in their public filings.
  • Shareholder plaintiffs can scrutinize these disclosures in hindsight and contend that a company mischaracterized its AI technologies or their use, for example by failing to disclose an AI use case that actually existed or by omitting an associated risk of generative AI, such as quality control, privacy, IP, data-use limitations, cybersecurity, bias, or transparency.
  • Given the likely enhanced scrutiny of AI disclosures by future shareholder plaintiffs, companies should carefully consider whether to make such AI-related disclosures and, if so, how to frame them to avoid claims that those disclosures are misleading.

Public companies face an increasing risk of securities class action litigation based on statements about their use of AI that are alleged to be false or misleading. We have previously written about the legal risks that companies face if they oversell the capabilities of their AI systems, a practice known as “AI washing.” Indeed, the SEC has stated that AI is one of its examination priorities for 2024 and recently brought its first AI-related fraud cases.

Now, AI-related securities class actions are beginning to emerge. For example, on February 21, 2024, shareholders brought a securities class action against Innodata Inc., its CEO, and other corporate officers for allegedly violating Sections 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The complaint alleges that Innodata falsely represented to investors and advertised that it used AI-powered operations for data preparation, when it actually relied on offshore manual labor, not proprietary AI technology, to digitize medical records and insurance data, and that it underfunded its AI research and development. The complaint draws on assertions in a short seller’s research report that coincided with a drop of more than 30% in the company’s stock price, which undoubtedly drew attention from the plaintiffs’ bar.

Moreover, as we previously wrote, Zillow is facing a securities class action for allegedly misleading shareholders with overly optimistic claims about its Zillow Offers home-pricing tool. That tool used AI to estimate home prices and make cash offers on certain properties. It allegedly proved unreliable at forecasting home prices, in part because the pandemic shifted market dynamics, which allegedly resulted in significant losses for the company, the wind-down of the Zillow Offers business, and a decline in the company’s stock price. The lead plaintiff’s motion for class certification is pending, and the case is currently set for a 10-day jury trial in June 2025.

AI-related securities class actions are likely to become more frequent as public companies increasingly disclose how they use AI in their public filings. Shareholder plaintiffs can scrutinize these disclosures in hindsight and contend that a company mischaracterized its AI technologies or their use, for example by failing to disclose an AI use case that actually existed or by omitting an associated risk of generative AI, such as quality control, privacy, IP, data-use limitations, cybersecurity, bias, or transparency. Given this likely enhanced scrutiny, companies should carefully consider whether to make AI-related disclosures and, if so, how to frame them to avoid claims that those disclosures are misleading.

The current excitement over AI has many similarities to the rise of dot-com stocks in the late 1990s. When that bubble burst in the early 2000s, it resulted in a wave of class action securities cases against tech companies, as well as other market participants who had publicly promoted them. Like many of the dot-com companies, some publicly traded AI companies today have significant valuations without substantial revenues. Should the AI bubble also burst, companies, officers, and analysts may face a similar spate of securities fraud class action lawsuits from shareholders.

What Might Be Considered Misleading?

To state a claim for securities fraud, a private plaintiff must allege (among other elements) an intentional or reckless misstatement or omission of material fact. In considering what kinds of statements about AI use could be viewed as misleading within the meaning of the federal securities laws, companies should focus on recent remarks by Gary Gensler, Chair of the Securities and Exchange Commission, at Yale Law School:

As AI disclosures by SEC registrants increase, the basics of good securities lawyering still apply. Claims about prospects should have a reasonable basis, and investors should be told that basis. When disclosing material risks about AI—and a company may face multiple risks, including operational, legal, and competitive—investors benefit from disclosures particularized to the company, not from boilerplate language.

Chair Gensler further stated that AI washing may violate securities laws, signaling a focus on statements that may oversell a company’s AI capabilities or practices.

The FTC’s recent guidance related to AI disclosures is also instructive. The FTC stated that it may use Section 5 of the FTC Act to bring enforcement actions against companies making deceptive AI-related claims, including companies that:

  • exaggerate what their AI systems can actually do;
  • make claims about their AI systems that do not have scientific support or apply only under limited conditions;
  • make unfounded promises that their AI systems do something better than non-AI systems or a human;
  • fail to identify known likely risks associated with their AI systems; or
  • claim that one of their products or services utilizes AI when it does not.

Takeaways for Mitigating Securities Fraud Class Action Risk

Public companies may want to consider embedding the following AI governance practices into their existing disclosure practices to limit the risk of possible securities fraud class actions:

  • Define AI Consistently and Truthfully. To avoid claims of misrepresenting AI or AI usage, consider creating a single definition of AI that is used for both internal and external purposes and that aligns with the company’s actual AI capabilities and use cases. Doing so mitigates the risk that the company will characterize something as AI externally that is not considered AI internally, a misalignment that could be interpreted as misleading.
  • Ensure Appropriate Technical and Legal Review of All Current and Proposed Public Statements About AI. This review should involve individuals with AI expertise and focus on ensuring that disclosures are accurate, can be substantiated, and do not exaggerate or overpromise.
  • Maintain Robust Risk Disclosures. Precautionary risk disclosures about AI or the use of AI may reduce securities litigation risk, for example by disclosing the risk that the AI may hallucinate or otherwise fail to work properly. In securities class actions arising from cyber incidents and data loss, for instance, companies have successfully argued that past statements about their cybersecurity programs were not misleading because their SEC risk disclosures cautioned that their systems were vulnerable to theft, loss, or fraudulent use of company and customer data, were susceptible to breaches, and had experienced security incidents in the past (see, e.g., In re Marriott Int’l, Inc., 31 F.4th 898, 903 (4th Cir. 2022)).
  • Conduct AI Risk Assessments. For high-risk AI systems, consider conducting impact assessments to determine foreseeable risks and how best to mitigate those risks, and then consider disclosing those risks in external statements about the AI systems.


This publication is for general information purposes only. It is not intended to provide, nor is it to be used as, a substitute for legal advice. In some jurisdictions it may be considered attorney advertising.