30 Days to Form ADV: Have You Reviewed Your AI Disclosures?

26 February 2024

Registered investment advisers (“RIAs”) have swiftly embraced AI for investment strategy, market research, portfolio management, trading, risk management, and operations. In response to the exploding use of AI across the securities markets, Chair Gensler of the Securities and Exchange Commission (“SEC”) has declared that he plans to prioritize enforcement against securities fraud in connection with AI disclosures and has warned market participants against “AI washing.” Chair Gensler’s statements reflect the SEC’s sharpening scrutiny of AI usage by registrants. The SEC’s Division of Examinations named AI as one of its 2024 examination priorities and launched a widespread AI sweep of RIAs focused on the use of AI in connection with advertising, disclosures, investment decisions, and marketing. The SEC previously charged an RIA in connection with misleading Form ADV Part 2A disclosures regarding the risks associated with its use of an AI-based trading tool.

Early AI disclosures by registrants typically included only generalized references to the use of aggregated or “big” data, algorithmic analysis, and machine learning. With the rapid and widespread adoption of AI, however, disclosures have started to include more specific references to AI tools and models and to the use of AI to make predictions, anticipate trends, develop investment themes, and inform trading decisions. As AI tools continue to multiply and AI adoption continues to expand, we expect to see more Part 2A disclosures specifically addressing AI, as well as greater SEC interest in testing the accuracy of the statements in those disclosures.

Accordingly, RIAs preparing to file their annual Form ADV amendments on March 30 should anticipate enhanced examination and enforcement scrutiny of their Part 2A disclosures about AI. RIAs weighing how to make Part 2A AI disclosures should consider the following best practices:

  • Be clear on what you do (and don’t) use AI for: AI usage varies widely among RIAs as a function of each adviser’s business model, investment strategy, and asset allocation. For instance, a private equity manager’s use of AI may differ significantly from a hedge fund manager’s use of AI for more liquid investments. There is therefore no “one-size-fits-all” AI disclosure, and RIAs must be able to articulate their AI use cases accurately, avoiding both understatement and overstatement. For example, if an RIA uses AI only for operational efficiency enhancements and not for investment-related decisions, it should not aspirationally overstate its use of AI to cover unrelated and nonexistent uses such as trading or investment research. Conversely, if an RIA begins deploying AI in any way to support trading or investment decisions, it should consider updating its existing disclosures about its investment process.
  • Avoid using hypothetical language for actual AI practices: Using hypothetical language to describe the mere possibility of an AI use case can invite both examination and enforcement scrutiny. RIAs should avoid hypothetical or qualifying language like “may” to describe AI use cases that already exist. For instance, firms that use AI to help make investment decisions should not describe that usage in purely hypothetical terms. The SEC has brought numerous cases against RIAs for using hypothetical language to describe actual practices, and these enforcement actions will serve as a template for future SEC inquiries involving AI. Given the SEC’s likely enhanced scrutiny of AI disclosures, RIAs should carefully consider whether to include such disclosures and, if so, how to frame them to avoid claims that they are misleading. In addition, an RIA that uses such hedging language cannot “set it and forget it,” and should consider updating the language in future filings if an AI use case moves from theoretical to actual.
  • Understand and accurately disclose the risks associated with AI use: As more firms adopt AI (including generative AI) in their core business functions, well-known risks persist, such as quality control, privacy, IP, data-use limitations, cybersecurity, bias, and transparency. Disclosures should accordingly be clear, comprehensive, and precise about these risks. Just as the SEC has charged firms for using hypothetical language to describe risks that had already materialized, RIAs should exercise caution and accuracy in disclosing AI risks that have emerged.

This publication is for general information purposes only. It is not intended to provide, nor is it to be used as, a substitute for legal advice. In some jurisdictions it may be considered attorney advertising.