Generative AI: Risks and Considerations for Private Equity

May 2023

The last few months have seen a rapid increase in the availability of AI tools, such as ChatGPT, Bard and Claude, that can generate content including text, images, video and code. These generative AI tools rely on models trained on large datasets of existing content, learning the features of that content in order to create something new.

Within private equity firms and their portfolio companies, generative AI can be applied to an enormous range of use cases: analyzing data, drafting content and code, creating marketing materials, conducting research or diligence, creating efficiencies in operations and analyzing financial performance, to name just a few. The range of use cases presents, in turn, a range of risks that can vary significantly with context. For example, using generative AI to translate an internal communication poses a very different risk than using it to translate an investor communication.

It is therefore important for private equity firms to consider and manage the risks associated with generative AI when assessing their own operations, as well as when considering the risks and value propositions of their current and prospective portfolio companies. Acknowledging these risks and establishing policies, procedures and effective controls to mitigate them will benefit firms seeking to make the most of this new technology.

Risks and Considerations

Regulatory Risk

The AI regulatory landscape is changing quickly as lawmakers and regulators work to keep pace with the technology. Some AI-specific regulations are already in place that could be implicated by the use of generative AI. For example, New York City’s Automated Employment Decision Tool Law, which becomes effective on July 5, 2023, imposes onerous audit and disclosure requirements on employers that use certain types of tools in employment-related decisions. Some privacy laws also address AI through provisions regarding automated decision-making. The Virginia Consumer Data Protection Act, for instance, requires that individuals be given the right to opt out of “profiling in furtherance of decisions that produce legal or similarly significant effects.”

Privacy, Confidentiality and Intellectual Property

Sharing information with generative AI tools can pose many of the same risks that are associated with sharing confidential, sensitive or personal information with any third party.

  • Privacy Risk: Depending on the nature of the personal information being shared with generative AI tools, firms and portfolio companies may be required to update privacy policies, provide notices to clients or investors, obtain their consent and/or provide them with opt-out rights. Privacy laws that limit disclosure of personal information to third parties, including state privacy laws like the California Consumer Privacy Act and sector-specific privacy laws like Regulation S-P, should be considered before inputting data into generative AI tools. It may also be important to review the privacy policies and terms and conditions of the companies that offer these generative AI tools, like OpenAI, to ensure compliance with any obligations they impose.
  • Disclosure Risk: For private equity firms using generative AI, fund governing documents and agreements may limit how client or investor data can be used or shared, and therefore the firm’s ability to input that data into a generative AI tool. For example, governing documents and agreements may restrict the firm’s ability to share investors’ or clients’ confidential information with third parties, or sharing certain client or investor data with ChatGPT may exceed the stipulated purposes for which that data was collected. Additionally, any use of AI for investment decision-making or modeling should be adequately disclosed in fund documents and should be consistent with the adviser’s stated investment approach.
  • Confidentiality Risk: Some generative AI models may use input data to further train the AI. Therefore, inputting confidential investor or client data or other proprietary information runs the risk of that information becoming available to other users of the same tool (including, perhaps, competitors).
  • Intellectual Property Risk: Content created by generative AI may not be protectable by copyright. Additionally, users should consider any intellectual property restrictions on training or input data.

Output Issues

  • Quality Control: As impressive as it is, generative AI can produce inaccurate results. For example, ChatGPT may provide incorrect information on potential investments (such as portfolio companies), sectors and market trends. Because it is a language model, it may struggle with computational tasks depending on how the prompt is phrased. These risks are magnified where firms or companies use generative AI for critical business operations but can be mitigated by ensuring that a human with appropriate expertise reviews any output prior to use.
  • Transparency: Using content created by generative AI without clear disclosures may pose litigation (e.g., claims of unfair or deceptive practices) and reputational risk.

Vendor Management

Many of the risks discussed above also apply to third-party service providers, who may turn to generative AI to compete and control costs. For instance, quality control risks may arise where a vendor uses generative AI to produce deliverables without human review. Likewise, confidentiality risks may arise where vendors have privileged access to data and use generative AI tools to process that data. Firms should consider diligencing their third-party vendors’ use of generative AI and, where possible, addressing these risks in vendor agreements.

Policies, Procedures, and Guardrails

Given the growing availability of generative AI tools such as ChatGPT, it will be important for private equity firms to understand how AI is being used both in their own organizations and at their portfolio companies, including under what circumstances and with what guardrails. The risks posed by using ChatGPT to draft trivia questions for team-building events are very different from the risks of using it to generate investment advice for clients. Higher-risk use cases should receive more scrutiny and may require revised or expanded disclosures.

An effective AI risk management program will allow firms to safely adopt and oversee the use of new AI technologies as they become available. Risk management programs may include creating a cross-functional committee to oversee the AI program, or otherwise establishing overall accountability; providing appropriate policies, procedures and training to personnel using AI, particularly for higher-risk uses; documenting uses of AI and labeling content generated with the assistance of AI; and ensuring the use of AI is fully disclosed as needed in regulatory filings. With respect to portfolio companies, firms should consider assessing and risk-ranking the companies’ uses of generative AI. For higher-risk uses (e.g., where a company’s use of generative AI is central to its business or may receive heightened regulatory interest), firms may want to consider providing benchmark policies, procedures and guardrails.

However, establishing a comprehensive AI risk management program is time-consuming and resource-intensive. Even implementing a ChatGPT policy may be difficult without (1) adequately assessing which use cases should and should not be allowed (and, if allowed, what restrictions, if any, should apply) and (2) developing the policies and procedures needed to administer the desired policy.

While working on a longer-term approach to AI, there are guardrails that firms and their portfolio companies can implement as a first step, such as:

  • Monitoring input. To address privacy and confidentiality concerns, firms and portfolio companies may consider implementing a proxy server to monitor what information is being shared with generative AI tools. Inputs could then be reviewed to ensure that no sensitive or confidential information is being shared and, if needed, access could be blocked. (A minimal illustration of this kind of input screen follows this list.)
  • Using beta testers. One way to limit risk is to allow only a designated set of individuals to have access to generative AI tools. These beta testers could be trained on relevant risks and considerations, such as confidentiality, prohibited inputs, quality control and reputational risk. All proposed use cases could then be routed to the beta testers for review, and the beta testers could in turn recommend to a committee whether each use case should be approved based on the benefits and risks posed. Firms and portfolio companies could also establish an internally accessible resource documenting approved and prohibited uses, so that employees know which uses have been cleared.
  • Licensing. For use cases that require input of sensitive or confidential information, firms and portfolio companies should consider licensing a closed-loop instance of a generative AI tool, whereby data inputs are not accessed by the licensor or added to the tool’s general training set. By setting up a private instance (illustrated in the second sketch below), firms and portfolio companies may be able to reduce many of the confidentiality risks associated with using the public versions of these tools.
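
For illustration, the input monitoring described in the first bullet could start with a simple pattern screen that the proxy applies before a prompt leaves the firm’s network. The Python sketch below is a minimal example only; the patterns, including the “LP-####” investor ID format, are hypothetical placeholders rather than a recommended rule set, and a real deployment would be tuned to the firm’s own data classification scheme.

    import re

    # Illustrative patterns only; a real deployment would reflect the firm's
    # own data classification scheme (fund names, investor IDs, etc.).
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # U.S. Social Security numbers
        re.compile(r"\bconfidential\b", re.IGNORECASE),  # material marked confidential
        re.compile(r"\bLP-\d{4,}\b"),                    # hypothetical internal investor IDs
    ]

    def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
        """Return (allowed, matched_patterns); block the request if anything matches."""
        matches = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
        return (not matches, matches)

    # A proxy would call screen_prompt() before forwarding traffic to the
    # generative AI provider, logging or blocking flagged requests.
    allowed, hits = screen_prompt("Summarize LP-10482's confidential capital account.")
    if not allowed:
        print(f"Blocked outbound prompt; matched: {hits}")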
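
Likewise, the closed-loop arrangement described in the final bullet typically amounts to pointing internal applications at a privately hosted endpoint rather than a public service. The sketch below assumes a hypothetical internal URL and response format and does not depict the API of any particular vendor; the assurance that inputs are not retained or used for training would come from the license terms, not from the code itself.

    import requests

    # Hypothetical endpoint for a licensed, closed-loop instance hosted within
    # the firm's environment; the URL and payload shape are illustrative only.
    PRIVATE_ENDPOINT = "https://genai.internal.example-firm.com/v1/completions"

    def query_private_instance(prompt: str, api_key: str) -> str:
        """Send a prompt to the firm's private instance instead of a public tool.

        Under the assumed license terms, inputs remain within the firm's
        environment and are not added to the provider's training data.
        """
        response = requests.post(
            PRIVATE_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"prompt": prompt},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]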

Generative AI tools have the potential to create many efficiencies across business lines—in investing, marketing and more. Implementing policies, procedures and guardrails now can allow a firm to take advantage of these benefits without undue risk, both for current technology and for new tools as they become available.

Private Equity Report Spring 2023, Vol 23, No 1