Five Reasons to Use Generative AI (and Five Reasons Not To)

May 2025

As interest in enterprise use of artificial intelligence continues to build, private equity firms and their portfolio companies are increasingly experimenting with large-scale generative AI projects in an effort to save time, cut costs, improve processes, and gain a strategic advantage over competitors. However, despite significant improvement in the reliability and capabilities of generative AI tools over the last year, there are still many tasks that are not well suited for generative AI integration. To assist firms in finding high-value/low-risk AI use cases, we offer five circumstances that, based on our experience, favor generative AI adoption and five counterexamples in which using generative AI, at least in the near term, may not be worth the associated costs and risks. 

Five Circumstances Favoring Generative AI Use

1.        When you are just getting started.

Example: Brainstorming on new investment opportunities.

Generative AI tools can be very helpful for generating new ideas and kickstarting complicated projects. While the probabilistic nature of these tools can lead them to hallucinate, in the brainstorming process they can be, on balance, valuable sources of ideas and approaches that might not otherwise have been considered.

2.        When you are almost finished. 

Example: Reviewing a near-final draft of an investor report.

Many firms are now using generative AI as a last check before a report is finalized. In addition to checking facts, these tools are very good at catching grammatical errors, inconsistent use of capitalization and defined terms, unnecessary repetition, and overly complicated sentence constructions.

3.        When the AI can do something well that you cannot do well without it.

Example: An auto insurer using AI to process weather and geographic data to generate text messages warning customers of impending hailstorms in their neighborhood so that customers can shelter their cars.

If a company cannot do a particular task well at scale—or at all—the benefits of having generative AI complete that task are obvious, and there can be a higher tolerance for small errors because no viable alternative exists.

4.        When there is a fit-for-purpose tool that is being used by peer firms. 

Example: A firm adopting a tool that is specifically designed to translate financial services documents and communications between English and Japanese and that is already widely used in the industry. 

Using an AI tool that is designed for a specific purpose and provided by a vendor allows a firm to pool the risks associated with confidentiality, privacy, cybersecurity, IP rights, and overall regulatory compliance with the vendor and the other users.

5.        When there is sufficient time and a method to properly test the AI tool before deployment.

Example: An asset manager has designed a lengthy pilot program to test different AI tools for accuracy, consistency, and completeness, with specific benchmarks that need to be met before moving into production.

Some AI use cases initially achieve acceptable results but involve risks (such as model drift or loss of skills) that present themselves only gradually, sometimes after days or weeks of seemingly reliable performance. Staying in the pilot phase for longer provides the time needed to identify and mitigate these risks before the model moves into production.
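
As a rough illustration of what such a benchmark gate might look like, the sketch below promotes a tool to production only if every metric clears its threshold in every evaluation period; the metric names, scores, and thresholds are hypothetical, not drawn from any particular firm's pilot.

```python
# Minimal sketch of a pilot-phase benchmark gate (hypothetical metrics,
# scores, and thresholds for illustration only). The tool is promoted to
# production only if every benchmark is met in every evaluation period,
# so that slowly emerging problems such as model drift can block promotion.

# Hypothetical weekly evaluation scores collected over a multi-week pilot.
weekly_scores = [
    {"accuracy": 0.97, "consistency": 0.95, "completeness": 0.96},
    {"accuracy": 0.96, "consistency": 0.94, "completeness": 0.97},
    {"accuracy": 0.92, "consistency": 0.95, "completeness": 0.96},  # dip: possible drift
]

# Benchmarks that must hold in every week, not just on average.
BENCHMARKS = {"accuracy": 0.95, "consistency": 0.93, "completeness": 0.95}

def ready_for_production(scores, benchmarks):
    """Return (ok, failures): ok is True only if every metric meets its
    benchmark in every evaluation period."""
    failures = [
        (week, metric, value)
        for week, period in enumerate(scores, start=1)
        for metric, value in period.items()
        if value < benchmarks[metric]
    ]
    return (not failures, failures)

ok, failures = ready_for_production(weekly_scores, BENCHMARKS)
print("promote to production" if ok else f"stay in pilot; failures: {failures}")
```

Requiring every period to pass, rather than averaging across the pilot, is what gives a slow-building problem like drift the chance to surface before deployment.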

Five Circumstances That Disfavor Generative AI Use 

1.        When the acceptable error rate is essentially zero.

Example: A law firm using AI to draft a legal brief for an important court filing.

The error rates for generative AI tools have dropped significantly over the last year, but these tools still make mistakes, and those mistakes are getting harder to find through human review. This small-but-irreducible error rate may be prohibitive for some use cases. When submitting a brief in court, for example, there is no tolerance for fabricated cases, regulations, or quotations. The need to carefully double-check AI-generated content for accuracy, completeness, and applicability may erase any efficiencies gained from the use of generative AI. 

2.        When a basic automation tool will achieve the same results.

Example: Using generative AI to produce one of 10 possible NDAs when a simple automated decision tree can achieve the same result without any risk of hallucinations, drift, or bias.

Many problems for which generative AI is currently being used can be solved with traditional AI or automation algorithms, methods that are often less expensive, better understood, more reliable, and less risky, and that attract less scrutiny. It can be helpful to think of generative AI as a last resort, reserved for when no other solution can achieve the desired scale or efficiency.
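
To make the contrast concrete, a deterministic selector of this kind can be written in a few lines; the template names and selection criteria below are hypothetical, but the key property is that identical inputs always yield the same pre-approved output.

```python
# Minimal sketch of a deterministic decision tree that selects one of several
# pre-approved NDA templates. The criteria and file names are hypothetical;
# the point is that the same inputs always produce the same pre-approved
# output, with no possibility of hallucination, drift, or bias.

def select_nda_template(mutual: bool, jurisdiction: str, includes_ip: bool) -> str:
    if jurisdiction not in ("US", "UK"):
        return "nda_escalate_to_counsel.docx"  # outside the tree: route to a lawyer
    direction = "mutual" if mutual else "oneway"
    ip_flag = "_ip" if includes_ip else ""
    return f"nda_{direction}{ip_flag}_{jurisdiction.lower()}.docx"

# Identical inputs always yield the identical template.
print(select_nda_template(mutual=True, jurisdiction="US", includes_ip=False))
# -> nda_mutual_us.docx
```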

3.        When learning the subject is as important as the content being created.

Example: A compliance associate using AI to draft an internal analysis of an important new SEC regulation.

Generative AI is increasingly able to handle complex tasks with speed and efficiency. But doing tasks slowly and inefficiently at first can sometimes be an important part of risk management and professional growth. For example, it can be crucial for professionals to digest the intricacies of a new development so that they can better understand the nuances of its application and limitations. Asking a large language model to summarize the development and reviewing the results often will not produce the same level of understanding or strategic analysis. 

4.        When human authenticity is important or when use of the AI may seem “icky.” 

Example: Using AI to replace the internal monthly video address from the CEO with a deepfake video of the CEO that is customized for each office location. 

While AI can tackle many corporate tasks, there are some interactions that still benefit from human-to-human connection and a sense of authenticity. Likewise, some AI use cases have a high “ick” factor as well as more concrete risks. For instance, using AI to watch a job interview and determine whether a candidate is trustworthy or has leadership skills based on their body language would carry regulatory and reputational risks that likely outweigh any benefits.

5.        When the false positive rate is too high.

Example: Using AI to identify investment professionals who may be involved in illegal trading activity when the AI is not very good at separating legitimate conduct or communications from potentially improper activities.

For AI systems that make classification decisions, the accuracy and precision of the system must be sufficient to justify its costs. If the tool produces too many false positives, there is a risk that either (a) any efficiencies gained will be outweighed by the work needed to review and clear the false positives, or (b) confidence in the tool’s output will erode to the point that hits are simply not reviewed at all. 
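
A back-of-the-envelope calculation, using purely hypothetical numbers, shows how quickly this problem compounds when the conduct being screened for is rare.

```python
# Back-of-the-envelope illustration (hypothetical numbers) of why a low base
# rate can swamp reviewers with false positives even when the classifier
# looks accurate on paper.

n_items = 100_000           # communications/trades screened per month (assumed)
base_rate = 0.001           # 0.1% of activity is actually improper (assumed)
true_positive_rate = 0.95   # the tool flags 95% of truly improper activity
false_positive_rate = 0.02  # and wrongly flags 2% of legitimate activity

true_hits = n_items * base_rate * true_positive_rate              # ~95 real hits
false_alarms = n_items * (1 - base_rate) * false_positive_rate    # ~1,998 false alarms

precision = true_hits / (true_hits + false_alarms)
print(f"flags per month: {true_hits + false_alarms:.0f}")
print(f"share of flags that are real: {precision:.1%}")  # ~4.5%
```

Under these assumptions, a tool that catches 95% of true misconduct and wrongly flags only 2% of legitimate activity still produces roughly 20 false alarms for every real hit, which is exactly the dynamic described in (a) and (b) above.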

Conclusion

We expect generative AI adoption to continue to accelerate, but the pace and path of that adoption will vary from firm to firm. Attempting to use generative AI tools for ill-suited tasks or without proper safeguards can lead to project failures that in turn set back the firm’s overall AI program. Carefully choosing which projects to tackle can stack the odds in your favor and build firm-wide momentum for AI adoption.

Private Equity Report, Spring 2025, Vol. 25, No. 1