Artificial intelligence has become one of the most powerful terms in a company's marketing arsenal. Phrases like "AI-powered," "driven by machine learning" and "intelligent automation" can meaningfully lift a company's valuation, attract customers and differentiate a business in a competitive sale process.
But as AI-related claims have proliferated, so too has scrutiny of the veracity of those claims. Regulators and private plaintiffs are now aggressively challenging “AI washing”—claims that overstate the role, sophistication or capabilities of artificial intelligence in a product or service.
Inflated AI claims can generate substantial legal liability, disrupt exit timelines and, in the worst cases, expose sponsors themselves to scrutiny. This article summarizes the recent trend in cases involving AI washing and offers practical guidance on how to minimize risk.
What Is AI Washing?
AI washing, in its simplest form, is the use of AI-related language in marketing, investor communications or product descriptions that is misleading, unsubstantiated or outright false.
Claims susceptible to charges of AI washing sit along a spectrum. At one end sits defensible puffery, like describing a product as “intelligent” when it uses basic automation. At the other end are fraudulent claims—such as saying that a product is “powered by a proprietary machine learning model” when, in fact, the work is done by human operators—that can give rise to serious potential liability, including criminal charges.
Most AI-washing cases fall somewhere in between these extremes, making risk more difficult to assess. Companies may use AI-adjacent language (such as “algorithmic” or “data driven”) in ways designed to imply capabilities that do not exist or refer to models as “proprietary” when they are largely built on a third party’s foundation model.
Companies seeking to attract new customers—or new investment—may be most at risk of hyperbolic or exaggerated claims, but the problem of AI washing is not confined to startups. Established, private equity-backed businesses across healthcare, fintech, legal tech, insurance and consumer products have faced scrutiny for the same categories of claims.
Federal and State Regulators Have Been Paying Attention…
The Federal Trade Commission has general oversight authority over deceptive advertising. The FTC has warned that AI-related marketing claims are subject to the same legal standards as any other advertising claim: they must be truthful, not misleading, and substantiated. In a series of enforcement actions, the FTC under the prior administration made clear that it considers AI washing a deceptive practice. While the current administration has removed some of the FTC's prior guidance documents, a pause in enforcement is not a safe harbor: it offers no protection against private or state-level litigation (discussed further below) or against a future shift in federal regulatory priorities.
Marketing claims made by investment advisers fall under the jurisdiction of the Securities and Exchange Commission. The SEC has settled charges against two registered investment advisers—Delphia (USA) Inc. and Global Predictions Inc.—for making false and misleading statements about their use of artificial intelligence. Delphia claimed to use client data to train its AI models for investment decisions; the SEC found those claims were unsubstantiated. Global Predictions represented itself as the "first regulated AI financial advisor," a claim the SEC determined was false. Both firms settled, paying civil penalties to resolve the SEC's allegations. The SEC is also focused on promises made when soliciting investments, charging a founder with fraudulently soliciting investments by claiming the company used AI when human contract employees were actually doing the work.
While federal regulators have been less active under the current administration, most states have “Little FTC Acts” empowering the state’s attorney general to police deceptive practices. Several states have initiated investigations into businesses making unsubstantiated AI claims, particularly in healthcare and financial services contexts where AI representations may carry heightened consumer reliance.
…And the Plaintiffs’ Bar Is Following Their Lead
Regulatory scrutiny is often followed by private litigation, and AI washing has been no exception. Three categories of private claims are emerging with increasing frequency.
Consumer Class Actions. Where consumers purchase products or services based on AI-related representations that prove false, class-action litigation under state consumer protection statutes (such as California’s Unfair Competition Law and Consumer Legal Remedies Act) has followed. Plaintiffs have alleged, for example, that they paid a premium for an “AI-powered” product that was, in substance, no different from (or even worse than) non-AI alternatives. Damages theories could include the price premium attributable to the AI claim, restitution and injunctive relief.
Competitor Claims Under the Lanham Act and NAD. The federal Lanham Act prohibits false advertising in commercial contexts. A company that falsely claims its product is AI-driven could face claims from competitors with genuinely AI-powered products, as well as from competitors who (accurately) do not claim to use AI. These suits can be particularly disruptive for portfolio companies engaged in competitive sales processes or seeking to establish market leadership narratives, not just because of the time and attention litigation demands, but also because successful Lanham Act plaintiffs can seek injunctions, disgorgement of profits and attorney fees.
Industry self-regulation is also emerging as an important source of scrutiny for AI-related marketing claims. The National Advertising Division (NAD) has recommended that advertisers modify or discontinue claims that overstate whether AI features are currently available, the extent of cross-platform functionality, the productivity benefits users can expect or whether "smart" product attributes are appropriately characterized as AI. The Interactive Advertising Bureau has issued a series of guidance documents focused on transparency, governance and risk management in AI-driven advertising.
Securities Fraud Litigation. For portfolio companies that are public, or those approaching an IPO or SPAC transaction, AI-related misstatements in investor-facing materials can give rise to securities fraud claims under Rule 10b-5 or Section 11 of the Securities Act, following the roadmap of the SEC’s actions discussed above. Even in the private company context, representations made to investors in connection with a financing round may be actionable if AI capabilities are materially overstated.
Why Sponsors Should Care About Portfolio Company Advertising
It might be tempting to view AI washing as a problem for founders, marketing teams or management. But sponsors have important reasons to be aware of this issue at multiple points in the investment life cycle.
Acquisition. Due diligence on AI-related claims is a necessary component of technology-focused transactions. Buyers who rely on a target's AI narrative to justify a valuation premium do not want to find, post-closing, that the AI is largely illusory. Further, legal exposure from the target's past AI practices can transfer with the business and may generate indemnification disputes or post-closing purchase-price adjustments. Prudent diligence includes technical assessments of AI capabilities, substantive review of claim substantiation and scrutiny of customer-facing AI representations.
Ownership. Sponsors are often involved—particularly through representation on portfolio company boards—in the company's strategic direction and go-to-market narratives. A sponsor that encourages, facilitates or ratifies an AI-washing marketing strategy runs a nontrivial risk of being drawn into litigation or regulatory proceedings as a secondary party. In addition, board members and controlling shareholders can find themselves named in shareholder or derivative actions.
Exit. AI capabilities have become a standard element of management presentations and confidential information memoranda in sale processes. This is the other side of the coin from acquisition risk: a buyer who discovers, post-signing or post-closing, that AI representations were materially false may seek to rescind, reduce the purchase price or bring fraud claims. In the IPO context, misrepresentations in offering materials are subject to heightened scrutiny under the federal securities laws. Sponsors should treat AI-related representations in exit materials with the same discipline applied to financial projections.
Takeaways
While the risks outlined above are real, they are also all manageable with the right controls. Sponsors should consider the following steps at the portfolio level:
- Substantiation comes first. Every material AI-related claim in marketing, investor or customer materials should be reviewed against the company’s actual technology. Be aware of vendor relationships around AI technologies and treat claims based on third-party services with special care.
- Involve legal and technical teams in claims. Marketing teams often generate AI-related language without complete visibility into how the technology used by the company actually works. Legal review of AI claims, supported by technical assessments, should be standard practice.
- Implement a governance policy. Portfolio companies (particularly in AI-adjacent sectors) should maintain written policies governing AI-related marketing representations and what substantiation is required before a claim is published.
- Conduct periodic claim audits. As AI systems evolve and marketing claims are updated, the substantiation that supported a claim at launch may no longer be accurate. Periodic reassessments of AI-related claims, including after product changes, reduce the risk that claims will inadvertently drift into misrepresentations.
- Do your diligence. Whether acquiring or divesting, sponsors should treat AI capability assessments as a standard workstream, not an afterthought. Like claim assessment, this requires cross-functional diligence by technical and legal teams with a real understanding of the company’s products and services.
Conclusion
AI washing is not a theoretical concern: it is an active enforcement and litigation priority. Federal and state regulators, plaintiffs’ class-action counsel and competitors all treat AI-related misrepresentations as actionable—sponsors and management must as well.
The risks of AI washing arise not only at the portfolio-company level but also at the transaction level, shaping how investments are sourced, structured and exited. As AI capabilities have become material to businesses across virtually every industry, the consequences of overstating them have never been greater. But the fundamental principle remains unchanged: marketing claims must be truthful and substantiated. Paying consistent attention to claim hygiene can meaningfully mitigate these risks.
Private Equity Report Spring 2026, Vol 26, No 1