Model Destruction – The FTC’s Powerful New AI and Privacy Enforcement Tool

22 March 2022

A recent FTC settlement is the latest example of a regulator imposing very significant costs on a company for AI or privacy violations by requiring it to destroy algorithms or models. As companies invest millions of dollars in big data and artificial intelligence projects, and as regulators become increasingly concerned about the risks associated with automated decision-making (e.g., privacy, bias, transparency, and explainability), it is important for companies to carefully consider the regulatory risks associated with certain data practices. In this Debevoise Data Blog post/Client Update, we discuss the circumstances in which regulators may require “algorithmic disgorgement” and some best practices for avoiding that outcome.