FTC Bans Rite Aid from Using AI Facial Recognition Without Reasonable Safeguards

By Whitney E. McCollum and Eric F. Vicente Flores

The Federal Trade Commission (FTC) issued a first-of-its-kind proposed order prohibiting Rite Aid Corporation from using facial recognition technology for surveillance purposes for five years.

The FTC alleged that Rite Aid’s facial recognition technology generated thousands of false-positive matches that incorrectly indicated a consumer matched the identity of an individual who was suspected or accused of wrongdoing. The FTC alleged that false-positive matches were more likely to occur in Rite Aid stores located in “plurality-Black,” “plurality-Asian,” and “plurality-Latino” areas. Additionally, Rite Aid allegedly failed to take reasonable measures to prevent harm to consumers when deploying its facial recognition technology. Such reasonable measures include: inquiring about the accuracy of the technology before using it; preventing the use of low-quality images; training and overseeing employees tasked with operating the facial recognition technology; and implementing procedures for tracking the rate of false-positive matches.

The FTC’s proposed order will require Rite Aid to implement comprehensive safeguards to prevent harm to consumers from automated systems that use biometric information to track or flag them, to notify consumers when their biometric information is enrolled in a database, and to provide clear and conspicuous notice to consumers.

The order marks the FTC’s first foray into regulating bias in artificial intelligence technologies and follows a statement the agency issued earlier this year warning businesses that it would be closely monitoring automated systems that use biometric information. Businesses looking to implement these technologies should expect further FTC scrutiny and regulatory enforcement.

Copyright © 2024, K&L Gates LLP. All Rights Reserved.