“Grandma, I have [not] been kidnapped”: The FCC Bans AI-Generated Robocalls

By Andrew Glass, Gregory Blase, and Joshua Durham

Effective immediately, the Federal Communications Commission (FCC) banned AI-generated phone calls with its recent Declaratory Ruling (the Ruling). Known as audio or voice “deepfakes,” AI can be trained to mimic any person’s voice, enabling novel scams such as grandparents receiving a call from their “grandchild” and believing the grandchild has been kidnapped or needs money for bail. FCC Commissioner Starks deemed such deepfakes a threat to election integrity, recalling that just recently, “potential primary voters in New Hampshire received a call, purportedly from President Biden, telling them to stay home and ‘save your vote’ by skipping the state’s primary.”

The FCC noted in the Ruling that the Telephone Consumer Protection Act (TCPA) restricts the use of an “artificial or prerecorded voice” to instances in which the receiving party has provided prior express consent, absent an emergency or a narrow exemption. The FCC concluded that these audio deepfakes (what the FCC deemed “voice cloning”) are indeed artificial and thus require: (1) prior express consent of the called party; (2) certain disclosures and identification by the entity initiating the call; and (3) additional requirements for AI calls that also constitute advertising or telemarketing.

Now, when these calls happen, “State Attorneys General across the country can go after the bad actors behind these robocalls and seek damages under the law.” Indeed, 26 State Attorneys General support the FCC’s Ruling, and 48 have specifically agreed to “work with [the FCC] to combat robocalls.”

Copyright © 2024, K&L Gates LLP. All Rights Reserved.