US Prohibits AI-Generated Voices in Scam Robocalls Following Biden Deepfake Incident

The United States has made it illegal to use artificial intelligence-generated voices in unsolicited robocalls, a move spurred by a recent fraudulent robocall campaign impersonating U.S. President Joe Biden.

In a statement released on February 8th, the Federal Communications Commission (FCC) announced the unanimous adoption of a Declaratory Ruling, affirming that calls made with AI-generated voices are considered “artificial” under the Telephone Consumer Protection Act (TCPA).

“This decision equips State Attorneys General across the nation with new tools to combat the perpetrators of these malicious robocalls,” stated the FCC.

The FCC’s action came shortly after residents of New Hampshire received fabricated voice messages purportedly from President Biden, discouraging them from participating in the state’s primary election.

Robocall scams are already prohibited under the TCPA, a law regulating telemarketing activities in the U.S. The recent ruling extends this prohibition to encompass “voice cloning technology” utilized in these scams, effective immediately.

“Fraudsters are utilizing AI-generated voices in unauthorized robocalls to exploit vulnerable individuals, impersonate public figures, and disseminate false information. We are sending a clear message to those behind these robocalls,” emphasized FCC Chair Jessica Rosenworcel.

The FCC initially proposed banning AI robocalls under the TCPA, a law enacted in 1991 to govern automated political and marketing calls made without recipients’ consent.

The primary objective of the TCPA is to shield consumers from unwanted and intrusive communications, including telemarketing calls, automatic telephone dialing systems, and artificial or pre-recorded voice messages.

Under FCC regulations, telemarketers are required to obtain written consent from consumers before initiating robocalls. The new ruling extends these requirements to encompass AI-generated voices in calls.


The FCC underscored the escalating prevalence of AI-backed calls in recent years, cautioning that this technology has the potential to deceive consumers with misinformation by mimicking the voices of celebrities, political figures, and family members.

While law enforcement agencies have historically gone after the downstream harms of unwanted AI-voice robocalls, such as scams and fraud, the new ruling empowers them to pursue legal action against perpetrators simply for using AI to fabricate voices in robocalls.

Meanwhile, the alleged perpetrators of the mid-January Biden robocalls have been identified as Life Corporation, a Texas-based firm, and an individual named Walter Monk.

New Hampshire’s Election Law Unit issued a cease-and-desist order to Life Corporation for violating the state’s statutes on voter bribery, intimidation, and suppression. The order requires immediate compliance, and the unit has reserved the right to pursue further enforcement actions based on past conduct.