Signaling growing concern over the misuse of AI technology to make unwanted robocalls and robotexts, the attorneys general of more than half the U.S. states, joined by the District of Columbia, recently filed a comment letter with the Federal Communications Commission (FCC). The letter argues that AI-generated voices should be classified as “artificial voices” as that term is used in the Telephone Consumer Protection Act (TCPA), and thus that consumers should not receive AI-generated robocalls or robotexts without having given prior express written consent.

This move comes in the wake of an FCC inquiry, initiated in November, which the Commission said was meant to “gather information and prepare for changes in calling and texting practices that may result from AI-influenced technology.” In their letter, the coalition of attorneys general expressed concern that the Commission was “opening the door to potential, future rulemaking proceedings wherein outbound calls utilizing AI technology will be permitted without the prior express written consent of the consumer.”

This united front reflects a proactive approach to safeguarding consumer rights as AI technology advances rapidly. It also underscores a broader recognition that existing laws must evolve to meet new technological realities, keeping consumer protection at the forefront of telecommunications policy.

The comment letter was signed by the attorneys general of Alabama, Arizona, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Massachusetts, Maine, Maryland, Michigan, Minnesota, Mississippi, New Jersey, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, South Dakota, Tennessee, Vermont, Washington, and the District of Columbia.