The Federal Communications Commission has proposed new guidelines on the use of generative AI tools in robocalls and automated text alerts to curb the risk of misinformation.
The FCC unveiled its "first-of-their-kind" standards that would help consumers better identify AI-generated messages, preventing the technology from being used to scam or mislead people.
To do so, the FCC would require robocall distributors to disclose the use of generative AI when obtaining prior express consent from the recipient.
The requirement would come in addition to protections ensuring the technology's positive uses, such as helping people with disabilities communicate, are preserved.
The FCC is currently taking "additional comment and information" in developing the new transparency standards around generative AI.
The newly proposed rules are expected to complement earlier proposals seeking to prohibit the use of voice-cloning technology in robocall scams and other AI abuses that mislead consumers.
FCC Tackles Issues, Concerns Around GenAI Ahead of 2024 Elections
The new ruling is just one part of the FCC's recent efforts to address generative AI as the technology becomes increasingly ingrained in politics and the upcoming US presidential election.
With the 2024 elections in full swing, the FCC has been considering how to handle AI in politics amid the many issues and concerns surrounding the technology.
FCC Chair Jessica Rosenworcel earlier proposed rules governing potential uses of generative AI in political ads, aiming to limit misuse of the technology for political and disinformation campaigns.
The commission said such rules would give government bodies a better grasp of the technology's positive uses while it is still in its infancy.
FCC Pushes to Build a Clear Definition of Generative AI
At the center of it all is the FCC's effort to clearly define what counts as AI-generated content and its possible uses in communication.
In the same proposal, the commission cited previous incidents of generative AI being used to mislead voters, specifically the voice imitations of President Joe Biden in New Hampshire, as the core motivation for the new rules.
Other government agencies are undertaking similar efforts in accordance with the White House's executive order on standardized AI use issued last October.