The Federal Communications Commission has made it illegal to use AI-generated voices in robocalls, citing concerns that the technology could be used to manipulate voters.
The commission voted unanimously last week to restrict AI-generated and pre-recorded voice messages in junk calls under the Telephone Consumer Protection Act of 1991.
According to the FCC's declaratory ruling, the move will help the commission and other agencies in "putting the fraudsters behind these robocalls on notice."
The commission pointed out that AI robocalls can only be made if callers "have obtained prior express consent" from the recipient, including providing "certain identification and disclosure information."
The announcement came after AI-generated voice messages imitating President Joe Biden went viral in New Hampshire, telling voters not to participate in the state's Democratic primary.
FCC Chair Jessica Rosenworcel also cited the AI-faked video of Tom Hanks promoting dental plans and the explicit images of Taylor Swift as examples of AI uses that pose "new challenges to consumers."
US Gov't Starts Cracking Down on AI Disinformation
Following the news of the AI-generated Biden voice messages, more and more politicians have echoed calls for stricter regulations on AI use, especially when it involves political figures.
One in particular, House Rep. Frank Pallone, a Democrat, has pushed to further clarify the definition of robocalls in light of the Supreme Court's decision on the matter.
It is worth noting, however, that the US has yet to implement a standardized rule regulating all applications of AI.
So far, Biden has issued an executive order aimed at seizing the opportunities and managing the emerging risks that AI poses to the country.
Several departments covered by the order have already submitted their proposals for managing those risks.
More AI-Powered Disinformation Expected for 2024 Elections
Several digital experts have warned the public and the government that AI-fueled disinformation will rise as the 2024 elections approach.
In recent months, many websites have already been detected disseminating bogus information at a faster rate with the help of AI.
So far, OpenAI has begun restricting the impersonation of well-known personalities through DALL-E and ChatGPT.
Social media platforms have also implemented more rules on AI use, including requiring disclosures when an image or video was generated by AI.