New bipartisan legislation is being pursued in the US House of Representatives that would require the proper identification and labeling of AI-generated videos, audio, and images online.
California Rep. Anna Eshoo and Florida Rep. Neal Dunn on Thursday proposed the Protecting Consumers from Deceptive AI Act, which would require AI developers to add digital watermarks or metadata to their generated products.
This would be in addition to requiring online platforms like YouTube, X (formerly Twitter), and TikTok to notify users when a post is AI-generated.
The National Institute of Standards and Technology, the Commerce Department's tech division, would be tasked with finalizing the guidelines for the proposed act once the bill is passed.
The bill cited the "proliferation of deepfakes" as its primary motivation amid rising concerns over the technology's role in disinformation campaigns ahead of the 2024 US presidential election.
The Associated Press first reported on the bill.
Political Deepfakes, AI-Fueled Disinformation on the Rise During Election Period
Calls for regulations and restrictions on the use of AI have been increasing over the past months.
Congress' attention to the technology has only heightened since January, when a deepfake audio of US President Joe Biden circulated in New Hampshire urging residents not to vote in the Democratic primary.
The subsequent investigation into the incident prompted the Federal Communications Commission to ban AI-generated voices and deepfake audio in spam calls and robocalls.
With the 2024 US elections looming closer, watchdogs and lawmakers anticipate a surge of political disinformation online, at least double that of the previous election, fueled by AI.
Related Article: Deepfake Audio of Biden Supposed to Highlight Need for AI Rules, Robocall Mastermind Says
Social Media Platforms, AI Firms Push for AI Watermarking
If the bill gets passed, it would complement ongoing voluntary efforts by social media companies to properly label AI-generated media from their own AI products.
Several online platforms like Meta have even started adding invisible watermarks to all media generated with their in-house AI models to alert other platforms.
YouTube, X (formerly Twitter), and TikTok also announced plans this year to start labeling near-realistic AI-generated content on their platforms.
AI firms like Hugging Face and Anthropic have also started unveiling policies that allow companies to embed AI identifiers in content generated with their technologies.