Google is Requiring Advertisers to Label AI-Generated Political Ads

Over the past year, the tech world has been fascinated by what generative AI can do, and users have been applying it to all sorts of tasks. It was only a matter of time before it was used to create ads, especially given how capable the technology has become. Because of the potential for misuse, Google now requires AI-generated political ads to be labeled.

Google (Photo: Nicolas Economou/NurPhoto via Getty Images)

Full Disclosure for AI-Generated Ads

Starting in November, advertisers running political ads generated by AI will be required to "prominently disclose" that the content was synthetically created. The rule applies to any ad that is synthetically created or depicts "realistic-looking people or events."

With elections approaching, politicians will begin running campaign ads, and with the help of AI, candidates can be placed in virtually any scenario. A user could even put a candidate beside JFK waving at a crowd if they wanted to.

That means an AI-generated ad could show candidates doing or saying things they never did or said, or even tweak real events by altering a few key moments. Because of this, Google requires disclaimers for image, video, and audio content.

As reported by The Verge, the label indicating that the content is AI-generated must appear in a "clear and conspicuous" place and state something like "This image does not depict real events" or "This audio was generated through AI."

The problem isn't just politicians using AI for their own ads; it's also AI being used to run smear campaigns against rival candidates. There has already been an instance of one politician attacking another by spreading misleading AI-generated images.

Google spokesperson Allie Bodack said: "Given the growing prevalence of tools that produce synthetic content, we're expanding our policies a step further to require advertisers to disclose when their election ads include material that's been digitally altered or generated."

AI Contributing to Misinformation

Misuse and misinformation are the two most problematic aspects of AI-generated content. Without the disclaimer that Google requires, people may not be able to tell whether an ad is fake, which could very well be the advertiser's intention.

Several tools can be used to deceive wide audiences. With ElevenLabs, anyone can easily clone a person's voice and make them appear to utter reputation-ruining statements, as pointed out by Interesting Engineering.

Using real photos as source material, anyone can create false scenarios with DALL-E 2, Stable Diffusion, Midjourney, and other AI image generators. All it takes is the right text prompt.

Gary Marcus, a New York University professor and AI expert, believes people should be worried, as AI-generated misinformation could be a "major force" in the 2024 election, a scenario that seems not just possible but likely.

While AI detectors are already being developed, the tools are still not foolproof. AI-generated content can be convincing enough to fool both people and detection tools, and with the help of social media, it could easily spread like wildfire.
