AI has benefited creators in many ways: it reduces manual work and can help generate new ideas. Unfortunately, it is also being misused by many, so YouTube is setting its own rules and policies for AI-generated content.
YouTube Takedown Requests
Generative AI can be used to create many media formats, including audio and video. If a musician or an actor finds content that mimics their likeness, whether their face or their voice, they can ask YouTube to remove it from the site.
They can submit takedown requests through YouTube's privacy request process. However, that does not mean the video or content will automatically be taken down; the company will still review whether the content actually violates its policies.
For instance, YouTube will weigh factors such as whether the content is obviously satire or whether the person can be uniquely identified. When an uploader is found to be in violation, follow-up consequences may apply, as reported by Engadget.
Beyond removal of the video, the creator of the deepfake may be suspended from the YouTube Partner Program as punishment for the misuse, and violators may face other repercussions depending on YouTube's decisions.
The policy arrives not a moment too soon, if a little late, as AI misuse becomes rampant across social media sites. To prevent misinterpretation and misinformation, the video platform is also requiring users to clearly disclose when their content is AI-generated.
The disclosure requirement will be enforced especially for videos that look realistic. The new policy is set to roll out in the coming months, though it's unclear whether the same timeline applies to takedown requests, as several videos and audio clips on the platform have already been identified as fake.
Other Social Media Sites Should Do The Same
YouTube is not the only site bad actors use to spread AI-generated misinformation. Facebook, Instagram, and X (formerly Twitter) have also seen their share of deepfakes and have already taken some down.
Meta recently announced its own measure, requiring advertisers to disclose when they digitally create or alter political or social ads. The policy will be enforced starting in 2024 and will apply to all Facebook advertisers worldwide.
The rule will apply to ads that depict a real person saying or doing something they did not say or do; a realistic-looking person that does not exist; a realistic-looking event that did not happen; altered footage of a real event; or a realistic event that allegedly occurred but is not a genuine recording of it.
The only AI alterations that don't need to be disclosed are those that are "immaterial to the claim, assertion, or issue" in the ad, such as resizing an image, cropping, color correction, and sharpening. Once the advertiser discloses the use of AI, Meta will automatically add that information to the ad.