Meta is expanding the scope of its labeling policy for AI-generated and manipulated media after deeming its current rules "too narrow."
The Facebook and Instagram owner announced on Friday that it will soon require users to label video, image, and audio content that was created, enhanced, or manipulated with AI.
Meta will also stop removing manipulated content from its platforms and will instead add context to the altered media to give "people more information about the content so they can better assess it."
The social media giant, however, will still remove content, "regardless of whether it is created by AI or a person," if it violates the platforms' other policies against bullying, harassment, and violence. Platform changes will start to go into effect in May 2024.
Meta Vows to Improve Policies on AI Content Amid Scrutiny
The changes were made in cooperation with the platform's Oversight Board, "extensive public opinion surveys," and consultations with "academics, civil society organizations, and others."
The independent oversight body had earlier criticized Meta for its "incoherent" rules on digitally altered content after a manipulated video of US President Joe Biden slipped through the policy's loopholes.
In response, Meta promised to step up its efforts to detect AI-generated images on its platforms ahead of the election period.
AI-Powered Disinformation Still Runs Rampant on Meta Platforms
The changes to Meta's AI policies come as unlabeled AI-generated content continues to run rampant across its platforms.
Several news outlets, including iTech Post, earlier reported on AI-generated images circulating on Facebook that depict Jesus Christ made out of shrimp or crabs.
Many of these posts regularly gain thousands of reactions, views, and shares from bot accounts spreading similar content.
It does not help that Meta's in-house "Imagined with AI" feature was caught generating politically inaccurate images and information, further muddying the information environment on its platforms.
Digital experts have long urged Meta to strengthen its policies on AI-generated content to prevent a repeat of the disinformation problems seen during the 2016 and 2020 US elections.