Meta Oversight Board Recommends Update on Deepfake Policies

Meta's Oversight Board admitted that the tech giant's policies on non-consensual deepfake content are "not sufficiently clear."

The quasi-independent panel issued its decision after reviewing recent cases of AI-generated explicit content featuring two famous women.


Meta Must Revise Deepfake Policies, Oversight Board Says

According to the Oversight Board, Meta's current policies failed to ensure the removal of explicit deepfake images of a famous Indian woman. Other celebrities have faced similar cases due to the lack of clear rules governing this type of content.

In 2020, Meta established the board to rule on content decisions across Facebook and Instagram. The panel spent months reviewing the AI-generated images of two women, one Indian and one American.

The board scrutinized Meta's initial decision not to take down the posts; the images were removed only after a user appealed the case to the panel.

The board explained that a user had reported the image as pornography, but the report was not reviewed within the 48-hour deadline and was automatically closed.

Meta's Policies Fail to Protect Users Against Deepfakes

The board did not disclose the identities of the women in the posts, referring to each only as "a female public figure."

Meta said the account that posted the images has already been disabled. The images were also added to a database that allows its systems to automatically detect and remove any re-uploads that violate the rules.

Despite the removal of the posts, the board is calling on Meta to revise its policies, stating that the current wording is unclear to users and must be clarified.

"This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance," the board said.

Meta said the board's recommendations are now under review.
