Facebook Fails To Stop Threatening Ads Targeting Midterm Election Workers

Meta's enforcement of its security policies around the 2022 midterm elections does not appear to be as strict as the company claims.

A new investigation revealed that Facebook approved ads containing brutal threats against US election workers, explicitly inciting violence ahead of the midterm elections earlier this month.

Researchers Found That Facebook's Automatic Moderation System Approved Threatening Ads

According to an investigation by Global Witness and New York University's Cybersecurity for Democracy, Facebook failed to detect more than half of the test ads containing violent threats directed at election workers.

The researchers tested Facebook, TikTok, and YouTube's ability to detect ads containing death threats, content that violates the policies each platform has in place.

While the tests revealed that YouTube and TikTok would suspend accounts for such rule breaks, Facebook's automatic moderation system approved 15 out of 20 threatening ads.

In some cases, the social media platform even approved ads once profanity was toned down or misspellings were corrected to get past the initial review.

Engadget writes that the NYU Cybersecurity for Democracy team's experiments were based on real-life threats and used clear language, including explicitly violent and alarming statements.

The researchers submitted English and Spanish versions of the ads to the platforms the day before the midterm elections.

"It's incredibly alarming that Facebook approved ads threatening election workers with violence, lynching, and killing - amidst growing real-life threats against these workers," said Rosie Sharpe, Global Witness investigator says.

Sharpe adds that this failure threatens the safety of elections, despite Meta's stated commitment to keeping hate speech and disinformation off Facebook.

Meanwhile, Damon McCoy, co-director of NYU Cybersecurity for Democracy, notes that the failure to block violent ads against election workers jeopardizes their safety.

Additionally, McCoy says it is disturbing that advertisers promoting violent content on Facebook can go undetected by Meta, Global Witness writes.


Meta Pushes Back On The Investigation Results

According to Engadget, a Meta spokesperson said that the ads were only a small sample and do not represent the overall content users see on Facebook.

The spokesperson also claimed that ads inciting violence have no place on the platform and that the company is committed to continuously improving its systems.

The company backed these claims with statements highlighting the resources it dedicates to stopping violent threats, but offered no details on how effective those resources are.

While the experimental test ads would not have caused real harm, threats outside the research pose very real dangers to the lives of election workers.

Gizmodo reports that election workers face enormous numbers of violent threats during midterm election season, and such cases are expected to increase even further.

In light of these findings, the NYU Cybersecurity for Democracy team is calling on social media companies to moderate content adequately and ensure that their platforms are safe.

