The 2024 elections are near, and so is the wave of "fake news" and political disinformation.
Past elections have shown how misinformation has spiked since 2016 as fake accounts and "troll farms" have become prevalent on Meta's platforms.
With the rise of AI and deepfakes, the threat of large-scale election disinformation has never been more worrying.
With a large share of voters relying on Facebook and Instagram, platforms used by at least 71.43% of the US population, election misinformation risks shaping how future policies will be made.
That said, Meta has been taking steps to address these issues.
Meta Taking Steps Against Disinformation
Since AI-generated images began surging last year, Meta has implemented new policies to combat both AI-powered and organic disinformation on its platforms.
Aside from bringing more independent fact-checkers onto its social media sites, Meta has started using AI technology to detect "harmful" content, including "fake news" and disinformation.
Just recently, Meta announced that all images generated by its AI will include visible markers, in addition to an invisible watermark embedded in their metadata, to help other companies verify their authenticity.
The company will also start requiring accounts to disclose whether they use AI tools for images, video, and audio recordings.
Too Little, Too Late
While the company has been addressing disinformation on its platforms, most of these measures have been reactive rather than proactive.
The recent AI-label policy, for instance, was released only after the company's own Oversight Board criticized Meta for its "incomprehensible" policies on non-AI-manipulated media.
The board's scrutiny came as it ruled to keep up an edited video portraying US President Joe Biden as touching his then-18-year-old granddaughter on the chest.
One criticized feature of its efforts to limit disinformation is the optional warning screen placed over posts proven to be fake.
Current platform misinformation policies still allow users to view posts even after they have been tagged as false. The issue was raised recently during Meta's appearance at the online child safety hearing in Washington, DC.
In Meta's defense, the optional screen allows researchers and other independent groups to study how disinformation spreads online.
It is also worth noting that Facebook and Instagram do not have their own fact-checking units and mainly rely on a limited number of third-party contractors to verify dubious claims on the platforms.
With the amount of disinformation expected to double as the elections inch closer, it is unclear whether Meta's current workforce can keep up with platforms where millions of people post every second.
It does not help that the company was among the tech firms that cut jobs at the beginning of 2024.
Calls for More Effort
Meta's Oversight Board is not alone in calling for changes to the company's current efforts against "fake news" and disinformation.
More organizations have begun urging the company to improve its measures against misinformation ahead of what may be one of the biggest elections of the 21st century.
Of course, a single company cannot eradicate all disinformation on its platforms alone. Many see this moment as a chance to push for new laws that would address these concerns.
So far, no law has been passed that directly combats disinformation or regulates AI development and distribution.
Related Article: How to Spot Election 'Fake News' on Social Media