Google, OpenAI, and Microsoft chatbots are reportedly spreading political misinformation ahead of the European Union's elections despite the companies' promises to improve their safety protocols.
A data analysis by Politico found that the chatbots are providing voters with incorrect election dates or wrong instructions on how to cast a vote.
The chatbots also generated incoherent answers when questioned in languages other than English. In several cases, Politico spotted the AI responding in Japanese or offering irrelevant YouTube links.
As of this writing, the reported chatbots have stopped answering election-related questions following updates from their developers.
The findings echo earlier concerns from tech experts and digital watchdogs that AI "hallucinations" could sway the results of upcoming elections as the tools become more accessible to the public.
At least one-third of the world's population is expected to head to the polls this year, with Taiwan, India, and South Korea already feeling the brunt of AI-driven misinformation.
EU Braces for AI-Fueled Disinformation Ahead of Elections
With the passage of its first-ever AI regulation law, the European bloc is preparing to combat the spread of AI-powered misinformation and disinformation as the election period closes in.
The EU has previously coordinated with big tech companies to address rampant disinformation in the region, a problem that has only escalated with the misuse of their AI technologies.
So far, several AI firms, including Meta, have launched dedicated teams to support the Parliament's effort to keep citizens accurately informed about election events.
OpenAI, Microsoft Under Fire for Fueling AI-Powered Election Misinformation
Microsoft and OpenAI, in particular, face added scrutiny after repeated promises to prevent their AI products from contributing to election misinformation.
An earlier study from the Center for Countering Digital Hate found that OpenAI's DALL-E and Microsoft's Image Creator continue to generate politically inaccurate images.
According to the study, the image generators produced incorrect depictions of politicians and election-related topics in 41% of the 40 text prompts tested.
Both companies had earlier promised to tighten their internal safeguards to prevent their products from being abused ahead of the election period.