Elon Musk's Grok Chatbot Keeps Hallucinating Wrong News Briefs: WSJ

Elon Musk has long hinted at plans to make his Grok chatbot people's No. 1 news source on social media, but that dream now seems further away than he hoped.

According to The Wall Street Journal, the Grok chatbot keeps "hallucinating" incorrect news summaries about the attempted assassination of presidential candidate Donald Trump.


The report noted that the AI was found telling users that Vice President Kamala Harris was shot rather than Trump, naming the wrong person as the shooter, and fabricating a motive for the attack even though investigators have yet to determine one.

In some cases, the chatbot generates vague and confusing headlines that could easily mislead people.

Grok Could Worsen Misinformation on X

For a platform already plagued by rampant conspiracy theories and disinformation, Grok's inaccurate summaries could worsen the spread of misinformation on X ahead of the 2024 US elections.

Musk, who has since declared support for Trump, and Grok developer xAI have yet to issue a statement regarding the matter.

Musk has been pushing the chatbot to X (formerly Twitter) since May as he pivots the social platform into an "everything app."


Generative AIs Are Struggling to Keep News Details Accurate

Grok is far from the only chatbot generating inaccurate information, despite the tech industry's continued push to build AI models capable of "critical thinking."

Google DeepMind's Gemini chatbot was embroiled in similar controversies earlier this year after the AI was repeatedly caught hallucinating answers to basic queries.

OpenAI's ChatGPT has also gained notoriety for generating fake links for its sources and fabricating quotes, including from politicians and influential figures.

Grok itself was spotted generating inaccurate summaries in June, claiming that California Gov. Gavin Newsom had won a debate.

The summaries were actually drawn from people's reactions to the recent presidential debate, which Newsom reportedly attended but did not participate in.

In the chatbot's defense, xAI clarified that Grok's answers "evolve over time" as more information becomes available and that it can "make mistakes."
