Google’s Gemini Produces Super Bowl Statistics Before It Takes Place

The Super Bowl is one of the biggest events in US culture, and fans cannot wait to see which team will emerge victorious. Certain chatbots, however, appear even more excited about the event: they are generating statistics before the game has even been played.

Super Bowl 2024 (Photo: PATRICK T. FALLON/AFP via Getty Images)

Google's Gemini Chatbot Makes Up Stats

It has already been established that chatbots are not entirely reliable sources of information, as they have a tendency to fabricate details that are not grounded in the data they were trained on.

That became even more apparent with Google's Gemini when a user managed to get statistics out of the chatbot before the 2024 Super Bowl took place. The chatbot presented the statistics as if the event had finished the day before, or even weeks earlier.

As reported by TechCrunch, Gemini also favored a certain team, giving the 49ers better statistics than the Chiefs. Some of the information got quite specific and somewhat far-fetched, such as Kansas City Chiefs quarterback Patrick Mahomes rushing for 286 yards and two touchdowns.

When a user asked about the odds for the Super Bowl, Gemini responded that the game had already happened, so it was no longer possible to place bets on either team. It also generated a final score of 34-28 with the San Francisco 49ers on top.

The same has happened with Microsoft's Copilot chatbot. What is worse, it even provided citations to make the generated information appear more credible. In Copilot's version, too, it was the 49ers who won the Super Bowl, not the Chiefs.

When asked about the results, Copilot responded that the San Francisco 49ers had won with a final score of 24-21, adding that the team's quarterback, Brock Purdy, played a crucial role in the victory.

While ChatGPT uses the same underlying AI model as Copilot, it did not make the same mistake. When asked for statistics on Super Bowl LVIII, ChatGPT simply said that the game was still ongoing, meaning it could not yet provide accurate statistics.

AI Hallucinations

This instance is just one of many examples of why you cannot fully rely on AI for information just yet. In the more widely used term, chatbots "hallucinate": large language models generate information that is not grounded in the data they were trained on.

These hallucinations can be triggered by several factors, including misinterpretation of prompts, training data bias, training data inaccuracies, and high model complexity, as noted by IBM. The companies behind the chatbots resolve such issues as soon as they detect them, but many have yet to be fixed.

One example of how things can go wrong is the case of a lawyer who used ChatGPT to draft a brief. The chatbot produced results, but most of them were false and made up; what made it worse was that it used the names of real judges, which led to the lawyer being fined $5,000.

© 2024 iTech Post All rights reserved. Do not reproduce without permission.