CNET’s Reliability Rating Has Been Downgraded Due to AI-Generated Articles

Credibility has always been essential for news outlets, whose business is information. Relying on tools that are still known to be unreliable can ruin an organization's reputation, and that is the situation CNET now finds itself in.

Using AI for News Articles

More than most forms of writing, news articles must be truthful and accurate. Those are not the qualities that come to mind with AI-generated content, especially in its current state.

CNET reportedly used AI to generate articles under the byline "CNET Money Staff" starting around November 2022. Other news publications raised the issue, pointing out that the tool produced articles riddled with errors and plagiarized content.

After the news broke, CNET paused the practice to prevent further damage to its reputation, but consequences had already followed: Wikipedia no longer lists CNET as a highly reliable source of information.

According to Ars Technica, Wikipedia editors began discussing how to handle the situation. On the page titled "Reliable sources/Perennial sources," anyone can view how the site's editors rate the reliability of various publications.

Wikipedia editor David Gerard noted that CNET, once an ordinarily reliable tech news source, had begun experimentally running AI-generated articles "riddled with errors." For the period between November 2022 and January 2023, CNET is now categorized as "generally unreliable."

The entry's description says the outlet "began deploying an experimental AI tool to rapidly generate articles," many of which contained "factual inaccuracies." Red Ventures, which acquired CNET in October 2020, issued corrections to over half of the AI-generated articles.

Even before the "generally unreliable" label, CNET had been losing its standing as a credible source. It was regarded as "generally reliable" before October 2020, while for the period since the sale, Wikipedia's entry notes that the Red Ventures acquisition led "to a deterioration in editorial standards."

Why AI-Generated Articles Are Unreliable

At present, AI tools still tend to fabricate information based on the details they are given. These fabrications are commonly called "hallucinations": the AI model perceives patterns that aren't there and produces plausible-sounding but false output.

Several factors can lead to this. For one, insufficient training data can leave gaps that the AI tool fills in with invented details. Biases mixed into the AI's training data can also be reflected in the content it generates.

Generally, AI-generated content should be labeled as such, especially now that AI content circulating online is being passed off as genuine human work. While using AI tools to write articles may be inadvisable or even unethical, there are ways to reduce the chances of them generating false information.

As suggested in a blog post by Google Cloud, a writer can give the AI a template to follow, such as a title, an introduction, a body, and a conclusion. One can also tell the AI what shouldn't be included in the article, which helps prevent hallucinations, as sketched below.
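To make that advice concrete, here is a minimal sketch in Python of how such a templated prompt might be assembled before being sent to a text-generation model. The `build_prompt` helper, the section names, and the exclusion list are hypothetical illustrations, not an actual CNET or Google Cloud workflow, and the model call itself is left out so the sketch stays self-contained.

```python
# Minimal sketch of the prompt-templating advice above. The section
# structure, build_prompt() helper, and exclusion list are hypothetical
# examples, not a real editorial workflow.

SECTIONS = ["Title", "Introduction", "Body", "Conclusion"]

def build_prompt(topic: str, exclusions: list[str]) -> str:
    """Assemble a structured prompt that pins the model to a fixed
    outline and spells out what must not appear in the draft."""
    outline = "\n".join(f"- {section}" for section in SECTIONS)
    banned = "\n".join(f"- {item}" for item in exclusions)
    return (
        f"Write a draft article about: {topic}\n\n"
        f"Follow this outline exactly, one section per heading:\n{outline}\n\n"
        f"Do NOT include any of the following:\n{banned}\n"
        "If a fact is not supplied above, say so instead of inventing one."
    )

if __name__ == "__main__":
    prompt = build_prompt(
        topic="how compound interest works",
        exclusions=[
            "specific interest rates or bank names not supplied by the editor",
            "statistics without a cited source",
            "financial advice presented as fact",
        ],
    )
    # In practice this string would be sent to a text-generation API;
    # here it is simply printed so the example runs on its own.
    print(prompt)
```

Constraining the outline and listing what to avoid does not eliminate hallucinations, but it narrows the space in which the model can improvise; a human editor still has to fact-check the output.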

