The problem of users creating fake content with AI has been getting worse, and it can be dangerous when the creator's intentions are malicious. Such is the case in a recent incident in China, where a man was arrested for using ChatGPT to generate fake news.
Using ChatGPT to Generate Fake News
A man in China allegedly used OpenAI's technology to create a fake news article claiming that a local train accident had killed nine people. After generating the disinformation, the suspect, identified as "Hong," posted the article from several accounts.
The fake article was spotted by the cyber division of the country's police bureau, and by then it had already been clicked more than 15,000 times. It had been posted by more than 20 accounts on Baijiahao, a blog platform run by the China-based search engine Baidu.
Authorities in the northwestern Gansu province stated that the man had been detained for the offense, saying he used artificial intelligence technology to "concoct false and untrue information," as reported by Gadgets 360.
The detention came after law enforcement traced the article's origin to a company owned by the suspect, Hong. Roughly ten days later, police searched his home and computer before detaining him.
Hong confessed that he had bypassed Baijiahao's duplication-check function in order to publish the article from multiple accounts. To avoid redundancy, he reportedly drew on trending social stories from around the country to generate variations of the fake story with the same premise.
The suspect may be charged with "picking quarrels and provoking trouble," an offense that carries a sentence of up to five years, although depending on the severity of the offense, the sentence can reach ten years along with additional penalties.
That is on top of the possible consequences of violating China's provisions regulating the use of "deepfake" technology. The policy has been in effect since January 2023 and could add more years to the potential charge mentioned above.
Misuse of AI
The misuse of AI has been on the rise. According to statistics from the AIAAIC database, incidents related to the misuse of AI increased 26-fold between 2012 and 2021, and that figure does not include undocumented and unreported incidents from the earlier years.
As The Decoder notes, this can be seen as evidence of the growing popularity of artificial intelligence and its real-world uses, as well as of users' growing awareness of the ways it can be unethically misused.
One of the biggest AI concerns right now is deepfake technology, which lets people create fake media content by substituting an individual's face or voice. Although some examples are innocent, like "Balenciaga Pope," others may have more malicious intentions.
AI is also being used in surveillance, such as facial recognition. Intel, for instance, is developing software that can determine a person's mood from their face; students can be labeled as bored, distracted, or confused during a videoconferencing class.