Microsoft has detected state-backed hackers from Russia, China, and Iran using OpenAI's AI-powered tools to deceive targets and refine their techniques.
The report detailed that the tracked hacking groups are affiliated with Russian military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments.
State-Backed Hackers Enhance Skills Using LLMs
According to Microsoft, government-affiliated hackers are using OpenAI's large language models (LLMs) to improve their hacking campaigns and skills. As a result, the company has imposed a blanket ban barring state-backed hacking groups from using its AI products.
"Independent of whether there's any violation of the law or any violation of terms of service, we just don't want those actors that we've identified - that we track and know are threat actors of various kinds - we don't want them to have access to this technology," Microsoft VP for customer security Tom Burt told Reuters.
On the other hand, Liu Pengyu, a spokesperson for China's embassy in the U.S., dismissed the findings as "groundless smears and accusations against China." He also advocated for the "safe, reliable, and controllable" use of AI technology for the betterment of mankind.
Microsoft Detects AI-Assisted Hacking at an Early Stage
OpenAI and Microsoft said the hackers' use of the AI tools remains at an early stage, with no sign yet of cyber spies making any significant breakthroughs.
"We really saw them just using this technology like any other user," said Bob Rotsted, OpenAI's lead for cybersecurity.
For instance, the alleged hackers affiliated with the Russian military used the models to research satellite and radar technologies, which may relate to conventional military operations in Ukraine.
Meanwhile, North Korean hackers used the models to generate content for spear-phishing campaigns, and Iranian hackers used them to write more convincing emails designed to lure victims to booby-trapped websites.