OpenAI Hacker Leaked Important AI Details in Online Forums

OpenAI's data breach last year has reportedly led to the hacker sharing important details about the company's AI products on online forums.

Citing two anonymous sources, The New York Times reported that the OpenAI hacker leaked the design of the company's AI products.



Sources claimed that the hacker was unable to access the company's core AI systems, where most of the critical data is stored, limiting the amount of information leaked to the public.

OpenAI executives reportedly withheld these critical details from the Federal Bureau of Investigation and other law enforcement agencies, believing that the attack was the work of a private individual.

A few months later, OpenAI reported disrupting five influence operations from China, Russia, Iran, and Israel that had used its AI products to manipulate American voters' perceptions ahead of the 2024 US presidential elections.

Experts Warn of Potential Cybersecurity Dangers on OpenAI Vulnerabilities

Experts warned that such vulnerabilities could allow foreign adversaries to steal OpenAI's technology for their own gain.

New details about the April 2023 cyberattack surfaced following earlier reports of another major vulnerability in OpenAI's ChatGPT app for Mac.

OpenAI has since fixed the issue, but not before other software engineers probed its products for further potential data breach exploits.

It is worth noting that The New York Times has an ongoing copyright lawsuit against OpenAI and its investor Microsoft over the alleged use of its copyrighted content.

Also Read: OpenAI Fixes Major ChatGPT Vulnerability on its Mac App

AI Development Poses Growing National Security Risks

Reports about OpenAI's vulnerabilities came amid growing concerns that generative AI technology is becoming a national security risk.

One US government-sponsored study even claimed that the technology may pose an "extinction-level threat to the human species" without proper risk mitigation.

Despite the dangers, more companies like Meta have been releasing their AI products as open source, believing that accessible source code could better strengthen the technology's defenses against cyberattacks.

OpenAI, despite its name, has kept most information about its AI designs behind closed doors, raising concerns that security lapses could go undetected as its products are released to the public.

Related Article: How to Limit OpenAI from Collecting Personal Data on ChatGPT?

© 2024 iTech Post All rights reserved. Do not reproduce without permission.