OpenAI Fixes Major ChatGPT Vulnerability on its Mac App

OpenAI has finally fixed a major security vulnerability in its ChatGPT app for Mac that allowed third-party apps to access users' conversations with the chatbot.

First reported by The Verge, the vulnerability stemmed from the ChatGPT app for Mac storing all of its users' conversations in plain text.


This means that bad actors and third-party apps could access these conversations simply by reading the stored plain-text files on the victim's computer.

Thankfully, there is no indication that other private data can be accessed through the same method. It also helps that users can easily opt out of OpenAI recording their conversations with ChatGPT.

OpenAI currently offers ChatGPT for Mac only through its own website, outside the Apple App Store's sandboxing requirements, which is believed to be the cause of the vulnerability.
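Conceptually, the flaw comes down to unencrypted files being readable by any other process running as the same user, since no sandbox restricted access to the app's storage. The Python sketch below illustrates the general principle; the file name, path, and contents are hypothetical, not the app's actual storage layout:

```python
import json
import tempfile
from pathlib import Path

# Simulate an app writing an unencrypted conversation log. The temp
# directory stands in for a per-app data folder such as one under
# ~/Library/Application Support/ (hypothetical, for illustration only).
app_dir = Path(tempfile.mkdtemp())
log_file = app_dir / "conversations.json"
log_file.write_text(json.dumps({"messages": ["user question", "chatbot reply"]}))

def snoop(path: Path) -> dict:
    """A separate 'third-party' process needs no special privileges:
    a plain file read is enough, because the data is neither encrypted
    nor protected by an OS-enforced sandbox."""
    return json.loads(path.read_text())

stolen = snoop(log_file)
print(stolen["messages"])
```

Under App Store sandboxing, by contrast, each app is confined to its own container, so one app generally cannot read another app's data files this way.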

Data Privacy, Safety Concerns Hound OpenAI

This is not the first time OpenAI and its chatbot have faced data privacy criticism, as the company continues to roll out new AI models despite growing safety concerns.

Last May, OpenAI disrupted five influence operations using its AI tools in an attempt to sway public opinion ahead of the 2024 US Elections. The operations were believed to originate from Israel, China, Russia, and Iran.

This came after Microsoft published a review showing how its chatbot and OpenAI's were being used by suspected state-backed threat actors to launch cyberattack operations across the globe.

The appointment of former National Security Agency chief Lt. Gen. Paul Nakasone to its board of directors also did little to inspire confidence among customers already wary of the agency's excessive data collection.

Also Read: OpenAI Welcomes Trump-Appointed Ex-NSA Chief into Board of Directors 

OpenAI Faces Safety Criticisms from its Former Employees

In addition to government bodies, OpenAI has also been criticized multiple times by both its current and former employees regarding the company's current attitudes toward AI development.

Jan Leike, former co-leader of OpenAI's "Superalignment" team, even criticized the company for letting its safety culture and processes take a "backseat to shiny products."

Leike's co-leader and one of OpenAI's founding members, Ilya Sutskever, also left the company to start his own AI safety startup, citing concerns about the industry's current approach to AI.

OpenAI later formed another AI safety team to replace its "Superalignment" group, led by members of the company's board of directors, including CEO Sam Altman.

Related Article: OpenAI Dissolves Safety Team Responsible for Keeping Humanity from AI Harm

© 2024 iTech Post All rights reserved. Do not reproduce without permission.