OpenAI Fiasco 2.0: Unraveling Safety Concerns on the Leading AI Startup

OpenAI is facing its second major controversy since last November's leadership shakeup fiasco, as growing safety concerns clash with the AI startup's leadership.


It All Started with the Ousting

Cracks in OpenAI's foundation as the leading AI innovator began appearing after CEO Sam Altman's firing and reinstatement last year, a shakeup that resulted in half of its board members being replaced.

While the exact reason for the ousting was never publicly disclosed, The New York Times noted that it highlighted "the extreme views on the dangers of A.I. embedded in the company's work since it was created in 2015."

The ousting also came as more AI experts and digital watchdogs criticized OpenAI over its tools' potential to spread misinformation online, especially with the 2024 elections inching closer.

While Altman's return eased public concerns about the company at the time, it did not take long before studies and researchers noted an increase in safety issues, including "AI hallucinations."

These concerns come as the company repeatedly promises to improve its guardrails against misuse.

OpenAI Ex-Employees Speak Out on Company's Safety Culture

Months after the messy leadership shakeup, several notable OpenAI employees, including co-founder Ilya Sutskever, began leaving the startup over concerns about its current safety culture.

Sutskever's co-leader in OpenAI's "superalignment" team, Jan Leike, even spoke out about how OpenAI's safety culture and processes "have taken a backseat to shiny products."

The "superalignment" team is one of the safety groups OpenAI launched last year to ensure that its AI products would not pose any real dangers to humans and society.

Leike departed soon after Sutskever, following the release of GPT-4o, OpenAI's latest chatbot model, which was immediately embroiled in controversy after one of its voice assistants was accused of using Scarlett Johansson's voice without authorization.

Now, more former and current employees are coming forward as "whistleblowers," alleging that the AI industry's safety oversight is inadequate, prioritizing speed over safety while stifling criticism from within.


What Happens Next?

With no standardized law yet in place to address these concerns directly, most changes will likely remain, for better or worse, OpenAI's own initiative.

Following the shutdown of its "superalignment" team, OpenAI has launched a new safety team, with board members including Altman leading efforts to ensure the safety of its products.

One of the company's first actions was intercepting online influence operations from Russia, China, Israel, and Iran that used its AI tools to manipulate the public.

The report detailing these operations is among the first of its kind, with an AI firm taking swift, concrete steps to address concerns about AI's impact on the information landscape.

These efforts come in addition to more AI-powered tools the company is set to roll out ahead of the elections to help voters, journalists, and officials detect deepfakes and other AI-generated political content.

