OpenAI Dissolves Safety Team Responsible for Protecting Humanity from AI Harm

OpenAI is effectively shutting down its "Superalignment" team, which was tasked with keeping humanity safe from advanced AI, after two of its leaders left amid concerns about the company's safety culture.

"Superalignment" team head Jan Leike confirmed his resignation on X (formerly Twitter), thanking the "amazing people" he worked with.


Leike departed after disagreeing with OpenAI's leadership over its core priorities and "reached a breaking point," pointing out that "building smarter-than-human machines is an inherently dangerous endeavor."

Leike's team was responsible for scalable oversight of large language models and provided cash grants supporting independent research on aligning AI systems with humanity's interests.

OpenAI CEO Sam Altman promised to provide further information on the subject "in the next couple of days."

OpenAI Co-Founder's Departure Marks Changes in AI Firm's Priorities

The team's other leader, OpenAI chief scientist Ilya Sutskever, had left the company earlier, shortly after the announcement of GPT-4o.

Sutskever was notably among the board members who voted to oust Altman during last year's leadership fiasco, when concerns were raised about the safety of OpenAI's development process.

Sutskever was replaced by OpenAI's research director Jakub Pachocki as chief scientist.

What's Next for OpenAI's Safety Commitments?

While OpenAI's "Superalignment" team would be no more, the company intends to keep the remaining members across other research efforts for its safety commitments, OpenAI told Bloomberg.

According to the report, the company will spread the 20% of computing power previously allotted to the team across other safety groups, including the "Preparedness" team it launched last October.

While OpenAI seems confident in the move, critics worry that the absence of a dedicated team addressing the long-term risks of AI will only cause further problems in the near future.

AI experts have long raised concerns about OpenAI prioritizing speed over safety as the startup releases flaw-riddled products to the public.

Previous studies have even indicated that ChatGPT, OpenAI's flagship chatbot, continues to generate erroneous information and deepfakes of political figures despite the company's earlier promises to prevent such cases.

Concerns are particularly heightened by the company's pursuit of artificial general intelligence, a hypothetical advanced form of AI that could threaten many people's livelihoods, safety, and data privacy.
