OpenAI has formed a dedicated committee to handle "critical safety and security decisions" as the company trains the next model behind its ChatGPT chatbot.
The Safety and Security Committee, which includes Bret Taylor, Adam D'Angelo, Nicole Seligman, and Sam Altman, is responsible for making recommendations on safety and security across OpenAI's projects and operations.
OpenAI Establishes Safety Committee to Ensure Responsible Development
In a blog post, OpenAI detailed the committee's mandate, which comes amid mounting criticism of the company. The committee's first task is to evaluate and further develop OpenAI's processes and safeguards over the next 90 days.
Afterward, the committee will submit its recommendations to the board. OpenAI committed to publicly sharing an update on the recommendations it adopts and pledged to keep safety and security a priority.
The committee will also include technical and policy experts Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki. In addition, the company will consult outside experts on its work, including former cybersecurity officials Rob Joyce and John Carlin.
OpenAI Defends Safety Record Amid Criticism
Previously, OpenAI researcher Jan Leike resigned from his post, claiming that the company had been compromising safety for the sake of "shiny products." Co-founder and chief scientist Ilya Sutskever also resigned, and the departures led to the disbandment of the superalignment team.
Leike has since joined rival AI company Anthropic, which was itself founded by former OpenAI leaders. The researcher said he plans to "continue the superalignment mission" at his new company.
OpenAI also disclosed that it has already begun training its next frontier model, which it expects to advance its capabilities on the path to AGI.
"While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company wrote.