New OpenAI Guidelines Give the Board the Final Say in AI Innovation

The tech world has not forgotten the ouster drama at OpenAI, especially as interesting bits of information have continued to emerge since then. Following those events, it appears the board will still hold more power than the face of OpenAI, Sam Altman.

Sam Altman (Photo: SeongJoon Cho/Bloomberg via Getty Images)

The Board is Still the Boss

The entire chaos within OpenAI was something of a power play, one which Sam Altman obviously won. Now that there is a new board of directors, you might expect the power structure to change, but it remains the same as before.

Now made up of former Salesforce co-CEO Bret Taylor, former US Treasury Secretary Larry Summers, and Quora CEO Adam D'Angelo, the board will still have the power to make the ultimate decision when it comes to innovations within OpenAI.

The new guidelines state that the "OpenAI Board of Directors, as the ultimate governing body of OpenAI, will oversee OpenAI Leadership's implementation and decision making," as reported by Gizmodo. Even so, the dynamic will be different this time.

In case you missed it, rumors say Sam Altman was ousted because the board no longer trusted him, as he had not been completely forthcoming about OpenAI's tech development. The decision was also shaped by the ideals of the former board members.

Ilya Sutskever, Helen Toner, and Tasha McCauley were all known advocates of AI safety, which is why OpenAI researchers' concerns about a potential artificial general intelligence (AGI) were reportedly grounds enough to give Altman the boot.

The new board, on the other hand, is reportedly more open to such innovations. Rather than pushing back against developments that might endanger humanity, the new members bring more experience in politics and profits.

To avoid repeating its earlier mistake, OpenAI's board will receive monthly reports from the company's Preparedness team on the development of AGI. The Preparedness team, in turn, will use certain metrics to allow the company to assess AI dangers.

OpenAI Preparedness Framework

To put safety ahead of all else, the company will evaluate its AI models using "scorecards" based on risk categories such as Cybersecurity, CBRN (chemical, biological, radiological, and nuclear threats), Persuasion, and Model Autonomy.

Each category is scored as Low, Medium, High, or Critical. If a model exceeds a post-mitigation score of Medium, the company will refrain from deploying it to the public.
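To make that gating rule concrete, here is a minimal sketch of how such a scorecard check might work. This is an illustration only, not OpenAI's actual tooling: the data structure, category names, and can_deploy function are assumptions for the sake of the example.

    from enum import IntEnum

    class RiskLevel(IntEnum):
        # Ordered so that higher values mean higher risk.
        LOW = 0
        MEDIUM = 1
        HIGH = 2
        CRITICAL = 3

    def can_deploy(scorecard: dict[str, RiskLevel]) -> bool:
        # Per the framework as reported: deployment is blocked if any
        # category exceeds a post-mitigation score of Medium.
        return max(scorecard.values()) <= RiskLevel.MEDIUM

    # Hypothetical post-mitigation scorecard for a model.
    scorecard = {
        "Cybersecurity": RiskLevel.LOW,
        "CBRN": RiskLevel.MEDIUM,
        "Persuasion": RiskLevel.MEDIUM,
        "Model Autonomy": RiskLevel.LOW,
    }

    print(can_deploy(scorecard))  # True: no category exceeds Medium

Under this reading, a single High or Critical score in any one category would be enough to keep a model from public release, no matter how well it scores elsewhere.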

The work of determining whether an AI model is safe is split across four groups: the Preparedness team handles the technical work, the Safety Advisory Group makes the recommendations, leadership makes the decisions, and the board gets the final say.

OpenAI's safeguards may bring some confidence to those concerned about a superintelligent AI bringing about humanity's downfall. Mitigating risk is an essential part of developing technology, especially artificial intelligence that could someday be smarter than humans.
