OpenAI built a team solely focused on dealing with the potential "catastrophic risks" brought by the advancement of AI.
OpenAI Promotes Preparedness Against Potential AI Issues
OpenAI established the team as part of its mission to build safe artificial general intelligence, saying it takes the safety risks associated with AI seriously. While its current models are capable of handling a wide range of tasks, OpenAI acknowledged that increasingly capable models could introduce new risks.
The new Preparedness team, led by Aleksander Madry, will focus on tracking, evaluating, forecasting, and protecting against "catastrophic risks" from AI in areas such as cybersecurity, individualized persuasion, and autonomous replication and adaptation. The team will also oversee possible chemical, biological, radiological, and nuclear threats.
Last May, OpenAI CEO Sam Altman and other AI researchers signed a statement asserting that mitigating risks from AI should be a global priority. Altman also suggested that governments take AI as seriously as they do nuclear weapons.
OpenAI Launches Risk-Informed Development Policy (RDP)
OpenAI also announced that the team will develop and maintain an RDP focused on rigorous evaluation and monitoring of frontier model capabilities. The RDP is expected to complement the company's existing risk mitigation efforts.
In addition, OpenAI opened an AI Preparedness Challenge focused on preventing catastrophic misuse. The challenge aims to identify less obvious areas of concern and will also help the company build out the team.
Anyone interested can fill out the application form until December 31. OpenAI will offer $25,000 in API credits to each of up to 10 top submissions, and successful candidates may also be invited to join the Preparedness team.
Related Article: Researchers Found Loopholes with OpenAI's GPT-4