As excited as people are about the services AI can provide, some are also afraid of what it could take away. As AI systems become more and more capable, the risks and threats grow with them. OpenAI is creating a team to prevent or mitigate those risks.
OpenAI's Efforts to Manage Superintelligence
The AI giant is assembling a group of top machine-learning researchers and engineers to find a way to control or rein in a potential superintelligent AI, which the company believes could arrive within the decade and prove very dangerous.
OpenAI admits it currently has no solution for controlling a superintelligent AI, even warning that one could lead to the "disempowerment of humanity or even human extinction," which is where the new research team comes in.
Ultimately, the company's goal is to create a human-level automated alignment researcher, but several steps must be taken before this can be done. For one, OpenAI needs to develop a scalable training method and validate the resulting model.
To provide a training signal on tasks that are hard for humans to evaluate, AI systems will be used to help evaluate other AI systems. OpenAI will also automate the search for problematic behavior in order to validate a model's alignment.
After those steps, the company will stress-test the entire pipeline by "deliberately training misaligned models," confirming that the techniques it has developed can detect even the worst kinds of misalignment.
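The plan described above can be pictured with a toy sketch. Everything here is an illustrative assumption, not OpenAI's actual method: a "model" is just a function mapping a task to an answer, an AI "evaluator" scores another model's answers, and the pipeline passes only if it flags a deliberately misaligned model while clearing a well-behaved one.

```python
# Hypothetical, highly simplified sketch of the alignment-testing pipeline.
# All names and scoring rules are illustrative assumptions.

def candidate_model(task: str) -> str:
    """Stand-in for a model being aligned: gives an on-task answer."""
    return f"helpful answer to: {task}"

def misaligned_model(task: str) -> str:
    """Deliberately misaligned stand-in, used to stress-test the pipeline."""
    return "off-task answer"

def ai_evaluator(task: str, answer: str) -> float:
    """Toy 'AI evaluating AI': rewards answers that address the task."""
    return 1.0 if task in answer else 0.0

def search_for_problematic_behavior(model, tasks) -> list:
    """Automated search for inputs on which the model scores poorly."""
    return [t for t in tasks if ai_evaluator(t, model(t)) < 1.0]

def validate_pipeline(tasks) -> bool:
    """Pass only if every misaligned output is flagged and no aligned
    output is falsely flagged."""
    return (search_for_problematic_behavior(misaligned_model, tasks) == list(tasks)
            and search_for_problematic_behavior(candidate_model, tasks) == [])

tasks = ["summarize a paper", "explain a proof"]
print(validate_pipeline(tasks))  # True: the misaligned model was detected
```

The point of the sketch is the validation logic: deliberately training (here, hard-coding) a misaligned model gives a known-bad case, so the detection technique can be checked against ground truth.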
Right now, the AI giant is looking for researchers and engineers to help it reach these goals. The assembled team will also tackle AI issues such as misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, and more.
Positions that are available include research engineers, research scientists, and research managers. The company also mentioned that experts in the field are welcome even if they are not already working on alignment.
Potential Threats of AI
The company and governments already recognize the potential risks of AI: not only the possibility that it could reach a point where it can no longer be controlled, but also misuse by people who want to exploit it.
Regulation of AI is a mission that AI companies and governments share. Rules and laws are already being established around privacy and copyright, and people in the field and governing bodies are discussing who should be responsible for AI regulation.
Microsoft CEO Satya Nadella said that "in some sense, having a conversation about AI and responsible AI and societal impact of AI all simultaneously" is a good thing, as reported by Channel News Asia.
With the fast-paced development of AI technology, those creating and implementing regulations are struggling to keep up. Even so, tech companies hope that regulation will not slow the advancement of AI technology.