The tech world is buzzing with all things AI, which is both unsurprising and welcome. Awareness of AI's rapid progress also brings its potential problems to the surface. OpenAI, one of the top players in the AI race, is among those voicing concerns.
OpenAI's Plans for an Ethical AI
One major element missing in AI is the ability to discern good from bad on its own; it only knows what it is taught or programmed to know. Most importantly, it needs to remain neutral and impartial, yet there have been reports of bias and discrimination.
This is why OpenAI is open to suggestions on how to solve the issue, and a hefty grant awaits anyone who comes up with a brilliant idea. Ten $100,000 grants are up for the taking; all you have to do is tackle one of the biggest problems with AI.
The company behind ChatGPT has been advocating for the safe use of AI for some time. Even so, $100,000 may not be a big enough incentive for experts to devote themselves to addressing the ethical harms that advanced AI may inflict.
As noted by Interesting Engineering, most AI engineers can earn more than the grant amount. The best among them can earn as much as $300,000, so they might not see the endeavor as worth their effort.
Perhaps ideals, and the goal of advancing AI in the right direction, will push someone to put ideas forward. OpenAI stated that it is launching the grant program as a first step toward AI systems that benefit all of humanity.
Some would argue that the bias and prejudice stem from the data on which the AI is trained. More controversial theories hold that it may also reflect the opinions of the people behind the development of AI models.
Consulting people might be a good start for tackling this problem, especially those reportedly affected by an AI's unintended discrimination and bias. Another approach is AI regulation, which would allow an agency or organization to set limits on AI technology.
AI Regulation
A hearing on the regulation of AI has already taken place, attended by OpenAI CEO Sam Altman. The hearing addressed the risks of AI and how to manage them without stifling progress and innovation.
In fact, Altman himself urged lawmakers to regulate AI. If that happens, anxiety over AI's potential harms may be eased. Before the hearing, the OpenAI CEO had already met with House members over dinner.
As reported by The New York Times, Altman offered a loose framework for managing the systems being developed, saying that "if this technology goes wrong, it can go quite wrong" and that the company wants to be vocal about that.