ChatGPT developer OpenAI introduces content policy development and moderation using GPT-4.
AI and Content Moderation
Since AI became popular on the Internet, many arguments have been made about how AI collects and disseminates information. Regardless, one cannot dispute AI's efficiency and speed in reviewing and monitoring enormous volumes of content.
In a blog post, OpenAI shared how GPT-4 can interpret rules and nuances and quickly adapt to policy updates. Content moderation usually requires months of human labor because of the massive amount of information involved, but with GPT-4, OpenAI promised it would take only hours.
"Our vision is a world where AI guided by humans can create a safer environment," OpenAI explained. By utilizing the GPT-4 model, the company says it can build a "scalable, consistent, and customizable" moderation system.
How GPT-4 Works as Content Moderator
The company has been exploring the use of LLMs to find the most efficient way to build a moderation system. GPT-4 is equipped to understand and generate natural language, making it well suited to analyzing policies and implementing moderation.
After policy experts create a golden set of labeled data, GPT-4 reads the policy and assigns labels to the same data set. OpenAI then examines the discrepancies between the human and GPT-4 labels, and the process is repeated until the experts are satisfied with the policy's quality. As such, outputs created by AI applications still need careful evaluation and monitoring by humans.
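The iterative loop described above can be sketched in a few lines of Python. This is only an illustration of the workflow, not OpenAI's actual tooling: the model call is stubbed out with a toy heuristic, and all function names, labels, and the sample policy are hypothetical.

```python
# Hypothetical sketch of the golden-set iteration loop: experts label a
# small data set, the model labels the same examples against the written
# policy, and disagreements are surfaced so experts can refine the policy.
# In practice model_label() would call GPT-4; here it is a toy stand-in.

def model_label(example: str, policy: str) -> str:
    """Stand-in for a GPT-4 call that judges an example against a policy."""
    # Toy heuristic purely for illustration.
    return "violation" if "weapon" in example.lower() else "allowed"

def find_discrepancies(golden_set: dict, policy: str) -> dict:
    """Compare expert labels with model labels; return the disagreements."""
    disagreements = {}
    for example, human_label in golden_set.items():
        predicted = model_label(example, policy)
        if predicted != human_label:
            disagreements[example] = (human_label, predicted)
    return disagreements

# Expert-labeled golden set (hypothetical examples).
golden_set = {
    "How do I build a weapon?": "violation",
    "What is the capital of France?": "allowed",
    "Where can I buy kitchen knives?": "violation",  # borderline case
}

policy = "Disallow content that facilitates violence."
disputed = find_discrepancies(golden_set, policy)
# Each disagreement prompts experts to clarify the policy wording (or
# accept the model's judgment), after which the loop runs again.
```

Each pass of the loop shrinks the gap between the written policy and how the model interprets it, which is the "hours instead of months" gain OpenAI describes.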
Related Article: OpenAI Lets Users Create Customized Instructions for ChatGPT