GPT-4, the most recent in OpenAI's line of AI language models that power programs like ChatGPT and Bing, has been officially released after months of rumors and speculation.
OpenAI's paying customers can currently access GPT-4 through ChatGPT Plus, and developers can join a waitlist to gain access to the API, TechCrunch reports.
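For developers who clear the waitlist, requests go through the same chat completions endpoint that already serves GPT-3.5. As a rough illustration only (the "gpt-4" model identifier and the pre-1.0 openai Python library interface are assumptions, not details from OpenAI's announcement), a minimal call might look like this:

```python
# Minimal sketch of a GPT-4 API call with the openai Python library
# (pre-1.0 interface). Assumes waitlist access has been granted and
# that the model identifier is "gpt-4" -- an assumption, not a detail
# from the announcement.
import openai

openai.api_key = "sk-..."  # replace with your own API key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is new in GPT-4?"},
    ],
)

print(response.choices[0].message.content)
```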
OpenAI Has Already Partnered With Companies Prior to GPT-4's Release
OpenAI co-founder and CEO Sam Altman described the new system as the company's most capable and aligned model yet.
The system is a multimodal model, which means it can accept both images and text as inputs, enabling users to ask questions about photos.
The model, according to the company, is more creative and collaborative than ever before, and it can solve complex problems with greater accuracy.
The updated version can process large amounts of text input, recalling and responding to more than 20,000 words at once, which lets it accept a novella's worth of text as a prompt.
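For a rough sense of what 20,000 words means in model terms, token counts can be checked with OpenAI's open-source tiktoken tokenizer. The sketch below rests on assumptions: the 32,000-token budget is purely illustrative rather than an official limit, and novella.txt is a hypothetical input file.

```python
# Sketch: checking whether a long prompt fits a large context window
# using OpenAI's tiktoken tokenizer. CONTEXT_BUDGET is an assumed,
# illustrative figure, not an official limit.
import tiktoken

CONTEXT_BUDGET = 32_000  # assumed token budget for illustration

enc = tiktoken.encoding_for_model("gpt-4")

with open("novella.txt", encoding="utf-8") as f:  # hypothetical input file
    text = f.read()

tokens = enc.encode(text)
print(f"{len(text.split())} words -> {len(tokens)} tokens")
print("fits" if len(tokens) <= CONTEXT_BUDGET else "too long")
```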
However, OpenAI warns that the system still has many of the same issues as earlier language models, such as a tendency to invent information and the ability to produce violent or otherwise harmful text.
The company also says it has already established partnerships with several businesses, including Duolingo, Stripe, and Khan Academy, to integrate GPT-4 into their products.
Duolingo Max, a new subscription tier of the language-learning app, will now offer English-speaking users AI-powered conversations in French or Spanish.
The app can also use GPT-4 to explain the errors language learners have made, according to The Guardian.
At the other end of the spectrum, payment processing company Stripe uses GPT-4 to answer corporate users' support questions and to help identify potential scammers in its support forums.
The new model also powers Microsoft's Bing chatbot and is accessible to the general public via ChatGPT Plus, OpenAI's $20-per-month ChatGPT subscription.
Read More: Microsoft's Bing Will Use a More Powerful Language Model than OpenAI's ChatGPT
OpenAI Says the Improvement Will Be Iterative
According to The Verge, OpenAI said the distinction between GPT-4 and its predecessor, GPT-3.5, is "subtle" in casual conversation.
GPT-4 still has flaws and limitations, and according to Altman's tweets, it seems more impressive on first use than it does after more time spent with it.
The company says GPT-4's advances are demonstrated by its performance on various tests and benchmarks, including the Uniform Bar Exam, the LSAT, and the SAT Math and Evidence-Based Reading & Writing exams.
Throughout the past year, there has been a lot of speculation regarding GPT-4 and its potential, with many people predicting a significant improvement over current systems.
But, as the company had previously cautioned, OpenAI's release suggests the improvement is more iterative.
Greg Brockman, the president and co-founder of OpenAI, offered users a sneak preview of the image-recognition capabilities of the most recent iteration of the system on Tuesday.
GPT-4 is multimodal, though in fewer media than some had anticipated: the system can accept both text and image inputs but produces only text outputs.
The company claims that because the model can parse text and images simultaneously, it can interpret more complex inputs.
Related Article: Chinese Students Are Using ChatGPT To Cheat on Schoolwork