OpenAI has unveiled GPT-4, its new AI model and a major improvement over GPT-3.5, the model behind its popular chat application ChatGPT.
The company describes GPT-4 as "multimodal". Unlike previous generations, which accepted only text, GPT-4 accepts images as input alongside text and can understand, parse, and summarize the information they contain. For example, users can ask the new model to summarize a study supplied as images of its pages. Given charts and graphics, GPT-4 can read and analyze them, answer questions about them, and work their contents into text summaries.
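As an illustration only (OpenAI's announcement includes no code), a request mixing text and an image through the openai Python SDK might look like the following sketch; the model name, the example URL, and the "content parts" message shape are assumptions about an image-capable variant, since image input was not part of the initial release:

```python
# Minimal sketch: asking the model to summarize a chart supplied as an image.
# Assumes the openai Python SDK and an image-capable GPT-4 variant;
# "gpt-4o" and the URL below are placeholders, not confirmed details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: an image-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Summarize the trend shown in this chart."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```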
According to the company, GPT-4 achieves human-level performance on a range of professional and academic tests. The new model passed a simulated bar exam with a score in the top 10% of test takers, whereas GPT-3.5 scored in the bottom 10% on the same exam.
The company said it spent the past six months refining GPT-4, drawing on its experience with ChatGPT. It added that it worked with Microsoft to build a supercomputer from scratch, providing the infrastructure needed to run the new model, which demands substantial processing power.
GPT-4 will be available to developers through the API, which requires joining a waitlist, and to ChatGPT Plus subscribers. The company notes that the image input feature will not be available immediately at launch.
OpenAI asserted that the new model has a significant edge over its predecessors in holding a conversation, especially when responding to complex instructions and working across different languages.
With GPT-4, the company is also introducing what it calls "steerability": rather than being locked into ChatGPT's classic personality, with its fixed tone and style, developers and ChatGPT users can choose what works for them via system messages. This feature lets users set bounds on the model's behavior and tailor its style to the task at hand.
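To make the idea concrete, here is a minimal sketch of steering tone with a system message via the openai Python SDK; the persona text and the user prompt are illustrative, not taken from the announcement:

```python
# Minimal sketch: a system message pins down the assistant's tone and style,
# so the same model answers differently depending on the instructions given.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message replaces the default persona with a custom one.
        {"role": "system",
         "content": "You are a terse assistant that replies only in short bullet points."},
        {"role": "user",
         "content": "Explain what a context window is."},
    ],
)
print(response.choices[0].message.content)
```

The same request with a different system message (say, a patient tutor's voice) would produce a markedly different style of answer from the same underlying model.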
Finally, the company said that although the new model is greatly improved, it remains as error-prone as any other artificial intelligence model. It advises users to exercise care, review and edit generated content, and avoid relying on it for sensitive purposes or tasks that demand high accuracy.