OpenAI Launches GPT-4, The Updated Version Of GPT-3.5

A new Artificial Intelligence (AI) language model, GPT-4, has been released by OpenAI. It replaces the previous technology, GPT-3.5, and “exhibits human-level performance on various professional and academic benchmarks,” according to its creators. GPT, short for Generative Pre-trained Transformer, is a family of AI models trained to write like a human.
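
GPT-4 itself is not openly available, but the core “generative, pre-trained” idea behind the GPT family can be illustrated with a small open model: the model repeatedly predicts the next word, and chaining those predictions produces human-like prose. A minimal sketch, using Hugging Face’s transformers library and the openly available GPT-2 as an illustrative stand-in (not OpenAI’s actual stack):

```python
# Illustrative only: GPT-2 is a small, open member of the GPT family.
# It was pre-trained on large text corpora to predict the next token;
# sampling tokens one after another yields human-like continuations.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence language models work by"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```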

Microsoft confirmed today that Bing Chat, the chatbot it built with OpenAI, is already running on GPT-4. Microsoft Germany executives had earlier stated that GPT-4 would be introduced within a week and would offer many new possibilities, such as the use of video. It turns out that, contrary to those predictions, the new version cannot generate images alongside text from the same interface.

GPT-4 can accept image inputs but can only output text. Multimodal models are those that combine multiple media, such as text, images, and video.

One example of how that might work: send the AI an image of the inside of your refrigerator, and it reviews the ingredients on hand and suggests recipe ideas. This capability is currently accessible only through Be My Eyes, one of OpenAI’s partners.
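
As an illustration of that image-in, text-out flow, here is a hedged sketch using the chat-completions request shape OpenAI later exposed publicly. The model name, the image URL, and general availability of image input are assumptions for the example; at the time of launch this capability was limited to partners such as Be My Eyes:

```python
# Hypothetical sketch of the refrigerator example: send a photo plus a
# question to a vision-capable chat model and read back a text answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model name, for illustration
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Here is the inside of my fridge. What could I cook?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)

# The reply is text only, matching GPT-4's image-in, text-out design.
print(response.choices[0].message.content)
```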

Its Virtual Volunteer feature can answer questions about images sent to it. OpenAI says the upgrade has significantly improved GPT’s performance on exams: GPT-4 passed a mock bar exam with a score in the top 10% of test takers, while GPT-3.5 scored in the bottom 10%.

OpenAI claims that while the difference between the models can be hard to spot in casual conversation, GPT-4 is “more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5” once a task becomes sufficiently complex. That makes deeper discussions possible and more complex problems easier to solve.

According to Microsoft Germany executive Andreas Braun, the technology that trains machines to understand language the way only humans once could has advanced to the point where it essentially works across all languages. Braun adds that multimodality also broadens what the models can do.

OpenAI released GPT-4 on March 14. It is available to ChatGPT Plus subscribers, who pay $20 per month plus applicable taxes for OpenAI’s service.

The precise usage cap will vary, according to OpenAI, “depending on demand and system performance in practice.” Despite the advancements, GPT-4 still has drawbacks. Like previous iterations, it generally lacks knowledge of anything that happened after September 2021, and “it does not learn from its experience,” OpenAI admits.

In addition, “it occasionally exhibits simple reasoning errors that do not seem consistent with its competence in so many different areas, or it may be overly trusting when accepting blatantly false claims from a user. It also occasionally makes mistakes when solving complex problems, much like humans do, for example introducing security flaws into the code it generates.”