GPT-4 IS HERE: A NEW NEURAL NETWORK THAT CAN UNDERSTAND PICTURES AND MEMES

GPT-4 is here and it’s about to change everything

The era of advanced generative neural networks has only just begun, and within a few months it is already advancing to a new level of sophistication. Today, March 14, OpenAI officially released GPT-4, its most capable model to date: a multimodal language AI it has been developing intensively in recent months. The popular AI has become larger, more reliable, more creative, and more accurate, and, most importantly, it can now handle not only more nuanced text prompts but also recognize the content of images.

With images, GPT-4 is just as capable as it is in conversation. OpenAI demonstrated seven examples of the model at work: in some it must describe what it sees or solve a problem, in others it must answer "what is unusual in this photo?", and in others the task is to explain the point of an internet meme. A single request can also mix modalities, that is, include both text and an image together. GPT-4 handles all of the above successfully, including cases where there is a subtle catch that a human would notice: GPT-4 picks up on it too.
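
For developers, a combined text-and-image request looks roughly like the sketch below. To be clear about assumptions: image input was not publicly available in the API on launch day, so this uses the content-parts request format OpenAI documented later for its vision-capable models; the model name, the image URL, and the use of the openai-python 1.x client are placeholders, not details from the announcement.

# Hedged sketch of a mixed text + image request (assumptions noted above).
from openai import OpenAI

client = OpenAI()  # assumption: OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable GPT-4-family model
    messages=[
        {
            "role": "user",
            # One message can carry several "content parts": here, a text
            # question plus an image for the model to analyze.
            "content": [
                {"type": "text", "text": "What is unusual about this photo?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)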

Examples of GPT-4 Image Recognition and Analysis

OpenAI says it spent about six months rigorously testing GPT-4 through its internal adversarial testing program, as well as applying lessons learned from ChatGPT. The company reports its "best-ever" results on factuality, steerability, and refusing requests that fall outside its guardrails (such as how to obtain and use prohibited substances or build a bomb). Like previous GPT models, GPT-4 was trained on publicly available web data. The system retains a number of its predecessors' flaws: it can still invent non-existent facts, make reasoning errors, and produce harmful responses. On the whole, though, OpenAI says the new model is 40% more likely to produce factual responses than GPT-3.5 on its internal evaluations.

Comparison of the results of the early and final versions of GPT-4

GPT-4 is currently available to users through the paid ChatGPT Plus subscription. Developers can join a waitlist for API access; they will also be able to set the AI's interaction style and give specific directions for its responses, depending on the environment into which they plan to integrate the neural network (a sketch of how this looks in code follows below).
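
The announcement itself contains no code, but the steering mechanism OpenAI describes corresponds to the "system" message in its chat API. A minimal sketch, assuming the openai-python 1.x client and an OPENAI_API_KEY in the environment (both assumptions beyond the article); the tutor persona and prompt text are invented for illustration:

from openai import OpenAI

client = OpenAI()  # assumption: OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message "assigns an interaction style": here, a terse
        # tutor persona that constrains how all later answers are phrased.
        {"role": "system",
         "content": "You are a patient math tutor. Answer in two sentences or fewer."},
        {"role": "user",
         "content": "Why does a negative times a negative give a positive?"},
    ],
)
print(response.choices[0].message.content)

The point of the system message is that the integrator, not the end user, fixes the tone and boundaries of the model's responses for a given product.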

After the presentation, Microsoft confirmed that its updated Bing search engine runs on GPT-4. The model is also used by Stripe, Duolingo, Morgan Stanley, and Khan Academy, while so far only the Be My Eyes service is trying out the image-analysis capabilities.

TechforBrains