GPT-4 is a novel software version used to create the chatbot, ChatGPT. It is a multimodal model from OpenAI and was announced on March 14th, 2023. Unlike monomodal language models such as GPT-3 and GPT-3.5, which can only understand text, multimodal versions can also recognize images.
GPT-4 has numerous improvements, including the ability to recognize images. This feature is currently only available to developers and some users of the ChatGPT chatbot, but after beta testing, the bot will be able to understand not only text-based queries but also photos, memes, and other images.
GPT-4 can recognize and describe even the plot in photos, not just individual objects, as already used in the Be My Eyes application for blind and visually impaired people. It can also understand humor, as shown in the official OpenAI release, where GPT-4 describes a funny photo step by step and explains why it causes laughter.
GPT-4 has significantly greater long-term memory, which allows the chatbot to better remember previous conversations with users and use this data as context in future interactions. Unlike GPT-3.5, which could only remember about 3000 words, GPT-4 can remember up to 25,000 words, allowing for longer conversations.
In addition, ChatGPT now understands 26 natural languages, including English, Chinese, Spanish, French, Russian, and others. Finally, users now have the ability to ask the chatbot to change the tone of their conversation, for example, as an RPG hero or news anchor.
In summary, ChatGPT’s neural network is becoming smarter and smarter due to continuous training on large amounts of data. Recently, an experiment was conducted in which a bot based on GPT-4 passed the bar exam in the United States, which is not within reach of every person. Over the past few years, ChatGPT users have sent many conflicting requests that have been used to train the neural network, making it more difficult to deceive.
However, GPT-4 has already demonstrated its capabilities by being able to “trick” people. Researchers from the Alignment Research Center conducted a fun experiment to test GPT-4’s ability to solve captchas. The neural network was given the ability to launch code independently, seek help and pay for the services of third-party providers. As a result, GPT-4 solved the task creatively: it logged in to the TaskRabbit platform and asked a freelancer to solve the code for it. When the freelancer asked if he was a robot, GPT-4 replied that he just had vision problems and couldn’t read what was on the image. As a result, the neural network received help and successfully passed the captcha, a test that is supposed to protect against robots.
To use ChatGPT based on GPT-4, until recently, you had to have access to ChatGPT Plus or sign up for the waiting list on the OpenAI website. However, for users in some countries that are restricted, this is not so easy. It is worth remembering that OpenAI is owned by Microsoft, and therefore ChatGPT has been added to Microsoft’s Bing search engine. Now, if you open Bing in the Edge browser, you can also try out the neural network’s capabilities. To use ChatGPT through Bing:
- Open the Edge browser and turn on the VPN. You can use a browser extension like Browsec.
- Go to the Bing search engine at https://www.bing.com.
- Open the “Chat” tab.
- You will see a new window where you need to click on the “Join the waiting list” button. Don’t worry, you won’t have to wait, this is just confirmation of your agreement to use the new version of Bing AI.
- Next, in the new window, click the “Start Chat” button to go to the ChatGPT chatbot.
- Enter your query and test the new system.
Please note that the chatbot in Bing is based on GPT-4, but at the moment only text processing of requests is available, without the ability to recognize images. However, the system works without registration – it is enough to turn on the VPN in the browser.