It’s official: ChatGPT is bigger and smarter than ever. Earlier this week, at OpenAI DevDay, the company announced the launch of GPT-4 Turbo, a new and improved version of GPT-4, alongside a custom GPT creation interface.
One of the most notable enhancements is that GPT-4 Turbo has a 128,000-token context window, compared to GPT-4’s 8,000. This means GPT-4 Turbo users can process 16 times as much text at once, roughly the equivalent of 300 pages, enough space for short novels like Animal Farm and The Great Gatsby.
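As a rough illustration of what that limit means in practice, the sketch below counts a document’s tokens with the tiktoken library to see whether it fits in a single request. It assumes the cl100k_base encoding used by GPT-4-class models; the file path is a placeholder.

```python
# Minimal sketch: estimate whether a document fits in GPT-4 Turbo's
# 128,000-token context window. Assumes the cl100k_base encoding used by
# GPT-4-class models; the file path is a placeholder.
import tiktoken

CONTEXT_WINDOW = 128_000  # GPT-4 Turbo's advertised limit

encoding = tiktoken.get_encoding("cl100k_base")
with open("animal_farm.txt", "r", encoding="utf-8") as f:
    text = f.read()

token_count = len(encoding.encode(text))
print(f"{token_count} tokens; fits in one request: {token_count <= CONTEXT_WINDOW}")
```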
During his keynote speech, OpenAI CEO Sam Altman also announced an updated knowledge cutoff. While GPT-4 was limited to knowledge of events up to 2021, GPT-4 Turbo’s knowledge of the world extends to April 2023.
This comes just as OpenAI announced that ChatGPT has reached 100 million weekly active users. Together, these announcements highlight that GPT-4 Turbo not only draws on a more current knowledge base but also has the capacity to handle far larger user requests.
Key Takeaways
- OpenAI introduces GPT-4 Turbo, an upgraded version of GPT-4, with a 128,000-token context window, 16 times the size of its predecessor’s, allowing for more extensive text processing.
- GPT-4 Turbo has a more recent knowledge cutoff, with information now ingested up to April 2023, making it more up-to-date in its understanding of the world.
- The enhancements in GPT-4 Turbo include the ability to analyze text and images, create images using DALL-E 3, and improved support for function calling and user instructions.
- OpenAI is launching the GPT Store, where users can create custom versions of ChatGPT and monetize them, opening the door to a wide range of potential use cases and making 2024 an exciting year for AI development.
What Does GPT-4 Turbo Mean for Generative AI?
The improvements OpenAI has made to GPT-4 Turbo make it arguably the best multimodal LLM on the market. As of this latest release, GPT-4 Turbo can analyze text and images, as well as create images with DALL-E 3.
This means it can be used for a diverse range of use cases, from answering users’ questions and creating written content and images on demand to analyzing text and images, generating image captions, and more.
However, one area where GPT-4 Turbo is notably more sophisticated than its predecessor is its support for function calling and its ability to follow user instructions.
For example, users can now enter a single prompt that triggers multiple actions, such as “open the car window and turn off the A/C,” which previously would have had to be broken down into multiple messages. The model can also better follow instructions to respond in specific formats like XML.
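As a rough, hedged sketch of how this might look with OpenAI’s chat completions API, a single user message can now come back with several tool calls at once. The model name, tool schemas, and prompt below are illustrative assumptions, not details taken from the announcement.

```python
# Illustrative sketch of parallel function calling with the OpenAI Python SDK (v1.x).
# The tool definitions, prompt, and model name are assumptions for demonstration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

tools = [
    {
        "type": "function",
        "function": {
            "name": "open_window",
            "description": "Open a car window",
            "parameters": {
                "type": "object",
                "properties": {"position": {"type": "string", "enum": ["driver", "passenger"]}},
                "required": ["position"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "set_ac",
            "description": "Turn the air conditioning on or off",
            "parameters": {
                "type": "object",
                "properties": {"on": {"type": "boolean"}},
                "required": ["on"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
    messages=[{"role": "user", "content": "Open the driver's window and turn off the A/C"}],
    tools=tools,
)

# With parallel function calling, a single response can contain multiple tool calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```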
When considering that GPT-4 already placed at or near the top of AI performance benchmarks such as multi-task language understanding (MMLU), HellaSwag, and commonsense reasoning on ARC, the improvements made in GPT-4 Turbo have the potential to solidify OpenAI’s position as the golden goose of the generative AI market.
This is particularly true considering that GPT-4 Turbo is also cheaper to run. According to OpenAI, GPT-4 Turbo’s input tokens cost a third as much as GPT-4’s ($0.01 per 1,000 tokens) and its output tokens cost half as much ($0.03 per 1,000 tokens).
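To put the difference in concrete terms, here is a back-of-the-envelope comparison using the per-1,000-token prices above; the request sizes are made up purely for illustration.

```python
# Back-of-the-envelope cost comparison using OpenAI's published per-1,000-token
# prices at launch. The request sizes are illustrative assumptions.
PRICES = {
    "gpt-4":       {"input": 0.03, "output": 0.06},  # $ per 1K tokens
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# e.g. a 10,000-token prompt with a 1,000-token reply:
for model in PRICES:
    print(model, round(request_cost(model, 10_000, 1_000), 2))
# gpt-4 costs $0.36 vs. $0.13 for gpt-4-turbo on the same request
```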
OpenAI’s GPTs: Smarter Customization
OpenAI GPTs also have the potential to be a powerful force multiplier for the organization. With GPTs, users can create a custom version of ChatGPT, which can be connected to a range of external data sources and used to perform specific tasks.
OpenAI said in its official announcement:
“Anyone can easily build their own GPT – no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data.”
In a move that some commentators have likened to the opening of the App Store on the iPhone, turning a smartphone with a few apps into an all-in-one tool for much of our digital lives, the GPT Store is coming soon — allowing anyone to upload and monetize custom GPTs.
The company revealed:
“Later this month, we’re launching the GPT Store, featuring creations by verified builders. Once in the store, GPTs become searchable and may climb the leaderboards. We will also spotlight the most useful and delightful GPTs we come across in categories like productivity, education, and “just for fun”. In the coming months, you’ll also be able to earn money based on how many people are using your GPT.”
So far, two example GPTs are available to ChatGPT Plus and Enterprise users to experiment with; the first, from Canva, enables users to create visual designs from natural language prompts. The second, AI Actions by Zapier, allows developers to create a custom GPT in ChatGPT that connects to over 6,000 Zapier apps.
When building custom GPTs, developers can also draw on the Assistants API, which provides a Code Interpreter and a Retrieval tool.
Code Interpreter enables developers to write and run Python code in a sandboxed execution environment. This allows it to process files, generate graphs and charts, and write code that solves complex coding and mathematical problems.
The Retrieval tool can pull in data from external sources, such as proprietary datasets, to help extract insights from third-party systems.
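As a rough sketch of how the two tools fit together, an assistant with both Code Interpreter and Retrieval enabled might be created like this with the OpenAI Python SDK. The name, instructions, and model choice are illustrative assumptions, and the Retrieval tool additionally requires uploading knowledge files, which is omitted here.

```python
# Minimal sketch: creating an assistant with the Code Interpreter and Retrieval
# tools via the beta Assistants API. Name, instructions, and model choice are
# illustrative assumptions; file uploads for Retrieval are omitted.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Data Analyst",
    instructions="Analyze the attached dataset and answer questions, using charts where helpful.",
    model="gpt-4-1106-preview",
    tools=[
        {"type": "code_interpreter"},  # runs Python in a sandboxed environment
        {"type": "retrieval"},         # searches knowledge files attached to the assistant
    ],
)

print(assistant.id)
```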
Ultimately, GPTs will make it easier for developers and enterprises to start building their use case-specific versions of ChatGPT that can optimize certain processes and workflows.
The Bottom Line
GPT-4 Turbo reaffirms ChatGPT’s position as the top generative AI assistant in the market.
The expanded context length, alongside its multimodal capabilities and the customization offered by GPTs, has opened the door to a wider range of potential use cases than ever before.
With a crowd-sourced approach to improving GPTs of all types, 2024 is shaping up to be a Turbo year for AI.