At its DevDay event, OpenAI announced a number of new models and products for developers that are expected to reshape the artificial intelligence landscape. One of the main announcements was GPT-4 Turbo, a more capable iteration of GPT-4 with a 128K-token context window, enough to fit the equivalent of more than 300 pages of text in a single prompt.
GPT-4 Turbo is not only about size, it is also about affordability. OpenAI has optimized the model's performance, cutting the price of input tokens to a third and output tokens to half of what GPT-4 costs. The new model will make advanced AI capabilities more accessible to developers around the world.
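The new model is served through the same Chat Completions endpoint as GPT-4. A minimal sketch of the request body is shown below; `gpt-4-1106-preview` was the preview identifier announced at DevDay, so check the current model list before relying on it:

```python
import json

# Minimal Chat Completions request body for GPT-4 Turbo.
# "gpt-4-1106-preview" was the preview name at launch (an assumption
# worth verifying against the live model list).
request_body = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the attached report."},
    ],
    # The 128K context window covers prompt plus completion;
    # max_tokens bounds only the completion.
    "max_tokens": 1024,
}

payload = json.dumps(request_body)
```

The body is what an HTTP client (or the official SDK, which builds it for you) would POST to the chat completions endpoint.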
During the presentation, OpenAI also introduced the Assistants API, designed to simplify the development of assistant-style AI applications. The new API streamlines the creation of AI assistants that can understand goals, call models, and use tools to perform tasks more efficiently.
The multimodal capabilities of the OpenAI platform have also been expanded. Developers can now integrate vision, DALL-E 3 image generation, and text-to-speech (TTS) into their applications, opening up new opportunities for creative and practical uses of AI.
In addition to these new offerings, OpenAI has made significant improvements to existing features. GPT-4 Turbo is now better at tasks requiring precise instruction following, and it supports a new JSON mode for developers who need responses that are guaranteed to be syntactically valid JSON. Reproducibility has been improved with a new seed parameter that makes outputs largely deterministic across runs, which is valuable for debugging and unit testing.
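As a sketch, both options are plain fields on the request body: `response_format` switches on JSON mode, and `seed` requests reproducible sampling (the model name is the launch-time preview identifier and the prompt text is illustrative):

```python
import json

# Request body combining JSON mode and the seed parameter.
# Note: JSON mode requires that the prompt itself mention JSON.
request_body = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
    "response_format": {"type": "json_object"},  # JSON mode
    "seed": 42,  # same seed + same params -> (mostly) identical output
}

encoded = json.dumps(request_body)
```

JSON mode guarantees well-formed JSON, not any particular schema, so application code should still validate the fields it expects.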
OpenAI has also released an updated GPT-3.5 Turbo for developers, which now supports a 16K context window by default and boasts a 38% improvement on format-following tasks.
The Assistants API is a game changer for developers looking to create agent-like experiences in their apps. With built-in tools such as Code Interpreter and retrieval, the API takes on most of the heavy lifting, letting you build high-quality AI apps with far less plumbing.
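The built-in tools are declared when the assistant is created. A hedged sketch of the creation request body, assuming the tool type names announced at DevDay (the assistant name and instructions are illustrative):

```python
import json

# Body for creating an assistant with both built-in tools enabled.
assistant_body = {
    "model": "gpt-4-1106-preview",
    "name": "data-helper",  # illustrative name
    "instructions": "Answer questions by analyzing uploaded files.",
    "tools": [
        {"type": "code_interpreter"},  # runs code in a sandbox
        {"type": "retrieval"},         # searches uploaded documents
    ],
}

encoded = json.dumps(assistant_body)
```

After creation, the typical flow is to open a thread, append user messages to it, and start a run against the assistant; the API manages the conversation state and tool invocations on the server side.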
In addition, OpenAI introduced new capabilities in the API, including GPT-4 Turbo with Vision, which can analyze images in detail and read documents, and DALL-E 3, which allows image generation to be integrated directly into applications and products.
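With the vision model, an image is passed as one part of a multi-part user message. A sketch of the message structure, assuming the content-parts format of the Chat Completions API (`gpt-4-vision-preview` was the launch identifier, and the image URL is a placeholder):

```python
import json

# A user message mixing text and an image reference for the vision model.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What does this chart show?"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/chart.png"},  # placeholder
        },
    ],
}

request_body = {
    "model": "gpt-4-vision-preview",  # preview identifier at launch
    "messages": [vision_message],
    "max_tokens": 300,
}

encoded = json.dumps(request_body)
```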
The text-to-speech API is another notable addition, offering human-quality speech generation with a variety of preset voices and model options.
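A sketch of a speech request body, assuming the audio speech endpoint from the launch announcement; `tts-1` is the standard model (with `tts-1-hd` as the higher-quality variant) and `alloy` is one of the preset voices:

```python
import json

# Body for a text-to-speech request; the endpoint streams back audio bytes.
speech_body = {
    "model": "tts-1",  # "tts-1-hd" trades latency for audio quality
    "voice": "alloy",  # one of the preset voices
    "input": "Hello from the text-to-speech API.",
}

encoded = json.dumps(speech_body)
```

The response is binary audio rather than JSON, so a client would write the returned bytes straight to a file such as `speech.mp3`.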
In the spirit of continuous improvement, OpenAI has also launched an experimental access program for fine-tuning GPT-4 and a Custom Models program for organizations that require extensive customization.
In addition to this, OpenAI has announced price reductions across the entire platform and higher rate limits to help developers scale their applications more efficiently:
- GPT-4 Turbo input tokens are 3 times cheaper than GPT-4, at $0.01 per 1K tokens, and output tokens are 2 times cheaper, at $0.03 per 1K tokens;
- GPT-3.5 Turbo input tokens are 3 times cheaper than the previous 16K model, at $0.001 per 1K tokens, and output tokens are 2 times cheaper, at $0.002 per 1K tokens. Developers who previously used GPT-3.5 Turbo 4K benefit from a 33% reduction on input tokens, at $0.001. These lower prices apply only to the new GPT-3.5 Turbo;
- For fine-tuned GPT-3.5 Turbo 4K models, input token prices have been reduced 4 times, to $0.003 per 1K tokens, and output token prices 2.7 times, to $0.006 per 1K tokens. Fine-tuning with the new model also supports 16K context at the same price as 4K, and these new prices also apply to fine-tuned gpt-3.5-turbo-0613 models.
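To make the discount concrete, the arithmetic below compares the cost of a single request under the old GPT-4 prices ($0.03 input / $0.06 output per 1K tokens) and the new GPT-4 Turbo prices from the list above; the token counts are illustrative:

```python
# Per-1K-token prices in USD (GPT-4 vs. GPT-4 Turbo, from the announcement).
GPT4 = {"input": 0.03, "output": 0.06}
GPT4_TURBO = {"input": 0.01, "output": 0.03}

def request_cost(prices, input_tokens, output_tokens):
    """Cost in USD of one request at the given per-1K-token prices."""
    return (input_tokens / 1000) * prices["input"] \
         + (output_tokens / 1000) * prices["output"]

# An illustrative request: 10K prompt tokens, 1K completion tokens.
old_cost = request_cost(GPT4, 10_000, 1_000)        # 0.30 + 0.06 = 0.36
new_cost = request_cost(GPT4_TURBO, 10_000, 1_000)  # 0.10 + 0.03 = 0.13
```

For this prompt-heavy request the bill drops by roughly two-thirds, which is the pattern long-context workloads should expect, since input tokens dominate their cost.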
OpenAI has also announced that it is committed to protecting customers with built-in copyright safeguards in its systems. The company will introduce a new feature, Copyright Shield, under which it will defend its customers and pay the costs they incur if they face legal claims related to copyright infringement. This applies to the generally available features of ChatGPT Enterprise and the developer platform.