In a series of announcements at its DevDay 2023 event, OpenAI charted an exciting turn for artificial intelligence, introducing an array of new models and developer products that promise to make AI more accessible and powerful. Among the new features is a way to customize ChatGPT called GPTs. These GPTs, short for “Generative Pre-trained Transformers,” enable users to create tailored versions of ChatGPT for specific purposes and to share them with others. Let’s explore the highlights of these transformative developments.
One of the most remarkable aspects of GPTs is that anyone can build their own, with no coding expertise required. Whether it’s for personal use, for internal use within a company, or to share with a wider audience, the process is simple: start a conversation, provide instructions, and specify capabilities such as web searching or data analysis. ChatGPT Plus and Enterprise users can already try the example GPTs that OpenAI shared at DevDay 2023.
OpenAI believes that the most incredible GPTs will emerge from the community. Whether you’re a coach or simply someone with a passion for creating useful tools, you can contribute to the GPT ecosystem. The GPT Store, set to launch later this month, will be a hub for verified builders to showcase their creations. GPTs will become searchable and will even have leaderboards. OpenAI also plans to highlight the most useful and delightful GPTs in categories like productivity, education, and entertainment. Moreover, users will have the opportunity to earn money based on how many people use their GPTs.
OpenAI is ushering in the next generation of AI models with GPT-4 Turbo. With an impressive 128K context window, this model can process over 300 pages of text in a single prompt. Notably, it’s well-versed in world events up to April 2023 and offers enhanced performance at a lower cost compared to its predecessor, GPT-4. Developers can already access a preview version of GPT-4 Turbo via the API, and a stable, production-ready model is on the horizon.
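A quick back-of-envelope check makes the “over 300 pages” figure plausible. The conversion factors below (roughly 0.75 English words per token, about 300 words per page) are common rules of thumb, not official OpenAI numbers:

```python
# Rough sanity check of the 128K-token context claim.
# Assumptions (rules of thumb, not official figures):
#   ~0.75 English words per token, ~300 words per printed page.
context_tokens = 128_000
words = context_tokens * 0.75   # about 96,000 words
pages = words / 300             # about 320 pages
print(f"~{pages:.0f} pages fit in a 128K-token window")
```

Under these assumptions, the window holds roughly 320 pages of text, comfortably above the 300-page claim.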
Function calling has also become more versatile: users can now call multiple functions in a single message, enabling complex actions in one interaction. Moreover, GPT-4 Turbo is more accurate at understanding and returning the right function parameters.
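To illustrate, here is a sketch of a request that exposes two functions at once, so a single user message can trigger both. The function names (`get_weather`, `get_time`) are made-up examples; the surrounding structure follows the Chat Completions `tools` schema, and an actual call would pass this payload to the API with a valid key:

```python
# Two hypothetical tools the model may call in parallel from one message.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get the current local time for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
]

# One user turn that plausibly needs both functions.
request = {
    "model": "gpt-4-1106-preview",
    "messages": [{"role": "user", "content": "What are the weather and local time in Paris?"}],
    "tools": tools,
}
```

With older models this would take two round trips, one per function; with parallel function calling the model can return both tool calls in a single response.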
GPT-4 Turbo excels at tasks that require precise instruction following and can deliver responses in JSON format. A new JSON mode ensures that responses are valid JSON, simplifying connections with other systems. Developers gain fine-grained control over generating syntactically correct JSON objects with the ‘response_format’ API parameter.
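A minimal sketch of a JSON-mode request is shown below. The payload mirrors the documented `response_format` parameter shape; note that JSON mode requires the word “JSON” to appear somewhere in the prompt, and the example reply is a stand-in for what the model would return:

```python
import json

# Sketch of a JSON-mode request payload for the Chat Completions API.
request = {
    "model": "gpt-4-1106-preview",
    "messages": [
        # JSON mode requires that the prompt mention JSON explicitly.
        {"role": "system", "content": "Reply in JSON with keys 'name' and 'price'."},
        {"role": "user", "content": "Describe one product."},
    ],
    # Constrains the model to emit syntactically valid JSON.
    "response_format": {"type": "json_object"},
}

# Because the reply is guaranteed to be valid JSON, it parses directly:
reply = '{"name": "widget", "price": 9.99}'   # illustrative model output
parsed = json.loads(reply)
```

This is what “simplifying connections with other systems” means in practice: downstream code can parse responses without guarding against truncated or malformed JSON.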
As announced at OpenAI DevDay 2023, the company is launching a new version of GPT-3.5 Turbo with a 16K context window by default, delivering a 38% improvement on tasks like generating JSON, XML, and YAML. Existing applications using the older GPT-3.5 Turbo model will be automatically upgraded to the new version on December 11.
The Assistants API paves the way for agent-like experiences in applications. Assistants are specialized AI entities with specific instructions, additional knowledge, and the capability to call models and tools for tasks. This API includes a Code Interpreter and Retrieval, making it easier to create high-quality AI applications with persistent and infinitely long threads for simplified message management.
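As a sketch of how an assistant is configured, the dictionary below shows the shape of an Assistants API definition with both built-in tools attached. The “data helper” framing is an illustrative example; with the OpenAI Python SDK (v1.x) and a valid API key, this configuration would be passed to `client.beta.assistants.create(...)`:

```python
# Hypothetical assistant configuration (illustrative, not an official recipe).
assistant_cfg = {
    "name": "Data helper",
    "instructions": "You analyze uploaded files and answer questions about them.",
    "model": "gpt-4-1106-preview",
    "tools": [
        {"type": "code_interpreter"},  # write and run Python in a sandbox
        {"type": "retrieval"},         # search knowledge files attached to the assistant
    ],
}
```

The typical flow then creates a thread, appends user messages to it, and starts a run of the assistant on that thread; the thread persists the conversation so the application doesn’t manage message history itself.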
The Code Interpreter is a game-changing feature that empowers assistants to write and execute Python code in a sandboxed environment. It can handle tasks like generating graphs, processing files, and solving complex code and math problems iteratively.
GPT-4 Turbo now supports images as inputs, enabling AI to generate image captions, analyze images, and extract information from documents with figures. DALL·E 3, available through the Images API, allows for the generation of images from textual descriptions. OpenAI’s Text-to-Speech API enables human-quality speech generation from text.
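The payloads below sketch what each of these multimodal capabilities looks like on the wire. The image URL and prompts are placeholders; the structures follow the documented shapes for vision messages, the Images API, and the Text-to-Speech API (`"alloy"` is one of the documented voices):

```python
# Vision: an image URL passed alongside text in a single user message.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is shown in this chart?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
    ],
}

# DALL·E 3: generating an image from a textual description via the Images API.
image_request = {
    "model": "dall-e-3",
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "n": 1,
    "size": "1024x1024",
}

# Text-to-Speech: human-quality speech from text.
speech_request = {
    "model": "tts-1",
    "voice": "alloy",
    "input": "Hello from the Text-to-Speech API.",
}
```

Each payload would be sent through the corresponding SDK method (chat completions, image generation, and speech synthesis respectively) with a valid API key.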
OpenAI is launching an experimental access program for GPT-4 fine-tuning. Alongside it, the Custom Models program allows organizations to work closely with OpenAI to train GPT-4 models tailored to their specific domains. This in turn ensures the utmost privacy for proprietary data.
Whisper large-v3, an advanced version of OpenAI’s automatic speech recognition model, offers enhanced performance across languages. The open-source Consistency Decoder improves image quality, especially for text, faces, and straight lines.
OpenAI has put in place strong privacy and safety measures for GPTs. Your conversations with GPTs are never shared with creators. If a GPT interacts with third-party APIs, you have full control over whether data is shared with those APIs. When builders customize their GPTs with actions or knowledge, they can decide if user chats with that GPT contribute to improving and training the models. OpenAI has worked hard to ensure that users have robust privacy controls, including the option to opt out of model training entirely.
OpenAI’s first developer conference marks a significant step in the world of AI, giving users more control and flexibility in how they interact with GPTs. With customization, community-driven initiatives, and enhanced privacy and safety measures, GPTs are set to revolutionize how we harness the power of AI in our daily lives. The future of AI is becoming more personalized and accessible, thanks to OpenAI’s innovative developments. Stay tuned for more exciting updates and advancements in the AI landscape.