A week after rebranding Bard as Gemini, Google unveiled the next generation of its chatbot, Gemini 1.5. The company made the announcement on its official blog.

Google CEO Sundar Pichai commented on the move.

“Last week, we rolled out our most capable model, Gemini 1.0 Ultra, and took a significant step forward in making Google products more helpful, starting with Gemini Advanced. Today, developers and Cloud customers can begin building with 1.0 Ultra too — with our Gemini API in AI Studio and in Vertex AI,” he noted.

According to Pichai, the company’s teams are continuing this work, with a particular focus on safety.

“In fact, we’re ready to introduce the next generation: Gemini 1.5. It shows dramatic improvements across a number of dimensions and 1.5 Pro achieves comparable quality to 1.0 Ultra, while using less compute,” added Sundar Pichai.

According to Google DeepMind CEO Demis Hassabis, the mid-range version of the new model, Gemini 1.5 Pro, performs at a level similar to Gemini 1.0 Ultra and ships with a standard context window of 128,000 tokens. A limited group of developers and enterprise customers, however, can try a version with a context window of up to 1 million tokens through AI Studio and Vertex AI in private preview.
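For developers with access, requests go through the same Gemini API mentioned above. The snippet below is a minimal sketch of such a call using the google-generativeai Python SDK; the model identifier "gemini-1.5-pro-latest" and the availability of 1.5 Pro in any given account are assumptions for illustration, not details confirmed in the announcement.

```python
# Minimal sketch of calling a Gemini model via the Python SDK.
# The model identifier and preview access are assumptions for illustration;
# during the private preview, only selected developers and enterprise
# customers can reach the 1-million-token configuration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key obtained from AI Studio

model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed identifier
response = model.generate_content(
    "Summarize the main differences between Gemini 1.0 and Gemini 1.5."
)
print(response.text)
```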

A context window of 1 million tokens means that 1.5 Pro can process far larger amounts of information in a single request, including 1 hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or more than 700,000 words.
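As a rough sanity check, the figure of more than 700,000 words is consistent with a 1-million-token window if one assumes an average of roughly 1.4 tokens per English word; that ratio is an assumption for back-of-envelope arithmetic, not a number from Google's announcement.

```python
# Back-of-envelope check: how many words fit in a 1,000,000-token window?
# TOKENS_PER_WORD is an assumed average for English text, used only to
# illustrate the scale of the context window.
CONTEXT_WINDOW_TOKENS = 1_000_000
TOKENS_PER_WORD = 1.4  # assumption for illustration

max_words = CONTEXT_WINDOW_TOKENS / TOKENS_PER_WORD
print(f"~{max_words:,.0f} words fit in a {CONTEXT_WINDOW_TOKENS:,}-token window")
# ~714,286 words, in line with the "more than 700,000 words" figure
```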

The new generation is built on a combination of Transformer and Mixture-of-Experts (MoE) architectures. While a traditional Transformer functions as one large neural network, MoE models are divided into smaller “expert” neural networks. Depending on the type of input, an MoE model learns to selectively activate only the most relevant expert pathways in its network.
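To make the routing idea concrete, here is a minimal, self-contained sketch of top-1 expert routing in the MoE style. The layer sizes, number of experts, and gating scheme are illustrative assumptions, not details of Gemini 1.5's actual implementation.

```python
# Minimal sketch of Mixture-of-Experts top-1 routing with NumPy.
# Shapes, the number of experts, and the gating scheme are illustrative
# assumptions; they do not describe Gemini 1.5 itself.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_HIDDEN, NUM_EXPERTS = 16, 32, 4

# Each "expert" is a small feed-forward network; only one runs per token.
expert_w1 = rng.standard_normal((NUM_EXPERTS, D_MODEL, D_HIDDEN)) * 0.02
expert_w2 = rng.standard_normal((NUM_EXPERTS, D_HIDDEN, D_MODEL)) * 0.02

# The gate (router) scores every expert for every token.
gate_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02


def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Route each token to its single highest-scoring expert."""
    logits = tokens @ gate_w            # (n_tokens, NUM_EXPERTS) gate scores
    chosen = logits.argmax(axis=-1)     # top-1 expert index per token
    out = np.zeros_like(tokens)
    for e in range(NUM_EXPERTS):
        mask = chosen == e
        if mask.any():                  # only the selected experts do any work
            hidden = np.maximum(tokens[mask] @ expert_w1[e], 0.0)  # ReLU
            out[mask] = hidden @ expert_w2[e]
    return out


tokens = rng.standard_normal((8, D_MODEL))  # a toy batch of 8 token vectors
print(moe_layer(tokens).shape)              # (8, 16)
```

The key property the sketch shows is sparsity: every token passes through only one of the four expert networks, so the per-token compute stays roughly constant even as more experts (and thus more total parameters) are added.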