Google has introduced the Gemma family of AI models. What are they suitable for?

Google has introduced Gemma, a family of lightweight, state-of-the-art open models for developers and researchers.

Gemma is available today in English and comes in two sizes, 2B and 7B parameters. Each size is released with both pre-trained and instruction-tuned variants.

The new models are based on the same research and technology used to create the Gemini models. They are well suited for text generation tasks such as question answering, summarization, and reasoning.

Their relatively small size allows developers to run the models locally on a laptop or desktop PC, or to deploy them on their own cloud infrastructure.

According to Google, Gemma outperforms significantly larger models on key benchmarks while adhering to the company's standards for safe and responsible outputs.

The company has also released the Responsible Generative AI Toolkit, which provides guidance and essential tools for building safer AI applications. In addition, the models ship with ready-to-use notebooks on Colab and Kaggle, as well as integration with tools such as Hugging Face, MaxText, NVIDIA NeMo, and others.
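Because the checkpoints integrate with Hugging Face, trying a model locally takes only a few lines of Python. The sketch below assumes the transformers library and a Hub identifier such as google/gemma-2b (an illustrative name; confirm the exact identifier and accept the model license on the Hub before downloading).

```python
# Minimal sketch: running a Gemma checkpoint locally with Hugging Face transformers.
# "google/gemma-2b" is assumed here as the Hub identifier; access may require
# accepting the model's license terms on the Hugging Face Hub first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory use on supported hardware
    device_map="auto",           # uses a GPU if available, otherwise CPU
)

prompt = "Summarize why lightweight open models matter for developers:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short continuation; settings here are illustrative only.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```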

Additionally, Google has partnered with NVIDIA to optimize Gemma for NVIDIA GPUs, from the data center and cloud to local PCs with RTX GPUs.