Google DeepMind unveils Gemini Robotics AI model that runs locally on robots
Google DeepMind has unveiled Gemini Robotics On-Device, a local version of its robotics AI model that lets robots operate without a cloud connection. It is the company's first full-fledged vision-language-action (VLA) model that runs entirely on the robot itself, The Verge reports.
Unlike the hybrid Gemini Robotics model, the new version is optimized for autonomous operation in environments with limited or no network connectivity. It offers fast responses, support for complex tasks, generalization to new scenarios, and precise motion control, all without relying on the cloud.
The model was trained on the ALOHA robot platform but has already been adapted to other systems, including Apptronik's Apollo humanoid and the dual-arm Franka FR3. It needs only 50 to 100 demonstrations to learn new tasks.
Alongside the model, Google has released an SDK that lets developers test it, fine-tune it, and integrate it into their own systems. This is the first time DeepMind has made one of its robotics AI models available to developers. Notably, the model does not include built-in safety features; the company encourages developers to integrate their own safety mechanisms.
Gemini Robotics On-Device is now available to a limited number of developers as part of the Trusted Tester program.