Google Gemini AI model brings real-time intelligence to bi-arm robots

Google DeepMind has launched Gemini Robotics On-Device, a new AI model for robotics that operates locally without internet. It enables bi-arm robots to perform complex tasks using vision, language, and action processing.

Written By Govind Choudhary
Updated 25 Jun 2025, 06:41 PM IST

Google DeepMind has announced the launch of a new artificial intelligence model tailored for robotics, capable of functioning entirely on a local device without requiring an active data connection. Named Gemini Robotics On-Device, the advanced model is designed to enable bi-arm robots to carry out complex tasks in real-world environments by combining vision, language and action (VLA) processing.

In a blog post, Carolina Parada, Senior Director and Head of Robotics at Google DeepMind, introduced the new model, highlighting its low-latency performance and flexibility. As it operates independently of the cloud, the model is especially suited to latency-sensitive environments and real-time applications where constant internet connectivity is not feasible.

Currently, access to the model is restricted to participants of Google’s trusted tester programme. Developers can experiment with the AI system through the Gemini Robotics software development kit (SDK) and the company's MuJoCo physics simulator.
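For readers unfamiliar with MuJoCo, the snippet below is a minimal sketch of loading a scene and stepping its physics with the open-source MuJoCo Python bindings. The toy XML scene is an assumption for illustration; a real bi-arm setup would load the robot's own MJCF description, and the Gemini Robotics SDK's actual interface is not shown here because Google has not published it in this article.

```python
import mujoco

# A tiny hand-written scene: a ground plane and one free box. This stands in
# for a real bi-arm robot description, which would normally come from an MJCF file.
SCENE_XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="box" pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.05 0.05 0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(SCENE_XML)  # compile the scene
data = mujoco.MjData(model)                        # simulation state

# Step the physics for one simulated second, then read back the box height.
steps = int(1.0 / model.opt.timestep)
for _ in range(steps):
    mujoco.mj_step(model, data)

print("box height after 1 s:", data.body("box").xpos[2])
```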

Although Google has not disclosed specific details about the model’s architecture or training methodology, it has outlined the model’s robust capabilities. Designed for bi-arm robotic platforms, Gemini Robotics On-Device requires minimal computing resources. Remarkably, the system can adapt to new tasks using only 50 to 100 demonstrations, a feature that significantly accelerates deployment in diverse settings.
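Since Google has not disclosed the fine-tuning interface, the sketch below only illustrates how a small demonstration dataset is commonly structured for imitation-style adaptation: a set of teleoperated episodes pairing observations with operator actions, with a check that roughly 50 to 100 episodes are available. Every name and field here is hypothetical, not part of the Gemini Robotics SDK.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Step:
    """One timestep of a teleoperated demonstration (hypothetical schema)."""
    image: np.ndarray            # RGB camera frame, e.g. shape (H, W, 3)
    joint_positions: np.ndarray  # proprioceptive state for both arms
    action: np.ndarray           # commanded joint targets or end-effector deltas

@dataclass
class Demonstration:
    instruction: str             # natural-language task description
    steps: List[Step] = field(default_factory=list)

def check_dataset(demos: List[Demonstration], min_demos: int = 50) -> List[Demonstration]:
    """Sanity-check that the dataset reaches the reported 50-100 demonstration range."""
    if len(demos) < min_demos:
        raise ValueError(f"only {len(demos)} demonstrations; expected at least {min_demos}")
    return demos
```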


In internal trials, the model demonstrated the ability to interpret natural language commands and perform a wide array of sophisticated tasks, from folding clothes and unzipping bags to handling unfamiliar objects. It also successfully completed precision tasks such as those found in industrial belt assembly, showcasing high levels of dexterity.

Though originally trained on ALOHA robotic systems, Gemini Robotics On-Device has also been adapted to work with other bi-arm robots including Franka Emika’s FR3 and Apptronik’s Apollo humanoid robot. According to the American tech giant, the model exhibited consistent generalisation performance across different platforms, even when faced with out-of-distribution tasks or multi-step instructions.
