Google has taken another step toward robot autonomy. Instead of programming a robot's every action, its researchers have paired robots with language models that let the machines analyze human requests and work out a response on their own, The Washington Post reports.
In one demonstration of the approach, a researcher told the robot, “I’m hungry, can you bring me a snack?” The robot searched the cafeteria on its own, opened a drawer, found a bag of chips and brought it to the person.
Large language models are a recent development that could let robots handle more everyday tasks. These models ingest vast amounts of text from the Internet and use it to train an artificial intelligence to predict plausible responses to questions or remarks. They have become so convincing that one Google engineer even came to believe the AI could be conscious.
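To make that prediction idea concrete, here is a minimal sketch using the open-source Hugging Face `transformers` library with the small public GPT-2 model as a stand-in. Google's models are far larger and not publicly available, and the prompt format below is an illustrative assumption, not the setup described in the article.

```python
# Illustrative only: a generic language model continuing a prompt with the
# words it judges most plausible, based on patterns in its training text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Human: I'm hungry, can you bring me a snack?\nRobot:"
result = generator(prompt, max_new_tokens=20, do_sample=True)

# The model extends the text one predicted token at a time.
print(result[0]["generated_text"])
```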
According to Google executives and researchers, this is the first time the company has combined language models with robots.
The approach is not yet perfect. When asked to assemble a burger, for example, the robot put the ingredients in the correct order, but instead of squeezing out ketchup it placed the entire bottle inside. Even so, language models give robots knowledge useful for high-level planning: they let a robot parse a complex instruction on its own, interpret it and produce a response that makes sense. One common pattern for this is sketched below.
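The sketch below shows one way a language model can do high-level planning: score each skill the robot already knows against the user's request and execute the most plausible one. The skill list, prompt format and scoring heuristic are illustrative assumptions, again using public GPT-2 rather than Google's actual planning stack.

```python
# A minimal sketch of language-model planning: rank a fixed set of robot
# skills by how plausible the model finds them as a response to the request.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Hypothetical skills the robot can already perform.
SKILLS = [
    "find a bag of chips",
    "open the drawer",
    "pick up the sponge",
    "go to the cafeteria",
]

def skill_score(instruction: str, skill: str) -> float:
    """Higher score = the model finds this skill a more plausible response."""
    prompt = f"Human: {instruction}\nRobot: I will {skill}."
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids returns the mean cross-entropy over the sequence; since
        # all prompts share the same prefix, it works as a rough ranking signal.
        loss = model(ids, labels=ids).loss
    return -loss.item()

instruction = "I'm hungry, can you bring me a snack?"
best = max(SKILLS, key=lambda s: skill_score(instruction, s))
print(best)  # e.g. "find a bag of chips"
```

In practice such a score would be combined with the robot's own estimate of whether each skill is feasible in its current surroundings, so the plan stays grounded in what the machine can actually do.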