These virtual obstacle courses help real robots learn to walk

The robot is a vaguely terrifying sight even in simulation, especially when multiplied into a virtual army of more than 4,000 copies. But that army can also help machines learn new techniques.

Researchers at Switzerland’s ETH Zurich and chipmaker Nvidia created this virtual robot army. They used the simulated bots to train an algorithm that was then used to control the legs of a real-world robot.

In the simulation, machines called ANYmals tackle challenges such as slopes, steps, and steep drops in a virtual landscape. Each time a robot learned to navigate a challenge, the researchers presented a harder one, nudging the control algorithm to become more sophisticated.
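The "present a harder challenge once the current one is mastered" loop described above is a form of automated curriculum. A minimal sketch of the idea follows; the function and parameter names are illustrative assumptions, not taken from the ETH Zurich / Nvidia codebase.

```python
# Toy curriculum update: raise terrain difficulty when the simulated
# robots succeed often, lower it when they fail often. All names here
# (difficulty scale, thresholds) are hypothetical, for illustration only.

def update_curriculum(difficulty, success_rate,
                      promote_at=0.8, demote_at=0.3, step=0.1):
    """Return a new difficulty in [0, 1] based on recent success rate."""
    if success_rate >= promote_at:
        difficulty += step      # robots mastered this level: harder obstacle
    elif success_rate <= demote_at:
        difficulty -= step      # robots are stuck: ease off so learning resumes
    return min(1.0, max(0.0, difficulty))

# Example: robots clear 85% of stair climbs, so the stairs get steeper.
print(update_curriculum(0.5, 0.85))  # -> 0.6
```

Keeping difficulty near the edge of the robots' current ability is what lets the virtual army keep improving instead of plateauing on easy terrain or failing constantly on hard terrain.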

From a distance, the resulting scenes resemble an army of ants swarming across a large area. During training, the robots mastered walking up and down stairs fairly easily; more complex obstacles took longer. Slopes proved particularly difficult, although some virtual robots learned to slide down them.

A clip from the simulation where virtual robots learn to step up.

When the trained algorithm was transferred to a real version of ANYmal, a four-legged robot roughly the size of a large dog with sensors on its head and a detachable robot arm, the machine was able to navigate stairs and blocks but ran into problems at higher speeds. The researchers blamed errors in how its sensors perceive the real world compared with the simulation.

Similar machine-learning approaches could help robots learn all sorts of useful tasks, from packing boxes to sewing and harvesting crops. The project also reflects the importance of simulations and custom computer chips to future progress in applied artificial intelligence.

“At a high level, having a very fast simulation is really a great thing,” said Pieter Abbeel, a professor at UC Berkeley and a cofounder of Covariant, a robotics company focused on logistics. He said the Swiss and Nvidia researchers had “gained some excellent momentum.”

AI holds promise for training robots to do real-world tasks that cannot easily be written into software, or that require some kind of adaptation. The ability to grasp awkward, slippery, or unfamiliar objects, for instance, is not something that can be written in lines of code.

The 4,000 simulated robots were trained using reinforcement learning, an AI method inspired by research into how animals learn through positive and negative feedback. As the robots move their legs, an algorithm judges how the movement affects their ability to walk and adjusts the control algorithms accordingly.
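The positive-and-negative-feedback loop described above can be sketched in a few lines. This is a deliberately toy version of reinforcement learning, not the paper's actual training setup: the two stride actions, their rewards, and the update rule are all invented for illustration.

```python
import random

random.seed(0)  # make the toy run reproducible

def choose(prefs):
    """Sample an action in proportion to its preference weight."""
    total = sum(prefs.values())
    r = random.uniform(0, total)
    for action, weight in prefs.items():
        r -= weight
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

# Hypothetical leg actions, both equally likely at the start.
prefs = {"long_stride": 1.0, "short_stride": 1.0}

for _ in range(1000):
    action = choose(prefs)
    # Pretend short strides climb stairs better: positive feedback
    # for short strides, negative feedback for long ones.
    reward = 1.0 if action == "short_stride" else -0.5
    # Reinforce or weaken the chosen action, keeping a small floor
    # so no action's probability ever collapses to zero.
    prefs[action] = max(0.1, prefs[action] + 0.1 * reward)
```

After the loop, the preference for short strides dwarfs that for long strides: behavior that earned positive feedback became more likely, which is the core of the technique, even though real systems replace the preference table with a neural network trained over millions of simulated steps.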
