Researchers train robots to recognize terrain through tactile sensing and visual prediction

Chinese researchers have made progress in environmental cognitive learning and autonomous navigation for legged robots, enabling the machines to recognize terrain through tactile sensing and visual prediction.

Researchers from the Harbin Institute of Technology, taking inspiration from animal behavior, proposed an unsupervised learning framework that lets legged robots learn the physical characteristics of terrain.

The proposed scheme allows robots to interact with the environment and adjust their cognition in real time, endowing them with the ability to adapt.

For ground representation, the research team used tactile parameters from a foot-ground contact model, allowing the robot to gauge how soft and how slippery the ground is by "touching" it.
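The article does not detail the contact model itself. As a minimal sketch, assuming a linear spring for the normal force and Coulomb friction for the tangential force, ground stiffness and a friction coefficient could be estimated from foot-force measurements roughly as follows (all function and variable names here are hypothetical, not taken from the paper):

```python
# Minimal sketch (not the authors' implementation): estimating ground
# stiffness and a friction coefficient from foot-ground contact data,
# assuming F_normal = k * penetration and Coulomb friction during slip.
import numpy as np

def estimate_ground_parameters(penetration, normal_force, tangential_force):
    """Fit stiffness k (softness proxy) and friction coefficient mu.

    penetration      -- foot sinkage into the ground (m), 1-D array
    normal_force     -- measured normal contact force (N), 1-D array
    tangential_force -- measured tangential force during slip (N), 1-D array
    """
    # Least-squares fit of F = k * d; a small k indicates soft ground.
    k = np.dot(penetration, normal_force) / np.dot(penetration, penetration)
    # Coulomb friction estimate: mu ~ |F_t| / F_n averaged over slip samples.
    mu = np.mean(np.abs(tangential_force) / np.clip(normal_force, 1e-6, None))
    return k, mu

# Example with synthetic data: stiff, high-friction ground.
d = np.linspace(0.001, 0.01, 50)
fn = 2.0e4 * d + np.random.normal(0, 5.0, d.size)   # ~20 kN/m stiffness
ft = 0.6 * fn + np.random.normal(0, 2.0, d.size)    # mu ~ 0.6
print(estimate_ground_parameters(d, fn, ft))
```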

The researchers also proposed an unsupervised visual feature extraction method, enabling the robot to compare different terrain textures automatically, without human labeling.
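As an illustration of what label-free texture comparison can look like (an assumption for clarity, not the authors' pipeline), the sketch below describes image patches with a simple gradient-histogram feature and groups them with k-means, so no human annotation is involved:

```python
# Illustrative sketch (assumed approach, not the paper's method): unsupervised
# grouping of terrain patches by texture similarity.
import numpy as np
from sklearn.cluster import KMeans

def texture_feature(patch, bins=16):
    """Histogram of gradient magnitudes as a crude texture descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-6))
    return hist / (hist.sum() + 1e-6)

def cluster_terrains(patches, n_terrains=3):
    """Group unlabeled terrain patches without any human-provided labels."""
    features = np.stack([texture_feature(p) for p in patches])
    return KMeans(n_clusters=n_terrains, n_init=10).fit_predict(features)

# Example with synthetic patches: smooth versus rough "terrain" images.
smooth = [np.full((32, 32), 0.5) + np.random.normal(0, 0.01, (32, 32)) for _ in range(10)]
rough = [np.random.rand(32, 32) for _ in range(10)]
print(cluster_terrains(smooth + rough, n_terrains=2))
```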

Indoor and outdoor experiments on a hexapod robot showed that the robot can extract tactile and visual features of terrain and build cognitive networks of its environment on its own.
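How the two modalities are tied together is not spelled out in the article. One plausible reading of "visual prediction" is that the robot's own tactile measurements act as targets for a visual predictor, so ground properties can be anticipated before a footstep. The sketch below, with hypothetical names and a simple least-squares map, illustrates that idea under those assumptions:

```python
# Illustrative sketch only (assumed self-supervision scheme, not the paper's
# architecture): tactile measurements gathered while walking serve as targets
# for predicting ground properties from visual texture features.
import numpy as np

def fit_visual_to_tactile(visual_features, tactile_params):
    """Least-squares map from visual features to tactile parameters (k, mu)."""
    X = np.hstack([visual_features, np.ones((visual_features.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, tactile_params, rcond=None)
    return W

def predict_tactile(visual_features, W):
    """Predict tactile parameters for terrain the robot has only seen."""
    X = np.hstack([visual_features, np.ones((visual_features.shape[0], 1))])
    return X @ W

# Example: 100 walked-over patches with 16-D visual features and measured (k, mu).
V = np.random.rand(100, 16)
T = V @ np.random.rand(16, 2) + 0.05 * np.random.randn(100, 2)
W = fit_visual_to_tactile(V, T)
print(predict_tactile(V[:3], W))   # predicted (stiffness, friction) before stepping
```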

The researchers published their findings in the journal National Science Review.