Learning indoor robot navigation using visual and sensorimotor map information
Frontiers in Neurorobotics, Volume 7, Number 15, pages 1--14, Oct 2013.
doi: 10.3389/fnbot.2013.00015
As a fundamental research topic, autonomous indoor robot navigation remains a challenge in unconstrained real-world indoor environments. Although many models for map building and planning exist, they are difficult to integrate because of the high levels of noise, dynamics, and complexity involved. Addressing this challenge, this paper describes a neural model for environment mapping and robot navigation based on learning spatial knowledge. Since a person typically moves within a room without colliding with objects, the model learns this spatial knowledge by observing the person's movement with a ceiling-mounted camera. Based on the acquired map, a robot can plan a path and navigate to any given position in the room, adapting the map when it identifies possible obstacles. In addition, salient visual features are learned and stored in the map during navigation. Anchoring visual features in the map enables the robot to find and navigate to a target object when shown an image of it. We implement this model on a humanoid robot and conduct tests in a home-like environment. The results of our experiments show that the learned sensorimotor map enables the robot to master complex navigation tasks.
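The paper itself uses a neural model, but the core idea sketched in the abstract (cells a person has walked through are treated as traversable, and paths are then planned on that learned map) can be illustrated with a simple occupancy grid and breadth-first search. This is a minimal sketch under those assumptions, not the authors' method; the function names `learn_free_space` and `plan_path` are hypothetical.

```python
from collections import deque

def learn_free_space(trajectory, width, height):
    """Mark grid cells visited by the observed person as traversable.

    This stands in for the paper's learned spatial map: anywhere a
    person walked without colliding is assumed to be free space.
    """
    free = [[False] * width for _ in range(height)]
    for x, y in trajectory:
        free[y][x] = True
    return free

def plan_path(free, start, goal):
    """Breadth-first search over traversable cells; returns a cell path
    from start to goal, or None if the goal is unreachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(free) and 0 <= nx < len(free[0])
                    and free[ny][nx] and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None

# Example: an L-shaped observed walk through a 4x4 room
walk = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
grid = learn_free_space(walk, 4, 4)
print(plan_path(grid, (0, 0), (2, 2)))
# → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

Obstacle adaptation, as described in the abstract, would then amount to flipping a cell back to non-traversable when the robot detects an obstruction and replanning.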
@Article{YWW13,
  author    = {Yan, Wenjie and Weber, Cornelius and Wermter, Stefan},
  title     = {Learning indoor robot navigation using visual and sensorimotor map information},
  journal   = {Frontiers in Neurorobotics},
  volume    = {7},
  number    = {15},
  pages     = {1--14},
  year      = {2013},
  month     = {Oct},
  publisher = {Frontiers Media SA},
  doi       = {10.3389/fnbot.2013.00015},
}