A Self-organizing Method for Robot Navigation based on Learned Place and Head-direction cells
International Joint Conference on Neural Networks (IJCNN/WCCI),
pages 5276--5283,
doi: 10.1109/IJCNN.2018.8489348
Jul 2018
This paper describes a neural model for a robot
learning spatial knowledge and navigating on learned place and
head-direction (HD) cell representations. The place and HD cells,
which are trained through unsupervised slow feature analysis
(SFA) from sequences of visual stimuli, provide positional and
directional information for navigation. Based on the ensemble
activity of place cells, the robot learns a topological map of the
environment by extracting the statistical distribution of place
cell activities over the traversable areas, and performs
self-localization based on the map. The robot's heading direction,
which is encoded by the HD cells, serves as a control signal
to adjust its behavior. Action representations supporting state
transitions are learned by memorizing the movements from an
earlier phase in which an experimenter drives the robot to
explore the environment. Given reward signals spreading from a
target location along the topological map, the robot can reach
the goal in a reward-ascending way. This work aims to build a
practical navigation system by simulating the firing activities of
animals' hippocampal cells on a robot platform using its self-contained
sensor. Experimental results from simulation demonstrate that
our system navigates a robot to the desired position smoothly
and effectively.
@InProceedings{ZWW18,
  author    = {Zhou, Xiaomao and Weber, Cornelius and Wermter, Stefan},
  title     = {A Self-organizing Method for Robot Navigation based on Learned Place and Head-direction cells},
  booktitle = {International Joint Conference on Neural Networks (IJCNN/WCCI)},
  pages     = {5276--5283},
  year      = {2018},
  month     = jul,
  doi       = {10.1109/IJCNN.2018.8489348},
}