A Hybrid Planning Strategy through Learning from Vision for Target-directed Navigation
International Conference on Artificial Neural Networks (ICANN),
Volume 11140,
pages 304--311,
doi: 10.1007/978-3-030-01421-6_30
Oct 2018
In this paper, we propose a goal-directed navigation system
consisting of two planning strategies that both rely on vision but
operate on different scales. The first works on a global scale and is
responsible for generating spatial trajectories leading to the
neighborhood of the target. It is a biologically inspired neural
planning and navigation model involving learned representations of
place and head-direction (HD) cells, in which a planning network is
trained to predict the neural activities of these cell representations
given selected action signals. Recursive prediction and optimization
of the continuous action signals generates goal-directed activation
sequences, in which the state and action spaces are represented by the
population activities of place, HD, and motor neurons. To compensate
for the remaining error of this look-ahead model-based planning, a
second planning strategy relies on visual recognition and performs
target-driven reaching on a local scale, so that the robot can reach
the target with finer accuracy. Experimental results show that by
combining these two planning strategies the robot can precisely
navigate to a distant target.
@InProceedings{ZWBW18,
author = {Zhou, Xiaomao and Weber, Cornelius and Bothe, Chandrakant and Wermter, Stefan},
title = {A Hybrid Planning Strategy through Learning from Vision for Target-directed Navigation},
booktitle = {International Conference on Artificial Neural Networks (ICANN)},
volume = {11140},
pages = {304--311},
year = {2018},
month = {Oct},
publisher = {Springer},
doi = {10.1007/978-3-030-01421-6_30},
}