From Exploration to Planning
International Conference on Artificial Neural Networks (ICANN), 2008,
Editors: V. Kurkova, R. Neruda, and J. Koutnik,
pages 740--749,
doi: 10.1007/978-3-540-87536-9_76
Learning and behaviour of mobile robots face limitations.
In reinforcement learning, for example, an agent learns a strategy for reaching
only one specific target point within a state space. However, we can
grasp a visually localized object at any point in space or navigate to
any position in a room. We present a neural network model in which
an agent learns a model of the state space that allows it to reach an
arbitrarily chosen goal via a short route. By randomly exploring the state
space, the agent learns associations between two adjoining states and the
action that links them. Given arbitrary starting and goal positions, route-finding
proceeds in two steps. First, an activation gradient spreads out from
the goal position along the associative connections. Second, the agent
uses the state-action associations to determine the actions that ascend
the gradient toward the goal. All mechanisms are biologically justifiable.
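To make the two-step mechanism concrete, below is a minimal Python sketch on a
hypothetical 4-connected grid world. The grid size, decay factor, exploration
length, and all names (step, spread_gradient, plan, transitions) are
illustrative assumptions, not values or code from the paper.

    # Minimal sketch of the mechanism described in the abstract, on a
    # hypothetical grid world. All parameters are illustrative assumptions.
    import random

    W, H = 8, 8                                   # assumed grid dimensions
    ACTIONS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

    def step(state, action):
        """Apply an action; bump into the border by staying in place."""
        x, y = state
        dx, dy = ACTIONS[action]
        nx, ny = x + dx, y + dy
        return (nx, ny) if 0 <= nx < W and 0 <= ny < H else (x, y)

    # Random exploration: learn which action links two adjoining states.
    random.seed(0)
    transitions = {}                              # (state, next_state) -> action
    state = (0, 0)
    for _ in range(20000):
        action = random.choice(list(ACTIONS))
        nxt = step(state, action)
        if nxt != state:
            transitions[(state, nxt)] = action
        state = nxt

    # Step 1: spread an activation gradient from the goal along the learned
    # associative connections (breadth-first, decaying with each hop).
    def spread_gradient(goal, decay=0.9):
        predecessors = {}
        for pre, post in transitions:
            predecessors.setdefault(post, []).append(pre)
        activation, frontier = {goal: 1.0}, [goal]
        while frontier:
            nxt_frontier = []
            for s in frontier:
                for pre in predecessors.get(s, []):
                    if pre not in activation:
                        activation[pre] = activation[s] * decay
                        nxt_frontier.append(pre)
            frontier = nxt_frontier
        return activation

    # Step 2: from the start, repeatedly take the learned action whose
    # successor state carries the highest activation (ascend the gradient).
    def plan(start, goal):
        activation = spread_gradient(goal)
        route, state = [], start
        while state != goal:
            options = [(post, act) for (pre, post), act in transitions.items()
                       if pre == state]
            state, act = max(options, key=lambda o: activation.get(o[0], 0.0))
            route.append(act)
        return route

    print(plan((0, 0), (7, 5)))                   # a shortest sequence of moves

In the model itself, gradient spreading and gradient ascent are carried out by
neural activation dynamics over learned associative weights; the
dictionary-based sketch above only mirrors that computation at an algorithmic
level.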
@InProceedings{WT08a,
  author    = {Weber, Cornelius and Triesch, Jochen},
  title     = {From Exploration to Planning},
  booktitle = {International Conference on Artificial Neural Networks (ICANN)},
  editor    = {V. Kurkova and R. Neruda and J. Koutnik},
  pages     = {740--749},
  year      = {2008},
  publisher = {Springer},
  doi       = {10.1007/978-3-540-87536-9_76},
}