Goal-Directed Feature Learning
Proc. International Joint Conference on Neural Networks (IJCNN),
pages 3319--3326,
doi: 10.1109/IJCNN.2009.5179064
Jun 2009
Only a subset of available sensory information is
useful for decision making. Classical models of the brain's
sensory system, such as generative models, consider all elements of the sensory stimuli. However, only the action-relevant
components of stimuli need to reach the motor control and
decision-making structures in the brain. To learn these action-relevant stimuli, the part of the sensory system that feeds into
a motor control circuit needs some kind of relevance feedback.
We propose a simple network model consisting of a feature
learning (sensory) layer that feeds into a reinforcement learning
(action) layer. Feedback is established by the reinforcement
learner's temporal difference (delta) term modulating an otherwise Hebbian-like learning rule of the feature learner. Under
this influence, the feature learning network only learns the
relevant features of the stimuli, i.e. those features on which
goal-directed actions are to be based. With the input preprocessed in this manner, the reinforcement learner performs
well in delayed reward tasks. The learning rule approximates
an energy function's gradient descent. The model presents a
link between reinforcement learning and unsupervised learning
and may help to explain how the basal ganglia receive selective
cortical input.
@InProceedings{WT09,
  author    = {Weber, Cornelius and Triesch, Jochen},
  title     = {Goal-Directed Feature Learning},
  booktitle = {Proc. International Joint Conference on Neural Networks (IJCNN)},
  pages     = {3319--3326},
  year      = {2009},
  month     = jun,
  doi       = {10.1109/IJCNN.2009.5179064},
}