Lazy Neural Network Learning for Building Symbolic Transducers
Proceedings of the International Conference on Computational Intelligence and Neuroscience, 1998
Recurrent artificial neural networks can provide essential computational models and systems for bridging the gap between neuroscience and cognitive science.
However, it is important to better understand how a recurrent network learns and what it represents after learning.
This paper describes new dynamic methods for the interpretation of recurrent neural networks.
While most previous work on interpretation has focused on the final state of non-recurrent networks, we focus on the process of learning as well as the final state of recurrent networks.
Analyzing the dynamics of learning in simple recurrent networks, we found a 'lazy learning' strategy which led to neural representations that can be described as symbolic transducers.
@InProceedings{Wer98,
  author    = {Wermter, Stefan},
  title     = {Lazy Neural Network Learning for Building Symbolic Transducers},
  booktitle = {Proceedings of the International Conference on Computational Intelligence and Neuroscience},
  year      = {1998},
}