Recurrent Neural Network Learning for Text Routing
Proceedings of the International Conference on Artificial Neural Networks, pages 898--903, September 1999.
doi: 10.1049/cp:19991226
This paper describes new recurrent plausibility networks with internal recurrent hysteresis connections. These recurrent connections in multiple layers encode the sequential context of word sequences. We show how these networks can support the routing of noisy newswire titles into given categories. We demonstrate the potential of these networks on an 82,339-word corpus from the Reuters newswire, reaching recall and precision rates above 92%. In addition, we carefully analyze the internal representations using cluster analysis and the output representations using a new surface error technique. In general, based on the current recall and precision performance, as well as the detailed analysis, we show that recurrent plausibility networks hold substantial potential for developing robust, learning newswire agents for the Internet.
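As a concrete illustration of the hysteresis recurrence, the sketch below implements one plausible reading of the mechanism named in the abstract: each layer maintains a context vector that mixes the previous hidden activation with its own previous value via a hysteresis factor phi, so deeper layers can retain context over longer stretches of a title. The specific update rule c_t = (1 - phi) * h_{t-1} + phi * c_{t-1}, the layer sizes, and all names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PlausibilityLayer:
    """One hidden layer with a hysteresis context vector (illustrative sketch).

    Assumed update: c_t = (1 - phi) * h_{t-1} + phi * c_{t-1},
    where phi in [0, 1) controls how slowly the context decays.
    """

    def __init__(self, n_in, n_hidden, phi, rng):
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))       # input weights
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context weights
        self.b = np.zeros(n_hidden)
        self.phi = phi
        self.h = np.zeros(n_hidden)   # previous hidden activation
        self.c = np.zeros(n_hidden)   # slowly decaying hysteresis context

    def step(self, x):
        # Fold the previous hidden state into the hysteresis context,
        # then compute the new hidden activation from input and context.
        self.c = (1.0 - self.phi) * self.h + self.phi * self.c
        self.h = sigmoid(self.W_in @ x + self.W_ctx @ self.c + self.b)
        return self.h

# Stack two layers with different hysteresis factors: a larger phi in the
# deeper layer makes its context persist over longer word sequences.
rng = np.random.default_rng(0)
layers = [PlausibilityLayer(50, 32, phi=0.2, rng=rng),
          PlausibilityLayer(32, 16, phi=0.6, rng=rng)]

for word_vec in rng.normal(size=(5, 50)):  # 5 dummy word vectors for a title
    h = word_vec
    for layer in layers:
        h = layer.step(h)
print("final hidden state (first 4 units):", h[:4])
```

In a routing setup along these lines, the final hidden state after the last word of a title would feed a softmax output layer over the news categories; the phi values per layer are hyperparameters.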
@InProceedings{WAP99,
  author    = {Wermter, Stefan and Arevian, Garen and Panchev, Christo},
  title     = {Recurrent Neural Network Learning for Text Routing},
  booktitle = {Proceedings of the International Conference on Artificial Neural Networks},
  pages     = {898--903},
  year      = {1999},
  month     = {Sep},
  doi       = {10.1049/cp:19991226},
}