Meaning Spotting and Robustness of Recurrent Networks

Stefan Wermter, Garen Arevian, Christo Panchev
Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, Volume 3, pages 433--438, doi: 10.1109/IJCNN.2000.861346, Oct 2000
This paper describes and evaluates the behavior of preference-based recurrent networks which process text sequences. First, we train a recurrent plausibility network to learn a semantic classification of the Reuters news title corpus. Then we analyze the robustness and incremental learning behavior of these networks in more detail. We demonstrate that these recurrent networks use their recurrent connections to support incremental processing. In particular, we compare the performance of the real title models with reversed title models and even random title models. We find that the recurrent networks can, even under these severe conditions, provide good classification results. We claim that the network exploits previous context held in its recurrent connections and pursues a meaning spotting strategy, which supports this robust processing.
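
To illustrate the kind of experiment the abstract describes, the following minimal Python/PyTorch sketch shows a simple Elman-style recurrent classifier that reads a title word by word and is evaluated on original, reversed, and randomly shuffled word orders. It is not the authors' recurrent plausibility network; the toy data, class names, and dimensions are illustrative assumptions standing in for the Reuters title corpus.

# Minimal sketch (assumed setup, not the paper's plausibility network): a simple
# recurrent classifier reading a news title word by word, compared on normal,
# reversed, and randomly shuffled word orders. All sizes and data are toy values.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, NUM_CLASSES = 1000, 32, 64, 8

class RecurrentTitleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        # Elman-style recurrence: hidden state carries previous context forward.
        self.rnn = nn.RNN(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, NUM_CLASSES)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); returns class scores at every time step,
        # i.e. an incremental prediction after each word of the title.
        states, _ = self.rnn(self.embed(token_ids))
        return self.out(states)

def accuracy(model, titles, labels):
    # Score the prediction made at the final word of each title.
    with torch.no_grad():
        logits = model(titles)[:, -1, :]
        return (logits.argmax(dim=-1) == labels).float().mean().item()

if __name__ == "__main__":
    # Toy stand-in for the title corpus: random token sequences and labels.
    titles = torch.randint(0, VOCAB_SIZE, (64, 10))
    labels = torch.randint(0, NUM_CLASSES, (64,))
    model = RecurrentTitleClassifier()  # untrained here; training is omitted

    # The robustness comparison from the abstract: the same titles presented in
    # original, reversed, and randomly permuted word order.
    reversed_titles = titles.flip(dims=[1])
    shuffled_titles = titles[:, torch.randperm(titles.size(1))]

    for name, data in [("original", titles),
                       ("reversed", reversed_titles),
                       ("shuffled", shuffled_titles)]:
        print(name, accuracy(model, data, labels))

With a trained model, comparing the three accuracies gives a rough analogue of the paper's real-title versus reversed-title versus random-title comparison.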

 

@InProceedings{WAP00,
  author    = {Wermter, Stefan and Arevian, Garen and Panchev, Christo},
  title     = {Meaning Spotting and Robustness of Recurrent Networks},
  booktitle = {Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium},
  volume    = {3},
  pages     = {433--438},
  year      = {2000},
  month     = {Oct},
  publisher = {IEEE},
  doi       = {10.1109/IJCNN.2000.861346},
}