Surprisal-Based Activation in Recurrent Neural Networks

Proceedings of the 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018), pages 597--602, Apr 2018
Learning hierarchical abstractions from sequences remains a challenging and open problem for Recurrent Neural Networks (RNNs), mainly because of the difficulty of detecting features that span long distances and occur at different frequencies. In this paper, we address this challenge by introducing surprisal-based activation, a novel method that preserves activations contingent on encoding-based self-information. The preserved activations can be considered temporal shortcuts with perfect memory. We evaluate surprisal-based activation on language modelling with the Penn Treebank corpus and find that it improves performance compared to a baseline RNN.
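The quantity at the heart of the method is the surprisal (self-information) of an observed symbol x_t under the model's predictive distribution, s_t = -log p(x_t | x_<t). As a rough illustration of how a surprisal-contingent preservation gate might look, the following Python sketch keeps the previous hidden activation unchanged whenever the incoming symbol's surprisal crosses a threshold, so that activation survives as a temporal shortcut with perfect memory. The hard threshold, the gating direction, and the random weights are assumptions made for illustration only; they are not the exact formulation of the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the weights are random stand-ins for a trained model.
vocab_size, hidden_size = 10, 16
W_xh = rng.normal(scale=0.1, size=(vocab_size, hidden_size))   # input projection
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # recurrent weights
W_hy = rng.normal(scale=0.1, size=(hidden_size, vocab_size))   # output projection

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def step(h_prev, x_idx, probs_prev, threshold=3.0):
    # Surprisal (self-information, in bits) of the observed symbol
    # under the network's previous predictive distribution.
    s = -np.log2(probs_prev[x_idx])
    if s >= threshold:
        # Hypothetical hard gate: preserve the previous activation
        # unchanged, a temporal shortcut with perfect memory.
        h = h_prev
    else:
        # Ordinary Elman-style recurrent update.
        h = np.tanh(W_xh[x_idx] + h_prev @ W_hh)
    probs = softmax(h @ W_hy)  # predictive distribution for the next symbol
    return h, probs, s

h = np.zeros(hidden_size)
probs = np.ones(vocab_size) / vocab_size  # uniform prior before any input
for x in [1, 4, 4, 7, 2]:                 # toy symbol sequence
    h, probs, s = step(h, x, probs)
    print(f"symbol {x}: surprisal {s:.2f} bits")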

 

@InProceedings{AAW18,
  author    = {Alpay, Tayfun and Abawi, Fares and Wermter, Stefan},
  title     = {Surprisal-Based Activation in Recurrent Neural Networks},
  booktitle = {Proceedings of the 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018)},
  pages     = {597--602},
  month     = {Apr},
  year      = {2018},
}