Preserving Activations in Recurrent Neural Networks Based on Surprisal

Neurocomputing, Feb 2019, doi: 10.1016/j.neucom.2018.11.092 (Open Access)
Learning hierarchical abstractions from sequences is a challenging and open problem for Recurrent Neural Networks (RNNs). This is mainly due to the difficulty of detecting features that span long time distances and occur at different frequencies. In this paper, we address this challenge by introducing surprisal-based activation, a novel method that preserves activations and skips updates depending on encoding-based information content. The preserved activations can be regarded as temporal shortcuts with perfect memory. We present a preliminary analysis by evaluating surprisal-based activation on language modelling with the Penn Treebank corpus and find that it can improve performance compared to baseline RNNs and Long Short-Term Memory (LSTM) networks.
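
The abstract describes gating hidden-state updates on the information content (surprisal) of the current input: when surprisal is low, the previous activation is preserved and the update is skipped. The sketch below is only a rough illustration of that idea, not the authors' implementation; the vanilla tanh RNN cell, the toy dimensions, and the `surprisal_threshold` hyperparameter are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of surprisal-gated updates (assumed setup, not the paper's code).
rng = np.random.default_rng(0)
vocab_size, embed_dim, hidden_dim = 50, 16, 32

W_xh = rng.normal(scale=0.1, size=(embed_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_hy = rng.normal(scale=0.1, size=(hidden_dim, vocab_size))
embeddings = rng.normal(scale=0.1, size=(vocab_size, embed_dim))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rnn_step(h, x_emb):
    """Standard tanh RNN update."""
    return np.tanh(x_emb @ W_xh + h @ W_hh)

def surprisal(probs, token):
    """Information content of the observed token under the model: -log2 p(token)."""
    return -np.log2(probs[token] + 1e-12)

surprisal_threshold = 2.0  # assumed hyperparameter (in bits)

h = np.zeros(hidden_dim)
tokens = rng.integers(0, vocab_size, size=20)

for t, token in enumerate(tokens):
    probs = softmax(h @ W_hy)      # predictive distribution before observing token t
    s_t = surprisal(probs, token)
    if s_t < surprisal_threshold:
        # Low surprisal: preserve the previous activation (skip the update),
        # acting as a temporal shortcut with perfect memory of h.
        continue
    h = rnn_step(h, embeddings[token])
```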


@Article{AAW19,
  author    = {Alpay, Tayfun and Abawi, Fares and Wermter, Stefan},
  title     = {Preserving Activations in Recurrent Neural Networks Based on Surprisal},
  journal   = {Neurocomputing},
  number    = {},
  volume    = {},
  pages     = {},
  year      = {2019},
  month     = {Feb},
  publisher = {Elsevier},
  doi       = {10.1016/j.neucom.2018.11.092},
}