Hierarchical goals contextualize local reward decomposition explanations

Finn Rietz, Sven Magg, Fredrik Heintz, Todor Stoyanov, Stefan Wermter, Johannes A. Stork
Neural Computing and Applications, May 2022 (Open Access). DOI: https://doi.org/10.1007/s00521-022-07280-8
One-step reinforcement learning explanation methods account for individual actions but fail to consider the agent’s future behavior, which can make their interpretation ambiguous. We propose to address this limitation by providing hierarchical goals as context for one-step explanations. By considering the current hierarchical goal as context, one-step explanations can be interpreted with higher certainty, as the agent’s future behavior is more predictable. We combine reward decomposition with hierarchical reinforcement learning into a novel explainable reinforcement learning framework, which yields more interpretable, goal-contextualized one-step explanations. With a qualitative analysis of one-step reward decomposition explanations, we first show that their interpretability is indeed limited in scenarios with multiple different optimal policies, a characteristic shared by other one-step explanation methods. Then, we show that our framework retains high interpretability in such cases, as the hierarchical goal can be considered as context for the explanation. To the best of our knowledge, our work is the first to investigate hierarchical goals not as an explanation directly but as additional context for one-step reinforcement learning explanations.
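To make the idea concrete, here is a minimal, hypothetical sketch of goal-contextualized reward decomposition in a tabular setting (Python). It maintains one Q-table per reward component, acts greedily on the summed Q-values, and reports a one-step explanation as the per-component Q-values of the chosen action together with the currently active hierarchical goal. All names (REWARD_COMPONENTS, DecomposedQTable, explain, the "reach_door" goal) are illustrative assumptions, not the paper's implementation.

import numpy as np
from collections import defaultdict

# Hypothetical reward components; the paper's environments define their own.
REWARD_COMPONENTS = ["progress", "safety"]

class DecomposedQTable:
    """Tabular Q-learning with one Q-table per reward component.

    The active hierarchical goal is folded into the state key, so both
    the policy and its one-step explanations are goal-conditioned.
    """

    def __init__(self, n_actions, alpha=0.1, gamma=0.99):
        self.n_actions = n_actions
        self.alpha = alpha  # learning rate
        self.gamma = gamma  # discount factor
        # One table per component: (state, goal) -> vector of action values.
        self.q = {c: defaultdict(lambda: np.zeros(n_actions))
                  for c in REWARD_COMPONENTS}

    def total_q(self, state, goal):
        # The aggregate Q-function is the sum of the component Q-values.
        return sum(self.q[c][(state, goal)] for c in REWARD_COMPONENTS)

    def act(self, state, goal):
        # Greedy action with respect to the summed Q-values.
        return int(np.argmax(self.total_q(state, goal)))

    def update(self, state, goal, action, rewards, next_state):
        # `rewards` maps each component to its scalar reward. The greedy
        # next action is shared across components, so each table tracks
        # its component's share of the same policy's return.
        next_a = self.act(next_state, goal)
        for c in REWARD_COMPONENTS:
            q = self.q[c][(state, goal)]
            target = rewards[c] + self.gamma * self.q[c][(next_state, goal)][next_a]
            q[action] += self.alpha * (target - q[action])

    def explain(self, state, goal):
        # One-step explanation: per-component Q-values of the chosen
        # action, reported together with the active goal as context.
        a = self.act(state, goal)
        return {"goal": goal, "action": a,
                "contributions": {c: float(self.q[c][(state, goal)][a])
                                  for c in REWARD_COMPONENTS}}

# Usage sketch: one update, then a goal-contextualized explanation.
agent = DecomposedQTable(n_actions=4)
agent.update(state=0, goal="reach_door", action=2,
             rewards={"progress": 1.0, "safety": -0.2}, next_state=1)
print(agent.explain(state=0, goal="reach_door"))

Conditioning the tables on (state, goal) is what makes the explanation goal-contextualized: the same state can yield different per-component contributions under different active goals, which is precisely the ambiguity the abstract describes.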


@Article{RMHSWS22,
  author    = {Rietz, Finn and Magg, Sven and Heintz, Fredrik and Stoyanov, Todor and Wermter, Stefan and Stork, Johannes A.},
  title     = {Hierarchical goals contextualize local reward decomposition explanations},
  journal   = {Neural Computing and Applications},
  year      = {2022},
  month     = {May},
  publisher = {Springer Nature},
  doi       = {10.1007/s00521-022-07280-8},
}