Mental Modeling of Reinforcement Learning Agents by Language Models

arXiv:2406.18505, doi: 10.48550/arXiv.2406.18505 - Jun 2024
Can language models faithfully model the intelligence of decision-making agents? Though modern language models already exhibit some reasoning ability, and can in theory express any probability distribution over tokens, it remains underexplored how the world knowledge these pretrained models have memorized can be used to comprehend an agent's behaviour in the physical world. This study empirically examines, for the first time, how well large language models (LLMs) can build a mental model of agents, termed agent mental modelling, by reasoning about an agent's behaviour and its effect on states from the agent's interaction history. This research may unveil the potential of leveraging LLMs to elucidate RL agent behaviour, addressing a key challenge in eXplainable reinforcement learning (XRL). To this end, we propose specific evaluation metrics and test them on selected RL task datasets of varying complexity, reporting our findings on how well LLMs establish such mental models. Our results show that LLMs are not yet capable of fully mentally modelling agents through inference alone; further innovations are needed. This work thus provides new insights into the capabilities and limitations of modern LLMs.
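
As a rough illustration only (not the authors' actual protocol or metrics), the Python sketch below shows one plausible way to probe an LLM's mental model of an agent: serialize an interaction history of (state, action) pairs into a prompt, ask the model to predict the agent's next action in a held-out state, and score next-action accuracy. The prompt format, data layout, and function names are all illustrative assumptions.

# Hypothetical sketch of an "agent mental modelling" probe: an LLM is
# shown an agent's interaction history and asked to predict the next
# action. Prompt format and data layout are illustrative assumptions,
# not the paper's actual evaluation protocol.

from typing import Callable, Iterable

def build_prompt(history: list[tuple[str, str]], query_state: str) -> str:
    """Serialize (state, action) pairs plus a query state into a question."""
    lines = ["You observe an agent acting in an environment."]
    for t, (state, action) in enumerate(history):
        lines.append(f"t={t}: state={state}; action={action}")
    lines.append(f"t={len(history)}: state={query_state}")
    lines.append("Which action does the agent take next? Answer with one word.")
    return "\n".join(lines)

def next_action_accuracy(
    episodes: Iterable[tuple[list[tuple[str, str]], str, str]],
    llm: Callable[[str], str],  # any prompt -> completion function
) -> float:
    """Fraction of steps where the LLM's predicted action matches ground truth."""
    correct = total = 0
    for history, query_state, true_action in episodes:
        prediction = llm(build_prompt(history, query_state))
        correct += int(prediction.strip().lower() == true_action.strip().lower())
        total += 1
    return correct / total if total else 0.0

A state-prediction variant, asking the model to predict an action's effect on the state rather than the next action, could be scored in the same way.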

 

@Article{LZSLW24,
  author  = {Lu, Wenhao and Zhao, Xufeng and Spisak, Josua and Lee, Jae Hee and Wermter, Stefan},
  title   = {Mental Modeling of Reinforcement Learning Agents by Language Models},
  journal = {arXiv:2406.18505},
  year    = {2024},
  month   = {Jun},
  doi     = {10.48550/arXiv.2406.18505},
}