Internally Rewarded Reinforcement Learning
We study a class of reinforcement learning problems in which the reward signals for policy learning are generated by an internal reward model that depends on and is jointly optimized with the policy. This interdependence between the policy and the reward model leads to an unstable learning process: reward signals from an immature reward model are noisy and impede policy learning, while conversely an under-optimized policy impedes the learning of the reward model. We call this learning setting Internally Rewarded Reinforcement Learning (IRRL), as the reward is not provided directly by the environment but internally by a reward model. In this paper, we formally formulate IRRL and present a class of problems that belong to this setting. We theoretically derive and empirically analyze the effect of the reward function in IRRL and, based on these analyses, propose the clipped linear reward function. Experimental results show that the proposed reward function consistently stabilizes training by reducing the impact of reward noise, leading to faster convergence and higher performance than baselines across diverse tasks.
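The sketch below illustrates the IRRL setting described above, not the authors' implementation: a policy reveals observations step by step, a jointly trained classifier acts as the internal reward model, and the policy is updated with REINFORCE using a clipped linear reward. The toy environment, the names `policy` and `reward_model`, and the specific clipped form `max(0, p_true - 1/K)` are illustrative assumptions; the exact reward definition and tasks are given in the paper.

```python
# Minimal IRRL sketch (illustrative, not the paper's code): the policy chooses
# which of D feature "glimpses" to reveal over T steps, and a jointly trained
# classifier (the internal reward model) scores the revealed evidence.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, K, T = 16, 4, 4                      # feature dims, classes, glimpses per episode

policy = nn.Linear(D, D)                # scores which feature to reveal next
reward_model = nn.Linear(D, K)          # classifier used as the internal reward model
opt_pi = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_rm = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def make_batch(n=64):
    """Toy data: class identity is encoded in one informative feature."""
    y = torch.randint(0, K, (n,))
    x = torch.randn(n, D)
    x[torch.arange(n), y] += 3.0
    return x, y

for step in range(200):
    x, y = make_batch()
    mask = torch.zeros_like(x)          # features revealed so far
    log_probs = []
    for _ in range(T):                  # policy reveals one feature per step
        logits = policy(mask * x)
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        mask[torch.arange(x.size(0)), a] = 1.0

    # Internal reward from the jointly trained classifier.
    class_logits = reward_model(mask * x)
    p_true = F.softmax(class_logits, dim=-1)[torch.arange(x.size(0)), y]

    # Clipped linear reward (assumed form): linear in p_true, clipped at zero,
    # which dampens noisy negative signals from an immature reward model.
    reward = torch.clamp(p_true - 1.0 / K, min=0.0).detach()

    # REINFORCE update for the policy; cross-entropy update for the reward model.
    pg_loss = -(torch.stack(log_probs).sum(0) * reward).mean()
    rm_loss = F.cross_entropy(class_logits, y)
    opt_pi.zero_grad(); pg_loss.backward(); opt_pi.step()
    opt_rm.zero_grad(); rm_loss.backward(); opt_rm.step()
```

The key point of the sketch is the interdependence: the reward model only sees observations collected by the policy, while the policy is trained solely on the reward model's (detached) predictions, which is what makes the choice of reward function critical for stability.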
@article{LZLWW23,
  author  = {Li, Mengdi and Zhao, Xufeng and Lee, Jae Hee and Weber, Cornelius and Wermter, Stefan},
  title   = {Internally Rewarded Reinforcement Learning},
  journal = {arXiv preprint arXiv:2302.00270},
  year    = {2023},
  month   = feb,
  doi     = {10.48550/arXiv.2302.00270}
}