Improving Model-Based Reinforcement Learning with Internal State Representations through Self-Supervision
Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021),
doi: 10.1109/IJCNN52387.2021.9534023
- Jul 2021
Using a model of the environment, reinforcement learning agents can plan their future moves and achieve superhuman performance in board games like Chess, Shogi, and Go, while remaining relatively sample-efficient. As demonstrated by the MuZero algorithm, the environment model can even be learned dynamically, generalizing the agent to many more tasks while at the same time achieving state-of-the-art performance. Notably, MuZero uses internal state representations derived from real environment states for its predictions. In this paper, we bind the model's predicted internal state representation to the environment state via two additional terms: a reconstruction model loss and a simpler consistency loss, both of which work independently and without supervision, acting as constraints to stabilize the learning process. Our experiments show that this new integration of the reconstruction model loss and the simpler consistency loss provides a significant performance increase in OpenAI Gym environments. Our modifications also enable self-supervised pretraining for MuZero, so the algorithm can learn about environment dynamics before a goal is made available.
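The idea behind the two auxiliary terms can be illustrated with a minimal sketch. The following hypothetical PyTorch snippet (network names, shapes, and the stop-gradient choice are illustrative assumptions, not the paper's implementation) shows a reconstruction loss that decodes a predicted internal state back into observation space, and a consistency loss that pulls the predicted internal state toward the encoding of the real next observation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for MuZero's networks; sizes are placeholders.
representation = nn.Linear(8, 32)   # h: observation -> internal state
dynamics = nn.Linear(32 + 1, 32)    # g: (internal state, action) -> next internal state
reconstruction = nn.Linear(32, 8)   # decoder: internal state -> reconstructed observation

def auxiliary_losses(obs_t, action_t, obs_t1):
    """Sketch of reconstruction and consistency terms that tie predicted
    internal states to real environment states (not the authors' code)."""
    s_t = representation(obs_t)
    s_t1_pred = dynamics(torch.cat([s_t, action_t], dim=-1))

    # Reconstruction loss: decode the predicted next state back to observation space.
    recon_loss = F.mse_loss(reconstruction(s_t1_pred), obs_t1)

    # Consistency loss: predicted next state should match the encoding of the
    # real next observation; the target is detached in this sketch.
    consistency_loss = F.mse_loss(s_t1_pred, representation(obs_t1).detach())

    return recon_loss, consistency_loss

# Example usage with random tensors standing in for a batch of Gym transitions.
obs_t, action_t, obs_t1 = torch.randn(4, 8), torch.randn(4, 1), torch.randn(4, 8)
recon_loss, consistency_loss = auxiliary_losses(obs_t, action_t, obs_t1)
```

Because neither term requires rewards, such losses could in principle also be optimized on reward-free transitions, which is the basis for the self-supervised pretraining mentioned in the abstract.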
@InProceedings{SWHW21,
  author    = {Scholz, Julien and Weber, Cornelius and Hafez, Burhan and Wermter, Stefan},
  title     = {Improving Model-Based Reinforcement Learning with Internal State Representations through Self-Supervision},
  booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
  year      = {2021},
  month     = {Jul},
  doi       = {10.1109/IJCNN52387.2021.9534023},
}