Language Model-Based Paired Variational Autoencoders for Robotic Language Learning
IEEE Transactions on Cognitive and Developmental Systems
Volume 15, Number 4, pages 1812-1824, Sep 2022.
doi: 10.1109/TCDS.2022.3204452
Human infants learn language while interacting with
their environment in which their caregivers may describe the
objects and actions they perform. Similar to human infants,
artificial agents can learn language while interacting with their
environment. In this work, first, we present a neural model
that bidirectionally binds robot actions and their language
descriptions in a simple object manipulation scenario. Building on
our previous Paired Variational Autoencoders (PVAE) model, we
demonstrate the superiority of the variational autoencoder over
standard autoencoders by experimenting with cubes of different
colours, and by enabling the production of alternative vocabularies.
Additional experiments show that the model's channel-separated
visual feature extraction module can cope with objects
of different shapes. Next, we introduce PVAE-BERT, which
equips the model with a pretrained large-scale language model,
i.e., Bidirectional Encoder Representations from Transformers
(BERT), enabling the model to go beyond comprehending only
the predefined descriptions that the network has been trained on;
the recognition of action descriptions generalises to unconstrained
natural language as the model becomes capable of understanding
unlimited variations of the same descriptions. Our experiments
suggest that using a pretrained language model as the language
encoder allows our approach to scale up for real-world scenarios
with instructions from human users.
@Article{OKWLW22,
  author    = {Özdemir, Ozan and Kerzel, Matthias and Weber, Cornelius and Lee, Jae Hee and Wermter, Stefan},
  title     = {Language Model-Based Paired Variational Autoencoders for Robotic Language Learning},
  journal   = {IEEE Transactions on Cognitive and Developmental Systems},
  volume    = {15},
  number    = {4},
  pages     = {1812--1824},
  year      = {2022},
  month     = sep,
  publisher = {IEEE},
  doi       = {10.1109/TCDS.2022.3204452},
}