Robots Can Multitask Too: Integrating a Memory Architecture and LLMs for Enhanced Cross-Task Robot Action Generation

Hassan Ali , Philipp Allgeuer , Carlo Mazzola , Giulia Belgiovine , Burak Can Kaplan , Lukáš Gajdošech , Stefan Wermter
2024 IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS), Nov 2024. doi: 10.48550/arXiv.2407.13505
Large Language Models (LLMs) have recently been used in robot applications to ground LLM common-sense reasoning in the robot's perception and physical abilities. In humanoid robots, memory also plays a critical role in fostering real-world embodiment and facilitating long-term interactive capabilities, especially in multi-task setups where the robot must remember previous task states, environment states, and executed actions. In this paper, we address the incorporation of memory processes with LLMs for generating cross-task robot actions, while the robot effectively switches between tasks. Our proposed dual-layered architecture features two LLMs, utilizing their complementary skills of reasoning and instruction following, combined with a memory model inspired by human cognition. Our results show a significant improvement in performance over a baseline on five robotic tasks, demonstrating the potential of integrating memory with LLMs to combine the robot's action and perception for adaptive task execution.
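To make the dual-layered design described in the abstract more concrete, the following is a minimal Python sketch, not the authors' implementation: the class names, prompts, and the simple recency-based memory recall are all assumptions. It shows two LLMs with complementary roles (one for reasoning over memory and perception, one for instruction following) sharing an episodic memory of task states, environment states, and executed actions.

# Illustrative sketch only; all names and prompt formats are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Memory:
    """Minimal episodic memory of task states, environment states,
    and executed actions, persisting across task switches."""
    entries: List[str] = field(default_factory=list)

    def store(self, entry: str) -> None:
        self.entries.append(entry)

    def recall(self, k: int = 5) -> List[str]:
        # Return the k most recent entries; a stand-in for a
        # cognitively inspired retrieval mechanism.
        return self.entries[-k:]

@dataclass
class DualLLMAgent:
    reason_llm: Callable[[str], str]    # LLM strong at common-sense reasoning
    instruct_llm: Callable[[str], str]  # LLM strong at instruction following
    memory: Memory = field(default_factory=Memory)

    def act(self, user_request: str, perception: str) -> str:
        context = "\n".join(self.memory.recall())
        # Layer 1: reason over memory, perception, and the request
        # to decide which task is active and what should happen next.
        plan = self.reason_llm(
            f"Memory:\n{context}\nPerception: {perception}\n"
            f"Request: {user_request}\nDecide the active task and next step."
        )
        # Layer 2: turn the plan into a single executable robot action.
        action = self.instruct_llm(
            f"Plan: {plan}\nEmit exactly one robot action command."
        )
        self.memory.store(
            f"request={user_request!r} plan={plan!r} action={action!r}"
        )
        return action

# Usage with trivial stand-in "LLMs" (real models would replace the lambdas):
agent = DualLLMAgent(
    reason_llm=lambda prompt: "continue the stacking task; red block next",
    instruct_llm=lambda prompt: "pick(red_block)",
)
print(agent.act("stack the blocks", "red block visible on table"))

The split reflects the abstract's point that the two models contribute complementary skills: the reasoner handles cross-task context, while the instruction follower produces well-formed action commands.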

 

@InProceedings{AAMBKGW24,
  author    = {Ali, Hassan and Allgeuer, Philipp and Mazzola, Carlo and Belgiovine, Giulia and Kaplan, Burak Can and Gajdošech, Lukáš and Wermter, Stefan},
  title     = {Robots Can Multitask Too: Integrating a Memory Architecture and LLMs for Enhanced Cross-Task Robot Action Generation},
  booktitle = {2024 IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS)},
  year      = {2024},
  month     = {Nov},
  doi       = {10.48550/arXiv.2407.13505},
}