When Robots Get Chatty: Grounding Multimodal Human-Robot Conversation and Collaboration

Proceedings of the International Conference on Artificial Neural Networks, pages 306–321, doi: 10.1007/978-3-031-72341-4_21, Sep 2024. Open Access.
We investigate the use of Large Language Models (LLMs) to equip neural robotic agents with human-like social and cognitive competencies, for the purpose of open-ended human-robot conversation and collaboration. We introduce a modular and extensible methodology for grounding an LLM in the sensory perceptions and capabilities of a physical robot, and integrate multiple deep learning models throughout the architecture as a form of system integration. The integrated models cover functions such as speech recognition, speech generation, open-vocabulary object detection, human pose estimation, and gesture detection, with the LLM serving as the central text-based coordinating unit. The qualitative and quantitative results demonstrate the substantial potential of LLMs for providing emergent cognition and interactive, language-oriented control of robots in a natural and social manner. Video: https://youtu.be/A2WLEuiM3-s.
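As a rough illustration of the coordination scheme the abstract describes, the minimal Python sketch below shows how text-rendered outputs from independent perception models could be serialized into an LLM prompt, with the reply parsed into speech and a robot action. This is not the authors' implementation; the module names, prompt format, reply convention, and query_llm placeholder are all hypothetical assumptions.

# Minimal sketch (not the paper's code) of an LLM as the central
# text-based coordinating unit: perception modules emit text
# observations, and the LLM's text reply is split into speech and
# a robot action. All names and formats here are hypothetical.

from dataclasses import dataclass

@dataclass
class Observation:
    source: str  # e.g. "asr", "object_detector", "gesture_detector"
    text: str    # perception result rendered as natural language

def query_llm(prompt: str) -> str:
    # Placeholder for a call to any chat-style LLM API.
    return "SAY: I can see the red cup. ACT: point(red cup)"

def coordinate(observations: list[Observation]) -> tuple[str, str]:
    # Ground the LLM by serializing all sensory inputs into its prompt.
    prompt = "You control a robot. Current perceptions:\n"
    prompt += "\n".join(f"[{o.source}] {o.text}" for o in observations)
    prompt += "\nReply with 'SAY: <speech>' and optionally 'ACT: <action>'."
    reply = query_llm(prompt)
    speech, _, action = reply.partition("ACT:")
    return speech.removeprefix("SAY:").strip(), action.strip()

if __name__ == "__main__":
    obs = [
        Observation("asr", "The user said: 'Can you see the cup?'"),
        Observation("object_detector", "Visible objects: red cup, book."),
        Observation("gesture_detector", "The user is pointing left."),
    ]
    speech, action = coordinate(obs)
    print("Robot says:", speech)
    print("Robot does:", action)

Keeping every module's output in plain text, as sketched here, is what makes the architecture modular and extensible: any perception or action model can be swapped in as long as its interface can be rendered to and from natural language.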


@InProceedings{AAW24b,
  author    = {Allgeuer, Philipp and Ali, Hassan and Wermter, Stefan},
  title     = {When Robots Get Chatty: Grounding Multimodal Human-Robot Conversation and Collaboration},
  booktitle = {Proceedings of the International Conference on Artificial Neural Networks},
  pages     = {306--321},
  year      = {2024},
  month     = {Sep},
  publisher = {Springer},
  doi       = {10.1007/978-3-031-72341-4_21},
}