Comparing Apples to Oranges: LLM-powered Multimodal Intention Prediction in an Object Categorization Task
Intention-based Human-Robot Interaction (HRI) systems allow robots to perceive and interpret user actions, interact with humans proactively, and adapt to their behavior. Intention prediction is therefore pivotal for natural, interactive collaboration between humans and robots. In this paper, we examine the use of Large Language Models (LLMs) for inferring human intention during a collaborative object categorization task with a physical robot. We introduce a hierarchical approach for interpreting non-verbal user cues, such as hand gestures, body poses, and facial expressions, and for combining them with environment states and verbal cues captured by an existing Automatic Speech Recognition (ASR) system. Our evaluation demonstrates the potential of LLMs to interpret non-verbal cues and to combine this interpretation with their context-understanding capabilities and real-world knowledge in order to support intention prediction during human-robot interaction.
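To illustrate the general idea described above, the following minimal Python sketch shows one plausible way of verbalizing detected non-verbal cues, the environment state, and an ASR transcript into a single LLM prompt for intention prediction. It is not the authors' implementation: the class and function names, cue labels, prompt wording, and the use of an OpenAI chat-completion backend with the "gpt-4" model name are all illustrative assumptions.

"""Hypothetical sketch: fusing multimodal cues into an LLM prompt for
intention prediction (illustrative only, not the paper's released code)."""
from dataclasses import dataclass, field
from openai import OpenAI  # assumed LLM backend; any chat-completion API would work


@dataclass
class ObservedCues:
    """One observation step of the interaction (all fields are example labels)."""
    gesture: str                 # e.g. output of a gesture recognizer
    body_pose: str               # e.g. output of a body-pose classifier
    facial_expression: str       # e.g. output of a facial-expression classifier
    speech: str                  # transcript from the ASR system
    environment: list[str] = field(default_factory=list)  # visible objects


def build_prompt(cues: ObservedCues) -> str:
    """Verbalize non-verbal cues, environment state, and speech into one
    textual description that the LLM can reason over."""
    return (
        "You are assisting a robot in an object categorization task.\n"
        f"Objects on the table: {', '.join(cues.environment)}.\n"
        f"The user's hand gesture: {cues.gesture}.\n"
        f"The user's body pose: {cues.body_pose}.\n"
        f"The user's facial expression: {cues.facial_expression}.\n"
        f"The user said: \"{cues.speech}\".\n"
        "Infer the user's intention and answer with the single object the "
        "robot should act on, or 'unclear' if no intention can be inferred."
    )


def predict_intention(cues: ObservedCues, model: str = "gpt-4") -> str:
    """Query the LLM with the fused multimodal description."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(cues)}],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    cues = ObservedCues(
        gesture="pointing at the red apple",
        body_pose="leaning towards the table",
        facial_expression="neutral",
        speech="Could you put that one with the other fruit?",
        environment=["red apple", "orange", "screwdriver", "mug"],
    )
    print(predict_intention(cues))

A hierarchical variant, as the abstract suggests, could first let dedicated perception modules map raw sensor data to the symbolic cue labels above, and only then hand the verbalized summary to the LLM.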
@article{AAW24,
  author  = {Ali, Hassan and Allgeuer, Philipp and Wermter, Stefan},
  title   = {Comparing Apples to Oranges: LLM-powered Multimodal Intention Prediction in an Object Categorization Task},
  journal = {arXiv:2404.08424},
  year    = {2024},
  month   = {Apr},
  doi     = {10.48550/arXiv.2404.08424},
}