Multi-modal Integration of Speech and Gestures for Interactive Robot Scenarios
IEEE/RSJ International Conference on Intelligent Robots and Systems, Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics
- Oct 2016
Human-Robot Interaction (HRI) has become an increasingly interesting area of study among developmental roboticists, since robot learning can be sped up by parent-like trainers who deliver useful advice, allowing a robot to learn a specific task in less time than it would need when exploring the environment autonomously.
In this regard, the parent-like trainer guides the apprentice robot with actions that enhance its performance, in the same manner as external caregivers may support infants in accomplishing a given task, with the provided support frequently decreasing over time.
This teaching technique has become known as parental scaffolding.
When interacting with their caregivers, infants are subject to different environmental stimuli which can be present in various modalities.
In general terms, it is possible to think of some of these stimuli as guidance that the parent-like trainer delivers to the apprentice agent.
Nevertheless, when more modalities are considered, issues can emerge regarding the interpretation and integration of multi-modal information, especially when multiple sources are conflicting or ambiguous.
As a consequence, the advice may be unclear or misunderstood and hence may lead the apprentice agent to decreased performance when solving a task.
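As an illustration of the integration problem described above (and not the method proposed in the paper), the sketch below shows one simple way conflicting or ambiguous multi-modal advice could be fused: agreeing modalities reinforce each other, while conflicting advice is only accepted if one modality clearly dominates. The modality names, confidence values, and thresholds are assumptions made for this example.

```python
# Illustrative sketch only: confidence-weighted fusion of advice from two
# modalities (speech and gesture). All labels, confidences, and thresholds
# are hypothetical and not taken from the paper.

def integrate_advice(speech_action, speech_conf, gesture_action, gesture_conf,
                     min_conf=0.6, margin=0.2):
    """Return a single piece of advice, or None if the modalities are too
    ambiguous or conflicting to be trusted."""
    # Agreement between modalities: combine confidences and accept if high enough.
    if speech_action == gesture_action:
        combined = 1.0 - (1.0 - speech_conf) * (1.0 - gesture_conf)
        return speech_action if combined >= min_conf else None

    # Conflict: keep the more confident modality only if it clearly dominates.
    if (abs(speech_conf - gesture_conf) >= margin
            and max(speech_conf, gesture_conf) >= min_conf):
        return speech_action if speech_conf > gesture_conf else gesture_action

    # Otherwise the advice is ambiguous; the learner would fall back to
    # autonomous exploration instead of following unclear guidance.
    return None


if __name__ == "__main__":
    # Speech says "grasp" with high confidence, gesture points to "pour".
    print(integrate_advice("grasp", 0.9, "pour", 0.5))   # -> "grasp"
    print(integrate_advice("grasp", 0.7, "pour", 0.65))  # -> None (ambiguous)
```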
@InProceedings{CPW16,
  author    = {Cruz, Francisco and Parisi, German I. and Wermter, Stefan},
  title     = {Multi-modal Integration of Speech and Gestures for Interactive Robot Scenarios},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems, Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics},
  year      = {2016},
  month     = {Oct},
  publisher = {IEEE},
}