Embodied Multi-modal Interaction in Language learning: the EMIL data collection
Proceedings of the ICDL-EpiRob Workshop on Active Vision, Attention, and Learning (ICDL-EpiRob 2018 AVAL),
2 pages, Sep 2018
Humans develop cognitive functions from a body-rational perspective. In particular, infants develop representations through sensorimotor interactions with their environment and through goal-directed actions [1]. This embodiment plays a major role in modeling cognitive functions, from active perception to natural language learning. For the developmental robotics community, which works with humanoid robotic proxies, datasets that provide low-level multi-modal perception during environmental interactions are of particular interest [2].
Related Data Sets: In recent years, many labs have made considerable efforts to provide such datasets, focussing on different research goals while also taking technical limitations into account. Examples include the KIT Motion-Language set for descriptions of whole-body poses [3], the MOD165 set of a gripper-robot with vision, audio, and tactile senses interacting with objects [4], the Core50 set focussing on human perspective and vision [5], and the similar but up-scaled EMMI and iCubWorld sets [6]. However, none of these corpora provide truly continuous multi-modal perception for interaction cases, as we would expect an infant to experience.
In this preview, we introduce the Embodied Multi-modal Interaction in Language learning (EMIL) data collection, an ongoing series of datasets for studying human cognitive functions on developmental robots. Since we aim to utilize resources in tight collaboration with the research community, we propose the first set, on object manipulation, to foster discussions on future directions and needs within the community.
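To illustrate what continuous multi-modal perception during an interaction episode could look like in practice, here is a minimal sketch of a synchronized per-time-step sample record. The field names, modalities, and shapes are hypothetical assumptions for illustration and are not taken from the EMIL specification.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MultiModalSample:
    """One time step of a hypothetical continuous interaction recording.

    All fields are illustrative assumptions, not the actual EMIL format.
    """
    timestamp: float          # seconds since episode start
    rgb_frame: np.ndarray     # camera image, e.g. (H, W, 3) uint8
    audio_chunk: np.ndarray   # raw waveform samples for this time step
    joint_angles: np.ndarray  # proprioception: joint positions in radians
    tactile: np.ndarray       # tactile readings, e.g. fingertip forces


def iter_episode(samples: list[MultiModalSample]):
    """Replay an episode in temporal order, as a learner would perceive it."""
    for sample in sorted(samples, key=lambda s: s.timestamp):
        yield sample
```

The key design point such a record captures is that all modalities are aligned on a shared timeline, so a learning model can consume them as one continuous stream rather than as separately segmented snapshots.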
@InProceedings{HKSW18,
  author    = {Heinrich, Stefan and Kerzel, Matthias and Strahl, Erik and Wermter, Stefan},
  title     = {Embodied Multi-modal Interaction in Language learning: the EMIL data collection},
  booktitle = {Proceedings of the ICDL-EpiRob Workshop on Active Vision, Attention, and Learning (ICDL-EpiRob 2018 AVAL)},
  pages     = {2p},
  year      = {2018},
  month     = {Sep},
}