Robotic Imitation of Human Actions
IEEE International Conference on Development and Learning,
doi: 10.1109/ICDL61372.2024.10644215
- Jun 2024
Imitation allows us to quickly gain an understanding of a new task: through a demonstration, we obtain direct knowledge of which actions need to be performed and which goals they serve. In this paper, we introduce a new approach to imitation learning that tackles the challenges of a robot imitating a human, such as the change in perspective and body schema. Our approach can use a single human demonstration to abstract information about the demonstrated task, and use that information to generalise and replicate it. We enable this ability through a new integration of two state-of-the-art methods: a diffusion action segmentation model to abstract temporal information from the demonstration, and an open-vocabulary object detector for spatial information. Furthermore, we refine the abstracted information and use symbolic reasoning to create an action plan that utilises inverse kinematics, allowing the robot to imitate the demonstrated action.
@InProceedings{SKW24a,
  author    = {Spisak, Josua and Kerzel, Matthias and Wermter, Stefan},
  title     = {Robotic Imitation of Human Actions},
  booktitle = {IEEE International Conference on Development and Learning},
  year      = {2024},
  month     = {Jun},
  doi       = {10.1109/ICDL61372.2024.10644215},
}