FINGeR: Framework for Interactive Neural-based Gesture Recognition
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN '14),
pages 443--447,
Apr 2014
To operate in real-world scenarios, the recognition of human gestures must be adaptive, robust, and fast. Despite the prominent
use of Kinect-like range sensors for demanding visual tasks involving motion, it remains unclear how to process depth information to efficiently
extract the dynamics of hand gestures. We propose a learning framework based on neural evidence for processing visual information. We first
segment the hand and extract its spatiotemporal properties from RGB-D videos.
Shape and motion features are then processed by two parallel streams
of hierarchical self-organizing maps and subsequently combined into a more
robust representation. We provide experimental results showing that multi-cue integration increases recognition rates over a single-cue approach.
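As an informal illustration of the self-organizing map technique mentioned in the abstract (not the FINGeR implementation itself), the sketch below trains one basic SOM per cue on synthetic feature vectors and reads out a joint (shape, motion) best-matching-unit code. The grid size, learning-rate and neighborhood schedules, feature dimensions, and the train_som / bmu_index helpers are assumptions made for illustration only.

import numpy as np

def train_som(data, grid_h=8, grid_w=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a basic self-organizing map (SOM) on row-wise feature vectors.

    Illustrative sketch only: grid size, learning-rate decay, and
    neighborhood schedule are arbitrary choices, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates, used to compute each node's distance to the BMU.
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    for epoch in range(epochs):
        # Exponentially decay learning rate and neighborhood radius.
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in data[rng.permutation(n)]:
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood centered on the BMU.
            grid_dist2 = (ys - bmu[0]) ** 2 + (xs - bmu[1]) ** 2
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
            # Move the BMU and its neighbors toward the input sample.
            weights += lr * h * (x - weights)
    return weights

def bmu_index(weights, x):
    """Return the flat index of the best-matching unit for a sample."""
    dists = np.linalg.norm(weights - x, axis=2)
    return int(np.argmin(dists))

if __name__ == "__main__":
    # Synthetic stand-ins for shape and motion descriptors (one row per frame).
    shape_feats = np.random.rand(200, 12)
    motion_feats = np.random.rand(200, 6)
    # Two parallel maps, one per cue, mirroring the two-stream idea.
    shape_som = train_som(shape_feats)
    motion_som = train_som(motion_feats)
    # A simple joint representation: pair the BMU indices of both cues.
    joint = (bmu_index(shape_som, shape_feats[0]),
             bmu_index(motion_som, motion_feats[0]))
    print("Joint (shape, motion) BMU code for frame 0:", joint)

In this toy setup, the pair of BMU indices stands in for the combined shape-and-motion representation; the paper's hierarchical, multi-layer combination is more elaborate than this single-layer pairing.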
@InProceedings{PBW14,
author = {Parisi, German I. and Barros, Pablo and Wermter, Stefan},
title = {FINGeR: Framework for Interactive Neural-based Gesture Recognition},
booktitle = {European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN '14)},
pages = {443--447},
year = {2014},
month = {Apr},
}