We present a Bag-of-Visual-and-Depth-Words (BoVDW) model for gesture recognition, an extension of the Bag-of-Visual-Words (BoVW) model that benefits from the multimodal fusion of visual and depth features. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late-fusion fashion. The method is integrated into a continuous gesture recognition pipeline, where the Dynamic Time Warping (DTW) algorithm is used to perform a prior segmentation of gestures. Results on public data sets, within our gesture recognition pipeline, show better performance than a standard BoVW model.
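To make the late-fusion idea concrete, the following is a minimal sketch, not the authors' implementation: each modality (RGB and depth) gets its own vocabulary and word histogram, and the per-modality classifier scores are combined at decision level. The vocabulary size k, the fusion weight beta, and all function names are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(local_descriptors, k=100, seed=0):
    # Cluster stacked local descriptors (n_samples x dim) into k words.
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(local_descriptors)

def word_histogram(local_descriptors, vocabulary):
    # Quantise one gesture's descriptors against the vocabulary and
    # return an L1-normalised word-frequency histogram.
    words = vocabulary.predict(local_descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def late_fusion(prob_rgb, prob_depth, beta=0.5):
    # Decision-level fusion: weighted average of the class probabilities
    # produced by two classifiers trained on RGB and depth histograms.
    return beta * np.asarray(prob_rgb) + (1.0 - beta) * np.asarray(prob_depth)

In such a scheme each modality keeps its own codebook and classifier, and only the per-class scores are mixed, so RGB-only and depth-only baselines can be compared directly against the fused model.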
Citation
Hernandez-Vela, A., et al. "BoVDW: Bag-of-Visual-and-Depth-Words for gesture recognition." In: Proceedings of the 21st International Conference on Pattern Recognition. Tsukuba Science City, 2012, pp. 449-452.