Please use this identifier to cite or link to this item:
https://elib.vku.udn.vn/handle/123456789/6178

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Pham, Dinh Tan | - |
| dc.contributor.author | Diem, Cong Hoang | - |
| dc.date.accessioned | 2026-01-19T08:24:49Z | - |
| dc.date.available | 2026-01-19T08:24:49Z | - |
| dc.date.issued | 2026-01 | - |
| dc.identifier.isbn | 978-3-032-00971-5 (p) | - |
| dc.identifier.isbn | 978-3-032-00972-2 (e) | - |
| dc.identifier.uri | https://doi.org/10.1007/978-3-032-00972-2_57 | - |
| dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/6178 | - |
| dc.description | Lecture Notes in Networks and Systems (LNNS, volume 1581); The 14th Conference on Information Technology and Its Applications (CITA 2025); pp. 779-789 | vi_VN |
| dc.description.abstract | The need for human-computer interaction in robotics, virtual and augmented reality, and sign language understanding has made hand gesture recognition an attractive research topic, and numerous methods have been proposed in recent years. This paper proposes a graph-based deep learning model that integrates the CTR-GC module for spatial modeling and the MB-TC module for temporal modeling. The CTR-GC module extracts spatial features and refines the graph topologies: a shared topology serves as a generic prior for all channels and is then fine-tuned according to channel-specific correlations. These correlations are computed for every sample, capturing more intricate connections between vertices. Extensive experiments on the public SHREC dataset show that the proposed method outperforms existing methods. | vi_VN |
| dc.language.iso | en | vi_VN |
| dc.publisher | Springer Nature | vi_VN |
| dc.subject | Gesture recognition | vi_VN |
| dc.subject | Deep learning | vi_VN |
| dc.subject | Graph convolutional network | vi_VN |
| dc.title | Graph-based Deep Learning for Dynamic Hand Gesture Recognition | vi_VN |
| dc.type | Working Paper | vi_VN |
| Appears in Collections: | CITA 2025 (International) | |
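The abstract describes channel-wise topology refinement: a shared adjacency acts as a generic prior and is refined per channel using correlations computed for every sample. As a rough illustration of that idea (not the paper's implementation), here is a minimal NumPy sketch; the names `Wp`, `Wq`, `Wv`, the reduced channel count `R`, and the mixing weight `alpha` are illustrative assumptions.

```python
import numpy as np

def ctr_gc(x, A, Wp, Wq, Wv, alpha=0.5):
    """Sketch of a channel-wise topology-refinement graph convolution.

    x:  (N, C, V) features for V graph vertices (e.g. hand joints)
    A:  (V, V)    shared topology, a generic prior for all channels
    Wp, Wq: (C, R) projections used to estimate vertex correlations
    Wv: (C, R)    value transform applied before aggregation
    alpha is a hypothetical scalar controlling how strongly the
    per-sample correlations refine the shared prior.
    """
    p = np.einsum('ncv,cr->nrv', x, Wp)  # (N, R, V)
    q = np.einsum('ncv,cr->nrv', x, Wq)  # (N, R, V)
    v = np.einsum('ncv,cr->nrv', x, Wv)  # (N, R, V)
    # Per-sample, per-channel correlation between every vertex pair.
    M = np.tanh(p[:, :, :, None] - q[:, :, None, :])  # (N, R, V, V)
    # Refine the shared topology channel-wise with those correlations.
    A_refined = A[None, None] + alpha * M             # (N, R, V, V)
    # Aggregate neighbor features under the refined topologies.
    return np.einsum('nrvu,nru->nrv', A_refined, v)   # (N, R, V)
```

With `alpha=0` the refinement vanishes and, for an identity prior `A`, the layer reduces to the pointwise value transform, which makes the role of the per-sample correlation term easy to check in isolation.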