Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/5830
Full metadata record
DC Field | Value | Language
dc.contributor.author | Vo, Hoang Chuong | -
dc.contributor.author | Vo, Hung Cuong | -
dc.contributor.author | Vo, Ngoc Dạt | -
dc.contributor.author | Nguyen, Trong Cong Thanh | -
dc.contributor.author | Phan, Trong Thanh | -
dc.contributor.author | Ngo, Le Quan | -
dc.date.accessioned | 2025-11-17T02:19:37Z | -
dc.date.available | 2025-11-17T02:19:37Z | -
dc.date.issued | 2025-02 | -
dc.identifier.isbn | 978-981-97-3559-4 | -
dc.identifier.issn | 2367-3389 (e) | -
dc.identifier.uri | https://doi.org/10.1007/978-981-97-3559-4 | -
dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/5830 | -
dc.description | Proceedings of Ninth International Congress on Information and Communication Technology (ICICT 2024), London, Volume 3; pp. 125-135. | vi_VN
dc.description.abstract | The communication of human emotions and intentions is significantly facilitated through the powerful and innate channel of human expression. As deep learning methodologies continue to demonstrate remarkable achievements across various domains, coupled with the proliferation of diverse datasets, facial expression recognition (FER) has evolved from a predominantly laboratory-based challenge into a realm of practical, real-world applications. Deep learning has played an integral part in enriching the facial features extracted from large numbers of faces, which further improves recognition accuracy. Recent methods face two major problems: the first is a lack of sufficient training data, and the second is that many methods cannot cope with difficult conditions such as illumination changes, head poses, and complex backgrounds. In this paper, we propose a CNN-based network to deal with FER in videos. The input frames are fed to a well-known neural architecture (lite-XceptionFCNet) to obtain spatial features; these features then pass through a classifier in the later component of the architecture. We used the confusion matrix and the metrics derived from it, such as accuracy, misclassification rate, precision, recall (sensitivity), and specificity, to evaluate our facial expression recognition model. The system achieved high accuracy on the FER dataset. The proposed system behaved robustly and showed the potential to be applied to real-time facial expression recognition. | vi_VN
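The abstract evaluates the model with metrics derived from the confusion matrix (accuracy, misclassification, precision, recall/sensitivity, specificity). As a minimal illustrative sketch of how those quantities follow from a multi-class confusion matrix, the snippet below uses NumPy; the 3-class example matrix and the function name `confusion_metrics` are hypothetical and not taken from the paper.

```python
import numpy as np

def confusion_metrics(cm):
    """Derive per-class metrics from a square confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j.
    Returns overall accuracy and misclassification rate, plus per-class
    precision, recall (sensitivity), and specificity as arrays.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)              # correctly predicted per class
    fp = cm.sum(axis=0) - tp      # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp      # true members of the class missed
    tn = total - tp - fp - fn     # everything else
    accuracy = tp.sum() / total
    misclassification = 1.0 - accuracy
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)       # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    return accuracy, misclassification, precision, recall, specificity

# Hypothetical 3-class FER confusion matrix (e.g. happy / sad / neutral)
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
acc, mis, prec, rec, spec = confusion_metrics(cm)
```

In the one-vs-rest view used here, specificity for a class counts true negatives among all samples not belonging to that class, which is why `tn` is computed by subtraction from the grand total.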
dc.language.iso | en | vi_VN
dc.publisher | Springer Nature | vi_VN
dc.subject | Expression recognition | vi_VN
dc.subject | Deep Learning | vi_VN
dc.subject | Human | vi_VN
dc.subject | Facial expression | vi_VN
dc.subject | Lite model | vi_VN
dc.title | Facial Expression Recognition: A Lite Deep Learning-Based Approach | vi_VN
dc.type | Working Paper | vi_VN
Appears in Collections: NĂM 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.