Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/5875
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Nguyen, Si Thin
dc.contributor.author: Van, Hung Trong
dc.contributor.author: Pham, U. P. Thao
dc.date.accessioned: 2025-11-17T22:46:47Z
dc.date.available: 2025-11-17T22:46:47Z
dc.date.issued: 2025-05
dc.identifier.isbn: 978-3-031-87061-3
dc.identifier.uri: https://doi.org/10.1007/978-3-031-87061-3_11
dc.identifier.uri: https://elib.vku.udn.vn/handle/123456789/5875
dc.description: Big Data and Data Science Engineering, Studies in Computational Intelligence 1201; pp. 145-156. [vi_VN]
dc.description.abstract: Facial age and gender recognition from images enables intelligent applications across many fields. Although convolutional neural networks (CNNs) are the leading approach, reaching peak performance requires careful tuning of regularization strategies. Building on our previous study, this research extends the model with an iterative parameter that drives batch normalization and dropout, optimizing the weights as the model runs. Compared with the previous study's results, the proposed Model 2 improved on the proposed Model 1's accuracy for gender and age detection by 3% and 2%, respectively. These findings underscore the importance of fine-tuning batch normalization and dropout through a loop parameter when developing robust CNNs. Our research offers valuable perspectives on optimizing CNN architecture and regularization in computer vision (see the illustrative sketch following this record). [vi_VN]
dc.language.iso: en [vi_VN]
dc.publisher: Springer Nature [vi_VN]
dc.subject: Convolutional neural networks [vi_VN]
dc.subject: Batch normalisation [vi_VN]
dc.subject: Dropout configuration [vi_VN]
dc.title: Enhancing Age and Gender Detection Performance in CNNs Through Optimization of Batch Normalization and Dropout Settings with Iterative Parameter [vi_VN]
dc.type: Working Paper [vi_VN]
Appears in Collections: NĂM 2025
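
The abstract above describes tuning batch normalization and dropout through an iterative (loop) parameter, but the record contains no code. The following is a minimal, hypothetical PyTorch sketch of that idea only: the network architecture, the 64x64 input size, the eight age groups, and the candidate grids for BN momentum and dropout rate are all assumptions made for illustration, not details taken from the paper.

# Hypothetical sketch (not the authors' code): a small CNN for joint
# age/gender prediction whose batch-normalization momentum and dropout
# rate are swept by an outer loop, standing in for the "iterative
# parameter" described in the abstract.
import torch
import torch.nn as nn

class AgeGenderCNN(nn.Module):
    def __init__(self, bn_momentum: float = 0.1, dropout_rate: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32, momentum=bn_momentum),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64, momentum=bn_momentum),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.flatten = nn.Flatten()
        self.dropout = nn.Dropout(dropout_rate)
        hidden = 64 * 16 * 16  # feature size for assumed 64x64 RGB inputs
        # Two heads: a binary gender logit pair and an age-group logit
        # vector (8 age groups is an assumption; the record does not say).
        self.gender_head = nn.Linear(hidden, 2)
        self.age_head = nn.Linear(hidden, 8)

    def forward(self, x):
        h = self.dropout(self.flatten(self.features(x)))
        return self.gender_head(h), self.age_head(h)

# The "iterative parameter": loop over candidate regularization settings;
# in practice each configuration would be trained and scored on a
# validation split, with the best-scoring pair kept (training omitted).
if __name__ == "__main__":
    for bn_momentum in (0.05, 0.1, 0.2):
        for dropout_rate in (0.3, 0.5, 0.7):
            model = AgeGenderCNN(bn_momentum, dropout_rate)
            x = torch.randn(4, 3, 64, 64)  # dummy batch of face crops
            gender_logits, age_logits = model(x)
            print(bn_momentum, dropout_rate,
                  gender_logits.shape, age_logits.shape)

Running this prints one line per (momentum, rate) pair with the output shapes; the paper's procedure would instead compare validation accuracy across the configurations and keep the best one.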

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.