Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/5802
Full metadata record
DC Field | Value | Language
dc.contributor.author | Le, Q. Huy | -
dc.contributor.author | Nguyen, Huu Nhat Minh | -
dc.contributor.author | Thwal, Chu Myaet | -
dc.contributor.author | Qiao, Yu | -
dc.contributor.author | Zhang, Chaoning | -
dc.contributor.author | Hong, Choong Seon | -
dc.date.accessioned | 2025-11-13T09:54:14Z | -
dc.date.available | 2025-11-13T09:54:14Z | -
dc.date.issued | 2025-03 | -
dc.identifier.uri | https://doi.org/10.1016/j.neunet.2024.107017 | -
dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/5802 | -
dc.description | Neural Networks; Volume 183, 107017 | vi_VN
dc.description.abstract | Federated learning (FL) enables a decentralized machine learning paradigm for multiple clients to collaboratively train a generalized global model without sharing their private data. Most existing works have focused on designing FL systems for unimodal data, limiting their potential to exploit valuable multimodal data for future personalized applications. Moreover, the majority of FL approaches still rely on labeled data at the client side, which is often constrained by the inability of users to self-annotate their data in real-world applications. In light of these limitations, we propose a novel multimodal FL framework, namely FedMEKT, based on a semi-supervised learning approach to leverage representations from different modalities. To address the challenges of modality discrepancy and labeled data constraints in existing FL systems, our proposed FedMEKT framework comprises local multimodal autoencoder learning, generalized multimodal autoencoder construction, and generalized classifier learning. Bringing this concept into the proposed framework, we develop a distillation-based multimodal embedding knowledge transfer mechanism which allows the server and clients to exchange joint multimodal embedding knowledge extracted from a multimodal proxy dataset. Specifically, our FedMEKT iteratively updates the generalized global encoders with joint multimodal embedding knowledge from participating clients through upstream and downstream multimodal embedding knowledge transfer for local learning. Through extensive experiments on four multimodal datasets, we demonstrate that FedMEKT not only achieves superior global encoder performance in linear evaluation but also guarantees user privacy for personal data and model parameters while demanding less communication cost than other baselines. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | Elsevier | vi_VN
dc.subject | Federated learning | vi_VN
dc.subject | personal data | vi_VN
dc.subject | decentralized machine learning | vi_VN
dc.subject | FL approaches | vi_VN
dc.subject | autoencoder | vi_VN
dc.title | FedMEKT: Distillation-based embedding knowledge transfer for multimodal federated learning | vi_VN
dc.type | Working Paper | vi_VN
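The abstract describes an upstream/downstream exchange in which clients and the server share embedding knowledge computed on a common proxy dataset rather than raw data or model parameters. As a minimal sketch of that idea only (the paper's actual autoencoders, aggregation weights, and distillation losses are not given here), the toy code below uses hypothetical linear "encoders": each client embeds the shared proxy data, the server averages the joint embeddings (upstream), and each client takes a gradient step distilling its own proxy embeddings toward the aggregate (downstream).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes illustrative): 3 clients, a shared proxy set of
# 20 samples with 5 features, and 2-dimensional embeddings.
proxy = rng.normal(size=(20, 5))
clients = [rng.normal(size=(5, 2)) for _ in range(3)]  # linear encoder weights


def embed(weights, data):
    # Stand-in for a client's multimodal encoder applied to proxy data.
    return data @ weights


def aggregate(embedding_list):
    # Upstream transfer: server averages joint embeddings from clients.
    # (FedMEKT's real aggregation scheme may differ; this is an assumption.)
    return np.mean(embedding_list, axis=0)


def distill_step(weights, data, target, lr=0.1):
    # Downstream transfer: one gradient step on a squared-error
    # distillation loss pulling local proxy embeddings toward the target.
    pred = embed(weights, data)
    grad = data.T @ (pred - target) / len(data)
    return weights - lr * grad


def spread(weight_list):
    # Max deviation of any client's proxy embeddings from their mean:
    # a rough measure of how far apart the clients' representations are.
    embs = [embed(w, proxy) for w in weight_list]
    mean = np.mean(embs, axis=0)
    return max(float(np.abs(e - mean).max()) for e in embs)


before = spread(clients)
for _ in range(10):  # communication rounds
    global_emb = aggregate([embed(w, proxy) for w in clients])  # upstream
    clients = [distill_step(w, proxy, global_emb) for w in clients]  # downstream
after = spread(clients)

print(f"embedding spread: {before:.3f} -> {after:.3f}")
```

Because only proxy-set embeddings cross the network, neither private data nor full model parameters are exchanged, which is the communication and privacy point the abstract makes; the linear encoders and plain averaging here are simplifications, not the paper's method.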
Appears in Collections: NĂM 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.