Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/3988
Full metadata record
dc.contributor.author: Tun, Ye Lin
dc.contributor.author: Nguyen, Huu Nhat Minh
dc.contributor.author: Thwal, Chu Myaet
dc.contributor.author: Choi, Jinwoo
dc.date.accessioned: 2024-07-29T08:20:08Z
dc.date.available: 2024-07-29T08:20:08Z
dc.date.issued: 2023-08
dc.identifier.uri: https://doi.org/10.1016/j.neunet.2023.06.010
dc.identifier.uri: https://elib.vku.udn.vn/handle/123456789/3988
dc.description: Neural Networks 165 (2023); pp. 689-704
dc.description.abstract: Federated learning (FL) is a promising approach that enables distributed clients to collaboratively train a global model while preserving their data privacy. However, FL often suffers from data heterogeneity problems, which can significantly affect its performance. To address this, clustered federated learning (CFL) has been proposed to construct personalized models for different client clusters. One effective client clustering strategy is to allow clients to choose their own local models from a model pool based on their performance. However, without pre-trained model parameters, such a strategy is prone to clustering failure, in which all clients choose the same model. Unfortunately, collecting a large amount of labeled data for pre-training can be costly and impractical in distributed environments. To overcome this challenge, we leverage self-supervised contrastive learning to exploit unlabeled data for the pre-training of FL systems. Together, self-supervised pre-training and client clustering can be crucial components for tackling the data heterogeneity issues of FL. Leveraging these two crucial strategies, we propose contrastive pre-training-based clustered federated learning (CP-CFL) to improve the model convergence and overall performance of FL systems. In this work, we demonstrate the effectiveness of CP-CFL through extensive experiments in heterogeneous FL settings, and present various interesting observations.
dc.language.iso: en
dc.publisher: Elsevier Ltd
dc.subject: Federated learning
dc.subject: data heterogeneity
dc.subject: contrastive learning
dc.subject: unlabeled data
dc.title: Contrastive encoder pre-training-based clustered federated learning for heterogeneous data
dc.type: Working Paper
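
The abstract describes two components that a reader may want to see concretely. The first is self-supervised contrastive pre-training of the encoder on unlabeled client data. The sketch below is a minimal SimCLR-style NT-Xent loss in PyTorch, offered only as an illustration of the general technique; the record does not specify the paper's exact objective, augmentations, or temperature, so the function name nt_xent_loss and the default temperature of 0.5 are assumptions.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1[i] and z2[i] are embeddings of two augmented views of sample i;
    # every other embedding in the batch serves as a negative.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # never match a view to itself
    # The positive for row i is row i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

In a federated pre-training round, each client would minimize this loss on two augmented views of its own unlabeled batches, and the server would aggregate the resulting encoder weights, for example by FedAvg-style averaging.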
Appears in Collections: NĂM 2023
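
The second component is the clustering strategy the abstract mentions, in which each client picks its local model from a shared model pool based on performance, implicitly grouping clients that prefer the same model. Below is a minimal sketch of that selection step, assuming average cross-entropy on local labeled data is the selection criterion; select_model and model_pool are hypothetical names, not identifiers from the paper.

import torch
import torch.nn as nn

def select_model(model_pool, loader, device="cpu"):
    # Return the index of the pool model with the lowest average loss
    # on this client's local data.
    criterion = nn.CrossEntropyLoss()
    losses = []
    for model in model_pool:
        model.eval()
        total, count = 0.0, 0
        with torch.no_grad():
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                total += criterion(model(x), y).item() * y.size(0)
                count += y.size(0)
        losses.append(total / max(count, 1))
    return min(range(len(losses)), key=losses.__getitem__)

As the abstract notes, without pre-trained parameters such a selection is prone to clustering failure, with all clients choosing the same model; contrastive pre-training is what makes the selection informative.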
