Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/2160
Full metadata record
DC Field | Value | Language
dc.contributor.author | Le, Huy Q. | -
dc.contributor.author | Shin, Jong Hoon | -
dc.contributor.author | Nguyen, Huu Nhat Minh | -
dc.contributor.author | Hong, Choong Seon | -
dc.date.accessioned | 2022-06-21T08:20:46Z | -
dc.date.available | 2022-06-21T08:20:46Z | -
dc.date.issued | 2021-09 | -
dc.identifier.citation | https://doi.org/10.23919/APNOMS52696.2021.9562670 | vi_VN
dc.identifier.isbn | 978-1-6654-3174-3 | -
dc.identifier.issn | 2576-8565 | -
dc.identifier.uri | http://elib.vku.udn.vn/handle/123456789/2160 | -
dc.description | 2021 22nd Asia-Pacific Network Operations and Management Symposium (APNOMS) | vi_VN
dc.description.abstract | Federated Learning has emerged as a prominent collaborative learning approach among machine learning techniques. The framework provides a communication-efficient and privacy-preserving solution in which a group of users interacts with a server to collaboratively train a powerful global model without exchanging the users' raw data. However, federated learning can incur high communication cost when exchanging large model parameters, and training such a large model on-device is an obstacle given the battery limitations of mobile devices. To address this, we propose federated learning with bi-level distillation (FedBD). The key idea is to exchange soft targets between the server and clients instead of transferring model parameters. The exchanged knowledge is constructed from the prediction outcomes on a shared reference dataset. By interchanging the knowledge of the learning models, our algorithm reduces both communication and computation costs. The proposed mechanism also allows the server and learning agents to use different model architectures. Experiments show that the proposed method achieves comparable or even slightly higher accuracy than the FedAvg algorithm on an image classification task while using fewer communication resources and less power. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | IEEE | vi_VN
dc.subject | Performance evaluation | vi_VN
dc.subject | Training | vi_VN
dc.subject | Costs | vi_VN
dc.subject | Computational modeling | vi_VN
dc.subject | Collaborative work | vi_VN
dc.subject | Prediction algorithms | vi_VN
dc.subject | Classification algorithms | vi_VN
dc.title | Distilling Knowledge in Federated Learning | vi_VN
dc.title.alternative | Huy Q. Le, J. H. Shin, Minh N. H. Nguyen and C. S. Hong* | vi_VN
dc.type | Working Paper | vi_VN
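The abstract describes FedBD's core mechanism: the server and clients exchange soft targets, i.e. softened predictions on a shared reference dataset, instead of model parameters, and each side distills the aggregated predictions into its own model. The sketch below is a minimal, hypothetical illustration of that exchange, not the authors' implementation: it assumes PyTorch, assumes a simple mean of the clients' soft targets as the aggregation rule, and omits local training on private data; names such as `local_soft_targets`, `distill`, `federated_round`, and `reference_loader` are illustrative.

```python
import torch
import torch.nn.functional as F

def local_soft_targets(model, reference_loader, temperature=3.0):
    """Softened predictions on the shared reference dataset (sent instead of weights)."""
    model.eval()
    outputs = []
    with torch.no_grad():
        for x, _ in reference_loader:   # loader must not shuffle, so batches stay aligned
            outputs.append(F.softmax(model(x) / temperature, dim=1))
    return torch.cat(outputs)

def distill(model, reference_loader, teacher_targets, temperature=3.0, lr=1e-3):
    """Fit a model to aggregated soft targets via KL-divergence distillation."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    offset = 0
    for x, _ in reference_loader:
        teacher = teacher_targets[offset:offset + x.size(0)]
        offset += x.size(0)
        student_log_probs = F.log_softmax(model(x) / temperature, dim=1)
        loss = F.kl_div(student_log_probs, teacher, reduction="batchmean") * temperature ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def federated_round(server_model, client_models, reference_loader):
    """One communication round: clients send soft targets, everyone distills the average."""
    client_targets = [local_soft_targets(m, reference_loader) for m in client_models]
    aggregated = torch.stack(client_targets).mean(dim=0)    # assumed aggregation: simple mean
    distill(server_model, reference_loader, aggregated)     # server-side distillation
    for m in client_models:
        distill(m, reference_loader, aggregated)            # client-side distillation
    return aggregated
```

Because only per-sample class probabilities cross the network, the payload scales with the size of the reference dataset and the number of classes rather than with the model size, which is consistent with the abstract's claims of lower communication cost and support for heterogeneous server and client architectures.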
Appears in Collections: NĂM 2021

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.