Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/3989
Full metadata record
DC Field | Value | Language
dc.contributor.author | Thwal, Chu Myaet | -
dc.contributor.author | Nguyen, Huu Nhat Minh | -
dc.contributor.author | Tun, Ye Lin | -
dc.contributor.author | Kim, Seong Tae | -
dc.contributor.author | Thai, My T. | -
dc.contributor.author | Hong, Choong Seon | -
dc.date.accessioned | 2024-07-29T08:28:21Z | -
dc.date.available | 2024-07-29T08:28:21Z | -
dc.date.issued | 2024-02 | -
dc.identifier.uri | https://doi.org/10.1016/j.neunet.2023.11.044 | -
dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/3989 | -
dc.description | Neural Networks; Volume 170; pp. 635-649 | vi_VN
dc.description.abstract | Federated learning (FL) has emerged as a promising approach to collaboratively train machine learning models across multiple edge devices while preserving privacy. The success of FL hinges on the efficiency of participating models and their ability to handle the unique challenges of distributed learning. While several variants of Vision Transformer (ViT) have shown great potential as alternatives to modern convolutional neural networks (CNNs) for centralized training, the unprecedented size and higher computational demands hinder their deployment on resource-constrained edge devices, challenging their widespread application in FL. Since client devices in FL typically have limited computing resources and communication bandwidth, models intended for such devices must strike a balance between model size, computational efficiency, and the ability to adapt to the diverse and non-IID data distributions encountered in FL. To address these challenges, we propose OnDev-LCT: Lightweight Convolutional Transformers for On-Device vision tasks with limited training data and resources. Our models incorporate image-specific inductive biases through the LCT tokenizer by leveraging efficient depthwise separable convolutions in residual linear bottleneck blocks to extract local features, while the multi-head self-attention (MHSA) mechanism in the LCT encoder implicitly facilitates capturing global representations of images. Extensive experiments on benchmark image datasets indicate that our models outperform existing lightweight vision models while having fewer parameters and lower computational demands, making them suitable for FL scenarios with data heterogeneity and communication bottlenecks. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | Elsevier Ltd | vi_VN
dc.subject | Federated learning | vi_VN
dc.subject | machine learning | vi_VN
dc.subject | distributed learning | vi_VN
dc.subject | Vision Transformer | vi_VN
dc.subject | neural networks | vi_VN
dc.subject | widespread application | vi_VN
dc.subject | communication bandwidth | vi_VN
dc.subject | data distributions | vi_VN
dc.subject | training data | vi_VN
dc.subject | local features | vi_VN
dc.subject | depthwise separable convolutions | vi_VN
dc.subject | data heterogeneity | vi_VN
dc.title | OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning | vi_VN
dc.type | Working Paper | vi_VN
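
The abstract above describes the OnDev-LCT design only at a high level: a convolutional tokenizer built from depthwise separable convolutions inside residual linear bottleneck blocks, followed by a multi-head self-attention (MHSA) encoder. The sketch below illustrates that general structure in PyTorch. It is a minimal, hypothetical rendering based solely on the abstract; the class names (LinearBottleneck, LCTSketch) and all layer sizes are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a lightweight convolutional transformer,
# loosely following the abstract: a depthwise-separable bottleneck
# tokenizer followed by a multi-head self-attention encoder.
import torch
import torch.nn as nn


class LinearBottleneck(nn.Module):
    """Residual linear bottleneck block using depthwise separable convolutions."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),   # pointwise expand
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden, bias=False),                     # depthwise conv
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),   # linear projection
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection


class LCTSketch(nn.Module):
    """Convolutional tokenizer + transformer (MHSA) encoder + classifier head."""

    def __init__(self, num_classes: int = 10, dim: int = 64,
                 depth: int = 2, heads: int = 4):
        super().__init__()
        # Tokenizer: strided conv stem followed by bottleneck blocks.
        self.tokenizer = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(dim),
            nn.ReLU6(inplace=True),
            LinearBottleneck(dim),
            LinearBottleneck(dim),
        )
        # Encoder: standard transformer layers with multi-head self-attention.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 2,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feats = self.tokenizer(x)                   # (B, dim, H', W')
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H'*W', dim) token sequence
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))        # mean-pool tokens, classify


if __name__ == "__main__":
    model = LCTSketch(num_classes=10)
    logits = model(torch.randn(2, 3, 32, 32))       # e.g. CIFAR-sized input
    print(logits.shape)                             # torch.Size([2, 10])
```

In a federated setting such as the one studied in the paper, a compact model of this kind would typically be replicated on each client and trained with an aggregation scheme like FedAvg; the sketch above covers only the model architecture, not the federated training loop.
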
Appears in Collections: NĂM 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.