Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/5803
Title: Cross-modal prototype based multimodal federated learning under severely missing modality
Authors: Le, Q. Huy
Thwal, Chu Myaet
Qiao, Yu
Tun, Ye Lin
Nguyen, Huu Nhat Minh
Huh, Eui-Nam
Hong, Choong Seon
Keywords: federated learning
decentralized machine learning
data heterogeneity
autonomous driving
absence
regularization
Issue Date: Oct-2025
Publisher: Elsevier
Abstract: Multimodal federated learning (MFL) has emerged as a decentralized machine learning paradigm, allowing multiple clients with different modalities to collaborate on training a global model across diverse data sources without sharing their private data. However, challenges such as data heterogeneity and severely missing modalities pose crucial hindrances to the robustness of MFL, significantly impacting the performance of the global model. Missing modalities in real-world applications, such as autonomous driving, often arise from factors like sensor failures, leading to knowledge gaps during the training process. Specifically, the absence of a modality introduces misalignment during the local training phase, stemming from zero-filling on clients with missing modalities. Consequently, achieving robust generalization in the global model becomes imperative, especially when dealing with clients that have incomplete data. In this paper, we propose Multimodal Federated Cross Prototype Learning (MFCPL), a novel approach for MFL under severely missing modalities. MFCPL leverages complete prototypes to provide diverse modality knowledge at the modality-shared level through cross-modal regularization and at the modality-specific level through a cross-modal contrastive mechanism. Additionally, our approach introduces cross-modal alignment to regularize modality-specific features, thereby enhancing overall performance, particularly in scenarios involving severely missing modalities. Through extensive experiments on four multimodal datasets, we demonstrate the effectiveness of MFCPL in mitigating the challenges of data heterogeneity and severely missing modalities while improving the overall performance and robustness of MFL.
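To make the idea in the abstract concrete, the sketch below illustrates, in simplified form, how prototype-based regularization and cross-modal alignment can anchor a zero-filled (missing) modality during local training. It is a minimal, hypothetical example, not the authors' MFCPL implementation; the encoder names, loss weights, and dimensions are illustrative assumptions.

```python
# Hypothetical sketch of cross-modal prototype regularization under a
# missing modality (zero-filled input). Names, shapes, and weights are
# illustrative assumptions, not the MFCPL code from the paper.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, feat_dim, batch = 4, 16, 8

# Global class prototypes; in a real MFL round these would be aggregated by the server.
global_prototypes = F.normalize(torch.randn(num_classes, feat_dim), dim=-1)

# Local encoders for two modalities (e.g. image and audio); plain linear layers here.
enc_img = torch.nn.Linear(32, feat_dim)
enc_aud = torch.nn.Linear(20, feat_dim)

x_img = torch.randn(batch, 32)
x_aud = torch.zeros(batch, 20)          # audio missing on this client: zero-filled input
labels = torch.randint(0, num_classes, (batch,))

f_img = F.normalize(enc_img(x_img), dim=-1)
f_aud = F.normalize(enc_aud(x_aud), dim=-1)

def proto_contrastive(feats, labels, protos, tau=0.5):
    """Cross-entropy over prototype similarities: pulls each feature toward
    the prototype of its class and away from the other class prototypes."""
    logits = feats @ protos.t() / tau
    return F.cross_entropy(logits, labels)

# Modality-shared regularization: both modalities align to the same global prototypes.
loss_shared = (proto_contrastive(f_img, labels, global_prototypes)
               + proto_contrastive(f_aud, labels, global_prototypes))

# Cross-modal alignment: keep the two views of each sample close, so the
# zero-filled branch is still anchored by the observed modality.
loss_align = 1.0 - F.cosine_similarity(f_img, f_aud, dim=-1).mean()

loss = loss_shared + 0.1 * loss_align   # 0.1 is an arbitrary illustrative weight
loss.backward()
print(f"shared={loss_shared.item():.3f}  align={loss_align.item():.3f}")
```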
Description: Information Fusion, Volume 122, Article 103219
URI: https://doi.org/10.1016/j.inffus.2025.103219
https://elib.vku.udn.vn/handle/123456789/5803
Appears in Collections:NĂM 2025
