Please use this identifier to cite or link to this item: https://elib.vku.udn.vn/handle/123456789/4266
Full metadata record
DC Field | Value | Language
dc.contributor.author | Song, Wenhao | -
dc.contributor.author | Gao, Mingliang | -
dc.date.accessioned | 2024-12-03T09:37:52Z | -
dc.date.available | 2024-12-03T09:37:52Z | -
dc.date.issued | 2024-11 | -
dc.identifier.isbn | 978-3-031-74126-5 | -
dc.identifier.uri | https://elib.vku.udn.vn/handle/123456789/4266 | -
dc.identifier.uri | https://doi.org/10.1007/978-3-031-74127-2_3 | -
dc.description | Lecture Notes in Networks and Systems (LNNS, volume 882); The 13th Conference on Information Technology and Its Applications (CITA 2024); pp. 28-37. | vi_VN
dc.description.abstract | Infrared and visible image fusion seeks to retain complementary information from source images and generate a comprehensive image. Most fusion methods ignore the detailed information in the frequency domain. To address this problem, we propose a Frequency-Spatial Feature Fusion Network (F3Net) in this work. The F3Net consists of three modules, namely the Frequency-Spatial Feature Extraction Module (FSFEM), the Feature Fusion Module (FFM), and the Image Reconstruction Module (IRM). First, the FSFEM is built to extract complementary information from the source images separately in the frequency and spatial domains. Then, the FFM is introduced to fuse the features of the frequency and spatial domains. Finally, the fused image is reconstructed by the IRM. Comprehensive experiments demonstrate that the F3Net outperforms state-of-the-art (SOTA) methods both subjectively and objectively. | vi_VN
dc.language.iso | en | vi_VN
dc.publisher | Springer Nature | vi_VN
dc.subject | Frequency-Spatial | vi_VN
dc.subject | Feature Fusion | vi_VN
dc.subject | Network for Infrared | vi_VN
dc.title | Frequency-Spatial Feature Fusion Network for Infrared and Visible Image Fusion | vi_VN
dc.type | Working Paper | vi_VN
Appears in Collections: CITA 2024 (International)
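The abstract describes a three-stage pipeline: extract features in the frequency and spatial domains (FSFEM), fuse them (FFM), and reconstruct the fused image (IRM). The paper's F3Net is a trained network and its internals are not given here; the following is only a toy, non-learned numpy sketch of the general frequency-spatial fusion idea, with every function name and fusion rule being an illustrative assumption rather than the authors' method.

```python
import numpy as np

def extract_features(img):
    """Toy stand-in for the FSFEM: one frequency-domain and one
    spatial-domain feature per image (both rules are assumptions)."""
    # Frequency-domain feature: 2-D FFT coefficients of the image.
    freq = np.fft.fft2(img)
    # Spatial-domain feature: gradient magnitude as a crude detail map.
    gy, gx = np.gradient(img)
    spatial = np.hypot(gx, gy)
    return freq, spatial

def fuse(ir, vis, detail_weight=0.1):
    """Toy stand-in for FFM + IRM: fuse two same-sized grayscale
    images (infrared and visible) in both domains and reconstruct."""
    f_ir, s_ir = extract_features(ir)
    f_vis, s_vis = extract_features(vis)
    # Fusion rule (assumed): keep the per-frequency coefficient with
    # the larger magnitude, and average the spatial detail maps.
    freq_fused = np.where(np.abs(f_ir) >= np.abs(f_vis), f_ir, f_vis)
    detail = 0.5 * (s_ir + s_vis)
    # Reconstruction: inverse FFT back to the spatial domain, plus a
    # small injection of the averaged spatial detail.
    recon = np.real(np.fft.ifft2(freq_fused))
    return recon + detail_weight * detail
```

This illustrates only why fusing in the frequency domain can preserve detail a purely spatial average would blur; the actual F3Net learns its extraction and fusion operators end to end.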

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.