Sui, Chenhong and Yang, Guobin and Hong, Danfeng and Wang, Haipeng and Yao, Jing and Atkinson, Peter M. and Ghamisi, Pedram (2024) IG-GAN: Interactive Guided Generative Adversarial Networks for Multimodal Image Fusion. IEEE Transactions on Geoscience and Remote Sensing, 62. ISSN 0196-2892
Abstract
Multimodal image fusion has recently garnered increasing interest in the field of remote sensing. By leveraging the complementary information in different modalities, fused results can characterize objects of interest more effectively, increasing the chance of a comprehensive and accurate perception of the scene. Unfortunately, most existing fusion methods extract modality-specific features independently, without considering inter-modal alignment and complementarity, leading to a suboptimal fusion process. To address this issue, we propose a novel interactive guided generative adversarial network, named IG-GAN, for the task of multimodal image fusion. IG-GAN comprises guided dual streams tailored for enhanced learning of details and content, as well as cross-modal consistency. Specifically, a details-guided interactive running-in module and a content-guided interactive running-in module are developed, with the stronger modality serving as guidance for detail richness or content integrity and the weaker one assisting. To fully integrate multi-granularity features from the two modalities, a hierarchical fusion and reconstruction branch is established: a shallow interactive fusion module followed by a multi-level interactive fusion module aggregates multi-level local and long-range features, and a high-level interactive fusion and reconstruction module handles feature decoding and fused-image generation. Additionally, to enable the fusion network to generate fused images with complete content, sharp edges, and high fidelity without supervision, a loss function that drives the mutual game between the generator and two discriminators is formulated. Comparative experiments with fourteen state-of-the-art methods are conducted on three datasets. Qualitative and quantitative results indicate that IG-GAN exhibits clear superiority in terms of both visual quality and quantitative metrics. Moreover, experiments on two RGB-IR object detection datasets demonstrate that IG-GAN can enhance object detection accuracy by integrating complementary information from different modalities. The code will be available at https://github.com/flower6top.
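The abstract frames unsupervised fusion as a mutual game between one generator and two discriminators, one per modality. The sketch below illustrates how such a game is typically wired in PyTorch; `TinyFusionGenerator`, `TinyDiscriminator`, and all shapes and losses are hypothetical stand-ins for illustration only, not the IG-GAN implementation (which the authors state will be released at the linked repository).

```python
# Minimal sketch (assumptions, not the authors' code): a dual-stream generator
# fuses two single-channel modalities, and two per-modality discriminators push
# the fused image to stay faithful to both inputs.
import torch
import torch.nn as nn

class TinyFusionGenerator(nn.Module):
    """Toy stand-in for a guided dual-stream fusion generator."""
    def __init__(self):
        super().__init__()
        # One encoder stream per modality (e.g., visible and infrared).
        self.enc_a = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # Fusion/reconstruction head producing a single fused image.
        self.dec = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x_a, x_b):
        feats = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=1)
        return torch.tanh(self.dec(feats))

class TinyDiscriminator(nn.Module):
    """Per-modality critic scoring whether an image matches its modality."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one realness logit per image

gen, d_a, d_b = TinyFusionGenerator(), TinyDiscriminator(), TinyDiscriminator()
bce = nn.BCEWithLogitsLoss()
x_a, x_b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # dummy inputs
fused = gen(x_a, x_b)
real, fake = torch.ones(4), torch.zeros(4)

# Each discriminator learns to separate its own modality from the fused output
# (detach so these losses do not update the generator).
loss_d = (bce(d_a(x_a), real) + bce(d_a(fused.detach()), fake)
          + bce(d_b(x_b), real) + bce(d_b(fused.detach()), fake))

# The generator plays against both critics simultaneously, so the fused image
# must retain characteristics of both modalities (content and detail).
loss_g = bce(d_a(fused), real) + bce(d_b(fused), real)
```

In a full system, `loss_g` would be combined with content and gradient fidelity terms, and the generator and discriminators would be updated alternately; this toy version only shows the two-discriminator adversarial structure the abstract describes.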