Dual Generative Adversarial Network for Infrared and Visible Image Fusion
DOI: https://doi.org/10.54097/jwm4va34
Keywords: Image fusion, Infrared image, Visible image, Generative adversarial network
Abstract
The objective of infrared and visible image fusion is to integrate the prominent targets of the infrared image and the background information of the visible image into a single image. Many deep learning-based approaches have been applied to image fusion. However, most methods cannot sufficiently extract the distinct features of images from different modalities, so the fusion results lean towards one modality while losing information from the other. To address this, we develop a novel infrared and visible image fusion method based on generative adversarial networks. We design two sets of generative adversarial networks. The first set performs preliminary feature extraction: it generates an intermediate result whose features are discriminated against the infrared image. The second set performs deep feature extraction: it generates the fused image whose features are discriminated against the visible image. Through the adversarial training of the two generator-discriminator pairs, the diverse features of both modalities are extracted comprehensively. Extensive qualitative and quantitative experiments show that our approach retains more information from the source images and achieves higher fusion quality than seven other prominent methods.
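To make the dual-GAN scheme described above concrete, the following is a minimal PyTorch training-step sketch. The layer widths, the way the inputs are concatenated, the L1 content term, and the loss weight of 10.0 are illustrative assumptions rather than the paper's exact configuration; only the overall structure, a first generator-discriminator pair played against the infrared image and a second pair played against the visible image, follows the description in the abstract.

```python
# Sketch only: hyperparameters and network details are assumptions, not the authors' settings.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.2))

class Generator(nn.Module):
    """Simple encoder-decoder mapping stacked inputs to a one-channel image."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 64),
                                 conv_block(64, 32),
                                 nn.Conv2d(32, 1, 1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Discriminator returning one real/fake logit per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 64),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

# First GAN: preliminary feature extraction, adversarial against the infrared image.
# Second GAN: deep feature extraction, adversarial against the visible image.
G1, D1 = Generator(in_ch=2), Discriminator()
G2, D2 = Generator(in_ch=3), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(G1.parameters()) + list(G2.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(list(D1.parameters()) + list(D2.parameters()), lr=1e-4)

def train_step(ir, vis):
    """One adversarial update on a batch of registered infrared/visible pairs."""
    inter = G1(torch.cat([ir, vis], dim=1))          # intermediate result
    fused = G2(torch.cat([ir, vis, inter], dim=1))   # final fused image

    # Discriminator update: the "real" sample is the source image of the paired modality.
    opt_d.zero_grad()
    real1, fake1 = D1(ir), D1(inter.detach())
    real2, fake2 = D2(vis), D2(fused.detach())
    d_loss = (bce(real1, torch.ones_like(real1)) + bce(fake1, torch.zeros_like(fake1)) +
              bce(real2, torch.ones_like(real2)) + bce(fake2, torch.zeros_like(fake2)))
    d_loss.backward()
    opt_d.step()

    # Generator update: fool both discriminators while staying close to both sources
    # (the L1 content term and its weight are assumed for illustration).
    opt_g.zero_grad()
    adv1, adv2 = D1(inter), D2(fused)
    content = (fused - ir).abs().mean() + (fused - vis).abs().mean()
    g_loss = (bce(adv1, torch.ones_like(adv1)) +
              bce(adv2, torch.ones_like(adv2)) + 10.0 * content)
    g_loss.backward()
    opt_g.step()
    return fused

if __name__ == "__main__":
    ir = torch.rand(4, 1, 64, 64) * 2 - 1    # dummy infrared batch in [-1, 1]
    vis = torch.rand(4, 1, 64, 64) * 2 - 1   # dummy visible batch in [-1, 1]
    print(train_step(ir, vis).shape)         # torch.Size([4, 1, 64, 64])
```

Playing the two adversarial games against different source modalities is what keeps the fused result from collapsing toward either the infrared or the visible image alone.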
License
Copyright (c) 2025 Journal of Computing and Electronic Information Management

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
