GAN-Enhanced Medical Image Synthesis: Augmenting CXR Data for Disease Diagnosis and Improving Deep Learning Performance


Prakash Patil, Sarika T. Deokate, Amol Bhoite, Sashikanta Prusty, Abhijeet Bholaji Patil, Purva Mange

Abstract

Deep learning has increased the demand for accurate and reliable medical image analysis tools, particularly for disease diagnosis from chest X-ray (CXR) images. A persistent obstacle is the scarcity of diverse, well-annotated CXR datasets for training such models. This study proposes an Attention Mechanism-based Cycle-Consistent GAN (AM-CGAN) to address this shortage by synthesizing realistic, clinically relevant CXR images for data augmentation. The model combines a Generative Adversarial Network (GAN) with attention mechanisms: the cycle-consistency constraint keeps generated images aligned with the input distribution and discourages mode collapse, while the attention mechanism lets the generator focus selectively on important anatomical structures and pathological indicators during synthesis. This attention-driven focus increases the clinical relevance of the synthetic images and makes them more useful for training accurate and reliable disease classification models. The AM-CGAN was evaluated in extensive experiments on CXR images of COVID-19, pneumonia, and normal cases. Quantitative results show high fidelity (98.15% accuracy), demonstrating the model's ability to generate synthetic images that closely resemble real medical data, and downstream deep learning classifiers trained on the augmented dataset capture disease-specific characteristics more effectively. By addressing the data shortage in medical imaging and concentrating synthesis on disease-related regions of CXR data, the AM-CGAN offers a promising route to better diagnostic models in settings with few labeled datasets, bridging the gap between limited data and sophisticated deep learning models for disease diagnosis.
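The abstract does not give implementation details, but the two ingredients it names, attention-gated generation and a cycle-consistency constraint, can be sketched compactly. The PyTorch snippet below is a minimal illustrative sketch rather than the authors' architecture; all layer sizes, module names, and the loss weight are assumptions introduced for illustration.

```python
# Minimal sketch (not the authors' implementation): an attention-gated
# CycleGAN-style generator and the cycle-consistency loss for CXR synthesis.
# All module sizes, names, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Spatial attention: produces a [0, 1] mask that re-weights features
    so the generator can emphasize disease-relevant regions."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mask(x)          # element-wise re-weighting

class Generator(nn.Module):
    """Tiny encoder-attention-decoder generator for 1-channel CXR images."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attend = AttentionGate(64)
        self.decode = nn.Sequential(nn.Conv2d(64, 1, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.decode(self.attend(self.encode(x)))

def cycle_loss(G_ab, G_ba, real_a, real_b, weight=10.0):
    """Cycle consistency: translating A -> B -> A should reconstruct the input,
    which ties generated images to the input distribution and discourages
    mode collapse."""
    l1 = nn.L1Loss()
    rec_a = G_ba(G_ab(real_a))           # A -> B -> A
    rec_b = G_ab(G_ba(real_b))           # B -> A -> B
    return weight * (l1(rec_a, real_a) + l1(rec_b, real_b))

if __name__ == "__main__":
    G_ab, G_ba = Generator(), Generator()
    a = torch.randn(2, 1, 128, 128)      # toy batch of "domain A" CXRs
    b = torch.randn(2, 1, 128, 128)      # toy batch of "domain B" CXRs
    print(cycle_loss(G_ab, G_ba, a, b).item())
```

In a full CycleGAN training loop this cycle term would be combined with adversarial losses from two discriminators; the sketch isolates only the attention gate and the cycle-consistency constraint emphasized in the abstract.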

Author Biography


1. Dr. Prakash Patil, Associate Professor, Department of Radio-Diagnosis, Krishna Institute of Medical Sciences, Krishna Vishwa Vidyapeeth, Karad, Maharashtra, India. Email: drprakash24@gmail.com

2. Dr. Sarika T. Deokate, Assistant Professor, Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India. Email: sarikajaankar2017@gmail.com

3. Dr. Amol Bhoite, Assistant Professor, Department of Radio-Diagnosis, Krishna Institute of Medical Sciences, Krishna Vishwa Vidyapeeth, Karad, Maharashtra, India. Email: amool0123@gmail.com

4. Sashikanta Prusty, Department of Computer Science, Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar, India. Email: sashi.prusty79@gmail.com

5. Dr. Abhijeet Bholaji Patil, Senior Resident, Department of Radio-Diagnosis, Krishna Institute of Medical Sciences, Krishna Vishwa Vidyapeeth, Karad, Maharashtra, India. Email: drabhay.abhay2000@gmail.com

6. Dr. Purva Mange, Associate Professor, Symbiosis School of Planning Architecture and Design, Symbiosis International University, Nagpur, Maharashtra, India. Email: purva.mange@gmail.com

Copyright © JES 2023. Online: journal.esrgroups.org
