Securing Social Media Imagery: GAN-Driven Encryption and CNN Analysis with DEA Protection

Rupak Sharma, D Prabakar, Aanchal Madaan, Devendra Kumar, Makarand Upadhyaya, Arvind Kumar Sharma

Abstract

Generative adversarial networks (GANs) transform low-dimensional random noise into photorealistic images. The spread of such misleading images, laden with irrelevant information, across social media platforms can lead to serious and challenging problems. The goal of this study is to generate GAN images and to identify manipulated ones; detection accuracy is further improved through preprocessing, segmentation, and feature extraction. The study examines the effectiveness of several learning-based methods for detecting image-to-image translation. The GANs employ deep learning approaches such as convolutional neural networks (CNNs) for generative modelling, while the DEA (Data Encryption Algorithm, the cipher underlying the Data Encryption Standard) is used to produce the encryption key. An effective image-forgery detector is necessary to identify fake images accurately. Recent advances in GANs have focused on producing photorealistic images quickly and efficiently; however, GANs can complicate visual forensics and model attribution. Translating data from one domain into images in another has a wide range of applications in fields such as computer vision, video, and language processing. Evaluation on a set of photographs shows that both conventional and deep learning detectors can reach up to 95% detection accuracy, but only the deep learning detectors maintain high accuracy on compressed images. This article explains how to spot GAN-generated fake images on social media and provides background on GANs and the theoretical concepts underlying them.
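The pipeline sketched in the abstract (a GAN generator mapping low-dimensional noise to an image, a CNN-based detector scoring images as real or fake, and DEA/DES encryption of the image data) can be illustrated with a minimal example. The sketch below is not the authors' implementation: the choice of PyTorch and pycryptodome, the layer sizes, the 100-dimensional noise vector, and the use of ECB mode are all assumptions made for brevity.

```python
# Illustrative sketch only: a toy GAN generator (noise -> image), a small CNN
# detector for GAN-generated images, and DES encryption of the image bytes.
# All architecture choices and library selections are assumptions.
import torch
import torch.nn as nn
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

class Generator(nn.Module):
    """Maps a 100-dimensional noise vector to a 1x28x28 image."""
    def __init__(self, noise_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (128, 7, 7)),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),    # 14x14 -> 28x28
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class FakeImageDetector(nn.Module):
    """CNN classifier that outputs the probability an image is GAN-generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Generate a fake image from random noise and score it with the detector.
gen, det = Generator(), FakeImageDetector()
noise = torch.randn(1, 100)
fake_image = gen(noise)
print("P(fake):", det(fake_image).item())

# DES key generation and encryption of the raw image bytes (DES uses an 8-byte
# key; ECB mode is shown only for brevity and is not recommended in practice).
key = get_random_bytes(8)
cipher = DES.new(key, DES.MODE_ECB)
image_bytes = fake_image.detach().numpy().tobytes()
padded = image_bytes + b"\0" * (-len(image_bytes) % 8)  # pad to 8-byte blocks
ciphertext = cipher.encrypt(padded)
print("ciphertext length:", len(ciphertext))
```

In practice the generator and detector would be trained adversarially on a labelled image corpus, and the DES key would be managed through a proper key-exchange scheme rather than generated ad hoc as above.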
