Design of Artistic Image Generation and Interactive Painting System Based on Deep Learning


Wei Du

Abstract

The objective is to investigate the role of neural networks and computer vision in painting and design, and to provide a technique for automatic extraction of painting and design elements and computer-aided design (CAD) reconstruction from recurring patterns. In this manuscript, a Design of Artistic Image Generation and Interactive Painting System Based on Deep Learning (DAIG-IPS-DHNN-MBGIO) is proposed. Initially, the input images are gathered from the ModelNet40 - Princeton 3D Object Dataset. Each input image is then pre-processed using the Orthogonal Master-Slave Adaptive Notch Filter (OMSANF) to enhance the image and adjust its size. The pre-processed images undergo feature extraction using the Local Maximum Synchrosqueezing Chirplet Transform (LMSCT), which extracts effective features such as Function, Quality, and Design. The extracted features are given to a Dense Hebbian Neural Network (DHNN) for artistic image generation, which classifies images into styles such as Academic Art, Art Nouveau, Baroque, Expressionism, Japanese Art, Neoclassicism, Primitivism, Realism, Renaissance, Rococo, Romanticism, Symbolism, and Western Medieval. In general, the DHNN does not adopt any optimization technique to identify the ideal parameters needed for precise classification. To categorize artistic images precisely, the DHNN classifier is therefore optimized using the Multiplayer Battle Game-Inspired Optimizer (MBGIO). The proposed method is implemented in Python. The efficiency of the proposed DAIG-IPS-DHNN-MBGIO approach is evaluated using several performance criteria, including accuracy, precision, recall, F1-score, and error rate. The proposed DAIG-IPS-DHNN-MBGIO method attains 28.26%, 21.41%, and 22.26% higher accuracy; 24.36%, 15.42%, and 20.27% higher precision; and 22.36%, 15.42%, and 18.27% higher recall compared with existing methods such as State of the Art in Defect Detection Based on Machine Vision (SADD-MV-DNN), Art Teaching Innovation Based on Computer Aided Design and Deep Learning Model (ATI-CAD-CNN), and Automatic Extraction and Reconstruction of Drawing Design Elements Based on Computer Vision and Neural Networks (AE-RDDE-RNN), respectively.
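
Because the abstract only names the pipeline stages and does not define the OMSANF, LMSCT, DHNN, or MBGIO components, the following Python code is a minimal, assumption-laden sketch of how the described workflow could be wired together. The preprocessing, feature-extraction, and classification helpers are hypothetical placeholders (their real definitions belong to the paper body); only the evaluation step uses standard scikit-learn metric calls matching the criteria listed above.

# Minimal sketch of the DAIG-IPS-DHNN-MBGIO pipeline described in the abstract.
# preprocess_omsanf, extract_lmsct_features, and classify_dhnn are hypothetical
# placeholders for the paper's OMSANF, LMSCT, and DHNN components.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

ART_STYLES = [
    "Academic Art", "Art Nouveau", "Baroque", "Expressionism", "Japanese Art",
    "Neoclassicism", "Primitivism", "Realism", "Renaissance", "Rococo",
    "Romanticism", "Symbolism", "Western Medieval",
]

def preprocess_omsanf(image: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Placeholder for OMSANF enhancement and size adjustment (assumed behaviour)."""
    img = image.astype(np.float32)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)  # intensity normalisation stand-in
    return img[: size[0], : size[1]]                # crude crop stands in for resizing

def extract_lmsct_features(image: np.ndarray) -> np.ndarray:
    """Placeholder for LMSCT feature extraction; returns a 1-D feature vector."""
    return image.reshape(-1)

def classify_dhnn(features: np.ndarray, params: dict) -> str:
    """Placeholder DHNN classifier; params would be tuned by MBGIO in the full method."""
    idx = int(abs(features.sum() * params.get("scale", 1.0))) % len(ART_STYLES)
    return ART_STYLES[idx]

def evaluate(y_true, y_pred):
    """Report the metrics named in the abstract: accuracy, precision, recall, F1, error rate."""
    acc = accuracy_score(y_true, y_pred)
    return {
        "accuracy": acc,
        "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="macro", zero_division=0),
        "error_rate": 1.0 - acc,
    }

In this sketch the MBGIO step would search over the classifier's parameter dictionary (here only a "scale" entry) to maximise a chosen metric from evaluate(); the actual optimizer update rules are not given in the abstract and are therefore omitted.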
