Application of Multimodal Data Fusion Attentive Dual Residual Generative Adversarial Network in Sentiment Recognition and Sentiment Analysis

Yongfang Zhang, Hongxing Fan

Abstract

Recent advancements in Internet technology have led to a surge of multimodal data posted on social media, online shopping portals, and video repositories, making it important to recognize inter-modal relationships among utterances before combining multiple modalities. In this manuscript, the Application of a Multimodal Data Fusion Attentive Dual Residual Generative Adversarial Network in Sentiment Recognition and Sentiment Analysis (MDF-DRGAN-SR-SA) is proposed. The input data are collected from the CMU-MOSI dataset. Initially, the input data are preprocessed using Subaperture Keystone Transform Matched Filtering (SAKTMF) to remove unwanted data. Then, feature extraction is performed with the Two-Sided Offset Quaternion Linear Canonical Transform (TSOQLCT) to extract unimodal features such as acoustic, textual, and visual features. The extracted features are given to the ADRGAN, which classifies sentiment as positive, negative, or neutral. In general, the ADRGAN does not adopt any optimization strategy for determining optimal parameters to ensure accurate classification. Hence, the Northern Goshawk Optimization Algorithm (GOA) is proposed to tune the weight parameters of the ADRGAN so that it precisely classifies sentiment as positive, negative, or neutral. The proposed model is implemented, and its efficiency is evaluated using performance metrics such as accuracy, precision, specificity, sensitivity, and F1-score. The MDF-DRGAN-SR-SA method provides 25.85%, 26.79%, and 27.63% higher accuracy; 35.66%, 34.97%, and 26.57% higher precision; and 28.18%, 29.52%, and 25.68% higher specificity compared with existing methods such as Two-Level Multimodal Fusion for SA in Public Security (TMDF-SA-PS), Multimodal SA Based on an Adaptive Modality-Specific Weight Fusion Network (MFN-SA-AMW), and Multimodal SA Utilizing a Multi-tensor Fusion Network and Cross-modal Modeling (MTFN-SA), respectively.
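The abstract describes a three-stage pipeline: preprocessing (SAKTMF), unimodal feature extraction (TSOQLCT), and classification into positive/negative/neutral (ADRGAN with GOA-tuned weights). The sketch below illustrates only the overall flow of such a pipeline; every function body is a hypothetical stand-in (e.g. a trivial polarity-cue count instead of TSOQLCT features, a score comparison instead of the ADRGAN), not the authors' implementation.

```python
# Hypothetical sketch of the MDF-DRGAN-SR-SA pipeline stages; all function
# bodies are toy placeholders, not the methods named in the abstract.

def preprocess(utterance: str) -> str:
    # Stand-in for SAKTMF cleaning: strip whitespace and lowercase the text.
    return utterance.strip().lower()

def extract_features(text: str) -> dict:
    # Stand-in for TSOQLCT unimodal feature extraction: only a trivial
    # textual feature (counts of polarity cue words) is computed here;
    # the paper also extracts acoustic and visual features.
    positive_cues = {"good", "great", "love"}
    negative_cues = {"bad", "awful", "hate"}
    tokens = text.split()
    return {
        "textual_pos": sum(t in positive_cues for t in tokens),
        "textual_neg": sum(t in negative_cues for t in tokens),
    }

def classify(features: dict) -> str:
    # Stand-in for the ADRGAN classifier with GOA-tuned weights:
    # a simple score comparison over the toy features.
    score = features["textual_pos"] - features["textual_neg"]
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def pipeline(utterance: str) -> str:
    return classify(extract_features(preprocess(utterance)))

print(pipeline("I love this movie"))   # positive
print(pipeline("The plot was awful"))  # negative
```

The point of the sketch is the staged structure (clean, then extract per-modality features, then classify), which is the order of operations the abstract states.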
