Improving Human Action Recognition in Videos with CNN–sLSTM and Soft Attention Mechanism


Merit Khaled, Beladgham Mohammed

Abstract

Action recognition in videos has become crucial in computer vision because of its diverse applications, such as multimedia indexing and surveillance in public environments. Attention mechanisms in deep learning emulate the human visual processing system by enabling models to focus on the pertinent parts of a scene and derive significant insights from them. This study introduces a soft attention mechanism designed to enhance a CNN–sLSTM architecture for recognizing human actions in videos. We used the VGG19 convolutional neural network to extract spatial features from the video frames, while the sLSTM network models the temporal relationships between frames. The performance of our model was assessed on two widely used datasets, HMDB-51 and UCF-101, with classification accuracy as the key evaluation metric. Our results indicate substantial improvements: accuracy rises from 53.12% (base approach) to 67.18% (with attention) on HMDB-51, and from 83.98% (base approach) to 94.15% (with attention) on UCF-101. These results underscore the effectiveness of the proposed soft attention mechanism in improving the performance of video action recognition models.
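
To make the described pipeline concrete, the sketch below (PyTorch) shows one plausible reading of the architecture: per-frame VGG19 features, a recurrent layer over time, and soft attention that computes a softmax weight per time step before classification. This is an illustration under stated assumptions, not the authors' exact design: the hidden size, pooling, and attention form are assumptions, a standard LSTM stands in for the paper's sLSTM (which has no built-in PyTorch implementation), and num_classes=101 matches UCF-101.

import torch
import torch.nn as nn
from torchvision.models import vgg19

class AttentiveCNNLSTM(nn.Module):
    def __init__(self, hidden_size=512, num_classes=101):
        super().__init__()
        # weights=None keeps the sketch self-contained; in practice the
        # backbone would load ImageNet-pretrained weights.
        self.cnn = vgg19(weights=None).features   # spatial feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)       # (B*T, 512, 1, 1)
        # Standard LSTM as a stand-in for the sLSTM temporal model.
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.attn = nn.Linear(hidden_size, 1)     # one attention score per step
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, clip):                      # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        # Run VGG19 on every frame, then pool to a 512-d vector per frame.
        feats = self.pool(self.cnn(clip.flatten(0, 1))).flatten(1)  # (B*T, 512)
        h, _ = self.lstm(feats.view(b, t, -1))    # (B, T, hidden)
        # Soft attention: softmax over time, weighted sum of hidden states.
        alpha = torch.softmax(self.attn(h), dim=1)                  # (B, T, 1)
        context = (alpha * h).sum(dim=1)                            # (B, hidden)
        return self.fc(context)

model = AttentiveCNNLSTM()
logits = model(torch.randn(2, 16, 3, 224, 224))   # 2 clips of 16 frames

The attention weights alpha give the "base approach vs. with attention" contrast its mechanism: without attention, only the final LSTM state would feed the classifier; with it, informative frames receive larger weights in the pooled context vector.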
