Deep Learning-Based Medical Image Segmentation using Dual Decoder Recurrent Residual U-Net (DDR2U-Net) Architecture
Abstract
Medical image segmentation is a crucial step in the diagnosis and treatment planning of various diseases, particularly in radiology and pathology. Traditional segmentation techniques often struggle with the complex, noisy, and heterogeneous nature of medical images. To address these challenges, we propose a novel deep learning-based architecture, the Dual Decoder Recurrent Residual U-Net (DDR2U-Net), designed for accurate and efficient medical image segmentation. The proposed model integrates two key innovations: first, a dual decoder mechanism that enables the network to extract high-level semantic and low-level spatial features simultaneously, improving segmentation accuracy and resolution; and second, recurrent residual connections, which improve feature learning by incorporating temporal context and ensuring smoother gradient flow during training. The dual decoder pathways allow for better feature representation by disentangling complex anatomical structures, while the recurrent residual connections help the model retain essential spatial information across multiple layers. Extensive experiments on benchmark medical image datasets, including MRI, CT scans, and histopathological images, demonstrate that DDR2U-Net outperforms existing state-of-the-art architectures in segmentation accuracy, boundary delineation, and robustness to noise. The proposed model shows promise for applications in automatic organ segmentation, tumor detection, and other critical medical imaging tasks. The model was experimentally validated on the publicly available Kvasir-SEG dataset, where it achieves better global accuracy, recall, precision, IoU, and Dice scores than the prior works ColonSegNet, UPolySeg, and R2UPolySeg. These results demonstrate the improvement in accuracy obtained by DDR2UPolySeg.
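To make the two components named in the abstract concrete, the sketch below shows a recurrent residual convolutional block (in the style of R2U-Net designs) and a tiny dual-decoder U-Net in PyTorch. Channel widths, block depth, the number of recurrence steps, and the concatenation-based fusion of the two decoder outputs are illustrative assumptions, not the authors' exact DDR2U-Net configuration.

# Minimal sketch, assuming a standard PyTorch setup; hyperparameters are illustrative.
import torch
import torch.nn as nn


class RecurrentConv(nn.Module):
    """Convolution applied recurrently: the output is fed back with the input t times."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)   # recurrent refinement of the same feature map
        return out


class RecurrentResidualBlock(nn.Module):
    """Recurrent residual unit: two recurrent convolutions plus an identity (residual) path."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1)   # match channel count for the residual path
        self.body = nn.Sequential(RecurrentConv(out_ch, t), RecurrentConv(out_ch, t))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)                    # residual connection

class DualDecoderR2UNet(nn.Module):
    """Tiny dual-decoder U-Net: one shared encoder, two decoders whose outputs are fused."""
    def __init__(self, in_ch=3, n_classes=1, base=16):
        super().__init__()
        self.enc1 = RecurrentResidualBlock(in_ch, base)
        self.enc2 = RecurrentResidualBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Two parallel decoders operating on the same encoder features.
        self.dec_a = RecurrentResidualBlock(base * 2 + base, base)
        self.dec_b = RecurrentResidualBlock(base * 2 + base, base)
        self.head = nn.Conv2d(base * 2, n_classes, 1)  # fuse the two decoder outputs

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d_in = torch.cat([self.up(e2), e1], dim=1)     # skip connection from the encoder
        a = self.dec_a(d_in)                           # e.g. semantic-oriented decoder
        b = self.dec_b(d_in)                           # e.g. spatial-detail/boundary decoder
        return self.head(torch.cat([a, b], dim=1))


if __name__ == "__main__":
    model = DualDecoderR2UNet()
    mask_logits = model(torch.randn(1, 3, 64, 64))
    print(mask_logits.shape)   # torch.Size([1, 1, 64, 64])

In this sketch the two decoders share the same encoder features and differ only in their learned weights; the reported architecture may instead route different skip connections or supervision signals to each decoder.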
Article Details
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.