Multilayer Neural Network Hardware Designs for Faster Convolutional Processing

Aayushi Arya, Huzaifa Umar, Indrajeet Kumar, Vuda Sreenivasa Rao, Ashok Kumar Sahoo, Shital Kakad

Abstract

The demand for efficient hardware implementations of convolutional neural networks (CNNs) has surged with the proliferation of deep learning applications across domains such as computer vision, natural language processing, and autonomous systems. Convolutional layers, the fundamental building blocks of CNNs, are computationally intensive and require optimized hardware architectures for real-time inference. This paper presents hardware designs targeting faster convolutional processing within multilayer neural networks. We propose a multi-faceted approach that leverages parallelism, pipelining, and hardware acceleration techniques to enhance the efficiency of convolution operations. Our design exploits the data-level and model-level parallelism inherent in CNNs to achieve high throughput while minimizing latency.
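As a minimal illustrative sketch, not taken from the paper, the C program below shows where the data-level parallelism mentioned in the abstract comes from: every output element of a convolutional layer is computed independently, so the loop nest over output channels and spatial positions can be unrolled or pipelined into parallel multiply-accumulate units in hardware. All sizes, array names, and the loop structure are assumptions chosen for illustration only.

#include <stdio.h>

#define IN_CH   3                 /* input channels  (assumed)          */
#define OUT_CH  4                 /* output channels (assumed)          */
#define IN_H    8                 /* input feature map height (assumed) */
#define IN_W    8                 /* input feature map width  (assumed) */
#define K       3                 /* kernel size (assumed)              */
#define OUT_H   (IN_H - K + 1)    /* valid-convolution output height    */
#define OUT_W   (IN_W - K + 1)    /* valid-convolution output width     */

static float input[IN_CH][IN_H][IN_W];
static float weights[OUT_CH][IN_CH][K][K];
static float output[OUT_CH][OUT_H][OUT_W];

/* Direct 2-D convolution. Each (oc, oy, ox) output element is independent
 * of the others, so the three outer loops expose data-level parallelism:
 * a hardware design can instantiate one MAC unit per output channel or per
 * output pixel, while the inner reduction over ic/ky/kx is pipelined.      */
static void conv2d(void)
{
    for (int oc = 0; oc < OUT_CH; oc++)            /* parallel across output channels */
        for (int oy = 0; oy < OUT_H; oy++)         /* parallel across output rows     */
            for (int ox = 0; ox < OUT_W; ox++) {   /* parallel across output columns  */
                float acc = 0.0f;
                for (int ic = 0; ic < IN_CH; ic++)       /* pipelined reduction */
                    for (int ky = 0; ky < K; ky++)
                        for (int kx = 0; kx < K; kx++)
                            acc += input[ic][oy + ky][ox + kx] *
                                   weights[oc][ic][ky][kx];
                output[oc][oy][ox] = acc;
            }
}

int main(void)
{
    /* Fill input and weights with small deterministic values. */
    for (int ic = 0; ic < IN_CH; ic++)
        for (int y = 0; y < IN_H; y++)
            for (int x = 0; x < IN_W; x++)
                input[ic][y][x] = (float)((ic + y + x) % 5);

    for (int oc = 0; oc < OUT_CH; oc++)
        for (int ic = 0; ic < IN_CH; ic++)
            for (int ky = 0; ky < K; ky++)
                for (int kx = 0; kx < K; kx++)
                    weights[oc][ic][ky][kx] = 0.1f * (float)(oc + ic + ky + kx);

    conv2d();

    /* Print a checksum so the sketch runs end to end. */
    float sum = 0.0f;
    for (int oc = 0; oc < OUT_CH; oc++)
        for (int oy = 0; oy < OUT_H; oy++)
            for (int ox = 0; ox < OUT_W; ox++)
                sum += output[oc][oy][ox];
    printf("output checksum: %f\n", sum);
    return 0;
}

In an FPGA or ASIC flow, the same loop nest would typically be unrolled and pipelined by a high-level synthesis tool or hand-coded as RTL; the software loop above only marks where that parallelism lives, and is not the architecture proposed in the paper.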
