Ultra-Fast BNN Analysis with the Upgraded FINN Framework: Harnessing GoogLeNet and Contemporary Transfer Methods for Deploying Algorithms on Nvidia GPU

Main Article Content

Pramod Kumar Naik, Radha Gupta, Ravinder Singh Kuntal, Baskar Venugopalan, Basavaraj N. Hiremath, Vishal Patil

Abstract

This research presents a method for object detection that combines Binarized Neural Networks (BNNs) within the updated FINN framework with GoogLeNet and transfer learning, deployed on Nvidia GPUs. The goal is to reduce processing time while improving detection accuracy. TensorFlow-based optimization allows the BNN architectures to balance efficiency and accuracy, and pretrained CNNs are fine-tuned with supervised learning tailored to each dataset and model architecture across MNIST, CIFAR, and SVHN. The Nvidia Jetson Nano GPU enables fast processing, particularly in dynamic settings such as automobile fault detection. Adapting GoogLeNet's final layer through transfer learning achieves 93% accuracy for chair recognition, 94% for person recognition, and 96% for mouse recognition, exceeding the standalone baseline. The combined approach reaches a processing speed of 18 FPS, with testing times as low as 4 seconds. By demonstrating the synergy between modern neural network models, traditional topologies, and transfer learning within the FINN framework, this study sets a benchmark for neural network deployment; the Nvidia Jetson Nano GPU accelerates computation to meet both accuracy and speed objectives. In conclusion, this work highlights advances in computer vision and outlines directions for future research by combining deep learning, GoogLeNet's strengths, transfer learning, and the FINN framework to support neural network deployment in real-time applications.
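
To make the transfer-learning step concrete, the sketch below illustrates replacing GoogLeNet's final fully connected layer for the three object classes reported in the abstract (chair, person, mouse). This is an illustrative example only, not the authors' implementation: it uses Python with PyTorch/torchvision (which ships GoogLeNet), whereas the paper's BNN optimization is described in TensorFlow, and the optimizer and learning rate shown here are assumptions.

# Minimal sketch (assumptions noted above): adapt GoogLeNet's last layer
# for a 3-class task (chair, person, mouse) via transfer learning.
import torch
import torch.nn as nn
from torchvision import models

# Load GoogLeNet pretrained on ImageNet and freeze the feature extractor.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# Replace only the final fully connected layer for the new 3-class task;
# the new layer's parameters are trainable by default.
model.fc = nn.Linear(model.fc.in_features, 3)

# Fine-tune only the replaced layer (hypothetical optimizer settings).
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

Freezing the backbone and retraining only the classifier head keeps the fine-tuning cost low, which is consistent with deployment on a resource-constrained device such as the Jetson Nano.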

Article Details

Section
Articles