Transferability of Learned Knowledge in Neural Networks: Impact of Trained Weights on Untrained Networks

Adwaita Tulsyan, Udit Chaturvedi, Neha V. Sharma

Abstract

This research explores the dynamics of neural networks (NNs), focusing on the implications of the training process for their performance. In a departure from conventional transfer learning, this study examines the manual initialization of untrained neural networks with weights and biases extracted from trained networks of identical architecture. Despite the apparent congruence of the two networks, we investigate the nuanced distinctions between them and analyze the disparities in their performance. The study presents mathematical foundations, empirical findings, and implications that underscore the significance of the training process itself. The idea of weight extraction is inspired by the human capacity for visual learning and "mirroring": the ability to mimic a task simply by observing how it is performed.
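The weight-transfer procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `TinyMLP` class, its layer sizes, and all parameter names are assumptions introduced here. It shows that copying the weights and biases of one network into a freshly initialized network of identical architecture makes the two networks compute identical outputs.

```python
import numpy as np

class TinyMLP:
    """A fixed-architecture two-layer perceptron (illustrative only)."""
    def __init__(self, rng):
        self.W1 = rng.normal(size=(4, 8))
        self.b1 = np.zeros(8)
        self.W2 = rng.normal(size=(8, 2))
        self.b2 = np.zeros(2)

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

    def get_params(self):
        # "Weight extraction": return copies of all weights and biases.
        return [p.copy() for p in (self.W1, self.b1, self.W2, self.b2)]

    def set_params(self, params):
        # Manual initialization: overwrite this network's parameters.
        self.W1, self.b1, self.W2, self.b2 = [p.copy() for p in params]

rng = np.random.default_rng(0)
trained = TinyMLP(rng)    # stands in for a network after training
untrained = TinyMLP(rng)  # freshly initialized, same architecture

x = rng.normal(size=(5, 4))
# Different random initializations give different outputs.
assert not np.allclose(trained.forward(x), untrained.forward(x))

# Copy the trained parameters into the untrained network.
untrained.set_params(trained.get_params())
# The two networks are now functionally indistinguishable on any input.
assert np.allclose(trained.forward(x), untrained.forward(x))
```

Under this sketch, any behavioral difference between the two networks must come from factors outside the copied parameters, which is what motivates the paper's comparison.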
