A Mathematical Framework for Incorporating Neural Networks into Root-Finding Algorithms
Abstract
Root-finding methods are essential numerical techniques for solving nonlinear equations in many branches of science and engineering. However, classical root-finding techniques such as the Newton-Raphson, Secant, and Bisection methods all exhibit drawbacks in their convergence behavior, particularly for functions that are highly nonlinear, discontinuous, or poorly defined. This article offers a mathematically grounded framework that combines traditional root-finding techniques with neural networks, specifically feedforward neural networks (FNNs), to improve accuracy, responsiveness, and convergence rate in complex systems. The model uses data-driven predictions to guide the iterative search toward roots while retaining the theoretical strengths of traditional numerical approaches. When analytic forms are unavailable or computationally costly, the hybrid scheme lets the neural network approximate the function and/or its derivative. Numerical experiments show significant gains in convergence and stability compared to existing techniques. Quantitative estimates of these advantages are provided in a case study that compares conventional and hybrid schemes on standard nonlinear benchmark test functions. The framework lays the groundwork for integrating deep learning models into mathematical algorithms at the lowest level, applicable across disciplines.
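The abstract does not specify the network architecture or training procedure, so the following is only a minimal sketch of the general idea it describes: a small feedforward surrogate is trained to approximate the target function, and a Newton-style iteration then uses the true residual together with the surrogate's derivative. The benchmark function, network size, learning rate, and the clamping of iterates to the training interval are all illustrative assumptions, not details taken from the article.

```python
import numpy as np

# Illustrative benchmark (not from the article): find the root of f on [1, 3].
f = lambda x: x**3 - 2*x - 5          # true root ~ 2.0945514815

# --- 1. Train a tiny feedforward surrogate for f (one tanh hidden layer). ---
rng = np.random.default_rng(0)
xs = np.linspace(1.0, 3.0, 200)
ys = f(xs)
# Normalise inputs and targets so plain full-batch gradient descent behaves well.
xm, xsd = xs.mean(), xs.std()
ym, ysd = ys.mean(), ys.std()
X = (xs - xm) / xsd
Y = (ys - ym) / ysd

H = 16                                 # hidden width (arbitrary choice)
W1 = rng.normal(0, 1.0, H); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H); b2 = 0.0
lr = 0.05
for _ in range(4000):
    A = np.tanh(np.outer(X, W1) + b1)  # hidden activations, shape (N, H)
    P = A @ W2 + b2                    # normalised predictions
    E = P - Y
    n = len(X)
    # Backpropagation of the mean-squared error through the single hidden layer.
    gW2 = A.T @ E / n
    gb2 = E.mean()
    dA = np.outer(E, W2) * (1 - A**2)
    gW1 = (dA * X[:, None]).sum(0) / n
    gb1 = dA.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def surrogate_deriv(x):
    """d f_NN / dx, obtained by the chain rule through the normalisation."""
    z = (x - xm) / xsd
    a = np.tanh(W1 * z + b1)
    dz = W2 @ ((1 - a**2) * W1)        # derivative of the normalised output
    return dz * ysd / xsd

# --- 2. Hybrid Newton: true residual f(x), NN-supplied derivative. ---
x = 1.5
for _ in range(100):
    x -= f(x) / surrogate_deriv(x)
    x = min(max(x, 1.0), 3.0)          # stay inside the surrogate's training range
    if abs(f(x)) < 1e-10:
        break
```

Because the iteration uses the exact residual and only approximates the derivative, any fixed point it reaches is an exact root of f; the surrogate's derivative error affects the convergence rate, not the answer.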
Article Details

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.