Optimizing System Resources and Adaptive Load Balancing Framework Leveraging ACO and Reinforcement Learning Algorithms


Minal Shahakar, S. A. Mahajan, Lalit Patil


In today's dynamic computing environments, efficient resource utilization and effective load balancing are central to improving performance and maintaining stability. This study proposes a novel framework that combines Ant Colony Optimization (ACO) and Reinforcement Learning (RL) to achieve adaptive load balancing and resource optimization. The framework addresses the challenges that arise when workloads and resource demands fluctuate in large-scale distributed systems. ACO, inspired by the foraging behavior of ants, adjusts how tasks are distributed among compute nodes based on local information and pheromone trails; this decentralized approach enables rapid exploration of the solution space and quick adaptation to changing conditions. Complementing ACO, RL techniques learn and adapt to the system's evolving dynamics. By modeling load balancing as a sequential decision-making problem, RL agents continually refine their policies to improve system performance and resource efficiency, learning effective task-allocation and resource-usage strategies through interaction with the environment and feedback signals. The proposed framework operates in a distributed manner, making it scalable and reliable across diverse settings: it adapts its behavior on the fly to changing workloads and resource availability by combining the swarm intelligence of ACO with the adaptability of RL. The framework also accommodates multiple optimization objectives and constraints, making it flexible and applicable to a wide range of scenarios. Experimental results show that the proposed approach outperforms standard load balancing methods in improving system performance, reducing response times, and optimizing resource utilization. By leveraging the complementary strengths of the ACO and RL algorithms, this framework offers a promising way to manage the complexity of modern computing systems and use resources effectively in dynamic environments.
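To make the hybrid idea concrete, the following is a minimal illustrative sketch of how ACO-style pheromone trails and a tabular Q-learning update might be combined to assign tasks to nodes. All class and parameter names, the scoring rule, and the reward shape (reciprocal of service time) are assumptions for illustration only, not the authors' implementation.

```python
import random

class HybridBalancer:
    """Toy hybrid of ACO pheromone routing and Q-learning for task placement."""

    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, evaporation=0.05):
        self.n_nodes = n_nodes
        self.pheromone = [1.0] * n_nodes   # ACO desirability trail per node
        self.q = [0.0] * n_nodes           # learned value estimate per node
        self.load = [0] * n_nodes          # current task count per node
        self.alpha, self.gamma, self.evap = alpha, gamma, evaporation

    def pick_node(self):
        # Combine pheromone evidence with Q-values and a heuristic that
        # prefers lightly loaded nodes, then sample roulette-wheel style.
        scores = [self.pheromone[i] * (1.0 + self.q[i]) / (1 + self.load[i])
                  for i in range(self.n_nodes)]
        r, acc = random.random() * sum(scores), 0.0
        for i, s in enumerate(scores):
            acc += s
            if r <= acc:
                return i
        return self.n_nodes - 1

    def assign(self, node, service_time):
        self.load[node] += 1
        reward = 1.0 / service_time        # faster response => higher reward
        best_next = max(self.q)
        # One-step Q-learning update on the chosen node's value.
        self.q[node] += self.alpha * (reward + self.gamma * best_next - self.q[node])
        # Evaporate all trails, then deposit pheromone on the chosen node.
        self.pheromone = [(1 - self.evap) * p for p in self.pheromone]
        self.pheromone[node] += reward

# Example run: heavily loaded nodes are simulated as responding more slowly.
balancer = HybridBalancer(n_nodes=4)
for _ in range(100):
    node = balancer.pick_node()
    balancer.assign(node, service_time=1.0 + balancer.load[node] * 0.1)
```

Evaporation keeps stale routing evidence from dominating, while the Q-update lets the balancer favor nodes that have recently delivered low response times.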

Article Details

Author Biography

Minal Shahakar, S. A. Mahajan, Lalit Patil

1. Minal Shahakar

2. Dr. S. A. Mahajan

3. Dr. Lalit Patil


1. Research Scholar, Smt. Kashibai Navale College of Engineering, Savitribai Phule Pune University, India. Email: mhjn.minal@gmail.com

2. Assistant Professor, Department of Information Technology, PVG's College of Engg & Tech & GK Pate (Wani) IOM, Savitribai Phule Pune University, India. Email: sa_mahajan@yahoo.com

3. Professor, Department of Information Technology, Smt. Kashibai Navale College of Engineering, Savitribai Phule Pune University, India. Email: lalitvpatil@gmail.com




A. Alizadeh, B. Lim and M. Vu, "Multi-Agent Q-Learning for Real-Time Load Balancing User Association and Handover in Mobile Networks," in IEEE Transactions on Wireless Communications, doi: 10.1109/TWC.2024.3357702.

W. Shi et al., "Design of Broadband Divisional Load-Modulated Balanced Amplifier With Extended Dynamic Power Range," in IEEE Transactions on Microwave Theory and Techniques, doi: 10.1109/TMTT.2024.3357828.

Chronis, C., Anagnostopoulos, G., Politi, E., Dimitrakopoulos, G., & Varlamis, I. (2023). Dynamic Navigation in Unconstrained Environments Using Reinforcement Learning Algorithms. IEEE Access.

J. Wu, T. Wang, Z. Shu, L. Ma, S. Wang and J. Nie, "Power Balance Control Based on Sensorless Parameters Estimation for ISOP Three-Level DAB Converter," in IEEE Transactions on Industrial Electronics, doi: 10.1109/TIE.2024.3352167.

K. Nishant, P. Sharma, V. Krishna, C. Gupta, K. P. Singh, Nitin and R. Rastogi, "Load Balancing of Nodes in Cloud Using Ant Colony Optimization," in Proceedings of the 2012 14th International Conference on Modelling and Simulation (UKSim), 2012, doi: 10.1109/UKSim.2012.11.

Ajani, S. N., Khobragade, P., Dhone, M., Ganguly, B., Shelke, N., & Parati, N. (2023). Advancements in Computing: Emerging Trends in Computational Science with Next-Generation Computing. International Journal of Intelligent Systems and Applications in Engineering, 12(7s), 546–559.

F. Zeidan, M. ElHayani and H. Soubra, "RT DL Tasks Distribution for Sensitive Data Protection and Resource Optimization," 2023 Eleventh International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 2023, pp. 276-282, doi: 10.1109/ICICIS58388.2023.10391204.

Min Xiang, Mengxin Chen, Duanqiong Wang, Zhang Luo, "Deep Reinforcement Learning-based load balancing strategy for multiple controllers in SDN," e-Prime - Advances in Electrical Engineering, Electronics and Energy, Volume 2, 2022, 100038, ISSN 2772-6711, https://doi.org/10.1016/j.prime.2022.100038.

Mohammadian V, Navimipour NJ, Hosseinzadeh M, Darwesh A. LBAA: A novel load balancing mechanism in cloud environments using ant colony optimization and artificial bee colony algorithms. Int J Commun Syst. 2023; 36(9):e5481. doi:10.1002/dac.5481

Patra MK, Misra S, Sahoo B, Turuk AK. GWO-Based Simulated Annealing Approach for Load Balancing in Cloud for Hosting Container as a Service. Applied Sciences. 2022; 12(21):11115. https://doi.org/10.3390/app122111115

B. Kruekaew and W. Kimpan, "Multi-Objective Task Scheduling Optimization for Load Balancing in Cloud Computing Environment Using Hybrid Artificial Bee Colony Algorithm With Reinforcement Learning," in IEEE Access, vol. 10, pp. 17803-17818, 2022, doi: 10.1109/ACCESS.2022.3149955.

Kyung Y. Prioritized Task Distribution Considering Opportunistic Fog Computing Nodes. Sensors. 2021; 21(8):2635. https://doi.org/10.3390/s21082635

Alankar, B., Sharma, G., Kaur, H., Valverde, R., & Chang, V. (2020). Experimental setup for investigating the efficient load balancing algorithms on virtual cloud. Sensors, 20(24), 7342.

Kherraf, N., Sharafeddine, S., Assi, C. M., & Ghrayeb, A. (2019). Latency and reliability-aware workload assignment in IoT networks with mobile edge clouds. IEEE Transactions on Network and Service Management, 16(4), 1435-1449.

Reihani, E., Motalleb, M., Ghorbani, R., & Saoud, L. S. (2016). Load peak shaving and power smoothing of a distribution grid with high renewable energy penetration. Renewable energy, 86, 1372-1379.

Leong, W. L., Cao, J., Huang, S., & Teo, R. (2022, June). Pheromone-based approach for scalable task allocation. In 2022 International Conference on Unmanned Aircraft Systems (ICUAS) (pp. 220-227). IEEE.

Sellami, Bassem, Akram Hakiri, Sadok Ben Yahia, and Pascal Berthou. "Energy-aware task scheduling and offloading using deep reinforcement learning in SDN-enabled IoT network." Computer Networks 210 (2022): 108957.

Koksal, E., Hegde, A. R., Pandiarajan, H. P., & Veeravalli, B. (2021). Performance characterization of reinforcement learning-enabled evolutionary algorithms for integrated school bus routing and scheduling problem. International Journal of Cognitive Computing in Engineering, 2, 47-56.

Saglam, Baturay, et al. "Estimation error correction in deep reinforcement learning for deterministic actor-critic methods." 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021.

Ren, J., Ye, C., & Yang, F. (2021). Solving flow-shop scheduling problem with a reinforcement learning algorithm that generalizes the value function with neural network. Alexandria Engineering Journal, 60(3), 2787-2800.

Li, XG., Wei, X. An Improved Genetic Algorithm-Simulated Annealing Hybrid Algorithm for the Optimization of Multiple Reservoirs. Water Resour Manage 22, 1031–1049 (2008). https://doi.org/10.1007/s11269-007-9209-5

Rotaeche R, Ballesteros A, Proenza J. Speeding Task Allocation Search for Reconfigurations in Adaptive Distributed Embedded Systems Using Deep Reinforcement Learning. Sensors. 2023; 23(1):548. https://doi.org/10.3390/s23010548