Detecting Cloud Misconfigurations with RAG and Intelligent Agents: A Natural Language Understanding Approach
Abstract
Cloud service misconfigurations remain a critical security risk that regularly leads to downtime, breaches, and financial losses. Traditional detection methods such as manual auditing and preprogrammed rule-based systems scale poorly and adapt slowly, while more advanced machine learning models bring drawbacks of their own, including heavy data requirements and limited generalization. In this study, we examine the use of large language models (LLMs), Google’s Gemini in particular, for zero-shot detection of cloud misconfigurations. Drawing on the natural language understanding of LLMs, the study shows how such systems can uncover subtle patterns in cloud environments and novel security threats. The proposed framework, SARGE, takes a set of cloud configuration files, analyzes them for misconfigurations, and generates recommendations without task-specific training. The implementation spans major cloud platforms, using Terraform for testing and Docker for scalability. Experiments demonstrate that the proposed LLM-based approach achieves higher accuracy than traditional approaches in identifying new misconfigurations while operating at scale and remaining easy to interpret. To the best of the authors’ knowledge, this research fills gaps in the existing literature and provides a novel solution for cloud security that alleviates the challenges of previous approaches. It also lays the groundwork for further studies on incorporating LLMs into cloud-native security solutions for efficient threat detection and mitigation.
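The abstract itself contains no code, but the zero-shot detection step it describes can be illustrated with a minimal sketch: embed a cloud configuration file in a natural-language audit prompt and pass it to Gemini via the `google-generativeai` client. The prompt wording and function names below are the author of this sketch's assumptions, not SARGE's actual implementation.

```python
# Hypothetical sketch of the zero-shot detection step described in the
# abstract: wrap a configuration file in a natural-language prompt and
# ask an LLM to flag misconfigurations. Prompt wording and function
# names are illustrative, not taken from the paper.
import os

PROMPT_TEMPLATE = """You are a cloud security auditor.
Review the following Terraform configuration for security
misconfigurations. For each finding, name the resource, describe
the risk, and suggest a remediation.

Configuration:
{config}
"""


def build_misconfig_prompt(config_text: str) -> str:
    """Embed a configuration file in the zero-shot audit prompt."""
    return PROMPT_TEMPLATE.format(config=config_text)


def audit_with_gemini(config_text: str) -> str:
    """Send the prompt to Gemini (requires GOOGLE_API_KEY to be set)."""
    import google.generativeai as genai  # pip install google-generativeai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    return model.generate_content(build_misconfig_prompt(config_text)).text


if __name__ == "__main__":
    # An intentionally insecure Terraform snippet: a world-readable bucket.
    snippet = 'resource "aws_s3_bucket_acl" "logs" { acl = "public-read" }'
    print(build_misconfig_prompt(snippet))
```

Because the model is prompted in natural language rather than matched against hand-written rules, the same prompt template generalizes to configuration patterns that no rule has been written for, which is the property the abstract claims over rule-based scanners.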
Article Details

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.