Analyzing Student Interactions in Online Judge Systems with Explainable Artificial Intelligence

P. R. Sudha Rani, M. Reshma, A. Seenu

Abstract

Online Judge (OJ) systems are commonly employed in programming courses to provide fast, objective assessment of students' code submissions. Typically, these evaluations yield a simple binary result indicating whether the submission meets the assignment criteria according to a specified rubric. In educational contexts, however, such a single outcome may not suffice. This study explores additional methodologies for analyzing the data gathered by OJ systems in order to generate constructive feedback on students' progress. In particular, it uses learning-based strategies, including Multiple-Instance Learning and traditional Machine Learning techniques, to analyze and model student behavior. Furthermore, Explainable Artificial Intelligence (XAI) is incorporated to ensure that the feedback provided is understandable and actionable. The model was validated on a dataset of 2,500 submissions from around 90 students enrolled in a Computer Science programming course. The findings show that the model can effectively predict assignment outcomes (pass or fail) from behavioral patterns derived from submission data within the OJ system. The methodology also enables the identification of at-risk student groups and behavior profiles, offering meaningful feedback that can support students in their learning and assist instructors in refining their teaching approaches.
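The abstract describes the pipeline only at a high level. The sketch below is a minimal illustration of that kind of pipeline, not the authors' implementation: it trains a classifier on invented submission-behavior features (all feature names and the synthetic data are assumptions for illustration) and uses permutation importance, one common model-agnostic XAI technique, to attribute pass/fail predictions to behavioral features; the paper itself may rely on different explainers.

    # Illustrative sketch only: hypothetical OJ behavioral features and
    # synthetic labels stand in for real submission-log data.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 300  # synthetic stand-in for per-student, per-assignment summaries

    # Hypothetical features aggregated from OJ submission logs.
    X = pd.DataFrame({
        "n_submissions": rng.integers(1, 40, n),        # attempts per assignment
        "mean_gap_minutes": rng.exponential(60.0, n),   # time between attempts
        "compile_error_rate": rng.uniform(0.0, 1.0, n), # share of failed compiles
        "days_before_deadline": rng.uniform(0.0, 14.0, n),  # when work started
    })
    # Synthetic label: earlier starts and fewer compile errors tend to pass.
    y = ((X["days_before_deadline"] > 3)
         & (X["compile_error_rate"] < 0.6)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")

    # XAI step: permutation importance attributes the pass/fail prediction
    # to the behavioral features, turning a binary verdict into feedback.
    result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                    random_state=0)
    for name, imp in sorted(zip(X.columns, result.importances_mean),
                            key=lambda t: -t[1]):
        print(f"{name:>22s}: {imp:.3f}")

In a real deployment the feature table would be aggregated from the OJ submission log, and the per-feature attributions could be translated into per-student feedback (for instance, flagging that starting close to the deadline is the main driver of a predicted fail), which is the kind of actionable output the study aims for.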
