A Comparative Study on the Results of College English Grade 4 Based on Multi-model Prediction

Yanwen Chen

Abstract

Predictive analytics can improve educational outcomes by offering insights into student performance and informing targeted interventions. In this work, we undertake a detailed comparative analysis of predictive models for projecting College English Grade 4 exam results, with the aim of advancing educational predictive analytics. We compare the predictive accuracy and interpretability of several modeling approaches, including ensemble learning methods such as Random Forests, interpretable models such as Decision Trees, and Support Vector Machines (SVM). We evaluate each model's performance using standard metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the coefficient of determination (R²). Our results show that Random Forests achieve the highest prediction accuracy, outperforming the other models on MSE, RMSE, and MAE. Decision Trees, despite slightly lower R² values, perform competitively and provide useful insights into the most influential variables. In contrast, SVM shows limited prediction accuracy for College English Grade 4 exam results. We highlight the relevance of these findings for educators and administrators, emphasizing the need for informed decisions when selecting predictive models and designing targeted interventions to promote student achievement. This work advances educational predictive analytics by providing empirical evidence on the efficacy of different modeling methodologies and by underscoring the importance of model interpretability in understanding the determinants of student performance. Further research is needed to refine predictive analytics methods and broaden their applicability in educational settings.
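
The abstract names the three model families and the four evaluation metrics but does not describe the implementation. The sketch below illustrates how such a comparison could be set up with scikit-learn; the data file, feature layout, and hyperparameters are assumptions made for illustration only and are not the authors' actual pipeline.

    # Minimal sketch of the comparative evaluation described above.
    # The dataset ("cet4_records.csv"), the target column "cet4_score",
    # and all hyperparameters are illustrative assumptions.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.svm import SVR
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

    # Hypothetical data: predictor columns plus a "cet4_score" target.
    df = pd.read_csv("cet4_records.csv")
    X = df.drop(columns=["cet4_score"])
    y = df["cet4_score"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    models = {
        "Random Forest": RandomForestRegressor(n_estimators=200, random_state=42),
        "Decision Tree": DecisionTreeRegressor(max_depth=6, random_state=42),
        "SVM (RBF)": SVR(kernel="rbf", C=1.0),
    }

    # Fit each model and report the four metrics used in the study.
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        mse = mean_squared_error(y_test, pred)
        print(
            f"{name}: MSE={mse:.3f}, RMSE={np.sqrt(mse):.3f}, "
            f"MAE={mean_absolute_error(y_test, pred):.3f}, "
            f"R2={r2_score(y_test, pred):.3f}"
        )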
