TY - JOUR
AU - Miankamnerd, Kasidis
AU - Kangkhuntod, Taechasith
PY - 2025
TI - Comparative Analysis of GPT-4o and Gemini 1.5 Pro in Thai Exam Settings
JF - Journal of Computer Science
VL - 21
IS - 1
SP - 203
EP - 211
DO - 10.3844/jcssp.2025.203.211
UR - https://thescipub.com/abstract/jcssp.2025.203.211
AB - This study presents a comparative analysis of two advanced AI models, GPT-4o and Gemini 1.5 Pro, in the context of Thai standardized exams. The selected tests include POSN Biology, POSN Mathematics, A-Level Thai Language, and A-Level Social Studies, chosen in consultation with educational experts to ensure relevance. Each model was tested three times on each exam to assess the consistency and reliability of the results. The primary evaluation metrics were accuracy, measured as the percentage of correct answers, and efficiency, measured as response time. Our findings reveal that GPT-4o generally outperforms Gemini 1.5 Pro in both accuracy and efficiency across most subjects, demonstrating quicker response times and more consistent performance. Conversely, Gemini 1.5 Pro performed more strongly on the Thai Language exam, indicating its proficiency in language comprehension and contextual understanding. However, the differences in accuracy and response time between the two models were not statistically significant, suggesting that while GPT-4o appears to have practical advantages, the overall performance gap is limited. This study contributes to the growing body of knowledge on the practical utility of AI models, offering insights into their strengths and limitations. Future research should expand the scope by exploring additional subjects and incorporating a broader range of standardized tests to provide a more comprehensive evaluation.
ER -