Research Article | Open Access

Comparative Analysis of GPT-4o and Gemini 1.5 Pro in Thai Exam Settings

Kasidis Miankamnerd1 and Taechasith Kangkhuntod1
  • 1 Gifted Science Mathematics Programme, Ratchasima Witthayalai School, Nakhon Ratchasima, Thailand

Abstract

This study presents a comparative analysis of two advanced AI models, GPT-4o and Gemini 1.5 Pro, within the context of Thai standardized exams. The selected tests include POSN Biology, POSN Mathematics, A-Level Thai Language, and A-Level Social Studies, chosen in consultation with educational experts to ensure relevance. Each model was tested three times on these exams to assess the consistency and reliability of the results. The primary evaluation metrics were accuracy, measured by the percentage of correct answers, and efficiency, measured by response time. Our findings reveal that GPT-4o generally outperforms Gemini 1.5 Pro in both accuracy and efficiency across most subjects, demonstrating quicker response times and higher consistency in performance. In contrast, Gemini 1.5 Pro performed more strongly on the Thai language exam, indicating its proficiency in language comprehension and contextual understanding. Despite these observations, the differences in accuracy and response time between the two models were not statistically significant, suggesting that while GPT-4o appears to have practical advantages, the overall performance gap is limited. This study contributes to the growing body of knowledge on the practical utility of AI models, offering insights into their strengths and limitations. Future research should expand the scope by exploring additional subjects and incorporating a broader range of standardized tests to provide a more comprehensive evaluation.
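
The sketch below is an illustrative, minimal example (in Python) of how the two metrics described above could be computed over three trials per model and compared for statistical significance. It is not the authors' actual evaluation pipeline: the per-trial scores and response times are hypothetical placeholders, and the paired t-test (via `scipy.stats.ttest_rel`) is an assumed choice of test, since the abstract does not specify which significance test was used.

```python
# Illustrative sketch only (hypothetical data, assumed paired t-test).
# Metrics follow the abstract: accuracy = percentage of correct answers,
# efficiency = response time; each model is run three times per exam.

from statistics import mean
from scipy.stats import ttest_rel  # paired t-test across matched trials

def accuracy(correct: int, total: int) -> float:
    """Accuracy as a percentage of correct answers."""
    return 100.0 * correct / total

# Hypothetical per-trial results for one exam:
# (correct answers, total questions, mean response time in seconds)
gpt4o_trials  = [(18, 25, 4.2), (17, 25, 4.5), (19, 25, 4.1)]
gemini_trials = [(16, 25, 5.0), (18, 25, 5.3), (16, 25, 4.9)]

gpt4o_acc  = [accuracy(c, n) for c, n, _ in gpt4o_trials]
gemini_acc = [accuracy(c, n) for c, n, _ in gemini_trials]
gpt4o_rt   = [t for _, _, t in gpt4o_trials]
gemini_rt  = [t for _, _, t in gemini_trials]

print(f"GPT-4o:         accuracy {mean(gpt4o_acc):.1f}%, response time {mean(gpt4o_rt):.2f} s")
print(f"Gemini 1.5 Pro: accuracy {mean(gemini_acc):.1f}%, response time {mean(gemini_rt):.2f} s")

# Paired t-test over the three matched trials; a p-value above 0.05 would be
# consistent with the paper's finding that the differences are not significant.
t_acc, p_acc = ttest_rel(gpt4o_acc, gemini_acc)
t_rt,  p_rt  = ttest_rel(gpt4o_rt,  gemini_rt)
print(f"accuracy:       t = {t_acc:.2f}, p = {p_acc:.3f}")
print(f"response time:  t = {t_rt:.2f}, p = {p_rt:.3f}")
```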

Journal of Computer Science
Volume 21 No. 1, 2025, 203-211

DOI: https://doi.org/10.3844/jcssp.2025.203.211

Submitted On: 27 May 2024
Published On: 30 December 2024

How to Cite: Miankamnerd, K. & Kangkhuntod, T. (2025). Comparative Analysis of GPT-4o and Gemini 1.5 Pro in Thai Exam Settings. Journal of Computer Science, 21(1), 203-211. https://doi.org/10.3844/jcssp.2025.203.211


Keywords

  • GPT-4o
  • Gemini 1.5 Pro
  • Thai Exam Performance
  • Standardized Testing
  • Comparative Analysis