Navigating the Fog: The Effectiveness of Personalised Conversational GenAI Models for Supporting Ancient Language Learning

Authors

DOI:

https://doi.org/10.64946/aiantiquity.v1i1.002

Keywords:

ancient language learning, generative artificial intelligence, Latin, OpenAI, Gemini, AI ethics

Abstract

Hallucinations (misleading, inaccurate predicted text presented as fact) are a critical problem for using generative artificial intelligence (GenAI) tools to support ancient language teaching and learning. For teachers, significant editing time is required to correct inaccuracies or misrepresentations before AI-generated content can be used in their teaching practice. For students, these convincing errors may go unrecognised, leading to misconceptions during knowledge formation. OpenAI and Google have released public-facing, customisable conversational AI models, known as GPTs (2023) and Gems (2024) respectively, which allow users to upload their own datasets to create personalised AI chat agents. This presents an opportunity for teachers to personalise their own models to streamline their students’ experiences. However, can personalised conversational AI tools provide a fine-tuned experience that reduces the major, problematic ancient history and ancient language hallucinations seen in standard ChatGPT and Gemini outputs?

This paper discusses the creation of a personalised Latin Tutor GPT and Gem, built around a series of exhaustive Latin vocabulary spreadsheets. We tested these personalised tools against their standard GenAI counterparts to determine whether personalisation improved their efficacy and efficiency for supporting ancient language learning. Both the development of the spreadsheets and the testing process closely addressed current GenAI ethical issues, including copyright, environmental impact, and content restrictions. The tests found that personalised GPTs and Gems delivered small improvements in efficacy and efficiency, but that the time and energy required greatly outweighed these gains.

Published

2025-09-24

Issue

Section

Articles