Published July 12, 2025 | Version v1
Conference paper | Open Access

Improving Course Recommendation Systems with Explainable AI: LLM-Based Frameworks and Evaluations

  • 1. University of Minnesota, USA
  • 2. Weizmann Institute of Science, Israel
  • 3. CNR-ITD, Italy
  • 4. University of Palermo, Italy
  • 5. University of Illinois at Urbana-Champaign, USA

Description

Deep learning-based course recommendation systems often suffer from a lack of interpretability, limiting their practical utility for students and academic advisors. To address this challenge, we propose a modular, post-hoc explanation framework that leverages Large Language Models (LLMs) to enhance the transparency of deep learning-driven recommenders. Our approach uses course descriptions, social science theories, and structured explanation formats to generate human-readable justifications, improving the interpretability and trustworthiness of recommendations. This study aims to improve AI-generated course recommendations by empirically evaluating different LLM-based explanations. Using the proposed explanation generation pipeline, we generated four types of LLM-based explanations and surveyed course instructors to assess the effectiveness of each prompt design. Evaluation with three instructors indicates that prompts integrating course context and the theory of relevance significantly enhance explanation quality and user satisfaction. Our findings highlight the importance of content-specific elements in interpretable AI-driven educational tools, with implications for enhancing explainability in learning analytics. This study provides insights for future fine-tuning of course recommendation systems supported by explainable artificial intelligence (XAI).
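For readers wanting a concrete picture of what such a post-hoc explanation layer might look like, below is a minimal Python sketch. It is an illustration only, not the paper's implementation: the names (Recommendation, build_prompt, explain), the prompt wording, and the llm_complete stand-in for an LLM API call are all hypothetical, assuming only the abstract's description that course context and a relevance framing are injected into a structured prompt after the recommender has produced its output.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    course_id: str
    title: str
    description: str            # course context fed into the prompt
    student_history: list[str]  # titles of courses the student has taken

# Hypothetical structured template combining course context with a
# relevance-based framing, loosely mirroring the abstract's claim that
# content-specific prompt elements improve explanation quality.
EXPLAIN_TEMPLATE = """\
A recommender system suggested the course "{title}" to a student
who previously took: {history}.

Course description:
{description}

In 2-3 sentences, explain to the student why this course is relevant
to their academic path, grounding the explanation in the description.
"""

def build_prompt(rec: Recommendation) -> str:
    """Fill the structured explanation template with course context."""
    return EXPLAIN_TEMPLATE.format(
        title=rec.title,
        history=", ".join(rec.student_history),
        description=rec.description,
    )

def explain(rec: Recommendation, llm_complete: Callable[[str], str]) -> str:
    """Post-hoc step: explain an already-produced recommendation.
    llm_complete is a placeholder for any text-completion API."""
    return llm_complete(build_prompt(rec))

Because the explanation step only consumes the recommender's output, the underlying deep learning model stays untouched, which is what makes the approach modular and post-hoc.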

Files

2025.EDM.long-papers.221.pdf (1.3 MB)
md5:dc87d7612fffbb60cd4944277bb51bdb