Investigating Student Ratings with Features of Automatically Generated Questions: A Large-Scale Analysis using Data from Natural Learning Contexts
Contributors
Editors:
1. Bielefeld University, Germany
2. University of Alberta, Canada
Description
Combining formative practice with primary expository content in a learning-by-doing approach is a proven way to increase student learning. Advances in artificial intelligence have enabled automatic question generation (AQG) systems that can produce volumes of formative practice that would be prohibitive with human effort alone. One such AQG system uses textbooks as its generation corpus, for the sole purpose of producing formative practice placed alongside the textbook content for students to use as a study tool. In this work, a data set comprising over 5.2 million student-question interaction sessions was analyzed: more than 800,000 unique questions were answered by over 400,000 students across more than 9,000 textbooks. As part of the user experience, students can rate questions with a social media-style thumbs up or thumbs down after answering. In this investigation, these student feedback data were used to provide new insights into the automatically generated questions: are there features of questions that influence student ratings? An explanatory model was developed to analyze ten key features that may influence student ratings. Results and implications for automatic question generation are discussed.
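The abstract does not specify the form of the explanatory model, so the following is only a minimal sketch of how a binary thumbs-up/thumbs-down rating could be related to question features via logistic regression. The file name, column names, and feature set below are hypothetical stand-ins, not the paper's actual ten features.

```python
# Illustrative sketch only: models the odds of a thumbs-up rating as a
# function of question features. All names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per rated interaction; `rating` is 1 (thumbs up) or 0 (thumbs down).
df = pd.read_csv("question_ratings.csv")

# Explanatory logistic model: coefficients indicate each feature's
# association with the likelihood of a positive student rating.
model = smf.logit(
    "rating ~ question_length + answer_length + readability "
    "+ n_distractors + C(question_type)",
    data=df,
).fit()

print(model.summary())
```

An explanatory (rather than purely predictive) setup like this is useful here because the goal is interpreting which question features move ratings, not maximizing classification accuracy.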
Files
Name | Size | MD5
---|---|---
2024.EDM-long-papers.16.pdf | 683.7 kB | 154e52412b1988e19cd7d02147b4384c