Published July 18, 2022 | Version v1 | Conference paper | Open Access
Does chronology matter? Sequential vs contextual approaches to knowledge tracing
Contributors
Editors:
- 1. University of Canterbury, NZ
- 2. University of Illinois Urbana–Champaign, US
Description
Deep learning architectures such as RNNs and pure attention-based models have shown state-of-the-art performance in modeling student performance, yet the sources of their predictive power remain an open question. In this paper, we investigate the predictive power of the components of LSTM- and pure attention-based architectures that model sequentiality. We design a knowledge tracing model based on a general transformer encoder architecture to explore the predictive power of sequentiality for attention-based models. For the LSTM-based Deep Knowledge Tracing (DKT) model, we manipulate the state transition coefficient matrix to turn sequential modeling on and off. All models are evaluated on four public tutoring datasets from ASSISTments and Cognitive Tutor. Experimental results show that both DKT and the pure attention-based model are overall insensitive to the removal of major sequential signals when their sequential modeling components are disabled, although the attention-based model is about four times more sensitive. Lastly, we shed light on the benefits and challenges of sequential modeling in student performance prediction.
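To make the DKT manipulation concrete, the sketch below shows one plausible way to switch sequential modeling off in an LSTM by zeroing its recurrent (hidden-to-hidden) weights. This is an illustrative PyTorch reconstruction, not the authors' code: the class name, layer sizes, and input encoding are all assumptions.

```python
# Minimal sketch (not the authors' implementation) of disabling sequential
# modeling in an LSTM-based DKT model by zeroing the recurrent weights.
import torch
import torch.nn as nn

class DKT(nn.Module):
    def __init__(self, n_skills, hidden_size=64, sequential=True):
        super().__init__()
        # Input: one-hot over (skill, correctness) pairs -> 2 * n_skills dims.
        self.lstm = nn.LSTM(2 * n_skills, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, n_skills)
        if not sequential:
            # Zero and freeze the recurrent (hidden-to-hidden) weights so the
            # gates no longer depend on the previous hidden state. Note: the
            # cell state can still carry history through the forget gate, so
            # this only approximates the paper's on/off manipulation.
            for name, p in self.lstm.named_parameters():
                if name.startswith("weight_hh"):
                    nn.init.zeros_(p)
                    p.requires_grad = False

    def forward(self, x):                  # x: (batch, seq_len, 2 * n_skills)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))  # per-skill correctness probabilities

model = DKT(n_skills=100, sequential=False)
x = torch.zeros(8, 20, 200)                # dummy batch of 8 sequences
print(model(x).shape)                      # torch.Size([8, 20, 100])
```

A faithful reproduction of the paper's switch would need to follow its exact definition of the state transition coefficient matrix, since even with zeroed recurrent weights an LSTM's cell state can retain some history.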
Files
| Name | Size | Checksum |
|---|---|---|
| 2022.EDM-posters.67.pdf | 344.8 kB | md5:51fbc82983d567dcd1fd75572a71c608 |