Published September 4, 2025 | Version 1.0 | Preprint | Open
Modeling and Inference of Human-Authored Multiple-Choice Sequences: A Teacher-Specific, Anchor-Conditioned Framework
Description
This preprint presents a reproducible framework for modeling and inferring answer sequences in human-authored multiple-choice examinations. The approach combines positional priors, sequential pattern mining, conservative augmentation, ensemble learning, anchor-conditioned inference, and historical-similarity corrections. The study demonstrates that instructor-specific answer sequences deviate from randomness in measurable ways and that such structure can be exploited for prediction, calibration, and uncertainty quantification. The manuscript includes formal definitions, algorithmic details, evaluation protocols, and an extensive discussion of applicability and limitations.
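As a concrete illustration of two components named above, the sketch below estimates positional priors from a hypothetical instructor's past answer keys and performs a simple anchor-conditioned prediction. It is not code from the manuscript: the function names, the first-order Markov transition model (a minimal stand-in for the paper's sequential pattern mining), the add-one smoothing, and the toy data are all illustrative assumptions.

```python
# Minimal sketch, assuming per-position answer frequencies as "positional
# priors" and a first-order transition model for "anchor-conditioned"
# inference. All names and data here are hypothetical.
from collections import Counter

OPTIONS = ["A", "B", "C", "D"]

def positional_priors(exams, n_questions):
    """Per-position answer distributions from past exams,
    with add-one smoothing so unseen options keep nonzero mass."""
    priors = []
    for pos in range(n_questions):
        counts = Counter(exam[pos] for exam in exams)
        total = len(exams) + len(OPTIONS)
        priors.append({o: (counts[o] + 1) / total for o in OPTIONS})
    return priors

def transition_matrix(exams):
    """Smoothed first-order transition probabilities between
    consecutive answers across the historical exams."""
    counts = {a: Counter() for a in OPTIONS}
    for exam in exams:
        for prev, nxt in zip(exam, exam[1:]):
            counts[prev][nxt] += 1
    return {a: {b: (counts[a][b] + 1) / (sum(counts[a].values()) + len(OPTIONS))
                for b in OPTIONS} for a in OPTIONS}

def anchor_conditioned_predict(priors, trans, anchors, n_questions):
    """Predict each position; where an anchor (known answer) is fixed,
    reweight the following position's prior by the transition row."""
    preds = []
    for pos in range(n_questions):
        if pos in anchors:                 # anchored position is known
            preds.append(anchors[pos])
            continue
        scores = dict(priors[pos])
        if pos - 1 in anchors:             # condition on preceding anchor
            row = trans[anchors[pos - 1]]
            scores = {o: scores[o] * row[o] for o in OPTIONS}
        preds.append(max(scores, key=scores.get))
    return preds

# Toy answer keys from one hypothetical instructor.
history = [list("ABCDABCDAB"), list("ABDCABCDBB"), list("ACBDABCDAB")]
priors = positional_priors(history, 10)
trans = transition_matrix(history)
print(anchor_conditioned_predict(priors, trans, anchors={0: "A", 4: "A"}, n_questions=10))
```

Conditioning only on the immediately preceding anchor is the simplest possible case; the paper's framework presumably propagates anchor information further and combines it with ensembling and historical-similarity corrections, which this sketch omits.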
Files

| Name | Size |
|---|---|
| Research.pdf (md5:9384be48d60736915f5914e6d7c62a19) | 294.8 kB |