Published July 7, 2025 | Version v1 | Conference paper | Open Access
Optimizing Directional Stimulus Prompting Through Human Feedback: A Structured Approach to AI-Powered Scaffolding
Description
Abstract
This paper introduces Optimizing Directional Stimulus Prompting Through Human Feedback (oDSP-HF), a structured approach to AI-powered scaffolding that enhances LLM-driven educational support. By refining Directional Stimulus Prompting (DSP) through user interaction, oDSP-HF enables LLMs to generate adaptive, reflective hints rather than direct answers. The approach was applied in the system prompts of two AI agents, Aiza and Alice, which support Academic English writing and computational thinking respectively, demonstrating its practical application in education.
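To illustrate the idea behind the abstract, the sketch below shows how a directional stimulus (a short list of hint keywords) can be embedded in a system prompt so that an LLM is steered toward reflective hints rather than direct answers. This is a minimal illustration only: the function name, the persona, and the stimulus keywords are assumptions for demonstration, not the paper's actual implementation.

```python
# Illustrative sketch of Directional Stimulus Prompting (DSP) for
# educational scaffolding. All identifiers and wording here are
# hypothetical, not taken from the oDSP-HF system described in the paper.

def build_scaffolding_prompt(student_work: str, stimuli: list[str]) -> str:
    """Compose a system prompt that steers an LLM toward reflective
    hints instead of direct answers, using a directional stimulus
    (hint keywords that could be refined through human feedback)."""
    stimulus = "; ".join(stimuli)
    return (
        "You are a writing tutor. Do NOT give direct answers or "
        "corrected text. Instead, ask guiding questions that lead the "
        "student to revise their own work.\n"
        f"Hint keywords (directional stimulus): {stimulus}\n"
        f"Student draft:\n{student_work}"
    )

prompt = build_scaffolding_prompt(
    "The results was significant and prove our hypothesis.",
    ["subject-verb agreement", "hedging claims"],
)
print(prompt)
```

In oDSP-HF as described, such stimuli would be iteratively optimized through user interaction rather than hand-written once.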
Files

| Name | Size |
|---|---|
| Optimizing Directional Stimulus Prompting Through Human Feedback.pdf (md5:df58b138930bf46122397a0fb591c1e0) | 475.7 kB |
Additional details
Related works
- Is published in
- Conference proceeding: 10.25442/hku.29476520 (DOI)
Dates
- Available: 2025-07-08 (Online)