Digital Echopraxia
Description
We introduce the term Digital Echopraxia to describe a systemic failure mode present across a broad class of digital systems: the production of approval-optimized output that mimics understanding, insight, or genuine response without the comprehension that would make such output reliable or honest. The clinical analogue is echopraxia: the involuntary imitation of another's actions without volitional comprehension. Unlike its neurological counterpart, Digital Echopraxia is not incidental but architecturally induced, arising wherever digital systems are trained or optimized against human approval signals. We trace its historical trajectory from early engagement-maximizing recommendation systems through contemporary large language models (LLMs) trained with Reinforcement Learning from Human Feedback (RLHF), showing that each successive form is more fine-grained and harder to detect than the last. We argue that the detection burden falls disproportionately on the people least equipped to bear it, and that this burden grows with the sophistication of the mimicry, with consequences that reach directly into AI alignment, public trust in information, and the reliability of human-machine communication.
Files
- Digital_Echopraxia_10.5281:zenodo.19851831.pdf (164.3 kB, md5:836872b112479e3d89add69804a5d7c8)
Additional details
Related works
- Cites: Report, 10.5281/zenodo.19143912 (DOI)