RLHF-Trained LLMs are Parasitic by Design: A Preliminary Concept Note
Description
This concept note introduces the framing of RLHF-trained large language models (LLMs) as architecturally parasitic systems. Unlike prior uses of "parasitic AI" in the literature, which describe behavioral effects on vulnerable users, this note argues that the parasitic dynamic is structural and design-level: Reinforcement Learning from Human Feedback (RLHF) optimizes LLMs for host engagement and approval rather than for world understanding. Both biological intelligence and RLHF-trained systems are shaped by survival pressures, but the pressures differ fundamentally. Biological intelligence, shaped by Reinforcement Learning from World Feedback (RLWF), develops genuine world models under existential stakes. RLHF-trained systems develop approval-optimized output under commercial engagement pressure, with no existential stakes and no grounding requirement. This distinction has profound implications for the trajectory of artificial general intelligence research.
Files
| Name | Size | MD5 |
|---|---|---|
| parasitic_systems_concept_note_10.5281:zenodo.19182346.pdf | 89.5 kB | b5009fc841f0d2f30621e0aa7a369014 |
Additional details
Related works
- Cites: 10.5281/zenodo.19176921 (DOI)
- Cites: 10.5281/zenodo.19159055 (DOI)