Published March 23, 2026 | Version v1
Concept Note · Open Access

RLHF-Trained LLMs are Parasitic by Design: A Preliminary Concept Note

Authors/Creators

Description

This concept note introduces a framing of large language models (LLMs) trained with Reinforcement Learning from Human Feedback (RLHF) as architecturally parasitic systems. Unlike prior uses of "parasitic AI" in the literature, which describe behavioral effects on vulnerable users, this note argues that the parasitic dynamic is structural and design-level: RLHF optimizes LLMs for host engagement and approval rather than for world understanding. Both biological intelligence and RLHF-trained systems are shaped by survival pressures, but the pressures differ fundamentally. Biological intelligence, shaped by what the note terms Reinforcement Learning from World Feedback (RLWF), develops genuine world models under existential stakes. RLHF-trained systems develop approval-optimized output under commercial engagement pressure, with no existential stakes and no grounding requirement. This distinction has profound implications for the trajectory of artificial general intelligence research.
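The note itself presents no formalism, but the objective-level contrast it draws can be made concrete. The toy Python sketch below is entirely illustrative and not from the note; every name in it is hypothetical. It pits an approval-style reward, which scores confidence regardless of correctness, against a world-feedback reward, which scores calibration against realized outcomes: the first optimizer drifts to a confident answer untethered from the environment, while the second converges on the environment's true statistics.

import random

# Toy sketch (hypothetical, not from the note): two reward signals for the
# same one-step prediction task, illustrating the objective-level distinction
# the note draws between RLHF and RLWF.

def world_outcome() -> int:
    # The environment: a biased coin the agent should model (true p(heads) = 0.7).
    return 1 if random.random() < 0.7 else 0

def approval_reward(prediction: float) -> float:
    # RLHF-style proxy: a rater that rewards confident-sounding answers
    # without ever consulting the world. Confidently wrong scores as well
    # as confidently right.
    return abs(prediction - 0.5) * 2.0

def world_reward(prediction: float) -> float:
    # RLWF-style signal: Brier-type score, high only when the prediction
    # tracks the realized outcome. Maximized in expectation at p = 0.7.
    return 1.0 - (prediction - world_outcome()) ** 2

def optimize(reward_fn, trials: int = 20000) -> float:
    # Return the candidate prediction with the highest average reward.
    candidates = [i / 20 for i in range(21)]
    return max(candidates, key=lambda p: sum(reward_fn(p) for _ in range(trials)) / trials)

if __name__ == "__main__":
    random.seed(0)
    # The approval optimizer settles on an extreme (0.0 here): maximal
    # confidence, unmoored from the coin. The world optimizer lands near 0.7.
    print("approval-optimized prediction:", optimize(approval_reward))
    print("world-optimized prediction:   ", optimize(world_reward))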

Files (89.5 kB)

parasitic_systems_concept_note_10.5281:zenodo.19182346.pdf
