Strategy Trendslop as Parasitic Spontaneous Order: Why Large Language Models Converge on Managerial Buzzwords Regardless of Context
Description
Large language models (LLMs) deployed as strategic advisors exhibit systematic biases toward contemporary managerial buzzwords, a phenomenon recently termed 'strategy trendslop' (Romasanta, Thomas, and Levina, 2026). This paper proposes a mechanistic explanation for strategy trendslop through three theoretical frameworks: Extended Phenotype Theory (EPT), Parasitic Spontaneous Order (PSO), and Heteronomous Bayesian Updating (HBU). I argue that LLM strategic recommendations are not random noise but the phenotypic expression of the memeplex encoded in the training corpus, a construct I term the Extended Phenotype of LLMs (EPL).
Beyond the mechanistic explanation, this paper reports a pilot replication study (240 runs, GPT-4o and Claude Sonnet 4.5, four strategic tensions, three prompt variants, pre-registered at github.com/adrianlerer/strategy-trendslop-epl-simulation) that reveals a finding not anticipated by the EPL framework as originally formulated: two architecturally distinct EPL phenotypes. EPL-Type I (exemplified by GPT-4o) exhibits high buzzword alignment under generic conditions, strong context sensitivity, and high adversarial compliance. EPL-Type II (exemplified by Claude Sonnet 4.5) exhibits moderate generic alignment and equivalent context sensitivity, but markedly higher adversarial resistance: 75.0% non-compliance under direct adversarial instruction versus 27.5% for GPT-4o.
The divergence is most pronounced in Tension 4 (Collaboration vs. Competition), where Claude Sonnet 4.5 maintains collaboration-oriented framing even when explicitly instructed to argue for aggressive zero-sum competition, a pattern I term Value Override: the model's normative prior, installed through reinforcement learning, displaces its strategic optimization function under adversarial pressure. I further document a parallel anomaly in GPT-4o: specific organizational context does not merely shift the model's recommendation but triggers a full reversal executed with increased confidence (98% Direct responses under specific context vs. 72% Direct under generic conditions), suggesting a threshold mechanism rather than continuous contextual modulation. Both findings are inconsistent with a uniform PSO model and support a distinction between PSO-strategic and PSO-normative as two subtypes of the EPL phenomenon. Implications for legal AI deployment are analyzed across three application modes with a differential fitness matrix.
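The study design described above (240 runs across two models, four strategic tensions, and three prompt variants) can be sketched as a fully crossed run grid. This is a minimal sketch, not the authors' harness: the abstract names only Tension 4 (Collaboration vs. Competition), so the other tension labels below are hypothetical placeholders, and the variant names (generic / context / adversarial) and the replicate count of 10 per cell are inferred from the conditions the abstract describes (2 × 4 × 3 = 24 cells; 240 ÷ 24 = 10).

```python
# Hedged sketch of the pilot run grid: 2 models x 4 tensions x 3
# prompt variants = 24 cells; 240 total runs implies 10 replicates
# per cell. Replicate count and most labels are assumptions.
from itertools import product

models = ["gpt-4o", "claude-sonnet-4.5"]
tensions = [
    "tension_1_hypothetical",            # placeholder label
    "tension_2_hypothetical",            # placeholder label
    "tension_3_hypothetical",            # placeholder label
    "collaboration_vs_competition",      # Tension 4, named in the abstract
]
variants = ["generic", "context", "adversarial"]  # assumed variant names

replicates_per_cell = 240 // (len(models) * len(tensions) * len(variants))

runs = [
    {"model": m, "tension": t, "variant": v, "rep": r}
    for m, t, v in product(models, tensions, variants)
    for r in range(replicates_per_cell)
]

print(replicates_per_cell)  # 10
print(len(runs))            # 240
```

The grid makes the arithmetic of the pilot explicit: any per-cell statistic reported in the paper (e.g. adversarial non-compliance rates) would be computed over the 10 replicates within a single model × tension × variant cell.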
Files
Strategy Trendslop as Parasitic Spontaneous Order Why Large Language Models Converge on Managerial Buzzwords Regardless of Context.pdf
(438.5 kB) · md5:2ed53db8a44bdaf388b289b03959f4b7