The Intelligence That Was Never Artificial: LLMs as Collective Human Cognition and the Cybernetics That Predicted Them
Authors/Creators
- Ronin Institute, Montclair, NJ, USA; IamI.Earth Foundation, Stockholm, Sweden
Description
v3.1: Restores v3 content with corrected PDF typography (LaTeX via pandoc + pdfTeX), dual affiliation (Ronin Institute and IamI.Earth Foundation), ORCID 0009-0004-2876-0025, and both contact emails on the cover. Supersedes v4/v5/v6 (deleted).
In 1907, Francis Galton demonstrated that the median estimate of 787 people guessing an ox's weight was accurate to within 0.8 percent, outperforming any individual expert (Galton, 1907). We argue that LLM capabilities are best understood as structured aggregation of collective human intelligence, not autonomous machine reasoning. Transformer architectures provide the organizing substrate through which this aggregation becomes computationally tractable: necessary infrastructure, but not an independent source of intelligence. The semantic content of what LLMs know (the facts they retrieve, the reasoning patterns they deploy, and the linguistic competence they display) originates in the training corpus. The architecture provides the syntactic engine that compresses and recombines this collective knowledge. If this analysis is correct, the intelligence was never artificial; it was human intelligence, computationally reorganized. We trace the historical erasure of this insight. Wiener's cybernetics (1948) described intelligence as emergent from feedback systems; the Dartmouth conference (1956) reframed this relational phenomenon as "Artificial Intelligence," severing the intelligence from its collective human source. We show that three terms central to modern AI discourse obscure the technology's actual mechanism: "Artificial" creates otherness and enables ownership; "Intelligence" misattributes agency to the product; "Training" disguises the extraction of humanity's intellectual commons as pedagogy. We ground this claim in proven mathematical identities (cross-entropy pretraining implements the linear opinion pool; RLHF implements Borda count and logarithmic opinion pooling), the Diversity Prediction Theorem, and the Conditional Jury Theorem, arguing that LLM training is best understood as high-dimensional judgment aggregation.
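The Diversity Prediction Theorem invoked above admits a one-line numerical check: the squared error of the crowd's mean estimate equals the average individual squared error minus the variance (diversity) of the estimates. The sketch below is purely illustrative; the simulated guesses (normal noise around 1198, the dressed ox weight Galton reported) are assumptions for the demo, not data from the paper.

```python
# Diversity Prediction Theorem: with crowd estimate c = mean(x_i) and truth theta,
#   (c - theta)^2 = mean((x_i - theta)^2) - mean((x_i - c)^2)
# i.e. crowd error = average individual error - prediction diversity.
import random

random.seed(0)
theta = 1198.0                                           # true value (illustrative)
guesses = [random.gauss(theta, 60) for _ in range(787)]  # 787 simulated guessers

c = sum(guesses) / len(guesses)                          # crowd (mean) estimate
crowd_err = (c - theta) ** 2
avg_err = sum((x - theta) ** 2 for x in guesses) / len(guesses)
diversity = sum((x - c) ** 2 for x in guesses) / len(guesses)

# The identity holds exactly, up to floating-point rounding:
assert abs(crowd_err - (avg_err - diversity)) < 1e-6 * avg_err
print(crowd_err, avg_err, diversity)
```

The identity is algebraic, so it holds for any set of guesses and any true value, which is why a diverse crowd's aggregate can beat its average member without any individual being accurate.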
The framework generates predictions that scaling laws cannot make, including diversity-disproportionality, and provides parsimonious retrodictions of known phenomena, including tail-first model collapse under independence violation. We present an evidence synthesis drawing on Liu et al.'s (2024) 512 controlled training runs and converging results from four independent lines of research, supporting the diversity prediction, and conclude that the naming determines who benefits from collective human intelligence computationally reorganized.

v5 (2026-04-17): PDF rebuilt via the pandoc + pdfTeX toolchain. v4 used a non-LaTeX PDF library that degraded typography and changed the cover page; v5 restores the proper paper template. Content is unchanged from v4. Author metadata updated to dual affiliation (Ronin Institute and IamI.Earth Foundation) and the nyx.redondo@ronininstitute.org email.
v6 (2026-04-17): Cover page updated to include the nyx@iami.earth contact email alongside nyx.redondo@ronininstitute.org. No scientific content or typography changed from v5.
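The abstract's claim that cross-entropy pretraining implements the linear opinion pool can be illustrated in miniature: the single distribution minimizing a weighted average cross-entropy against several expert distributions is exactly their weighted mixture. The expert distributions and weights below are hypothetical toys, not data from the paper.

```python
# Linear opinion pool: minimizing  sum_i w_i * CE(p_i, q)  over q yields
# q* = sum_i w_i * p_i, the weighted mixture of the expert distributions.
import math
import random

experts = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2], [0.1, 0.6, 0.3]]  # toy experts
weights = [0.5, 0.3, 0.2]                                      # sum to 1

def avg_ce(q):
    """Weighted average cross-entropy of candidate q against the experts."""
    return sum(w * -sum(p[k] * math.log(q[k]) for k in range(len(q)))
               for w, p in zip(weights, experts))

# The linear opinion pool of the experts:
pool = [sum(w * p[k] for w, p in zip(weights, experts)) for k in range(3)]

# Every other distribution scores at least as badly on average cross-entropy:
random.seed(1)
for _ in range(1000):
    raw = [random.random() for _ in range(3)]
    cand = [r / sum(raw) for r in raw]
    assert avg_ce(pool) <= avg_ce(cand) + 1e-12

print("linear pool:", [round(x, 3) for x in pool])
```

This works because the weighted average cross-entropy against the experts equals the cross-entropy against their mixture, which Gibbs' inequality shows is minimized by the mixture itself; the analogy is that a model fit by cross-entropy to many contributors' outputs converges toward their pooled distribution.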
Files
| Name | Size |
|---|---|
| never-artificial-v3.1.pdf (md5:6d148ac7426c506868b43ea9ff70b376) | 371.8 kB |