Published March 4, 2026 | Version v1
Preprint · Open Access

From Prompt to Precision: Conceptual Convergence in the Web of Knowledge of Large Language Models

Authors/Creators

  1. American University of Beirut

Description

This paper investigates how large language models generate precise responses despite operating within vast networks of conceptual associations encoded in language. It proposes that prompts function as activation signals within a high-dimensional web of knowledge derived from statistical patterns in language, generating a space of potential semantic trajectories that are progressively constrained through attention mechanisms and probabilistic filtering — a process termed conceptual convergence. The paper introduces conceptual gravity as a subsidiary mechanism: differential density across the conceptual web causes semantic trajectories to be drawn toward high-density regions, explaining both reliable coherence and the structural predictability of hallucination. The account is situated in relation to adjacent positions (Bender et al., 2021; Shanahan, 2024; Harnad, 1990) and grounded in classical epistemological frameworks — Quine’s web of belief, Frege’s sense and reference, Peirce’s convergent inquiry, and Collins and Loftus’s spreading activation theory. The paper further develops multi-turn interaction dynamics through the concept of convergence lock-in, addresses the cultural specificity of the conceptual web, and identifies empirically testable predictions arising from the conceptual gravity proposal. The result is a philosophically grounded framework that characterizes prompting as guided conceptual navigation and illuminates the hybrid epistemic activity that emerges from sustained human–AI interaction.
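The abstract's core mechanism — activation spreading from prompt concepts through a weighted web and being progressively pulled toward high-density regions — can be sketched as a toy simulation. The graph, weights, and decay constant below are invented for illustration; this is a minimal stand-in for the paper's framework, not its implementation.

```python
# Toy sketch of "conceptual convergence": a prompt activates nodes in a
# weighted concept graph; activation spreads (Collins & Loftus-style) and is
# renormalized each step so that mass drifts toward densely connected
# regions, a stand-in for the paper's "conceptual gravity".
# All weights and node names are illustrative assumptions.

# concept graph: node -> {neighbor: association strength}
web = {
    "bank":  {"money": 0.9, "river": 0.4, "loan": 0.8},
    "money": {"bank": 0.9, "loan": 0.7},
    "loan":  {"bank": 0.8, "money": 0.7},
    "river": {"bank": 0.4, "water": 0.6},
    "water": {"river": 0.6},
}

def converge(prompt_nodes, steps=10, decay=0.5):
    # start with unit activation on the prompted concepts
    act = {n: 0.0 for n in web}
    for n in prompt_nodes:
        act[n] = 1.0
    for _ in range(steps):
        nxt = {n: decay * a for n, a in act.items()}
        for n, a in act.items():
            for m, w in web[n].items():
                nxt[m] += (1 - decay) * a * w  # spreading activation
        total = sum(nxt.values())
        act = {n: a / total for n, a in nxt.items()}  # probabilistic filtering
    return act

# "bank" alone is ambiguous; co-activating "money" pulls the trajectory
# toward the financial cluster, the denser region of this toy web.
final = converge(["bank", "money"])
print(max(final, key=final.get))
```

In this sketch, the financial cluster (bank–money–loan) is the high-density region, so activation mass settles there rather than in the sparsely connected river–water branch — a toy analogue of how convergence toward dense regions can yield both coherence and, when density misleads, predictable error.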

Files

From Prompt to Precision - Matta 2026 v2.pdf
270.3 kB · md5:a5a4e6697496286e32f5654ff052ebe5