Subtextual Prompting: How Implicit Communication Increases Effective Context Utilization in Large Language Models and Why AI Amplifies Existing Capability Rather Than Democratizing It
Description
The prevailing narrative frames generative AI as a democratizing force, a technology that levels the playing field by giving everyone access to expert-level writing, coding, and research capabilities. We challenge this narrative with a two-part thesis. First, we introduce the concept of subtextual prompting, the use of implicit, high-entropy communication to direct large language models (LLMs), and argue that it achieves dramatically higher effective context utilization per token than explicit, literal prompting. Second, we argue that the ability to communicate subtextually is a preexisting cognitive skill unevenly distributed in the population, which means AI functions as a capability amplifier rather than an equalizer: it makes the skilled more productive while making the unskilled more visibly deficient. We ground these claims in Gricean pragmatics, Shannon information theory, and emerging empirical evidence from AI-assisted scientific publishing and software development, including findings that AI-adopting researchers publish 3× more papers and receive 5× more citations (Evans et al., 2025, Nature), while rejection rates on preprint servers have tripled against a 376% increase in prolific new authors, revealing a bifurcation rather than a uniform quality decline. We propose a formal information-theoretic framework for measuring subtextual compression in human-AI interaction and derive three testable predictions about the relationship between prompting skill, model capability, and output quality. We conclude that AI did not change the quality distribution of researchers; it changed the clock speed, making the true capability distribution visible for the first time.
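The abstract's notions of "effective context utilization per token" and "subtextual compression" admit a direct Shannon-style reading. The sketch below is one possible formalization, not the paper's own framework; every symbol in it (the directive D, the shared context C, the prompt length T_p, and the ratio ρ) is an assumption introduced here for illustration.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative formalization only; notation is assumed, not the paper's.
% Effective context utilization: task-relevant bits conveyed per prompt
% token, where D is the directive the model must recover (a random
% variable over intents) and T_p is the prompt length in tokens.
\[
  \rho = \frac{H(D)}{T_p}
\]
% If prompter and model share pragmatic context C (Gricean common
% ground), an implicit prompt need only encode the residual
% uncertainty H(D \mid C) rather than the full H(D):
\[
  T_{\mathrm{subtext}} \propto H(D \mid C) \le H(D) \propto T_{\mathrm{literal}}
\]
% The token savings equal the mutual information carried by subtext:
\[
  I(D; C) = H(D) - H(D \mid C)
\]
\end{document}
```

On this reading, the first thesis is the claim that ρ for subtextual prompts substantially exceeds ρ for literal prompts, and the second thesis follows because the usable size of I(D; C) depends on a context-modeling skill the prompter must already possess.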
Files

| Name | Size | MD5 |
|---|---|---|
| subtextual_prompting_paper.pdf | 175.1 kB | 440657b1265be253042ecc18149e03a0 |