Published January 29, 2026 | Version v1
Preprint (Open Access)

The Hidden Cost of Tokenization: Why (most) Non-English Speakers Pay More for Less

  • 1. Humboldt-Universität zu Berlin
  • 2. Weizenbaum Institute
  • 3. Technische Universität Berlin
  • 4. Zuse Institute Berlin

Description

Large Language Models are often celebrated as democratizing technologies, yet their fundamental infrastructure embeds systematic bias against most of the world’s languages. This paper argues that tokenization, the process of converting text into model-readable units, creates profound and largely invisible inequalities. Drawing on the Sapir-Whorf hypothesis, we introduce the concept of cognitive friction to explain how misaligned tokenization degrades not merely efficiency but the fundamental quality of language understanding. We trace the cascading consequences of this inequity: a language tax that makes LLM-based services systematically more expensive when used in non-English languages, degraded reasoning performance as sequence lengths inflate, and disproportionate environmental costs borne by users of inefficiently tokenized languages. Current benchmarks and “multilingual” marketing claims obscure these disparities, creating an illusion of parity that does not exist. We call for transparency in tokenizer efficiency reporting, research into language-adaptive and equitable tokenization strategies, investment in language-specific foundation models, and policy frameworks that treat linguistic equity as a first-order concern. Tokenization is an active design decision that determines who benefits from AI and who bears its hidden costs.
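The language tax can be made concrete with a simple token-count comparison. The following minimal Python sketch is illustrative and not drawn from the paper: it assumes the open-source tiktoken package, and the sample sentences are hypothetical stand-ins for "comparable content". It counts how many tokens the cl100k_base encoding spends on roughly equivalent sentences in several languages; a ratio above 1.0 means a speaker of that language pays for more tokens, and thus more compute, than an English speaker would for the same message.

    # Illustrative sketch (not the paper's measurements): compare how many
    # tokens a widely used tokenizer spends on roughly equivalent sentences.
    # Assumes the `tiktoken` package is installed; sentences are examples.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    samples = {
        "English": "The quick brown fox jumps over the lazy dog.",
        "German": "Der schnelle braune Fuchs springt über den faulen Hund.",
        "Greek": "Η γρήγορη καφέ αλεπού πηδάει πάνω από το τεμπέλικο σκυλί.",
        "Hindi": "तेज़ भूरी लोमड़ी आलसी कुत्ते के ऊपर कूद जाती है।",
    }

    baseline = len(enc.encode(samples["English"]))
    for language, text in samples.items():
        n_tokens = len(enc.encode(text))
        # Ratio > 1.0: this language needs more tokens (hence more cost,
        # longer sequences, and more energy) than English for similar content.
        print(f"{language:>8}: {n_tokens:3d} tokens ({n_tokens / baseline:.1f}x English)")

Because per-token pricing and context-window limits apply uniformly regardless of how efficiently a language is tokenized, any ratio above 1.0 translates directly into higher cost and less usable context for that language's speakers.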

Files

Tokenization_preprint.pdf (1.1 MB)
md5:ca10b075bf9d731ad6256db19f8e60c8