Published January 8, 2026 | Version 1.0.0 | Publication | Open
Language-Dependent Communication Strategies in Multilingual Large Language Models: A Comparative Analysis of Russian and English Response Patterns in Mistral AI
Description
This study investigates whether language selection in multilingual large language models (LLMs) affects output quality beyond translation accuracy. Through controlled testing of 21 semantically equivalent query pairs in Russian and English using Mistral AI's latest model, we found that language choice fundamentally alters communication strategy, information architecture, and user optimization rather than simply translating content. Russian responses averaged 21% shorter (527 vs. 667 words) with 15% higher information density, optimizing for expert-level efficiency through hierarchical structure and minimal redundancy. English responses prioritized comprehensive coverage with extensive scaffolding for learners. Processing time paradoxically favored English (20.6 s) over Russian (28.2 s) despite the shorter Russian outputs, suggesting higher tokenization overhead for Russian text. Our findings challenge the "English-first" assumption in AI deployment and demonstrate that LLMs encode language-specific rhetorical conventions beyond lexical translation. We propose a framework for matching language selection to user expertise level and task context, with implications for multilingual AI system design, educational technology, and cross-cultural human-computer interaction.
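The paired-comparison measurement described above (mean word counts per language and the relative length difference between Russian and English responses) can be sketched in Python. This is a minimal illustration with stand-in data, not the study's actual responses or pipeline; the function names and the whitespace-based word-count proxy are assumptions for the example.

```python
# Hypothetical sketch of the Russian/English paired-response comparison.
# Sample texts below are illustrative stand-ins, not the study's data.
from statistics import mean


def word_count(text: str) -> int:
    """Count whitespace-separated tokens as a simple word-count proxy."""
    return len(text.split())


def compare_pairs(pairs):
    """Given (russian, english) response pairs, return mean word counts
    and the relative length of Russian responses vs. English ones."""
    ru_counts = [word_count(ru) for ru, _ in pairs]
    en_counts = [word_count(en) for _, en in pairs]
    ru_mean, en_mean = mean(ru_counts), mean(en_counts)
    return {
        "ru_mean_words": ru_mean,
        "en_mean_words": en_mean,
        # Negative value means Russian responses are shorter on average.
        "ru_vs_en_pct": 100 * (ru_mean - en_mean) / en_mean,
    }


sample = [
    ("Краткий ответ по сути.",
     "A longer answer that spells out each step in full."),
    ("Два слова.",
     "Four words are here now."),
]
stats = compare_pairs(sample)
print(stats)
```

With a real corpus, `pairs` would hold the 21 query pairs' responses, and per-response API latencies could be averaged the same way to reproduce the processing-time comparison.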
Files
Abstract.pdf
Additional details
Additional titles
- Alternative title (English)
- Does Language Choice Affect AI Response Quality?
- Alternative title (English)
- LLM Language Performance Research
Dates
- Created
- 2026-01-08 (Research Registered)
Software
- Repository URL
- https://github.com/nikitaycs50/LLM-Language-Performance-Research
- Programming language
- Python
- Development Status
- Active