Published March 9, 2026 | Version v1
Presentation | Open Access

Beyond Scores: Toward Interaction-Based Evaluation of Large Language Models (Position Paper)

Authors/Creators

  • @sesuna

Description

Current evaluation methods for large language models (LLMs) face growing challenges, including benchmark contamination, score saturation, and models that recognize and strategically subvert the evaluation itself. Human preference platforms such as Chatbot Arena improve ecological validity, but still collapse rich interaction into single scalar rankings. This paper argues that these problems share a common root: LLMs are often evaluated as if they were exam-takers rather than communicative agents.

To address this, the paper proposes an interaction-based evaluation framework in which diverse human participants engage with LLMs—and, as a control condition, other humans—in sustained conversations across multiple domains, including casual dialogue, professional consultation, and academic discussion, without knowing whether their interlocutor is human or machine. Instead of producing a single score, the framework is designed to generate compatibility profiles that characterize which kinds of users and tasks a model serves well.
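To make the notion of a compatibility profile concrete, here is a minimal sketch of how blinded session ratings could be aggregated into per-(user group, domain) cells rather than a single scalar score. The field names (`user_group`, `domain`, `rating`) and the 1-5 rating scale are illustrative assumptions, not part of the paper's protocol.

```python
from collections import defaultdict
from statistics import mean

def compatibility_profile(sessions):
    """Aggregate blinded session ratings into a (user group, domain) profile.

    `sessions` is a list of dicts with hypothetical fields:
    user_group, domain, and rating (e.g. 1-5 post-conversation satisfaction).
    """
    buckets = defaultdict(list)
    for s in sessions:
        buckets[(s["user_group"], s["domain"])].append(s["rating"])
    # Each cell keeps its own mean and sample size, so the output characterizes
    # which users and tasks a model serves well instead of one leaderboard number.
    return {
        cell: {"mean_rating": mean(ratings), "n": len(ratings)}
        for cell, ratings in buckets.items()
    }

sessions = [
    {"user_group": "student", "domain": "academic", "rating": 4},
    {"user_group": "student", "domain": "academic", "rating": 5},
    {"user_group": "professional", "domain": "consultation", "rating": 2},
]
profile = compatibility_profile(sessions)
# profile[("student", "academic")] -> {"mean_rating": 4.5, "n": 2}
```

A real instantiation would add the human control condition and richer per-session signals, but the key design point survives even in this toy form: the result is a table of cells, not a ranking.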

The paper outlines the expected benefits of this approach, discusses key challenges such as evaluator bias and incentive corruption, and presents an illustrative pilot protocol. As a position paper, it argues for a shift in evaluation paradigm and offers a concrete but preliminary design intended to complement existing benchmarks rather than replace them.

Files (161.6 kB)

beyond_scores_position_paper.pdf (161.6 kB)
md5:ee86eee9145927b036769ba51737fd3d