Published February 21, 2024 | Version v1
Conference paper · Open Access

Quantifying trust towards LLM-based chatbots: A mixed-method approach

  • 1. Bielefeld University, German Linguistics/Digital Linguistics Lab, Faculty of Linguistics and Literary Studies, Germany
  • 1. Universität Trier
  • 2. Universität Luxemburg
  • 3. Universität Passau
  • 4. Digital Humanities im deutschsprachigen Raum
  • 5. Universität zu Köln

Description

This paper investigates trustworthiness in the public debate about the reliability of large language model (LLM)-based chatbots, such as ChatGPT, by applying a mixed-method approach to German data from the Digital Dictionary of the German Language (DWDS). The paper aims to show how trustworthiness in human-machine interaction can be quantified and how a domain-independent set of trust-relevant linguistic cues can be obtained. We use manual annotation in MAXQDA to identify trust-related linguistic markers and the level of trustworthiness they indicate. Given that trustworthiness is a complex trust-related phenomenon with both cognitive and affective properties, we then explore the correlation between the level of trustworthiness and the sentiment scores for trust-related markers obtained from human ratings, machine-learning models, and lexicon-based sentiment models. The results indicate a high correlation between the positive sentiment scores produced by the sentiment models and the trustworthiness levels obtained by human annotation. In this respect, sentiment analysis also provides evidence for quantifying the emotional aspects of trust.
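The correlation step described above can be sketched as follows. This is an illustrative example only, not the authors' actual pipeline or data: the trustworthiness levels and sentiment scores below are invented, and Spearman's rank correlation is assumed here because annotation levels are typically ordinal.

```python
# Hedged sketch: correlating human-annotated trustworthiness levels
# with sentiment scores for the same trust-related markers.
# All data values are hypothetical, for illustration only.
from scipy.stats import spearmanr

# Hypothetical ordinal trustworthiness levels from manual annotation (1 = low, 4 = high)
trust_levels = [1, 2, 2, 3, 3, 4]

# Hypothetical sentiment scores for the same markers (e.g. from a lexicon-based model)
sentiment_scores = [0.1, 0.3, 0.2, 0.6, 0.5, 0.9]

# Spearman's rank correlation is robust for ordinal annotation scales
rho, p_value = spearmanr(trust_levels, sentiment_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
```

A high positive rho on such paired data would mirror the paper's finding that positive sentiment scores track human-annotated trustworthiness.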

Files

VO07_BELOSEVIC_Milena_Quantifying_trust_towards_LLM_based_chatbot.pdf

Additional details

Related works

Is part of
Book: 10.5281/zenodo.10686564 (DOI)