Published February 21, 2024 | Version v1 | Conference paper | Open access
Quantifying trust towards LLM-based chatbots: A mixed-method approach
Creators
- Belošević, Milena (German Linguistics/Digital Linguistics Lab, Faculty of Linguistics and Literary Studies, Bielefeld University, Germany)
Contributors
Data managers:
- Universität Trier
- Universität Luxemburg
- Universität Passau
- Digital Humanities im deutschsprachigen Raum
- Universität zu Köln
Description
This paper investigates trustworthiness in the public debate about the reliability of large language model (LLM)-based chatbots, such as ChatGPT, by applying a mixed-method approach to German data from the Digital Dictionary of the German Language (DWDS). The paper aims to account for how trustworthiness in human-machine interaction can be quantified and how a domain-independent set of trust-relevant linguistic cues can be obtained. We use manual annotation in MAXQDA to identify trust-related linguistic markers and the level of trustworthiness they indicate. Afterwards, given that trustworthiness is a complex trust-related phenomenon with cognitive and affective properties, we explore the correlation between the level of trustworthiness and sentiment scores for trust-related markers obtained by human ratings, machine learning, and lexicon-based sentiment models. The results indicate a high correlation between positive sentiment scores obtained by sentiment models and trustworthiness levels obtained by human annotation. In this regard, sentiment analysis also provides evidence for the quantification of emotional aspects of trust.
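As a minimal illustration of the kind of correlation analysis described above, the sketch below compares manually annotated trustworthiness levels of trust-related markers with sentiment scores from a sentiment model. The marker list, annotation scale, and scores are invented placeholders and do not come from the paper or the DWDS data.

```python
# Illustrative sketch (not the authors' pipeline): correlating annotated
# trustworthiness levels of trust-related markers with sentiment scores.
from scipy.stats import spearmanr

# Hypothetical example data: each trust-related marker carries a manually
# annotated trustworthiness level (1 = low, 5 = high) and a sentiment score
# (-1 to 1) as it might be produced by a lexicon-based or ML sentiment model.
markers = [
    {"marker": "verlässlich",    "trustworthiness": 5, "sentiment": 0.82},
    {"marker": "hilfreich",      "trustworthiness": 4, "sentiment": 0.64},
    {"marker": "fehleranfällig", "trustworthiness": 2, "sentiment": -0.48},
    {"marker": "halluziniert",   "trustworthiness": 1, "sentiment": -0.71},
    {"marker": "transparent",    "trustworthiness": 4, "sentiment": 0.55},
]

levels = [m["trustworthiness"] for m in markers]
scores = [m["sentiment"] for m in markers]

# Spearman rank correlation is a common choice for ordinal annotation scales.
rho, p_value = spearmanr(levels, scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

A strong positive rho on real data would correspond to the reported finding that positive sentiment scores align with high human-annotated trustworthiness.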
Files (76.4 kB)
- VO07_BELOSEVIC_Milena_Quantifying_trust_towards_LLM_based_chatbot.pdf
- md5:a4f8218fb53ed8c52332c38fd53ea7ae (28.6 kB)
- md5:c9dbb822df3cdc15cc0c318b5a3242e4 (47.8 kB)
Additional details
Related works
- Is part of
- Book: 10.5281/zenodo.10686564 (DOI)