Published July 17, 2024 | Version v1
Dataset | Open

Are Large Language Models Reliable Argument Quality Annotators?

  • Bauhaus-Universität Weimar

Description

This dataset comprises 320 arguments, each annotated along 15 argument quality dimensions. The annotations were produced by two groups of human annotators, experts and novices, as well as by large language models with different prompt variations.
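As a rough illustration of how annotations like these can be compared across annotator groups, the sketch below builds a toy table in an assumed shape (argument id, quality dimension, annotator group, score) and averages scores per group. The field names and score scale are assumptions for illustration only, not the actual schema of data.zip.

```python
from collections import defaultdict
from statistics import mean

# Toy records in an assumed shape: (argument_id, dimension, group, score).
# The real dataset has 320 arguments, 15 dimensions, and expert, novice,
# and LLM annotator groups; this is a minimal synthetic stand-in.
annotations = [
    (1, "cogency", "expert", 3),
    (1, "cogency", "novice", 2),
    (1, "cogency", "llm", 3),
    (1, "effectiveness", "expert", 2),
    (1, "effectiveness", "novice", 2),
    (1, "effectiveness", "llm", 1),
]

def mean_score_by_group(records):
    """Average score per annotator group across all arguments and dimensions."""
    by_group = defaultdict(list)
    for _arg_id, _dim, group, score in records:
        by_group[group].append(score)
    return {group: mean(scores) for group, scores in by_group.items()}

print(mean_score_by_group(annotations))
```

A comparison like this (or a chance-corrected agreement measure computed over the same records) is one way to inspect how closely LLM annotations track the human groups.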

Please find more details in the corresponding publication: https://webis.de/publications.html#mirzakhmedova_2024b

The code for the experiments is available at: https://github.com/webis-de/RATIO-24

Please use the following BibTeX entry for citation:

@InProceedings{mirzakhmedova:2024b,
  author =                   {Nailia Mirzakhmedova and Marcel Gohsen and Chia Hao Chang and Benno Stein},
  booktitle =                {1st International Conference on Recent Advances in Robust Argumentation Machines {(RATIO-24)}},
  doi =                      {10.1007/978-3-031-63536-6_8},
  editor =                   {Philipp Cimiano and Anette Frank and Michael Kohlhase and Benno Stein},
  month =                    jun,
  pages =                    {129--146},
  publisher =                {Springer},
  site =                     {Bielefeld, Germany},
  title =                    {{Are Large Language Models Reliable Argument Quality Annotators?}},
  volume =                   14638,
  year =                     2024
}

Files

data.zip (4.6 MB)
md5:9304436aabf24f85085570533777358d

Additional details

Dates

Accepted
2024-07-17

Software

Repository URL: https://github.com/webis-de/RATIO-24