Published November 19, 2022 | Version 1.0.0
Dataset | Open Access

Fleiss kappa for doc-2-doc relevance assessment

Description

This dataset provides a table summarizing the Fleiss' kappa results. Fleiss' kappa was calculated to measure the degree of agreement among four annotators who evaluated the relevance of a set of documents (15 evaluation articles) with respect to their corresponding "reference article". The table contains seven columns:

1. Topic (8 topics in total).
2. "Reference article", identified by its PubMed ID and organized by topic.
3. Fleiss' kappa value.
4. Interpretation of the Fleiss' kappa value: i) "Poor" for values < 0.20, ii) "Fair" for values within 0.21–0.40, and iii) "Moderate" for values within 0.41–0.60.
5. PubMed IDs of the evaluation articles rated by the four annotators as "Relevant" to the corresponding reference article.
6. PubMed IDs of the evaluation articles rated as "Partially relevant" to the corresponding reference article.
7. PubMed IDs of the evaluation articles rated as "Non-relevant" to the corresponding reference article.
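For readers who want to reproduce or check the agreement values, Fleiss' kappa can be computed directly from a table of per-document category counts. The sketch below is not part of the dataset and uses an invented toy rating table (4 raters, 3 relevance categories) purely for illustration:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table of category counts.

    ratings[i][j] = number of raters who assigned subject i to category j.
    Every row must sum to the same number of raters n.
    """
    N = len(ratings)            # number of subjects (e.g. evaluation articles)
    n = sum(ratings[0])         # raters per subject
    k = len(ratings[0])         # number of categories

    # Observed agreement P_i for each subject, then the mean P_bar
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N

    # Chance agreement P_e from the marginal category proportions
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)


# Invented example: 4 documents rated by 4 annotators into
# [Relevant, Partially relevant, Non-relevant] counts.
table = [
    [4, 0, 0],
    [0, 4, 0],
    [2, 2, 0],
    [1, 1, 2],
]
print(round(fleiss_kappa(table), 3))  # ≈ 0.377, "Fair" on the scale above
```

The same quantity is available as `statsmodels.stats.inter_rater.fleiss_kappa` for those who prefer a library implementation.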

Acknowledgements 
This work is part of the STELLA project, funded by the DFG (project no. 407518790). This work was also supported by the BMBF-funded de.NBI Cloud within the German Network for Bioinformatics Infrastructure (de.NBI) (031A532B, 031A533A, 031A533B, 031A534A, 031A535A, 031A537A, 031A537B, 031A537C, 031A537D, 031A538A).

Files

Fleiss Kappa for document-to-document relevant assessment.csv (3.3 kB)