NOTE: there is a newer version (1.3) that incorporates the text files automatically translated to English.
Gold Standard annotations for the SMM4H-Spanish shared task. SMM4H 2021 has been accepted at NAACL 2021 (scheduled for June in Mexico City): https://2021.naacl.org/.
The entire corpus contains 10,000 annotated tweets, split into training, validation and test sets (60/20/20). The current version contains the training and validation sets of the shared task with Gold Standard annotations. The test and background sets will be released in future versions of the dataset.
For subtask-1 (classification), annotations are distributed in a tab-separated (TSV) file, following the format employed in SMM4H 2019 Task 2.
For subtask-2 (Named Entity Recognition, profession detection), annotations are distributed in two formats: Brat standoff and TSV. See the Brat website for more information about the Brat standoff format (https://brat.nlplab.org/standoff.html). The TSV format follows the format employed in SMM4H 2019 Task 2:
tweet_id begin end type extraction
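The TSV layout above can be loaded with the standard library alone. A minimal sketch (the sample rows and tweet IDs below are hypothetical, and the helper name `load_mentions` is ours, not part of the release):

```python
import csv
from io import StringIO

# Hypothetical sample rows in the subtask-2 TSV layout described above.
SAMPLE = (
    "tweet_id\tbegin\tend\ttype\textraction\n"
    "1234567890\t10\t17\tPROFESION\tmedicos\n"
    "1234567891\t0\t9\tSITUACION_LABORAL\tdesempleo\n"
)

def load_mentions(stream, keep={"PROFESION", "SITUACION_LABORAL"}):
    """Parse the annotation TSV, keeping only the scored entity classes."""
    reader = csv.DictReader(stream, delimiter="\t")
    return [
        {**row, "begin": int(row["begin"]), "end": int(row["end"])}
        for row in reader
        if row["type"] in keep
    ]

mentions = load_mentions(StringIO(SAMPLE))
print(len(mentions))  # 2
```

In practice you would pass an open file handle for the released TSV instead of the `StringIO` sample.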
In addition, for participants' convenience, we provide a tokenized version of the dataset in BIO format (similar to CoNLL). The files were generated with the included brat_to_conll.py script, which uses the es_core_news_sm-2.3.1 spaCy model for tokenization.
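To illustrate how character-offset annotations map onto BIO tags, here is a minimal sketch using a naive whitespace tokenizer (the released files use the spaCy model instead, so token boundaries may differ; the sentence and span below are invented examples, not corpus data):

```python
def bio_tags(text, spans):
    """Map (begin, end, label) character-offset spans to per-token BIO tags."""
    tags = []
    pos = 0
    for token in text.split():
        start = text.index(token, pos)  # character offset of this token
        end = start + len(token)
        pos = end
        label = "O"
        for b, e, t in spans:
            if start >= b and end <= e:
                # token at the span start gets B-, later tokens get I-
                label = ("B-" if start == b else "I-") + t
                break
        tags.append((token, label))
    return tags

text = "Los medicos de urgencias trabajan sin descanso"
spans = [(4, 24, "PROFESION")]  # covers "medicos de urgencias"
print(bio_tags(text, spans))
```

The first token inside an entity span is tagged `B-`, continuation tokens `I-`, and everything else `O`.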
txt-files: folder with the tweet text files, one file per tweet. One sub-directory per corpus split (train and valid).
subtask-1: One file per corpus split (train.tsv and valid.tsv).
We have performed a consistency analysis of the corpus: 10% of the documents were annotated by an internal annotator as well as by the expert linguists, following the same annotation guidelines.
The preliminary Inter-Annotator Agreement (pairwise agreement) is 0.919.
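The exact matching criterion behind the reported 0.919 is not specified here; as an illustration only, pairwise agreement over two annotators' span sets can be computed under an exact span-and-type matching assumption:

```python
def pairwise_agreement(a, b):
    """Fraction of annotations both annotators agree on (exact-match assumption)."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Invented example: annotator 2 missed one SITUACION_LABORAL mention.
ann1 = {(4, 24, "PROFESION"), (30, 38, "SITUACION_LABORAL")}
ann2 = {(4, 24, "PROFESION")}
print(pairwise_agreement(ann1, ann2))  # 0.5
```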
Important shared task information:
SYSTEM PREDICTIONS MUST FOLLOW THE TSV FORMAT. Systems will be evaluated only on the PROFESION and SITUACION_LABORAL predictions (although the Gold Standard contains two extra entity classes). For more information about the evaluation scenario, see the CodaLab link or the evaluation webpage.
For further information, please visit https://temu.bsc.es/smm4h-spanish/ or email us at firstname.lastname@example.org