
Published May 9, 2022 | Version 5.1

DisTEMIST corpus: detection and normalization of disease mentions in Spanish clinical cases

Description

Please cite if you use this dataset:

Miranda-Escalada, A., Gascó, L., Lima-López, S., Farré-Maduell, E., Estrada, D., Nentidis, A., Krithara, A., Katsimpras, G., Paliouras, G., & Krallinger, M. (2022). Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases from clinical texts: results, methods, evaluation and multilingual resources. Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings

@inproceedings{miranda2022overview,
title={Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases from clinical texts: results, methods, evaluation and multilingual resources},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell, Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras, Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings},
year={2022}
}

 

DisTEMIST corpus: training set + MULTILINGUAL RESOURCES + CROSSMAPPINGS + test_background set

  • DISTEMIST-entities: complete training set (750 clinical cases)
  • DISTEMIST-linking: part 1 of the training set (209 clinical cases)
  • DISTEMIST-linking: part 2 of the training set (375 clinical cases)

 

Use the training sets, Multilingual Resources and Crossmappings to train your systems.

Make predictions for the files in the test and background sets.

 

 

Introduction

The DisTEMIST corpus is a collection of 1000 clinical cases annotated with disease mentions linked to Snomed-CT concepts. All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022. For more information about the track and its schedule, please visit the track website.

 

File structure:

The DisTEMIST corpus has been randomly divided into a training set containing 750 clinical cases and a test set consisting of 250 additional cases (584 in the case of subtrack 2). Participants must train their systems using the training set and submit predictions for the test set, on which they will be evaluated. The file structure of the corpus is as follows:

  • train_set:
    • text_files: Folder with plain text files of the clinical cases  
    • subtrack1_entities: contains the annotations in a tab-separated values (TSV) file with the following columns:
      • filename: document name
      • mark: mention identifier
      • label: mention type (ENFERMEDAD)
      • off0: starting position of the mention in the document
      • off1: ending position of the mention in the document
      • span: text span of the mention
    • subtrack2_linking: contains the annotations in a tab-separated values (TSV) file with the following columns:
      • filename: document name
      • mark: mention identifier
      • label: mention type (ENFERMEDAD)
      • off0: starting position of the mention in the document
      • off1: ending position of the mention in the document
      • span: text span of the mention
      • codes: list of Snomed-CT concept codes linked to the mention. If more than one code is associated with a mention, they are concatenated with the symbol “+” (see the loading sketch after this list).
      • semantic relation: the relationship between the assigned code and the mention. It can be EXACT, when the code corresponds exactly to the mention, or NARROW, when the mention corresponds to a narrower concept than the Snomed-CT code. For instance, the concept “Chorioretinal lacunae” does not exist in Snomed-CT, so it is normalized to the Snomed-CT ID 302893000 (“Chorioretinal disorder”).
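
As an illustration, here is a minimal Python sketch for loading a subtrack2_linking file with pandas. The file path, the presence of a header row, and the exact spelling of the column headers are assumptions; adjust them to the actual files shipped in the corpus.

```python
import pandas as pd

# Hypothetical path; adjust to the actual TSV file name inside subtrack2_linking.
TSV_PATH = "train_set/subtrack2_linking/linking_part1.tsv"

# Assumes the file ships with a header row matching the columns listed above;
# if not, pass header=None and names=[...] explicitly.
df = pd.read_csv(TSV_PATH, sep="\t", dtype={"codes": str})

# Multiple Snomed-CT codes for a single mention are concatenated with "+";
# split them into Python lists for easier downstream processing.
df["codes"] = df["codes"].astype(str).apply(lambda c: c.split("+"))

# Basic sanity checks: only ENFERMEDAD mentions, and off0 < off1.
assert (df["label"] == "ENFERMEDAD").all()
assert (df["off0"] < df["off1"]).all()

print(df.head())
```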

 

  • test_background/text_files: 3000 clinical cases (test + background). You must submit predictions for all 3000 files; you will be evaluated only on a subset of them (see the prediction sketch below).
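
The expected submission format is not restated in this record. As an illustration only, the following Python sketch iterates over the test_background text files and writes a prediction file that mirrors the subtrack1_entities columns; detect_mentions() is a hypothetical placeholder for a participant's own disease-mention detector, and the output file name is an assumption.

```python
import csv
from pathlib import Path

def detect_mentions(text: str):
    """Hypothetical placeholder for a disease-mention detector.
    Replace with a real NER system; should return (start, end) character offsets."""
    return []

TEXT_DIR = Path("test_background/text_files")   # plain text clinical cases
OUT_TSV = Path("predictions_subtrack1.tsv")     # assumed output file name

with OUT_TSV.open("w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out, delimiter="\t")
    # Column layout mirrors subtrack1_entities (an assumption, not the official spec).
    writer.writerow(["filename", "mark", "label", "off0", "off1", "span"])
    for txt_file in sorted(TEXT_DIR.glob("*.txt")):
        text = txt_file.read_text(encoding="utf-8")
        for i, (start, end) in enumerate(detect_mentions(text), start=1):
            writer.writerow([txt_file.stem, f"T{i}", "ENFERMEDAD",
                             start, end, text[start:end]])
```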

 

  • multilingual-resources: we have generated the annotated training and validation sets in 6 languages: English, Portuguese, Catalan, Italian, French and Romanian. The process was:
    1. The text files were translated with a neural machine translation system.
    2. The annotations were translated with the same neural machine translation system.
    3. The translated annotations were transferred to the translated text files using an annotation transfer technique (a naive sketch of this step follows the list).
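
The exact annotation transfer technique used by the organizers is not detailed here. The sketch below only illustrates the idea behind step 3 with a naive exact-string search that recomputes character offsets in the translated document; the function name and example text are hypothetical.

```python
def transfer_annotation(translated_text: str, translated_span: str, search_from: int = 0):
    """Naive annotation transfer: find the translated mention string in the
    translated document and return its character offsets (off0, off1).
    Returns None when the span cannot be located exactly."""
    off0 = translated_text.find(translated_span, search_from)
    if off0 == -1:
        return None
    return off0, off0 + len(translated_span)

# Example: a mention translated into English is re-anchored in the translated text.
doc_en = "The patient was admitted with heart failure and fever."
print(transfer_annotation(doc_en, "heart failure"))  # -> (30, 43)
```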

 

  • cross-mappings: the same entities as in DISTEMIST-linking, but mapped to Snomed-CT, MeSH, ICD-10, HPO, and OMIM. The original, manual mappings are to Snomed-CT; the mappings to the other terminologies were obtained through the UMLS Metathesaurus (see the sketch below).
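
The organizers' exact mapping procedure is not distributed with the corpus. As a hedged illustration of how such a UMLS-based cross-mapping can be reproduced, the sketch below pivots through shared CUIs in a local MRCONSO.RRF file; the field positions and source-vocabulary abbreviations (SNOMEDCT_US, MSH, ICD10, HPO, OMIM) follow standard UMLS conventions, but verify them against your UMLS release.

```python
from collections import defaultdict

# MRCONSO.RRF is pipe-delimited; relevant fields: 0 = CUI, 11 = SAB (source
# vocabulary), 13 = CODE (source code). A local UMLS installation is assumed.
TARGET_SABS = {"MSH", "ICD10", "HPO", "OMIM"}

snomed_to_cui = defaultdict(set)   # Snomed-CT code -> UMLS CUIs
cui_to_targets = defaultdict(set)  # CUI -> {(terminology, code)}

with open("MRCONSO.RRF", encoding="utf-8") as fh:
    for line in fh:
        fields = line.rstrip("\n").split("|")
        cui, sab, code = fields[0], fields[11], fields[13]
        if sab == "SNOMEDCT_US":
            snomed_to_cui[code].add(cui)
        elif sab in TARGET_SABS:
            cui_to_targets[cui].add((sab, code))

def cross_map(snomed_code: str):
    """Return the {(terminology, code)} pairs reachable from a Snomed-CT code via shared CUIs."""
    return {t for cui in snomed_to_cui.get(snomed_code, ()) for t in cui_to_targets[cui]}
```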

 


Notes

Funded by the Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).

Files

distemist.zip (14.6 MB, md5:f3d69bcd062990322f61ab6695be0779)