SemEval is a series of international natural language processing (NLP) research workshops whose mission is to advance the current state of the art in semantic analysis and to help create high-quality annotated datasets in a range of increasingly challenging problems in natural language semantics. Each year’s workshop features a collection of shared tasks in which computational semantic analysis systems designed by different teams are presented and compared.
This community hosts archival versions of SemEval task datasets. Instructions for uploads:
- Files: Include all data files: train, test, and evaluation data, and, where possible, system predictions, evaluation code, and evaluation output, as well as a README explaining the contents. Make sure to cite the task description paper in the README. If there is a webpage specific to the task (e.g. on CodaLab), include that as well.
- Title: The upload title should be the full task name and year, e.g. "SemEval-2020 Task 1: Name of Task".
- Authors: Specify the same author list as in the task paper, along with ORCIDs.
- Description: A short summary of the task and what is in the data release. If there is a webpage specific to the task (e.g. on CodaLab), include that as well.
- Keywords: Include SemEval-YYYY as a keyword (e.g. SemEval-2020), as well as other relevant terms (e.g. NLP, semantics, corpus; names of languages in your task).
- Conference: Conference title = "International Workshop on Semantic Evaluation", Acronym = "SemEval", Place = workshop location, Website = the SemEval website for that year.
- June 21, 2020
- Harvesting API: OAI-PMH interface
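Records in this community can be harvested programmatically over the OAI-PMH interface. As a minimal sketch, the snippet below builds an OAI-PMH `ListRecords` request URL; the endpoint `https://zenodo.org/oai2d` and the set identifier `user-semeval` are assumptions based on Zenodo's usual conventions (communities are exposed as `user-<community-id>` sets) and should be checked against the community page.

```python
from urllib.parse import urlencode

def build_oai_request(base_url: str, verb: str, **params: str) -> str:
    """Build an OAI-PMH request URL from a verb and keyword arguments."""
    query = {"verb": verb, **params}
    return f"{base_url}?{urlencode(query)}"

# Assumed endpoint and set name for Zenodo's SemEval community.
ZENODO_OAI = "https://zenodo.org/oai2d"
url = build_oai_request(
    ZENODO_OAI,
    "ListRecords",
    metadataPrefix="oai_dc",       # Dublin Core, which all OAI-PMH servers support
    set="user-semeval",            # hypothetical set identifier for this community
)
print(url)
```

The resulting URL can be fetched with any HTTP client; responses are paginated via `resumptionToken` as defined in the OAI-PMH protocol.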