Published March 1, 2024 | Version v3
Dataset | Open Access

A Comprehensive Dataset of Classified Citations with Identifiers from English Wikipedia (2024)

  • University of Amsterdam

Description

2024 (new!)

This is a dataset of 44.766.800 (+9.2%) citations extracted from the English Wikipedia February 2024 dump (https://dumps.wikimedia.org/enwiki/20240220/).

The same extraction and template-harmonization pipeline was used as in the previous year, and the published dataset fields are the same as in the previous dataset. A classification label ('news', 'book', 'journal', or 'other') is assigned to each citation by a deterministic rule-based classifier that analyses the available identifiers (see the code documentation for details), yielding the following citation subgroups:

  1. The total number of news: 10.958.151 (+9.4%)
  2. The total number of books*: 3.277.629 (+8.6%)
  3. The total number of journals*: 2.248.748 (+8.7%)

* Please note that these numbers do not represent the overall number of book and journal citations: we count only citations with DOI, PMID, PMC, and ISBN identifiers assigned by authors (prior to the lookup process that augments citations with missing identifiers).
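For illustration, the core of such a rule can be sketched as follows. This is a minimal approximation based on the description above (identifier-based labels, with the citation template as a fallback); the exact logic of the published classifier is documented in the source code linked below.

    # Sketch of an identifier-based labelling rule (assumed logic, not the
    # exact classifier used to build the dataset).
    def classify_citation(id_list: dict, template: str) -> str:
        """Return 'journal', 'book', 'news', or 'other' for one citation."""
        ids = {key.upper() for key in id_list}     # e.g. {'DOI', 'PMID', 'ISBN'}
        if ids & {"DOI", "PMID", "PMC"}:           # scholarly identifiers
            return "journal"
        if "ISBN" in ids:                          # book identifier
            return "book"
        if template == "cite news":                # fall back on the template type
            return "news"
        return "other"

    print(classify_citation({"ISBN": "978-0-13-468599-1"}, "cite book"))  # book
    print(classify_citation({}, "cite news"))                             # news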

This dataset is not equipped with identifiers located via the lookup process (there is no 'acquired_ID_list' field). If there is interest in such an augmented version, see the source code for instructions or contact the authors for assistance.

2023

This is a dataset of 40.664.485 citations extracted from the English Wikipedia February 2023 dump (https://dumps.wikimedia.org/enwiki/20230220/).

Version 1: en_citations.zip contains the extracted citations.

Version 2: en_final.zip contains the same citations, classified and augmented with identifiers.

The fields are as follows:

  • type_of_citation - Wikipedia template type used to define the citation, e.g., 'cite journal', 'cite news', etc.
  • page_title - title of the Wikipedia article from which the citation was extracted.
  • Title - source title, e.g., title of the book, newspaper article, etc.
  • URL - link to the source, e.g., the webpage where the news article was published, the book's description on the publisher's website, an online library webpage, etc.
  • tld - top link domain extracted from the URL, e.g., 'bbc' for https://www.bbc.co.uk/... 
  • Authors - list of article or book authors, if available.
  • ID_list - list of publication identifiers mentioned in the citation, e.g., DOI, ISBN, etc.
  • citations - citation text as it appears in the Wikipedia article source (wikitext).
  • actual_label - 'book', 'journal', 'news', or 'other' label assigned based on the analysis of citation identifiers or the top link domain.
  • acquired_ID_list - identifiers located via Google Books and Crossref APIs for citations which are likely to refer to books or journals, i.e., defined using 'cite book', 'cite journal', 'cite encyclopedia', and 'cite proceedings' templates.

The citation subgroups in the 2023 dataset are as follows:

  1. The total number of news: 9.926.598
  2. The total number of books: 2.994.601
  3. The total number of journals: 2.052.172
  4. The number of citations augmented with IDs via lookup: 929.601 (out of 2.445.913 book, journal, encyclopedia, and proceedings template citations not classified as books or journals via the given identifiers).
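For reference, the lookup step that produces 'acquired_ID_list' can be approximated with the public Crossref and Google Books web APIs mentioned above. The snippet below is an illustrative sketch, not the code used to build the dataset; the helper names find_doi and find_isbn are hypothetical.

    # Illustrative identifier lookup for citations missing IDs; the actual
    # implementation lives in the wikicite repository.
    import requests

    def find_doi(title: str) -> str | None:
        """Ask the Crossref REST API for the best-matching DOI of a title."""
        r = requests.get("https://api.crossref.org/works",
                         params={"query.bibliographic": title, "rows": 1},
                         timeout=30)
        items = r.json().get("message", {}).get("items", [])
        return items[0].get("DOI") if items else None

    def find_isbn(title: str) -> str | None:
        """Ask the Google Books API for an ISBN matching a book title."""
        r = requests.get("https://www.googleapis.com/books/v1/volumes",
                         params={"q": f"intitle:{title}", "maxResults": 1},
                         timeout=30)
        items = r.json().get("items", [])
        if not items:
            return None
        for ident in items[0].get("volumeInfo", {}).get("industryIdentifiers", []):
            if ident["type"] in ("ISBN_13", "ISBN_10"):
                return ident["identifier"]
        return None

    print(find_doi("Attention Is All You Need"))
    print(find_isbn("The Elements of Statistical Learning"))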

The source code to extract citations can be found here: https://github.com/albatros13/wikicite.

The code is a fork of the earlier project on Wikipedia citation extraction: https://github.com/Harshdeep1996/cite-classifications-wiki.
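Once downloaded and unpacked, the dataset can be inspected with standard tooling. The snippet below is a minimal sketch that assumes the archive unpacks to Parquet files readable by pandas; check the repository README for the actual file layout and switch the reader (e.g., to pd.read_csv) if needed.

    # Load one extracted-citations file and inspect the published fields.
    # Assumes Parquet files in a hypothetical 'en_citations-2024/' folder.
    import glob
    import pandas as pd

    files = sorted(glob.glob("en_citations-2024/*.parquet"))
    df = pd.read_parquet(files[0])

    print(df.columns.tolist())                  # type_of_citation, page_title, Title, ...
    print(df["type_of_citation"].value_counts().head(10))   # most common templates
    print(df[df["ID_list"].notna()].head())     # citations that carry identifiers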


Files

en_citations-2024.zip (8.5 GB)
md5:2baaa72c6332663f79cbebf893f90b48