This is a dataset of 40,664,485 citations extracted from the English Wikipedia dump of February 2023 (https://dumps.wikimedia.org/enwiki/20230220/).
Version 1: en_citations.zip contains the extracted citations.
Version 2: en_final.zip contains the same dataset with classified citations augmented with identifiers.
The fields are as follows:
- type_of_citation - Wikipedia template type used to define the citation, e.g., 'cite journal', 'cite news', etc.
- page_title - title of the Wikipedia article from which the citation was extracted.
- Title - source title, e.g., title of the book, newspaper article, etc.
- URL - link to the source, e.g., webpage where news article was published, description of the book at the publisher's website, online library webpage, etc.
- tld - top link domain extracted from the URL, e.g., 'bbc' for https://www.bbc.co.uk/...
- Authors - list of article or book authors, if available.
- ID_list - list of publication identifiers mentioned in the citation, e.g., DOI, ISBN, etc.
- citations - full citation text as it appears in the Wikipedia article source.
- actual_label - 'book', 'journal', 'news', or 'other' label assigned based on the analysis of citation identifiers or top link domain.
- acquired_ID_list - identifiers located via Google Books and Crossref APIs for citations which are likely to refer to books or journals, i.e., defined using 'cite book', 'cite journal', 'cite encyclopedia', and 'cite proceedings' templates.
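As an illustration of the 'tld' field above, the extraction rule can be sketched as follows. This is a hedged approximation based only on the example given ('bbc' for https://www.bbc.co.uk/...), not the exact code used in the dataset pipeline (see the linked repository for that); a robust version would consult a public-suffix list, e.g. via the tldextract package.

```python
from urllib.parse import urlparse

def top_link_domain(url):
    """Naive sketch: take the first host label after stripping a
    leading 'www'. Matches the README example, but subdomains such
    as news.bbc.co.uk would yield 'news' rather than 'bbc'."""
    host = urlparse(url).netloc.lower()
    labels = [part for part in host.split(".") if part]
    if labels and labels[0] == "www":
        labels = labels[1:]
    return labels[0] if labels else ""

print(top_link_domain("https://www.bbc.co.uk/news/world"))  # bbc
```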
- Total number of news citations: 9,926,598
- Total number of book citations: 2,994,601
- Total number of journal citations: 2,052,172
- Citations augmented with IDs via lookup: 929,601 (out of 2,445,913 book, journal, encyclopedia, and proceedings template citations that could not be classified as books or journals via the identifiers given in the citation).
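The identifier-based part of the labelling described for 'actual_label' can be sketched as below. The mapping from identifier types to labels is an assumption for illustration (DOI/PMID/ISSN suggesting a journal, ISBN a book); the actual classification logic, which also uses the top link domain, lives in the repository linked below.

```python
def label_from_ids(id_list):
    """Hypothetical labelling rule: id_list is assumed to be a dict
    mapping identifier names to values, e.g. {'DOI': '10.1000/xyz'}.
    Returns 'journal', 'book', or 'other'."""
    ids = {name.upper() for name in id_list}
    if ids & {"DOI", "PMID", "PMC", "ISSN"}:
        return "journal"
    if "ISBN" in ids:
        return "book"
    return "other"

print(label_from_ids({"DOI": "10.1000/xyz"}))   # journal
print(label_from_ids({"ISBN": "9780262046305"}))  # book
```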
The source code for citation extraction is available at https://github.com/albatros13/wikicite.
It is a fork of an earlier project on Wikipedia citation extraction: https://github.com/Harshdeep1996/cite-classifications-wiki.