Published May 19, 2024 | Version 2024-05-19
Software · Open Access

Source Code for the 'Corpus of Resolutions: UN Security Council' (CR-UNSC-Source)

  • 1. Ludwig-Maximilians-Universität München
  • 2. Sant'Anna School of Advanced Studies
  • 3. King's College London



This code, written in the R programming language, downloads and processes the full set of resolutions, drafts and meeting records issued by the United Nations Security Council (UNSC), as published by the UN Digital Library, into a rich and structured human- and machine-readable dataset. It is the basis for the Corpus of Resolutions: UN Security Council (CR-UNSC).

All data sets created with this script are permanently hosted open access and freely available at Zenodo, the scientific repository of CERN. Each version is uniquely identified with a persistent Digital Object Identifier (DOI), the Version DOI. The newest version of the data set will always be available via the link of the Concept DOI:



The CR-UNSC will be updated at least once per year. In case of serious errors, an update will be provided at the earliest opportunity and a highlighted advisory issued on the Zenodo page of the current version. Minor errors will be documented in the GitHub issue tracker and fixed with the next scheduled release.

The CR-UNSC is versioned according to the day of the last run of the data pipeline, in the ISO format YYYY-MM-DD. Its initial release version is 2024-05-03.
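As a small illustration of the scheme, a version string of this form is simply the run date in ISO 8601; on a system with GNU date it can be produced like this (a sketch of the convention, not the pipeline's actual versioning code):

```shell
# Print today's date in ISO 8601 format (YYYY-MM-DD), the same
# scheme CR-UNSC uses for its version identifiers.
date +%F
```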

Notifications regarding new and updated data sets will be published on my academic website at or on the Fediverse at



Changelog

  • New variant: EN_TXT_BEST containing a write-out of the English resolution texts equivalent to the CSV file text variable
  • New diagrams: bar charts of top M49 regions and sub-regions of countries mentioned in resolution texts
  • Fixed naming mix-up of BIBTEX and GRAPHML zip archives
  • Fixed whitespace character detection in citation extraction (adds ca. 10% more citations)
  • Fixed improper merging of weights in citation network
  • Fixed "cannot xtfrm data frames" warning
  • Improved REGEX detection for certain geographic entities
  • Improved Codebook (headings, citation network docs)



The pipeline will produce the following results and store them in the output/ folder:

  • Codebook as PDF
  • Compilation Report as PDF
  • Quality Assurance Report as PDF
  • ZIP archive containing the main data set as a CSV file
  • ZIP archive containing only the metadata of the main data set as a CSV file
  • ZIP archive containing citation data and metadata as a GraphML file
  • ZIP archive containing bibliographic data as a BIBTEX file
  • ZIP archive containing all resolution texts as TXT files (OCR and extracted)
  • ZIP archive containing all resolution texts as PDF files (original and English OCR)
  • ZIP archive containing all draft texts as PDF files (original)
  • ZIP archive containing all meeting record texts as PDF files (original)
  • ZIP archive containing the full Source Code
  • ZIP archive containing all intermediate pipeline results ("targets")

The integrity and veracity of each ZIP archive is documented with cryptographically secure hash values (SHA2-256 and SHA3-512). The hashes are stored in a separate CSV file created during the data set compilation process.
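As a sketch of the verification step (the filenames below are stand-ins, not the actual archive or hash-file names), an archive can be checked on a POSIX system with GNU coreutils:

```shell
# Stand-in demonstration: create a sample file in place of a downloaded
# ZIP archive, record its SHA2-256 hash, then verify it exactly as one
# would verify a real archive against the published hash file.
printf 'example data\n' > sample.zip
sha256sum sample.zip > hashes.txt       # stand-in for the published hashes
sha256sum -c hashes.txt && echo "hash verified"
```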


System Requirements

  • The reference data sets were compiled on a Debian host system. Running the Docker config on an SELinux system like Fedora will require modifications of the Docker Compose config file.
  • 40 GB space on hard drive
  • Multi-core CPU recommended. We used 8 cores/16 threads to compile the reference data sets. The standard config will use all cores on the system; this can be fine-tuned in the config file.
  • Given these requirements, the runtime of the pipeline is approximately 40 hours.
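A quick pre-flight check of the requirements above can be done from the shell (GNU coreutils assumed; the 40 GB threshold mirrors the list above):

```shell
# Report available CPU cores and free disk space in the current
# directory, and warn if less than the required 40 GB is free.
cores="$(nproc)"
free_gb="$(df --output=avail -BG . | tail -n 1 | tr -dc '0-9')"
echo "CPU cores: ${cores}"
echo "Free disk space: ${free_gb} GB"
[ "${free_gb}" -ge 40 ] || echo "Warning: less than 40 GB free" >&2
```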


Step 1: Prepare Folder

Copy the full source code to an empty folder, for example by executing:

$ git clone

Always use a dedicated and empty (!) folder for compiling the data set. The scripts will automatically delete all PDF, TXT and many other file types in their working directory to ensure a clean run.
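Because of this cleanup behaviour, a safe pattern is to create a fresh directory first and clone into it (the directory name below is an example, not prescribed by the project):

```shell
# Create a dedicated, empty build directory and work inside it, so the
# pipeline's automatic file deletion cannot affect unrelated files.
mkdir -p cr-unsc-build
cd cr-unsc-build
# git clone <repository-url> .   # clone into the empty directory
```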


Step 2: Create Docker Image

The Dockerfile contains automated instructions to create a full operating system with all necessary dependencies. To create the image from the Dockerfile, please execute:

$ bash


Step 3: Compile Dataset

If you have previously compiled the data set, whether successfully or not, you can delete all output and temporary files by executing:

$ Rscript delete_all_data.R


You can compile the full data set by executing:

$ bash



The data set and all associated files are now saved in your working directory.


GNU General Public License Version 3

Copyright (C) 2024 Seán Fobbe, Lorenzo Gasbarri and Niccolò Ridi

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see


Author Websites

Personal Website of Seán Fobbe

Personal Website of Lorenzo Gasbarri

Personal Website of Niccolò Ridi



Did you discover any errors? Do you have suggestions on how to improve the data set? You can either post these to the Issue Tracker on GitHub or contact Seán Fobbe via



Files (448.3 MB)


Additional details

Related works

Is derived from: 10.5281/zenodo.11212056 (DOI)
Dataset: (URL)

