

Published December 1, 2021 | Version 0.5

AI for mapping multi-lingual academic papers to the United Nations' Sustainable Development Goals (SDGs)

  • 1. Vrije Universiteit Amsterdam
  • 2. Palacký University Olomouc

Description

PLEASE GO TO LATEST VERSION

In this report we demonstrate how we built a multilingual text classifier that matches research papers to the Sustainable Development Goals (SDGs) of the United Nations.

We trained the multilingual BERT (mBERT) model to classify papers into the 169 individual SDG Targets, based on the English abstracts in a corpus of 1.4 million research papers. We gathered that data from Scopus using the Aurora SDG query model v5, which has an evaluated average precision of 70% and recall of 14%.
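
As a minimal sketch of this set-up (not the published Aurora training code), the snippet below fine-tunes one binary mBERT classifier for a single SDG Target with Hugging Face Transformers and TensorFlow. The file name training_sample.csv, the column names "abstract" and "SDG-03.01", and the hyperparameters (max length 512, batch size 16, learning rate 2e-5, 3 epochs) are assumptions for illustration only.

    # Minimal training sketch, assuming Hugging Face Transformers + TensorFlow.
    # Not the published Aurora code: file name, the column "SDG-03.01" and the
    # hyperparameters are illustrative assumptions.
    import pandas as pd
    import tensorflow as tf
    from transformers import BertTokenizer, TFBertForSequenceClassification

    MODEL_NAME = "bert-base-multilingual-cased"   # mBERT
    TARGET = "SDG-03.01"                          # hypothetical SDG-Target column

    df = pd.read_csv("training_sample.csv")       # hypothetical file name
    tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
    enc = tokenizer(list(df["abstract"]), truncation=True, padding=True,
                    max_length=512, return_tensors="tf")

    dataset = tf.data.Dataset.from_tensor_slices(
        (dict(enc), df[TARGET].values)).shuffle(10_000).batch(16)

    model = TFBertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    model.fit(dataset, epochs=3)

    # One .h5 weights file per SDG Target, as distributed with this record.
    model.save_weights(f"mbert_{TARGET}.h5")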

This is a follow-up to the query-based Aurora SDG classification model v5. The purpose of this project is to tackle several issues: 1. to also label research output written in a non-English language with an SDG, 2. to include papers that use terms other than the exact terms used in keyword searches, 3. to have a classification model that works independently of any database-specific query language.

In this report we show why we decided to use the abstracts only and the mBERT model to train the classifier. We also show why we trained 169 individual models instead of one multi-label model, including the evaluation of the predictions. We then show how to prepare the data for training, and how to run the code to train the models on multiple GPU cores. Next we show how to prepare the data for prediction and how to use the code to predict English and non-English texts. Finally, we evaluate the model by reviewing a sample of non-English research papers, and provide some tips to increase the reliability of the predicted outcomes.
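
As a rough sketch of that prediction workflow (under the same assumptions as above, and not the published prediction script), the snippet below loads the trained weights for a single SDG Target and scores a UTF-8 file with one abstract per line; the names mbert_SDG-03.01.h5 and abstracts.txt are placeholders.

    # Prediction sketch for one SDG Target (assumed workflow, not the published
    # prediction script); file names below are placeholders.
    import tensorflow as tf
    from transformers import BertTokenizer, TFBertForSequenceClassification

    MODEL_NAME = "bert-base-multilingual-cased"
    tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)

    def predict_target(weights_path, texts, threshold=0.5):
        """Score abstracts (any language) against one SDG-Target model."""
        model = TFBertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
        model.load_weights(weights_path)          # trained .h5 weights for this Target
        enc = tokenizer(texts, truncation=True, padding=True,
                        max_length=512, return_tensors="tf")
        probs = tf.nn.softmax(model(dict(enc)).logits, axis=-1)[:, 1].numpy()
        return probs >= threshold, probs

    # One abstract per line, UTF-8, as in the prediction data sample.
    with open("abstracts.txt", encoding="utf-8") as f:
        texts = [line.strip() for line in f if line.strip()]

    matched, scores = predict_target("mbert_SDG-03.01.h5", texts)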

This collection will contain:

  1. Report / technical documentation describing the method and evaluating the models.
  2. Text classification models: a table containing the download URLs for each of the mBERT models for each SDG Target, in .h5 format.
  3. Training data sample in .csv format, containing an abstract column and one column per SDG Target with a 1 or 0 (see the sketch after this list).
  4. Training code in Python, explaining which parameters we used to train the models on GPU hardware.
  5. Test statistics on the accuracy of each of the trained models.
  6. Prediction data sample: a UTF-8 text file containing one paper abstract per row, in different languages.
  7. Prediction code in Python, set up to run the models to classify a text fragment.
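
The sketch below shows one way to read such a training data sample into per-target binary training sets; the file name training_sample.csv and the "abstract" column name are assumptions based on the description above, not the actual file layout.

    # Sketch of reading the training data sample into per-target binary sets;
    # "training_sample.csv" and the "abstract" column name are assumptions.
    import pandas as pd

    df = pd.read_csv("training_sample.csv")
    target_columns = [c for c in df.columns if c != "abstract"]   # 169 SDG Targets

    # One (abstract, label) table per SDG Target, matching the 169 individual models.
    per_target = {t: df[["abstract", t]].rename(columns={t: "label"})
                  for t in target_columns}
    print(f"{len(target_columns)} targets, {len(df)} labelled abstracts")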

Notes

Acknowledgements

Many thanks to Maéva Vignes from the University of Southern Denmark for allowing us to use their UCloud HPC facilities and budget to train the mBERT models on their GPUs.


Funded by

Funded by European Commission, Project ID: 101004013, Call: EAC-A02-2019-1, Programme: EPLUS2020, DG/Agency: EACEA


Read more

[ Project website | Zenodo Community | GitHub ]


Change log

2021-12-06 | v0.5 | added report and documentation. (NEED TO FINISH SECTIONS EVALUATION AND CONCLUSION)

2021-12-06 | v0.4 | added sample data and code for training and for predicting, so that you can reproduce the models and make use of the trained models yourself.

2021-12-01 | v0.3 | added .csv file with accuracy statistics of the models

2021-11-30 | v0.2 | added .csv file with download URLs of the models

2021-10-29 | v0.1 | added initial .md file as placeholder for this dataset

Files

AI for mapping multi-lingual research papers to the United Nations' Sustainable Development Goals (SDGs) (6).zip
