Dataset Open Access
Authors: T. Brown, D. Schlachtberger, A. Kies, S. Schramm, M. Greiner
The files in this record contain the scripts used to build the model, the input data and the result summaries for the model PyPSA-Eur-Sec-30 described in the publication above.
The full results files (which include the post-processed input data) can be found in a companion Zenodo repository. (The supplementary data was split because of the size of the full results.)
To use the scripts, you need the following free software Python libraries: PyPSA and snakemake, as well as other standard libraries from the Python Package Index (PyPI), such as pandas, numpy, scipy, pyomo and countrycode.
snakemake requires Python 3. The code is known to work with the following versions: PyPSA 0.12.0, pandas 0.21.1, numpy 0.14.0, scipy 0.19.1 and pyomo 5.2. You may need to downgrade your libraries to these versions for the scripts to work.
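Assuming you install with pip, the pinned versions above can be captured in a requirements file (a sketch; PyPSA's own dependencies may pull in further packages):

```text
pypsa==0.12.0
pandas==0.21.1
numpy==0.14.0
scipy==0.19.1
pyomo==5.2
```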
The scripts are coded to use the commercial solver Gurobi for the optimisation problem. To solve the problems in a reasonable time, you will need Gurobi or an equally fast solver such as CPLEX; both offer cost-free licences for academic users.
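A quick way to check which solvers are reachable before starting a run is to look for their command-line executables on PATH. This is only a rough sketch: "gurobi_cl" and "cplex" are the usual binary names for Gurobi and CPLEX, and "glpsol" (GLPK) is a free fallback, though far too slow for problems of this size.

```python
import shutil

def available_solvers(candidates=("gurobi_cl", "cplex", "glpsol")):
    """Return which solver executables are found on PATH (rough availability check)."""
    return {name: shutil.which(name) is not None for name in candidates}

print(available_solvers())
```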
You will also need a computer with at least 64 GB of RAM, since pyomo and the solver are memory intensive.
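To check in advance whether a machine meets this memory requirement, the total physical RAM can be read via sysconf (a POSIX-only sketch; it will not work on Windows):

```python
import os

# Total physical memory = page size * number of physical pages (POSIX)
mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
mem_gb = mem_bytes / 1e9
print(f"Detected {mem_gb:.1f} GB RAM; the optimisation needs roughly 64 GB")
```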
The Python scripts in this repository (in the directory scripts/) are released under the GNU General Public Licence Version 3.0 (GPL 3.0).
The scripts build_*.py process all raw input data into a form where it can be used in the model.
make_options.py prepares the options.yml file for each model run.
prepare_network.py populates the PyPSA network for each model run with the input data.
solve_network.py solves the optimisation problem with Gurobi or the solver of your choice (this step takes several hours).
make_summary.py aggregates the results into CSV files in the directory results/ (also provided in this repository).
The scripts plot_*.py and paper_graphics*.py prepare graphical output.
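As an illustration of the kind of aggregation that produces the CSV files in results/, here is a toy pandas sketch (the run names, components and cost numbers are invented for the example; the real make_summary.py reads the solved networks):

```python
import pandas as pd

# Toy stand-in for per-run cost results (all values are invented)
costs = pd.DataFrame({
    "run":       ["base", "base",  "co2-cap", "co2-cap"],
    "component": ["wind", "solar", "wind",    "solar"],
    "cost":      [10.0,    5.0,     14.0,      7.0],
})

# One row per component, one column per run -- the shape of a summary CSV
summary = costs.pivot_table(index="component", columns="run",
                            values="cost", aggfunc="sum")
print(summary)
```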
All scripts are managed with the snakemake workflow management tool.
To run the scripts, adjust the parameters in config.yaml and cluster.yaml to your local configuration. Then simply execute

snakemake <rule>

for the rule you want to run.
Since the jobs are computationally intensive, you may want to run them on a cluster. To run the jobs on a cluster managed with Slurm, execute e.g.
./snakemake_cluster --jobs 6
The cluster is configured in cluster.yaml. You will need to create the directory for the logs, i.e. logs/cluster/, before running the script.
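The log directory can be created beforehand with, e.g.:

```shell
# the cluster jobs write their logs here; create it before the first run
mkdir -p logs/cluster
```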
All input data (in the directory data/) and result summaries (in the directory results/) are released under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0), except where explicit sources and licences are mentioned in the data folders.
The input data include: