
Published July 9, 2025 | Version v1

Coupling ISSM and CUAS-MPI: example cases

  • 1. Deutsches Zentrum für Luft- und Raumfahrt
  • 2. Technical University of Munich
  • 3. Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung
  • 4. Technical University of Darmstadt
  • 5. University of Bremen, Department of Geosciences

Description

Coupling of ISSM and CUAS-MPI with preCICE: Sample Cases

This data set contains sample cases for coupling the Ice-sheet and Sea-level System Model (ISSM) with the subglacial hydrology model CUAS-MPI, an MPI-parallel implementation of the Confined-Unconfined Aquifer System model. The coupling is performed with the coupling library preCICE. ISSM computes the state of the ice sheet (e.g., ice thickness, ice velocity, melting rates), and CUAS-MPI computes the effective water pressure, which enters the sliding law used by ISSM.

  • Thule: A synthetic ice sheet based on the Thule geometry developed for the CalvingMIP project. The sample can be configured to run either a control simulation or an anomaly simulation with an additional water source in the form of a line. The sample is a simple demonstration of the capabilities of the adapters. The algorithmically generated geometry is just complex enough to show the effects of coupling in areas of grounded and floating ice and at the margins between these areas. The setups were generated by first running ISSM uncoupled with its built-in effective pressure until the ice reaches a steady state, and then spinning up (also uncoupled) a consistent hydrology based on the steady-state geometry.
  • Greenland: A medium-resolution model of the Greenland Ice Sheet using setups G1000 from Fischler et al. (2022) (ISSM) and G500 from Fischler et al. (2023) (CUAS-MPI). These setups are not entirely consistent, so the simulation result is not realistic. The sample is mostly intended to produce representative performance measurements, enabling comparison of parallel and serial coupling (see below).

Requirements

To run the cases, the following software is required (transitive dependencies are not listed):

Installation instructions can be found on the referenced websites and in the source code archives.

Execution

Sample cases include scripts to run the setups on a high-performance computing cluster. The scripts are specific to the Albedo cluster hosted by the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research in Bremerhaven, but should run on other clusters after minor adaptations, including but not limited to:

  • Filesystem paths
  • Environment setup for participants in env.sh files (e.g. load modules)
  • Slurm directives for accounts, partitions, etc.
  • The MPI launcher (replace srun with mpirun or whatever equivalent command the cluster requires)
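As an illustration, the environment and launcher adaptations might look like the following sketch; the module names, account, and partition are placeholders, not values from the data set:

```shell
# In env.sh (per participant): load the toolchain provided by the target
# cluster. Module names below are placeholders; adjust to your site.
module load gcc openmpi petsc

# In the sbatch script: adapt the Slurm directives and, if the cluster does
# not provide srun, swap in mpirun or the site-specific equivalent.
#SBATCH --account=my-account      # placeholder
#SBATCH --partition=standard      # placeholder
mpirun -n "$SLURM_NTASKS" ./participant   # instead of: srun ./participant
```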

For Greenland: edit the configuration in ./queue_job.sh (parallel or serial coupling, Slurm variables), then execute the script to queue a Slurm job.
For Thule: edit the configuration in cuas.sh (control or anomaly forcing) and issm-cuas.sbatch (Slurm variables), then queue with sbatch issm-cuas.sbatch.

Performance measurements

The Greenland case is used for analyzing computational performance. Profiling output of the performance runs is included; directory names follow the schema work-<coupling scheme>-<ISSM CPUs>-<CUAS CPUs>-<date>-<time>. For the profiling runs, builds of ISSM and CUAS-MPI that do not write any output were used, since the I/O variance on the cluster is too high. Otherwise, the shell scripts were used as described above. Profiling is enabled in the included configuration files.
Each experiment directory contains:

  • the scripts/configuration generated by queue_job.sh
  • logs written by each task of ISSM and CUAS-MPI
  • raw preCICE profiling data
  • processed profiling data generated by analyze.sh (see below)
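For orientation when scripting over many experiment directories, a name following this schema can be split in the shell; the concrete name below is a made-up example, not taken from the data set:

```shell
# Split an experiment directory name of the form
# work-<coupling scheme>-<ISSM CPUs>-<CUAS CPUs>-<date>-<time>.
# The example name is illustrative only.
dir="work-parallel-96-48-20250709-1200"
IFS=- read -r _prefix scheme issm_cpus cuas_cpus date time <<< "$dir"
echo "scheme=$scheme issm=$issm_cpus cuas=$cuas_cpus"
# → scheme=parallel issm=96 cuas=48
```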

The following provided scripts were used to analyze the profiling data:

  • analyze.sh: runs the precice-profiling tools included in the preCICE distribution to extract profiles and traces from the raw data.
  • aggregate.py: loads traces of one or more runs and computes statistics (e.g., averages over runs, tasks, and coupling windows).
  • plot_agg.py: plots the aggregate runtimes produced by aggregate.py over the number of CPUs to show scaling behavior.
  • plot_timeline.py: plots the timeline of coupling events for specific runs.
  • archive.sh: compresses experiment directories into an archive.
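Taken together, a typical analysis pass over one set of runs might be invoked as follows; the exact arguments are assumptions, so check each script's usage before running:

```shell
# Hypothetical invocation order; argument conventions are assumptions.
./analyze.sh work-parallel-96-48-*              # extract profiles/traces from raw data
python3 aggregate.py work-*                     # statistics over runs, tasks, windows
python3 plot_agg.py                             # scaling plot over number of CPUs
python3 plot_timeline.py work-parallel-96-48-*  # coupling-event timeline for one run
./archive.sh work-*                             # compress experiment directories
```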

Files

issm-cuas-coupling-cases.zip (4.8 GB)
md5:ce9b98f645678a7ce0b6607952b4c6a2

Additional details

Related works

References:
  • Journal article: 10.5194/gmd-16-5305-2023 (DOI)
  • Journal article: 10.1029/2011JF002140 (DOI)
  • Journal article: 10.12688/openreseurope.14445.2 (DOI)

Requires:
  • Software: 10.5281/zenodo.15785544 (DOI)
  • Software: 10.5281/zenodo.15782324 (DOI)