Published June 27, 2023 | Version 1.7
Project deliverable | Open Access

D-JRA1.2 Methods for Holistic Test Reproducibility

  • 1. Oldenburger Institut für Informatik
  • 2. Technical University of Denmark
  • 3. Center for Renewable Energy Sources and Saving
  • 4. European Distributed Energy Resources Laboratories
  • 5. University of Strathclyde

Description

This document presents uncertainty representation and validation methods. The ERIGrid 2.0 project aims to enhance the capabilities of research infrastructures and to support research and technology development towards smart energy systems in Europe. This work presents a set of statistical methods for the representation of uncertainty, the propagation of uncertainty, and system validation, and provides practical guideline material showing how these methods can be applied. In this context, some new tools were implemented, but the focus was on making the methods practically available to potential users rather than on new research into the methods.

The motivation for this work stems from researchers in the domain of energy systems often employing laboratory experiments and computer simulations to evaluate their hypotheses. This process is riddled with different kinds of uncertainties, which often render results difficult to reproduce. The classical approach in the power systems domain has been to focus on deterministic experimental setups, but this approach limits the range of possible testing strategies. Since the potential impact of laboratory testing can be extended with the proper handling of uncertainties, a systematic approach to the analysis and accounting of uncertainty factors is required.

To support experimenters in considering, quantifying and potentially reducing uncertainty, we prepared a set of guiding questions. The following questions aim to categorise the experimental uncertainty issue at hand, in order to match the research challenge and identify a suitable approach. In each case, we give some guidelines and further references to help answer the question:

  1. (a) How can the uncertainty of the ‘main outcomes’ be quantified? This is the basic question, and in its answer, uncertainty will usually be specified in the form y ± σ_y, where y is the experiment’s outcome and σ_y is the expected deviation from this outcome (see the sketch after this list).
    (b) How can the uncertainty be systematically characterised? To make the answer to the first question more precise, uncertainty may be given as a (probability) distribution instead of just a single number. At the very least, this gives a specific meaning to the deviation σ_y in the previous question (for example, as the standard deviation of a normal distribution), but distributions that cannot be reduced to a single number are also possible.
  2. (a) Which uncertain input parameter is dominant and introduces the most uncertainty on the outputs? Having quantified the uncertainty, the next aim is often to reduce it. To concentrate one’s efforts, it is advisable to determine where the uncertainty is coming from. Often, the inputs to an experiment or simulation will already be uncertain. By performing a sensitivity analysis, one can find out which of these inputs has the biggest effect on the output, or rather, which input’s uncertainty is most relevant to the output’s uncertainty.
    (b) How can the uncertainty be further reduced (if possible)? When aiming to reduce the uncertainty, one needs to distinguish between aleatory uncertainty (which is due to truly random fluctuation in the setup, input values or measurement devices) and epistemic uncertainty (which is due to lack of knowledge). Only the latter can effectively be reduced, as truly random processes do not lose their randomness under closer study (if they do, they were partially epistemic to begin with).
  3. (a) How can the representational uncertainty of a computational test setup with respect to physical ground truth (real-world or laboratory) be characterised? One particularly tricky form of uncertainty is representational uncertainty, which comes from the fact that no model and no lab experiment can ever capture reality perfectly. This question is about determining how much the chosen model differs from the real world, and which uncertainties in the outputs result from this deviation.
    (b) How can I estimate the accuracy of my experimental results by propagating the representational uncertainty of a computational model? Having determined values for the parameters of a computational model, potentially with some uncertainty, one can then run the model on further inputs to extrapolate the effect of extrinsic uncertainty. The problem is then to quantify the uncertainty in the computed outputs that stems from the uncertainty in the parameter values.
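To make question 1 concrete, the following minimal Python sketch quantifies an outcome as y ± σ_y via Monte Carlo sampling. The PV model, the irradiance distribution and all numbers are illustrative assumptions in the spirit of the educational irradiance/PV example mentioned below, not code from the deliverable.

```python
# Minimal Monte Carlo sketch of quantifying an outcome as y ± σ_y.
# The toy PV model and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def pv_power(irradiance_w_m2, efficiency=0.18, area_m2=10.0):
    """Toy PV model: electrical power in W from irradiance in W/m^2."""
    return efficiency * area_m2 * irradiance_w_m2

# Uncertain input: irradiance ~ Normal(800, 50) W/m^2 (assumed distribution)
samples = rng.normal(loc=800.0, scale=50.0, size=10_000)
outputs = pv_power(samples)

# Question 1(a): single-number summary y ± σ_y
print(f"y = {outputs.mean():.1f} W ± {outputs.std(ddof=1):.1f} W")

# Question 1(b): a fuller characterisation via the empirical distribution
q05, q95 = np.quantile(outputs, [0.05, 0.95])
print(f"90% interval: [{q05:.1f}, {q95:.1f}] W")
```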

To help answer these questions, several approaches are introduced:

  • Guidelines on how to assess uncertainty, as an extension of the Holistic Test Description (HTD) template. For this, an additional Microsoft Office Excel template was developed, which supports the user in the process.
  • An analytical approach for uncertainty propagation, which requires an accurate mathematical description of the underlying model but no model evaluations (see the first sketch after this list).
  • Several sampling approaches, including purely random Monte Carlo (MC) sampling and quasi-random Sobol sequences. These can be applied to any system, though they can be costly due to the high number of model evaluations that might be necessary. To support these approaches, a toolbox was developed that instantiates multiple simulation runs based on a JavaScript Object Notation (JSON) configuration file. Depending on the configuration, sampling approaches and data analysis are applied automatically.
  • The method of Sobol indices for sensitivity analysis (see the second sketch after this list).
  • The tool MoReSQUE for uncertainty propagation of stateless simulators integrated into a co-simulation setup.
  • System validation guidelines for investigating the representational uncertainty between a simulation model and an experimental setup.
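As a complement to the analytical propagation approach listed above, the first sketch below illustrates the first-order (Taylor-series) propagation rule σ_y² ≈ Σᵢ (∂f/∂xᵢ)² σᵢ², using sympy on the same hypothetical PV model; the model, symbols and numbers are illustrative assumptions, not the deliverable's own implementation.

```python
# First-order (Gaussian) uncertainty propagation with sympy.
# Hypothetical PV model P = eta * A * G; all values are assumptions.
import sympy as sp

eta, A, G = sp.symbols("eta A G", positive=True)
P = eta * A * G  # PV power = efficiency * panel area * irradiance

nominal = {eta: 0.18, A: 10.0, G: 800.0}  # nominal input values (assumed)
sigma = {eta: 0.01, A: 0.0, G: 50.0}      # input standard deviations (assumed)

# var(P) ≈ sum_i (dP/dx_i)^2 * sigma_i^2, evaluated at the nominal point
var_P = sum(
    float(sp.diff(P, x).subs(nominal)) ** 2 * s**2 for x, s in sigma.items()
)
print(f"P = {float(P.subs(nominal)):.0f} W ± {var_P ** 0.5:.1f} W")
```

Note that only partial derivatives are evaluated here; no repeated model runs are needed, which is what makes the analytical approach cheap when a closed-form model is available.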
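For the Sobol sensitivity analysis, a widely used open-source implementation is the third-party SALib package; the second sketch below uses it on a made-up three-input PV model to show how first-order and total-order indices are obtained. SALib is a stand-in for illustration and is not the toolbox developed in this deliverable.

```python
# Variance-based sensitivity analysis (Sobol indices) with SALib.
# The three-input toy model and all bounds are illustrative assumptions.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["irradiance", "temperature", "efficiency"],
    "bounds": [[600.0, 1000.0], [10.0, 40.0], [0.15, 0.21]],
}

# Quasi-random Saltelli sampling: N * (2 * num_vars + 2) model evaluations
X = saltelli.sample(problem, 1024)

# Toy PV model: power decreases slightly with temperature (made-up coefficients)
Y = X[:, 2] * 10.0 * X[:, 0] * (1.0 - 0.004 * (X[:, 1] - 25.0))

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order S1 = {s1:.2f}, total-order ST = {st:.2f}")
```

A dominant first-order index identifies the input whose uncertainty contributes most to the output variance, which answers question 2(a) above.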

All approaches are demonstrated on a small educational example with uncertain irradiance and a PV system. Some more complex examples are used to highlight the advanced capabilities of the approaches in more detail. Additionally, the multi-energy benchmark is used as an example. The guideline example material and some of the use cases are available in an Open Access (OA)/Open Source (OS) repository.

Files

ERIGrid2-D102-Test-Reproducibility.pdf (11.3 MB, md5:d46270f98b09d5d13d3d2e1a6aca467f)


Additional details

Funding

European Commission
ERIGrid 2.0 - European Research Infrastructure supporting Smart Grid and Smart Energy Systems Research, Technology Development, Validation and Roll Out – Second Edition (grant agreement No. 870620)