ekrell/geoscience-attribution-benchmarks: unicov benchmark: analyze variance of XAI methods
Authors/Creators
Description
The release includes the benchmark unicov, a set of scripts and config files for generating XAI benchmarks. The purpose is to quantitatively demonstrate that XAI outputs can vary across replicated model training runs: the only change is the initial seed of the NN model weights, yet the XAI outputs can be very different. We show that increasing the strength of correlation among input features expands the set of potential learned functions that achieve high performance. Because the model has so many options for what to learn from the data, each trained model can yield very different explanations.
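The core idea can be sketched in a few lines. The snippet below is a hypothetical, minimal illustration (not the actual unicov scripts): train the same small network twice on the same correlated synthetic data, changing only the weight-initialization seed, then compare a simple gradient-based attribution between the two runs.

```python
# Hypothetical sketch of the unicov idea: identical data and architecture,
# different weight-init seeds, compared via gradient-based attributions.
import numpy as np

def make_correlated_data(n=2000, d=8, rho=0.95, seed=0):
    """Synthetic inputs whose features are strongly correlated (strength rho)."""
    rng = np.random.default_rng(seed)
    cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)
    X = rng.multivariate_normal(np.zeros(d), cov, size=n)
    y = X @ np.arange(1, d + 1, dtype=float)  # known linear target
    return X, y

def train_mlp(X, y, seed, hidden=16, steps=500, lr=1e-3):
    """One-hidden-layer MLP trained by gradient descent; only `seed`
    changes the random weight initialization between runs."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    n = len(y)
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)
        err = (H @ W2 + b2).ravel() - y
        dH = (err[:, None] @ W2.T) * (1 - H**2)  # backprop through tanh
        W2 -= lr * (H.T @ err[:, None] / n); b2 -= lr * err.mean(keepdims=True)
        W1 -= lr * (X.T @ dH / n);           b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2, b2

def saliency(params, x):
    """Gradient of the network output w.r.t. the input: a basic XAI attribution."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return ((1 - h**2) * W2.ravel()) @ W1.T

X, y = make_correlated_data()
x = X[0]
a1 = saliency(train_mlp(X, y, seed=1), x)
a2 = saliency(train_mlp(X, y, seed=2), x)
cos = a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2))
print(f"attribution cosine similarity across seeds: {cos:.3f}")
```

When the input features are highly correlated, many weight configurations fit the target equally well, so the two attribution vectors need not agree even though both models perform well.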
This release corresponds to the results shown at AGU 2023 and AMS 2024. The poster is available here.
Files
| Name | Size |
|---|---|
| ekrell/geoscience-attribution-benchmarks-v2.0.1.zip (md5:477cee21ffd0a3e06a910a5741eca1a0) | 5.7 MB |
Additional details
Related works
- Is supplement to: Software: https://github.com/ekrell/geoscience-attribution-benchmarks/tree/v2.0.1 (URL)