Published February 23, 2024 | Version v2.0.1
Software | Open Access

ekrell/geoscience-attribution-benchmarks: unicov benchmark: analyze variance of XAI methods

Description

This release includes the benchmark unicov: a set of scripts and config files for generating XAI benchmarks. The purpose is to quantitatively demonstrate that XAI outputs can vary across replicated model training runs. That is, the only change is the random seed used to initialize the NN weights, yet the attribution outputs can differ substantially. We show that increasing the strength of correlations in the data admits many potential learned functions that achieve high performance. Because the model has so many options for what to learn from the data, each trained model can yield very different explanations.
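The seed-sensitivity effect can be illustrated with a toy sketch (not part of the released scripts, and much simpler than the NN benchmarks): with perfectly correlated features, a linear model trained by gradient descent converges to identical predictions regardless of the weight-init seed, but the per-feature weights, which serve as the attributions here, depend on the random initialization.

```python
import numpy as np

def train_linear(X, y, seed, lr=0.1, steps=500):
    """Fit y ~ X @ w by gradient descent from a seeded random init."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1])  # two perfectly correlated features
y = x1                         # target depends on the shared signal

# Replicate training with only the weight-init seed changed.
weights = np.array([train_linear(X, y, seed) for seed in range(5)])

# Every run predicts equally well (weights sum to ~1 along the shared
# direction), but how credit is split between the two correlated
# features is set by the random init and never corrected by training.
print(weights)
```

Because the gradient is identical in both coordinates for collinear features, the difference between the two weights is frozen at its initial random value: same skill, different explanation, which is the phenomenon the unicov benchmarks quantify at NN scale.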

This release corresponds to the results shown at AGU 2023 and AMS 2024. The poster is available here.

Files (5.7 MB)

ekrell/geoscience-attribution-benchmarks-v2.0.1.zip

Additional details