
Perfectly Parallel Fairness Certification of Neural Networks - Artifact

Urban, Caterina; Christakis, Maria; Wüstholz, Valentin; Zhang, Fuyuan

There is growing concern that machine-learned software, which currently assists or even automates decision making, reproduces, and in the worst case reinforces, bias present in the training data. The development of tools and techniques for certifying fairness of this software or describing its biases is, therefore, critical. In this paper, we propose a *perfectly parallel* static analysis for certifying *fairness* of feed-forward neural networks used for classification of tabular data. When certification succeeds, our approach provides definite guarantees; otherwise, it describes and quantifies the biased input space regions. We design the analysis to be *sound*, in practice also *exact*, and configurable in terms of scalability and precision, thereby enabling *pay-as-you-go certification*. We implement our approach in an open-source tool called Libra and demonstrate its effectiveness on neural networks trained on popular datasets.

This is the artifact accompanying the published paper. 
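To make the certified property concrete, the following minimal sketch illustrates the kind of fairness question the analysis answers: does the predicted class of a feed-forward network on tabular data depend on a sensitive input feature? The sketch is not Libra's analysis (which is a sound, perfectly parallel static analysis, not enumeration); the toy network, the choice of the third input as the sensitive feature, and the input grid are all hypothetical, chosen only for illustration.

```python
# Illustrative sketch only (NOT Libra's algorithm): a brute-force check of the
# fairness notion targeted by the paper -- the predicted class should not
# change when only the sensitive input feature changes.
# The tiny network, the sensitive feature, and the grid are hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward network: 3 inputs (the last one plays the role of the
# sensitive feature), one hidden ReLU layer, 2 output classes.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)

def predict(x):
    h = np.maximum(0.0, x @ W1 + b1)    # hidden ReLU layer
    return int(np.argmax(h @ W2 + b2))  # predicted class

# Enumerate a coarse grid over the non-sensitive features and both values of
# the (binary) sensitive feature; collect grid points where the prediction
# flips, i.e. points witnessing bias with respect to the sensitive feature.
grid = np.linspace(0.0, 1.0, 11)
biased = [
    (a, b)
    for a, b in itertools.product(grid, grid)
    if predict(np.array([a, b, 0.0])) != predict(np.array([a, b, 1.0]))
]

print(f"biased grid points: {len(biased)} of {grid.size ** 2}")
```

Whereas this sketch only samples a finite grid, the approach in the paper reasons over the entire (continuous) input space and either certifies the property or describes and quantifies the biased input regions.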

Files (70.1 MB)

Libra.zip (70.1 MB, md5:0f6b4b3d35107dfb03550a0b0307bb98)