
Published September 22, 2022 | Version v1
Resource type: Other | Access: Open

Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism

  • CERTH-ITI

Description

In this paper, two new learning-based eXplainable AI (XAI) methods for deep convolutional neural network (DCNN) image classifiers, called L-CAM-Fm and L-CAM-Img, are proposed. Both methods use an attention mechanism that is inserted into the original (frozen) DCNN and is trained to derive class activation maps (CAMs) from the last convolutional layer's feature maps. During training, the CAMs are applied to the feature maps (L-CAM-Fm) or to the input image (L-CAM-Img), forcing the attention mechanism to learn the image regions that explain the DCNN's outcome. Experimental evaluation on ImageNet shows that the proposed methods achieve competitive results while requiring a single forward pass at the inference stage. Moreover, based on the derived explanations, a comprehensive qualitative analysis is performed, providing valuable insight into the reasons behind classification errors, including possible dataset biases affecting the trained classifier.
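To make the mechanism concrete, below is a minimal PyTorch sketch of the L-CAM-Img idea; it is an illustration, not the authors' implementation. It assumes the attention mechanism is a single 1x1 convolution producing one CAM per class (the paper's exact attention architecture may differ) and that frozen_backbone returns the last convolutional layer's feature maps; all names and hyperparameters here are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LCAMImgSketch(nn.Module):
    # Hypothetical sketch of L-CAM-Img: a trainable attention head on top of a
    # frozen DCNN derives CAMs from the last conv layer's feature maps.
    def __init__(self, frozen_backbone, num_channels, num_classes):
        super().__init__()
        self.backbone = frozen_backbone
        for p in self.backbone.parameters():  # the original classifier stays frozen
            p.requires_grad = False
        # Attention mechanism (assumed form): one CAM per class via a 1x1 conv.
        self.attention = nn.Conv2d(num_channels, num_classes, kernel_size=1)

    def forward(self, image, target_class):
        feats = self.backbone(image)                 # (B, C, h, w) feature maps
        cams = torch.sigmoid(self.attention(feats))  # (B, K, h, w), values in [0, 1]
        cam = cams[torch.arange(image.size(0)), target_class]      # (B, h, w)
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False)  # (B, 1, H, W)
        # Masking the input with the CAM (the L-CAM-Img case) forces the
        # attention head to keep exactly the regions explaining the decision.
        return image * cam, cam.squeeze(1)

During training, the masked image would be passed through the full frozen classifier and a classification loss with respect to the target class minimized, updating only the attention head; at inference the CAM is obtained with a single forward pass, consistent with the description above. L-CAM-Fm would instead apply the (non-upsampled) CAM to the feature maps themselves before the classifier's remaining layers.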

Notes

Accepted for publication; to be included in Proc. ECCV Workshops 2022. The version posted here is the "submitted manuscript" version.

Files (6.0 MB)

  • 2209.11189.pdf (6.0 MB, md5:dd867a608a5c0512496ef69caadaeeba)

Additional details

Funding

  • CRiTERIA – Comprehensive data-driven Risk and Threat Assessment Methods for the Early and Reliable Identification, Validation and Analysis of migration-related risks (European Commission, grant 101021866)
  • AI4Media – A European Excellence Centre for Media, Society and Democracy (European Commission, grant 951911)