Published February 6, 2026 | Version 1.0
Preprint · Open Access

Hybrid Semantic Bottleneck Networks for Interpretable Deep Learning

Authors/Creators

Description

Deep learning models often achieve high predictive accuracy at the cost of interpretability, limiting their applicability in safety-critical and regulated domains. Concept Bottleneck Models (CBMs) address this issue by enforcing decisions through human-interpretable concepts, but frequently suffer from a significant loss in predictive performance due to excessive representational constraints.


In this work, we propose a hybrid semantic bottleneck architecture that explicitly separates human-defined, auditable concepts from unconstrained latent representations. The proposed model enforces interpretability where semantic supervision is available, while preserving residual capacity for performance-critical information.
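As a rough illustration of this separation, the following PyTorch sketch pairs a supervised concept head with an unconstrained residual head feeding a shared classifier. The module layout, layer sizes, and names are our own assumptions for a Fashion-MNIST-sized input, not the authors' released implementation (see the linked Colab notebook for that).

```python
import torch
import torch.nn as nn

class HybridSemanticBottleneck(nn.Module):
    """Minimal sketch of a hybrid semantic bottleneck (hypothetical layout).

    A shared encoder feeds two parallel heads: a concept head supervised
    with human-labeled concepts (the auditable pathway) and a residual head
    holding unconstrained latent capacity for task-relevant information
    that the concepts do not cover.
    """

    def __init__(self, num_concepts=8, residual_dim=16, num_classes=10):
        super().__init__()
        # Small convolutional encoder for 28x28 grayscale inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat_dim = 64 * 7 * 7
        # Interpretable pathway: one sigmoid unit per human-defined concept.
        self.concept_head = nn.Linear(feat_dim, num_concepts)
        # Unconstrained pathway: free latent features, no semantic supervision.
        self.residual_head = nn.Linear(feat_dim, residual_dim)
        # The classifier sees concepts and residual features side by side.
        self.classifier = nn.Linear(num_concepts + residual_dim, num_classes)

    def forward(self, x):
        feats = self.encoder(x)
        concepts = torch.sigmoid(self.concept_head(feats))
        residual = self.residual_head(feats)
        logits = self.classifier(torch.cat([concepts, residual], dim=1))
        return logits, concepts
```

Exposing the concept activations alongside the class logits is what makes the interpretable pathway auditable: a downstream check can inspect which concepts fired for a given prediction.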


Experiments on the Fashion-MNIST dataset demonstrate that the hybrid approach recovers most of the accuracy of a standard convolutional baseline while maintaining over 98% accuracy on human-supervised concepts. Qualitative analysis further shows that the model activates human concepts only when semantically applicable and explicitly refrains from producing explanations when no known concept applies.
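A plausible way to train and score such a model is sketched below: a joint objective that always applies the task loss but applies concept supervision only where concept labels exist, plus a thresholded concept-accuracy metric. The loss weight, masking scheme, and 0.5 threshold are assumptions for illustration, not values reported in the record.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, concept_probs, targets, concept_targets,
                concept_mask, lam=1.0):
    # Task loss is always applied.
    task_loss = F.cross_entropy(logits, targets)
    # Concept supervision is applied only where human labels exist; masked-out
    # entries contribute nothing, leaving the residual pathway unconstrained.
    bce = F.binary_cross_entropy(concept_probs, concept_targets, reduction="none")
    concept_loss = (bce * concept_mask).sum() / concept_mask.sum().clamp(min=1)
    return task_loss + lam * concept_loss

def concept_accuracy(concept_probs, concept_targets, concept_mask):
    # Threshold sigmoid outputs at 0.5 and compare against human concept labels,
    # counting only the supervised (masked-in) entries.
    preds = (concept_probs > 0.5).float()
    correct = ((preds == concept_targets).float() * concept_mask).sum()
    return (correct / concept_mask.sum().clamp(min=1)).item()
```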


These results suggest that interpretability and performance are not inherently conflicting objectives, provided that semantic constraints are applied selectively rather than globally.


This record is supplemented by an executable Google Colab notebook.

Google Colab Examples

Files

HGC-Net.pdf (317.4 kB)
md5:136ab4a6f2943532d9af15dc26d4b72a