Published August 23, 2023 | Version v1
Preprint | Open Access

Layer-wise Feedback Propagation

  • 1. Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI)
  • 2. Technical University of Berlin

Description

In this paper, we present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors that utilizes explainability, specifically Layer-wise Relevance Propagation (LRP), to assign rewards to individual connections based on their respective contributions to solving a given task. This differs from traditional gradient descent, which updates parameters towards an estimated loss minimum. LFP distributes a reward signal throughout the model without the need for gradient computations. It then strengthens structures that receive positive feedback while reducing the influence of structures that receive negative feedback. We establish the convergence of LFP theoretically and empirically, and demonstrate that it achieves performance comparable to gradient descent on various models and datasets. Notably, LFP overcomes certain limitations of gradient-based methods, such as the reliance on meaningful derivatives. We further investigate how the different LRP rules can be extended to LFP and what their effects on training are, as well as potential applications, such as training models without meaningful derivatives, e.g., step-function-activated Spiking Neural Networks (SNNs), or transfer learning, where existing knowledge can be utilized efficiently.
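
For illustration, the sketch below shows how an LFP-style update could look in principle: a scalar feedback signal placed at the output is decomposed over individual connections with an LRP-like z-rule and then used to strengthen or weaken each weight directly, with no gradient computation. This is a minimal, hypothetical sketch only; the toy network, the reward definition, the epsilon stabilization, the learning rate, and the sign-based magnitude update are assumptions made for this example and are not the authors' reference implementation.

```python
# Minimal, illustrative sketch of an LFP-style update (assumptions throughout,
# not the authors' reference implementation): a scalar reward is decomposed
# over connections with an LRP-like z-rule and each weight's magnitude is
# increased or decreased according to the feedback it receives.
import numpy as np

rng = np.random.default_rng(0)
eps, lr = 1e-6, 0.1

# Toy two-layer ReLU network: x -> a1 = relu(W1 x) -> scores = W2 a1
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))

def forward(x):
    a1 = np.maximum(W1 @ x, 0.0)
    return a1, W2 @ a1

def stabilize(z):
    # Sign-aware epsilon stabilization of denominators, as in epsilon-LRP.
    return z + eps * np.where(z >= 0, 1.0, -1.0)

def lfp_step(x, target):
    global W1, W2
    a1, scores = forward(x)

    # Feedback instead of a loss gradient (illustrative choice): the predicted
    # output receives +1 if the prediction is correct, -1 otherwise.
    pred = int(np.argmax(scores))
    feedback = np.zeros_like(scores)
    feedback[pred] = 1.0 if pred == target else -1.0

    # Layer 2: distribute each output's feedback over its incoming connections
    # in proportion to their contributions z_ij = w_ij * a_j (LRP z-rule).
    z2 = W2 * a1[None, :]                                    # (out, hidden)
    r2 = z2 * (feedback[:, None] / stabilize(z2.sum(axis=1, keepdims=True)))
    r_hidden = r2.sum(axis=0)                                # feedback per hidden unit

    # Layer 1: same decomposition one layer further down.
    z1 = W1 * x[None, :]                                     # (hidden, in)
    r1 = z1 * (r_hidden[:, None] / stabilize(z1.sum(axis=1, keepdims=True)))

    # Strengthen connections with positive feedback, shrink those with
    # negative feedback (sign(w) makes the update a change in magnitude).
    W2 += lr * np.sign(W2) * r2
    W1 += lr * np.sign(W1) * r1

# Toy usage: repeatedly give feedback for class 0 on a fixed input.
x = np.array([1.0, 0.5, -0.2])
for _ in range(20):
    lfp_step(x, target=0)
print(forward(x)[1])
```

How the feedback signal is defined and which LRP rule family is used for the decomposition are design choices; the paper itself investigates how different LRP rules can be carried over to LFP and what effect they have on training.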

Files (2.3 MB)

  Name: WEBER2024.pdf
  Size: 2.3 MB
  md5:  4589d02ea255e4d1d3a88eaa3da5cded