Published October 23, 2025 | Version 1.0
Model | Open Access

Preprint, models and results: Biologically informed neural network models are robust to spurious interactions via self-pruning

  • 1. Karolinska Institutet
  • 2. MIT

Description

For our paper Biologically informed neural network models are robust to spurious interactions via self-pruning.

The code is available on GitHub: https://github.com/AvlantNilssonLab/LEMBAS_GPU

The repository offers a short tutorial on how to reproduce the main results. If you do not wish to redo all the computation, one of the steps is to download the trained models and results from Zenodo.
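If you prefer to script the download, Zenodo exposes each record's file list through its public REST API (`/api/records/<id>`). A minimal sketch follows; the record ID used here is a placeholder, not this deposit's actual ID, and the field names (`key`, `size`, `files`) follow Zenodo's documented response format:

```python
import json
import urllib.request


def record_api_url(record_id: int) -> str:
    """Build the Zenodo REST API URL for a record."""
    return f"https://zenodo.org/api/records/{record_id}"


def list_files(record_id: int) -> list[dict]:
    """Fetch a record's metadata and return its file entries.

    Each entry contains 'key' (the filename), 'size' (bytes),
    and a 'links' dict with the download URL.
    """
    with urllib.request.urlopen(record_api_url(record_id)) as resp:
        meta = json.load(resp)
    return meta.get("files", [])


# Example usage (placeholder record ID, requires network access):
# for f in list_files(1234567):
#     print(f["key"], f["size"])
```

From there, each file's `links` entry can be passed to `urllib.request.urlretrieve` to save the trained models locally.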


The data set includes:

All results for the figures, including the cross-validation, and supplementary figures, such as a more in-depth look at self-pruning.

The trained models for the self-pruning study.


The abstract for the paper is:

Computational models of cellular networks hold promise to uncover disease mechanisms and guide therapeutic strategies. Biology-informed neural networks (BINNs) are an emerging approach to create such models by combining the predictive power of deep learning with prior knowledge, a vital aspect of biological research. The architecture of BINNs enforces a network structure from which mechanisms can ideally be inferred. However, a key challenge is to evaluate the reliability of these mechanisms, as cells are inherently complex, involving intricate and sometimes unknown interactions. Currently, analysis has mainly focused on selected pathways rather than taking a more comprehensive perspective. In this work we demonstrate an improved, holistic approach: we measure to what extent purposefully introduced spurious interactions are removed by a BINN during training (self-pruning). This metric is scalable and generalizable, as it does not depend on manual curation and can therefore be translated into diverse network settings. To enable the necessary rapid network-wide testing, we reimplemented LEMBAS (Large-scale knowledge-EMBedded Artificial Signaling-networks), our recurrent neural network framework for intracellular signaling dynamics, with full GPU acceleration. Our implementation achieves a >7-fold speedup compared to the original CPU version while preserving predictive accuracy. We evaluated self-pruning on three different datasets and found that when spurious interactions are introduced at random, the model prunes these to a much larger extent than those from the prior knowledge network (PKN), provided that the model is regularized with a sufficiently large L2 norm. This suggests that BINNs are robust to uncertainty in the PKN and provides a quantitative sign that they learn real aspects of the modeled systems through training.
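LEMBAS itself is a recurrent signaling-network model, but the underlying principle that a sufficiently large L2 penalty drives the weights of uninformative (spurious) interactions toward zero can be illustrated with a much simpler toy. The sketch below is not the paper's method; it fits a ridge regression where half the features are spurious by construction and compares the learned weight magnitudes (all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_real, p_spur = 200, 5, 5

# Design matrix with 5 informative and 5 spurious features;
# only the informative ones contribute to the response.
X = rng.normal(size=(n, p_real + p_spur))
w_true = np.concatenate([rng.normal(size=p_real), np.zeros(p_spur)])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Closed-form ridge (L2-regularized) solution:
# w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
w_hat = np.linalg.solve(
    X.T @ X + lam * np.eye(p_real + p_spur), X.T @ y
)

# Mean absolute weight for each group: the spurious weights
# should be shrunk far below the informative ones.
real_mag = np.abs(w_hat[:p_real]).mean()
spur_mag = np.abs(w_hat[p_real:]).mean()
print(f"informative: {real_mag:.3f}  spurious: {spur_mag:.3f}")
```

In the paper's setting, the analogous comparison is between weights of randomly injected interactions and weights of interactions from the prior knowledge network, measured after training the recurrent model.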

Files

Files (363.1 MB)

md5:a53b7bb0ba78902478a220cd4b988cd8 (6.3 kB)
md5:a8302b80a5720e29da7c721b803b1a95 (378.3 kB)
md5:b034a83cb48452c99d6a448b4089b486 (282.2 kB)
md5:efa81a77203b425aac96634fba868d4e (362.5 MB)

Additional details

Dates

Created
2024-09-01 to 2025-10-01 (models and results)

Software

Repository URL
https://github.com/AvlantNilssonLab/LEMBAS_GPU
Programming language
Python