There is a newer version of the record available.

Published March 23, 2026 | Version v1
Preprint Open

Nord v4.2: Brain-Inspired Spiking Neural Network Language Model with Emergent Zonal Specialization at 618M Scale


Description

Large language models (LLMs) based on the Transformer architecture have demonstrated
remarkable capabilities in natural language processing. However, these models activate 100% of
parameters for every input token, leading to high computational and energy costs. Spiking Neural
Networks (SNNs), inspired by biological neural computation, offer a fundamentally different
approach: neurons communicate through discrete binary spikes, and most neurons remain silent
most of the time.
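The sparse, event-driven computation described above can be sketched with a minimal leaky integrate-and-fire (LIF) layer, a standard SNN neuron model. This is an illustrative sketch, not Nord's actual dynamics; the threshold, decay constant, and hard-reset rule are assumed values chosen for demonstration.

```python
import numpy as np

def lif_step(v, i_in, v_thresh=1.0, decay=0.9):
    """Advance membrane potentials one timestep; return binary spikes.

    Illustrative LIF dynamics (assumed parameters, not taken from the paper):
    potentials decay, integrate the input current, and emit a discrete spike
    when they cross the threshold, after which they are reset to zero.
    """
    v = decay * v + i_in                      # leaky integration of input current
    spikes = (v >= v_thresh).astype(v.dtype)  # discrete binary spike events
    v = v * (1.0 - spikes)                    # hard reset for neurons that fired
    return spikes, v

# With weak input, most neurons remain silent on any given timestep.
rng = np.random.default_rng(0)
v = np.zeros(1000)
for _ in range(10):
    spikes, v = lif_step(v, rng.normal(0.0, 0.2, size=1000))
print("fraction firing:", spikes.mean())
```

Because only the neurons that actually spike propagate activity, the per-step compute is proportional to the firing rate rather than to the full parameter count, which is the efficiency argument made above.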

Several SNN language models have been proposed. SpikeGPT (Zhu et al., 2023) demonstrated
language generation at 216M parameters using an RWKV-based architecture. BrainTransformers
(LumenScope, 2024) achieved competitive benchmark scores at 3B parameters using an
ANN-to-SNN training pipeline. SpikeLLM converts pretrained LLaMA weights to spiking form.
However, none of these architectures exhibit emergent functional specialization of architectural
zones during training.

Nord v4.2 introduces a zonal SNN architecture where Sensory, Association, Memory, and Executive
zones develop functionally distinct firing rate patterns from uniform initialization through standard
gradient-based training with spike homeostasis regulation. This emergent self-organization mirrors
biological cortical organization, where prefrontal cortex exhibits higher baseline firing rates than
primary sensory cortex (Mountcastle, 1997).
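One common form of spike homeostasis regulation is a penalty on each layer's mean firing rate drifting from a target. The sketch below assumes a single shared target and a squared-error form; the abstract does not specify Nord's exact regularizer or target values, so both are assumptions here.

```python
import numpy as np

def homeostasis_loss(layer_rates, target=0.1):
    """Mean squared deviation of per-layer firing rates from a shared target.

    layer_rates : list of arrays, each holding a layer's observed firing rates
    target      : desired baseline rate (hypothetical value, not from the paper)

    A shared target keeps overall activity bounded while still leaving room
    for individual layers and zones to settle at different internal patterns.
    """
    return float(np.mean([(np.mean(r) - target) ** 2 for r in layer_rates]))

# Two layers whose rates average to the target incur no penalty,
# even though their individual neurons fire at different rates.
rates = [np.array([0.05, 0.15]), np.array([0.10, 0.10])]
print(homeostasis_loss(rates))  # → 0.0
```

Under a regularizer like this, any zonal differentiation in baseline firing rates would arise from the task gradient rather than from per-zone targets, consistent with the claim that specialization emerges from uniform initialization.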

Files

nord_v42_preprint.pdf (15.8 kB)
md5:a8c8ea5234eb3f586de9c9e9b0e6161c