Published February 28, 2026 | Version v1
Preprint | Open Access

On the Representational Equivalence of Threshold Networks and NP-Verifiers: Expressive Power, Hardness Barriers, and the Limits of Boolean Encodability

Description


This paper provides a self-contained, formally rigorous treatment of a classical but frequently misunderstood result in computational complexity and neural network theory: threshold networks are universal Boolean circuits, yet this universality confers no computational advantage.

The work consolidates three classical lines of results — with complete proofs, explicit gate constructions, uniformity statements, and diagrams — that jointly refute the common but incorrect inference that representational capacity implies computational tractability:

  1. Universal representation: every Boolean function on n inputs is computed by a depth-3 threshold network of size at most 2ⁿ + n + 1 with integer weights of magnitude at most n.
  2. Uniform NP encoding: every NP-verifier compiles, in polynomial time, into a polynomial-size threshold network family via a formally uniform Cook–Levin reduction.
  3. Two hardness barriers: deciding whether some input makes a threshold network output 1 (NetworkSAT) is NP-complete, and training even a three-node threshold network to be consistent with a given sample is NP-complete.
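The universal representation in result 1 rests on the standard minterm construction: one hidden threshold unit per satisfying assignment, firing exactly on that assignment, followed by an OR-threshold output. A minimal sketch of that construction (the function names `build_network` and `evaluate` are illustrative, not the paper's notation; weights here are ±1, well within the stated magnitude bound of n):

```python
from itertools import product

def threshold_gate(weights, bias, x):
    # A threshold gate fires iff sum_i w_i * x_i >= bias.
    return int(sum(w * xi for w, xi in zip(weights, x)) >= bias)

def build_network(f, n):
    """One hidden unit per satisfying assignment a of f.

    The unit for a uses weights w_i = 2*a_i - 1 (i.e. +1 where a_i = 1,
    -1 where a_i = 0) and bias sum(a): the weighted sum attains sum(a)
    only at x == a and is strictly smaller everywhere else, so the unit
    fires exactly on that one assignment.
    """
    hidden = []
    for a in product([0, 1], repeat=n):
        if f(a):
            weights = [2 * ai - 1 for ai in a]
            hidden.append((weights, sum(a)))
    return hidden

def evaluate(hidden, x):
    # Output unit: OR of the hidden activations (threshold 1 over 0/1 values).
    acts = [threshold_gate(w, b, x) for w, b in hidden]
    return threshold_gate([1] * len(acts), 1, acts)

# Parity on 3 bits is representable, like every Boolean function:
net = build_network(lambda x: x[0] ^ x[1] ^ x[2], 3)
print(evaluate(net, (1, 0, 0)))  # -> 1
```

With up to 2ⁿ hidden units this matches the size bound in result 1; the point of the paper is precisely that such a network exists for every f, yet constructing or querying it efficiently is a separate matter.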

No new theorems are claimed. The contribution lies in precision, uniformity, and completeness of presentation, providing a single self-contained pedagogical reference that sharply delineates what threshold networks can represent from what they can solve or learn efficiently.
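The hardness direction of the NetworkSAT barrier (item 3 above) comes from the fact that any CNF formula compiles directly into a threshold network: one hidden unit per clause, firing iff the clause is satisfied, under an AND-threshold output. A hedged sketch, assuming DIMACS-style clause lists and assuming each variable appears at most once per clause (the names `cnf_to_network` and `accepts` are illustrative):

```python
def threshold_gate(weights, bias, x):
    # A threshold gate fires iff sum_i w_i * x_i >= bias.
    return int(sum(w * xi for w, xi in zip(weights, x)) >= bias)

def cnf_to_network(clauses, n):
    """One hidden unit per clause; literals are DIMACS-style integers
    (+i for x_i, -i for NOT x_i, variables numbered 1..n).

    A clause with k negated literals is satisfied iff its weighted sum
    (+1 on positive literals, -1 on negative ones) is at least 1 - k.
    """
    hidden = []
    for clause in clauses:
        weights = [0] * n
        negs = sum(1 for lit in clause if lit < 0)
        for lit in clause:
            weights[abs(lit) - 1] = 1 if lit > 0 else -1
        hidden.append((weights, 1 - negs))
    return hidden

def accepts(hidden, x):
    # Output unit: AND of all clause units (threshold = number of clauses).
    acts = [threshold_gate(w, b, x) for w, b in hidden]
    return threshold_gate([1] * len(acts), len(acts), acts)

# (x1 OR x2) AND (NOT x1 OR NOT x2): satisfied exactly when x1 != x2.
net = cnf_to_network([[1, 2], [-1, -2]], 2)
print(accepts(net, (1, 0)))  # -> 1
```

Since the resulting network accepts some input iff the formula is satisfiable, deciding NetworkSAT is at least as hard as SAT; membership in NP is immediate, as a satisfying input can be guessed and the network evaluated in polynomial time.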


MSC 2020: 68Q17, 68T07, 94C10


Files (582.0 kB)

Neural_Networks_and_the_NP_Question_2026.pdf (497.4 kB, md5:91b9d7fc7411fe6126e7cd805e062e79)
(second file, name not recovered) (84.6 kB, md5:b7278f30f4878c3f493d530099668c92)