Published November 13, 2025 | Version v1
Conference paper · Open Access

Remarks on the Universal Approximation Property of Feedforward Neural Networks

  • 1. University of Ostrava

Abstract
This paper presents a structured overview and novel insights into the universal approximation property of
feedforward neural networks. We categorize existing results based on the characteristics of activation functions
— ranging from strictly monotonic to weakly monotonic and continuous almost everywhere — and examine
their implications under architectural constraints such as bounded depth and width. Building on classical results
by Cybenko [1], Hornik [2], and Maiorov [3], we introduce new activation functions that enable even simpler
neural network architectures to retain universal approximation capabilities. Notably, we demonstrate that
single-layer networks with only two neurons and fixed weights can approximate any continuous univariate
function, and that two-layer networks can extend this capability to multivariate functions. These findings refine
the known lower bounds of neural network complexity and offer constructive approaches that preserve strict
monotonicity, improving upon prior work that relied on relaxed monotonicity conditions. Our results contribute
to the theoretical foundation of neural networks and open pathways for designing minimal yet expressive
architectures.
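The classical setting the abstract builds on (Cybenko, Hornik) can be illustrated numerically: a single hidden layer of sigmoid units, with only the output weights fitted, already approximates a continuous univariate function well on a compact interval. The sketch below is a generic illustration of that setup, not the paper's two-neuron fixed-weight construction; the target function, hidden width, and weight scales are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: an arbitrary continuous function on [0, 1].
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * x

# Single hidden layer with random (then frozen) inner weights and biases;
# only the output weights are fitted, via linear least squares.
n_hidden = 50
W = rng.normal(scale=10.0, size=n_hidden)    # inner weights (fixed)
b = rng.uniform(-10.0, 10.0, size=n_hidden)  # biases (fixed)

x = np.linspace(0.0, 1.0, 200)
H = sigmoid(np.outer(x, W) + b)              # hidden activations, shape (200, 50)
c, *_ = np.linalg.lstsq(H, f(x), rcond=None) # output weights

approx = H @ c
max_err = np.max(np.abs(approx - f(x)))
print(f"max |f - N| on the grid: {max_err:.4f}")
```

Increasing `n_hidden` drives the grid error toward zero, which is the qualitative content of the universal approximation property; the paper's contribution is showing how far the width, depth, and weight constraints can be tightened while retaining it.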

Files

paper18.pdf (273.8 kB)
md5:752714160e63ea172acb8ed69f397814

Additional details

Funding

European Union
Research of Excellence on Digital Technologies and Wellbeing CZ.02.01.01/00/22_008/0004583