Function Approximation with Spiked Random Networks
Creators
- IITIS-PAN
- University of Pittsburgh
- Chinese Academy of Sciences
Description
This paper examines the function approximation properties of the “random neural-network model” (RNN), whose output is computed from the firing probabilities of selected neurons. We consider a feedforward Bipolar Random Neural Network (BGNN) model, which has both “positive and negative neurons” in its output layer, and prove that it is a universal approximator for bounded continuous functions. Specifically, for any continuous and bounded function f, we constructively prove that there exists a feedforward BGNN that approximates f uniformly with error less than any prescribed ε > 0. We also show that, after an appropriate clamping operation on its output, the feedforward RNN, without the artifice of negative neurons, is also a universal function approximator.
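As a rough illustration of the quantities involved, the sketch below computes the steady-state firing probability of a single random-network neuron and applies a clamping operation to keep the output in [0, 1]. The function name and rate values are hypothetical; the formula q = λ⁺ / (r + λ⁻) is the standard steady-state solution for this model, and the sketch is not the paper's constructive proof.

```python
def firing_probability(lam_plus, lam_minus, r):
    """Steady-state firing probability of a random-network neuron.

    lam_plus:  total arrival rate of excitatory (positive) spikes
    lam_minus: total arrival rate of inhibitory (negative) spikes
    r:         firing rate of the neuron

    Returns q = lam_plus / (r + lam_minus), clamped to at most 1,
    mirroring the clamping operation on the output discussed above.
    (Illustrative sketch only, not the paper's construction.)
    """
    q = lam_plus / (r + lam_minus)
    return min(q, 1.0)


# Example: excitatory rate 2.0, inhibitory rate 1.0, firing rate 3.0
q = firing_probability(2.0, 1.0, 3.0)   # 2.0 / (3.0 + 1.0) = 0.5
```

When the raw ratio exceeds 1 (e.g. a large excitatory rate with little inhibition), the clamp returns exactly 1, which is the role the clamping operation plays in making the plain RNN output a usable function approximator.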
Files (231.7 kB)

| Name | Size | MD5 |
|---|---|---|
| RNN-Fct-Approx.pdf | 231.7 kB | 5d29905f646e443602c783a9fc6fcf86 |