Spike neuron optimization using deep reinforcement learning
Description
Deep reinforcement learning (DRL), which combines reinforcement learning with artificial neural networks, allows agents to take the best possible actions to achieve their goals. Spiking neural networks (SNNs) are difficult to train because the spike function of the spiking neuron is non-differentiable. To overcome this difficulty, a deep Q-network (DQN) and deep Q-learning with a normalized advantage function (NAF) are proposed to interact with a custom environment. DQN is applied to a discrete action space, whereas NAF is implemented for a continuous action space. Models using both algorithms are trained and tested to validate their ability to balance the firing rates of the excitatory and inhibitory populations of spiking neurons. Training results show that both agents are able to explore the custom environment built with the OpenAI Gym framework, and the trained models for both algorithms are able to balance the excitatory and inhibitory firing rates. NAF achieved an average percentage error of 0.80% between the target and actual neuron firing rates, whereas DQN obtained 0.96%. NAF also reached the goal faster than DQN, taking only 3 steps for the actual output neuron rate to meet or approach the target firing rate.
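To illustrate the setup described above, the sketch below shows a minimal custom OpenAI Gym environment in which an agent adjusts inhibitory drive so that the actual firing rate approaches a target rate. The class name `FiringRateBalanceEnv`, the simple rate model, and parameters such as `target_rate` and the action increment are illustrative assumptions, not the authors' actual environment; a real implementation would query an SNN simulation for the firing rates.

```python
import numpy as np
import gym
from gym import spaces


class FiringRateBalanceEnv(gym.Env):
    """Hypothetical environment: adjust inhibitory drive so the actual
    firing rate matches a target rate. The rate model is a placeholder,
    not the SNN simulation used in the original work."""

    def __init__(self, target_rate=10.0, max_steps=50):
        super().__init__()
        self.target_rate = target_rate      # desired firing rate (assumed units: Hz)
        self.max_steps = max_steps
        # Discrete action space for DQN: decrease, hold, or increase inhibition.
        self.action_space = spaces.Discrete(3)
        # Observation: current firing rate and its deviation from the target.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)
        self.reset()

    def _firing_rate(self):
        # Placeholder rate model: excitatory drive minus inhibitory drive,
        # clipped at zero. A real setup would run the spiking simulation here.
        return max(0.0, self.exc_drive - self.inh_drive)

    def reset(self):
        self.exc_drive = 20.0
        self.inh_drive = np.random.uniform(0.0, 20.0)
        self.steps = 0
        rate = self._firing_rate()
        return np.array([rate, rate - self.target_rate], dtype=np.float32)

    def step(self, action):
        # Map the discrete action {0, 1, 2} to a change of {-1, 0, +1}
        # in the inhibitory drive.
        self.inh_drive += (action - 1) * 1.0
        self.steps += 1
        rate = self._firing_rate()
        error = rate - self.target_rate
        # Reward is highest when the actual rate matches the target rate.
        reward = -abs(error)
        done = abs(error) < 0.5 or self.steps >= self.max_steps
        obs = np.array([rate, error], dtype=np.float32)
        return obs, reward, done, {}
```

A DQN agent would then interact with this environment through the usual `env.reset()` / `env.step(action)` loop; for the continuous-action NAF variant, `spaces.Discrete(3)` would be replaced by a one-dimensional `spaces.Box` giving the adjustment directly.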
Files
Name | Size
---|---
20 20839 1570678890.pdf (md5:56fc909ed2137ced45717809ee169e4d) | 370.0 kB