Within-layer Diversity Reduces Generalization Gap
- 1. Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- 2. Programme for Environmental Information, Finnish Environment Institute, Jyväskylä, Finland
- 3. Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
Description
Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization. At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback that encourages diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer's overall diversity. By penalizing similarities and promoting diversity, we encourage each neuron to learn a distinctive representation and, thus, enrich the data representation learned within the layer and increase the total capacity of the model. We theoretically and empirically study how within-layer activation diversity affects the generalization performance of a neural network and prove that increasing the diversity of hidden activations reduces the generalization gap.
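The core idea lends itself to a compact implementation. Below is a minimal PyTorch sketch of a within-layer diversity penalty, assuming cosine similarity as the pairwise measure; the paper's exact similarity function and penalty weighting may differ, and the function name and the weight `lam` are illustrative.

```python
import torch

def within_layer_diversity_penalty(activations: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise similarity between neuron outputs in one layer.

    activations: (batch_size, num_neurons) hidden activations of a layer.
    Returns a scalar penalty; adding it to the task loss encourages the
    layer's neurons to produce distinct (diverse) activations.
    Note: cosine similarity is an assumed choice, not necessarily the
    paper's exact similarity measure.
    """
    # Each row of the transpose is one neuron's output over the batch;
    # normalizing rows makes the Gram matrix below hold cosine similarities.
    per_neuron = torch.nn.functional.normalize(activations.t(), dim=1)
    similarity = per_neuron @ per_neuron.t()  # (n, n) pairwise similarities
    n = similarity.shape[0]
    # Zero out the diagonal (each neuron's similarity with itself is 1).
    off_diag = similarity - torch.eye(n, device=similarity.device)
    # Mean squared off-diagonal similarity: 0 when neurons are orthogonal.
    return (off_diag ** 2).sum() / (n * (n - 1))
```

In training, the penalty would be added to the task loss for each regularized hidden layer, e.g. `loss = task_loss + lam * within_layer_diversity_penalty(hidden)`, where `lam` trades off diversity against the primary objective.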
Files

Name | Size | Checksum
---|---|---
26CameraReadyNeural_networks_Diversity_camera_Ready.pdf | 447.5 kB | md5:2dc44f3a27a86cd21a21cb59511fe765