Published April 30, 2020 | Version v1
Journal article | Open Access

Novel Pruning Techniques in Convolutional Neural Networks

  • 1. Assistant Professor, Department of Computer Science and Engineering, Northern India Engineering College (affiliated to GGSIPU), Delhi, India
  • 2. B.Tech student, Department of Computer Engineering, Delhi Technological University, India

Description

Deep learning allows us to build powerful models for problems such as image classification, time series prediction, and natural language processing. This comes at the cost of large storage and processing requirements, which machines with limited resources sometimes cannot meet. In this paper, we compare different methods that tackle this problem through network pruning. A selected few pruning methodologies from the deep learning literature were implemented to demonstrate their results. Modern neural architectures combine different kinds of layers, such as convolutional layers, pooling layers, and dense layers. We compare pruning techniques for dense layers (such as unit/neuron pruning and weight pruning) as well as for convolutional layers (using the L1 norm, the Taylor expansion of the loss to determine the importance of convolutional filters, and Variable Importance in Projection using Partial Least Squares) for the image classification task. This study aims to ease the optimization overhead of deep neural networks for academic as well as commercial use.
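
As a rough illustration, and not the authors' own implementation, the Python sketch below shows two of the criteria mentioned above, assuming a PyTorch model: magnitude-based weight pruning for a dense layer and L1-norm ranking of convolutional filters. The layer shapes, the sparsity level, and the helper names (weight_prune, l1_filter_scores) are illustrative assumptions.

import torch
import torch.nn as nn

def weight_prune(layer: nn.Linear, sparsity: float) -> None:
    # Weight pruning for a dense layer: zero out the smallest-magnitude weights.
    w = layer.weight.data
    k = int(sparsity * w.numel())                  # number of weights to zero
    if k == 0:
        return
    threshold = w.abs().flatten().kthvalue(k).values
    mask = (w.abs() > threshold).float()           # keep only larger-magnitude weights
    layer.weight.data.mul_(mask)

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # L1-norm criterion for convolutional filters: sum of absolute weights per filter.
    # Filters with the smallest L1 norm are the first candidates for removal.
    return conv.weight.data.abs().sum(dim=(1, 2, 3))

# Toy usage: layer sizes and the 50% sparsity level are arbitrary choices.
dense = nn.Linear(128, 64)
weight_prune(dense, sparsity=0.5)                  # zero roughly half the weights

conv = nn.Conv2d(16, 32, kernel_size=3)
scores = l1_filter_scores(conv)                    # one score per output filter
least_important = torch.argsort(scores)[:8]        # indices of the 8 weakest filters

The other criteria compared in the paper (unit/neuron pruning, Taylor expansion of the loss, and PLS-based Variable Importance in Projection) would replace the scoring step above; the masking or filter-removal step stays the same.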

Files

D8397049420.pdf

Files (832.7 kB)

md5:b209dc2542c50c7b8b70f918b4078af3

Additional details

Related works

Is cited by
Journal article: 2249-8958 (ISSN)

Subjects

ISSN
2249-8958
Retrieval Number
D8397049420/2020©BEIESP