Published July 30, 2023 | License: CC BY-NC-ND 4.0
Journal article · Open Access

Implications of Deep Compression with Complex Neural Networks

  • 1. Department of Electrical and Computer Engineering, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80918, USA.


Description

Abstract: Deep learning and neural networks have become increasingly popular in the field of artificial intelligence. These models can solve complex problems such as image recognition and language processing, but their memory footprint and power consumption are too large for many applications. This has motivated research into techniques that compress these models while retaining accuracy and performance. One such technique is the deep compression three-stage pipeline of pruning, trained quantization, and Huffman coding. In this paper, we apply the principles of deep compression to several complex networks and compare its effectiveness in terms of compression ratio and the quality of the compressed network. While the deep compression pipeline effectively reduces the size of CNN and RNN models with only small performance degradation, it does not work well for more complicated networks such as GANs: in our GAN experiments, compression caused excessive performance degradation. For complex neural networks, careful analysis is needed to discover which parameters allow a GAN to be compressed without loss in output quality.
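
For illustration, the sketch below outlines the three-stage pipeline described in the abstract on a toy Keras model, using the TensorFlow Model Optimization toolkit cited in the references (magnitude pruning and quantization-aware training), with zlib compression standing in for Huffman coding when estimating the compression ratio. The model architecture, sparsity targets, and training settings are illustrative assumptions, not the exact configuration used in the paper.

# Minimal sketch of the deep-compression pipeline (pruning -> trained
# quantization -> entropy coding); all settings below are illustrative.
import zlib
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Stage 0: a small example network standing in for the CNN/RNN/GAN under test.
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Stage 1: magnitude pruning -- gradually drive 80% of the weights to zero.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.50, final_sparsity=0.80, begin_step=0, end_step=1000)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule)
pruned_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
# Fine-tune the pruned network on real data, e.g.:
# pruned_model.fit(x_train, y_train, epochs=2,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
stripped = tfmot.sparsity.keras.strip_pruning(pruned_model)

# Stage 2: quantization-aware training so the weights survive 8-bit quantization.
quantized_model = tfmot.quantization.keras.quantize_model(stripped)
quantized_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
# quantized_model.fit(x_train, y_train, epochs=2)  # fine-tune with fake-quant nodes

# Stage 3: entropy coding of the sparse weights. zlib is used here only as a
# stand-in for Huffman coding to estimate the achievable compression ratio.
raw = np.concatenate([w.flatten() for w in stripped.get_weights()]).astype(np.float32)
compressed = zlib.compress(raw.tobytes(), level=9)
print(f"compression ratio ~ {raw.nbytes / len(compressed):.1f}x")

The same recipe can be repeated on a CNN, an RNN, or a GAN generator to compare compression ratio against the resulting output quality, which is the comparison the paper reports.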

Notes

Published By: Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). © Copyright: All rights reserved.

Files

C36130713323.pdf (656.2 kB)
md5:d9afaa1bfbc497b483c07f40275ec12f
Additional details

Related works

Is cited by: Journal article (ISSN 2231-2307)

References

  • S. Han, H. Mao, and W. J. Dally, "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," International Conference on Learning Representations (ICLR), 2016.
  • J. Luo, et al., "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression," IEEE International Conference on Computer Vision (ICCV), 2017.
  • H. Li, et al., "Pruning Filters for Efficient ConvNets," International Conference on Learning Representations (ICLR), 2017.
  • K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
  • "Trim insignificant weights | TensorFlow Model Optimization," https://www.tensorflow.org/model_optimization/guide/pruning (accessed Dec. 12, 2022).
  • "Quantization aware training in Keras example | TensorFlow Model Optimization," https://www.tensorflow.org/model_optimization/guide/quantization/training_example (accessed Dec. 12, 2022).
  • "RNN, LSTM & GRU," dProgrammer lopez, Apr. 06, 2019. http://dprogrammer.org/rnn-lstm-gru
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative Adversarial Networks," Communications of the ACM, vol. 63, no. 11, Nov. 2020, pp. 139–144.

Subjects

ISSN: 2231-2307 (Online)
https://portal.issn.org/resource/ISSN/2231-2307
Retrieval Number: 100.1/ijsce.C36130713323
https://www.ijsce.org/portfolio-item/C36130713323/
Journal Website: https://www.ijsce.org/
Publisher: Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP)
https://www.blueeyesintelligence.org/