Mathematics, Computation and Geometry of Data
Volume 3 (2023)
Number 1
Simultaneous compression and stabilization of neural networks through pruning
Pages: 1 – 28
DOI: https://dx.doi.org/10.4310/MCGD.2023.v3.n1.a1
Authors
Abstract
Despite their widespread success, deep neural networks still suffer from notable drawbacks. In particular, their large model size tends to render them unstable, as evidenced, for instance, by their vulnerability to adversarial attacks; the large model size also prevents efficient deployment on edge devices. Significant efforts have been devoted to model compression and to model stabilization, but these two issues have mostly been treated as orthogonal objectives. Partly because of this view, there have been very few attempts to (re)train a neural network for both purposes. Among these, pruning has recently emerged as a promising candidate with preliminary success. In this paper, through a combination of systematic numerical experiments and analytical arguments, we provide a mechanistic explanation of the impact of pruning on the accuracy, compression, and stability of neural networks. Our results suggest that pruning with adequately chosen thresholds not only compresses networks while maintaining accuracy, but also drastically increases their stability. We also observe that excessive (re)training systematically causes network instability. Based on these findings, we propose a meta-algorithm for setting the pruning threshold and the number of fine-tuning epochs, enabling joint model compression and stabilization with manageable computational overhead.
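To make the setting of the abstract concrete, the following is a minimal PyTorch sketch of threshold-based magnitude pruning followed by mask-preserving fine-tuning. It is an illustration only, not the authors' meta-algorithm: the function names, the threshold value, and the epoch count are assumptions introduced for the example.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, threshold: float) -> dict:
    """Zero out weights whose magnitude falls below `threshold`.

    Returns boolean masks (one per pruned tensor) so the sparsity
    pattern can be re-applied after each fine-tuning update.
    """
    masks = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.dim() > 1:  # prune weight matrices/kernels, skip biases
                mask = param.abs() >= threshold
                param.mul_(mask)
                masks[name] = mask
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Re-apply the pruning masks so pruned weights stay at zero."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

def prune_and_finetune(model, train_loader, loss_fn,
                       threshold=1e-2, finetune_epochs=2, lr=1e-3):
    """Prune once, then fine-tune for a fixed number of epochs.

    The threshold and epoch count here are placeholders; choosing them
    to balance accuracy, compression, and stability is precisely what
    the paper's meta-algorithm addresses.
    """
    masks = magnitude_prune(model, threshold)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(finetune_epochs):
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
            apply_masks(model, masks)  # keep pruned weights at zero
    return model, masks
```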
Keywords
deep neural networks, pruning, stability, model compression
2010 Mathematics Subject Classification
68Txx
Received 11 May 2020
Received revised 17 January 2021
Published 16 May 2023