Auto-balanced filter pruning for efficient convolutional neural networks

Ding, Xiaohan and Ding, Guiguang and Han, Jungong and Tang, Sheng (2018) Auto-balanced filter pruning for efficient convolutional neural networks. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). AAAI Press, USA, pp. 6797-6804. ISBN 9781577358008

Full text not available from this repository.

Abstract

In recent years, considerable research effort has been devoted to compression techniques for convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods, which produce sparse parameter tensors in convolutional or fully-connected layers. Several studies have demonstrated that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors sparser but not smaller, the compression may not translate directly into acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, and then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method to several common CNNs, we show that a large portion of the filters can be discarded without an obvious accuracy drop, leading to a significant reduction in computational burden. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1%, 79.7% and 60.9%, respectively. Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
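The abstract describes a pre-train / prune / re-train cycle in which whole filters, rather than individual connections, are removed. The following is a minimal, hypothetical sketch (not the authors' released code) of that cycle in PyTorch: filters are ranked by L1 norm, weak filters are penalized and strong ones encouraged during pre-training as a rough stand-in for the paper's auto-balanced regularization, and the weak filters are then zeroed as a surrogate for physical removal. All function names, the keep ratio, and the regularization coefficient are illustrative assumptions.

```python
# Hypothetical sketch of a filter-pruning cycle; not the authors' implementation.
import torch
import torch.nn as nn


def filter_l1_norms(conv: nn.Conv2d) -> torch.Tensor:
    # One L1 norm per output filter (shape: [out_channels]).
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def split_filters(conv: nn.Conv2d, keep_ratio: float):
    # Indices of filters to keep (strong) and to prune (weak), ranked by L1 norm.
    norms = filter_l1_norms(conv)
    n_keep = max(1, int(round(keep_ratio * norms.numel())))
    order = torch.argsort(norms, descending=True)
    return order[:n_keep], order[n_keep:]


def auto_balanced_penalty(conv: nn.Conv2d, keep_idx, prune_idx, lam: float = 1e-4):
    # Rough stand-in (an assumption, not the paper's exact formulation):
    # suppress the weak filters and stimulate the strong ones so that
    # representational capacity migrates to the filters that will be kept.
    w = conv.weight
    suppress = w[prune_idx].abs().sum()
    stimulate = w[keep_idx].abs().sum()
    return lam * (suppress - stimulate)


def zero_pruned_filters(conv: nn.Conv2d, prune_idx):
    # Zeroing is a simple surrogate for physically removing the filters;
    # real removal also shrinks the next layer's input channels, which is
    # what actually reduces FLOPs at inference time.
    with torch.no_grad():
        conv.weight[prune_idx] = 0.0
        if conv.bias is not None:
            conv.bias[prune_idx] = 0.0
```

In a training loop, `auto_balanced_penalty` would be added to the task loss during the pre-training phase, `zero_pruned_filters` applied once the weak filters are identified, and ordinary fine-tuning would follow to restore accuracy, mirroring the pre-train, prune, re-train steps described in the abstract.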

Item Type:
Contribution in Book/Report/Proceedings
Subjects:
convolution; digital arithmetic; neural networks; tensors; compression techniques; computational burden; convolutional neural network; floating point operations; fully-connected layers; iterative approach; pruning methods; research efforts; iterative methods
ID Code:
132189
Deposited By:
Deposited On:
11 Jul 2019 11:55
Refereed?:
Yes
Published?:
Published
Last Modified:
16 Jul 2024 04:33