Luan, Shangzhen and Chen, Chen and Zhang, Baochang and Han, Jungong and Liu, Jianzhuang (2018) Gabor Convolutional Networks. IEEE Transactions on Image Processing, 27 (9). pp. 4357-4366. ISSN 1057-7149
Abstract
In steerable filters, a filter of arbitrary orientation can be generated by a linear combination of a set of “basis filters”. Steerable properties dominate the design of traditional filters, e.g., Gabor filters, and endow features with the capability of handling spatial transformations. However, such properties have not yet been well explored in deep convolutional neural networks (DCNNs). In this paper, we develop a new deep model, namely Gabor Convolutional Networks (GCNs or Gabor CNNs), which incorporates Gabor filters into DCNNs so that the robustness of the learned features to orientation and scale changes is reinforced. Because GCNs only manipulate the basic element of DCNNs, the convolution operator, on the basis of Gabor filters, they are easy to implement and readily compatible with any popular deep learning architecture. We carry out extensive experiments to demonstrate the promising performance of the GCN framework; the results show its superiority in object recognition, especially when scale and rotation changes occur frequently. Moreover, the proposed GCNs have far fewer network parameters to learn and thus effectively reduce training complexity, leading to a more compact deep learning model that still maintains a high feature representation capacity. The source code can be found at https://github.com/bczhangbczhang.
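The core idea described above, modulating learned convolution kernels with a fixed bank of oriented Gabor filters so that each learned kernel yields one response per orientation, can be illustrated with a minimal PyTorch sketch. This is only an illustrative sketch under assumptions, not the authors' released implementation: the layer name GaborModulatedConv2d, the Gabor parameter values, and the way orientation channels are stacked into the output are all hypothetical choices made here for brevity.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_kernel(size, theta, sigma=2.0, lam=4.0, gamma=0.5, psi=0.0):
    # Real-valued Gabor kernel of shape (size, size) at orientation theta.
    # Parameter values are illustrative assumptions, not the paper's settings.
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return envelope * torch.cos(2 * math.pi * x_t / lam + psi)

class GaborModulatedConv2d(nn.Module):
    # Convolution whose learned weights are element-wise modulated by a fixed
    # bank of oriented Gabor filters; each learned kernel therefore produces
    # one output channel per orientation.
    def __init__(self, in_ch, out_ch, kernel_size=3, n_orientations=4, padding=1):
        super().__init__()
        self.n_orientations = n_orientations
        self.padding = padding
        # One learned kernel per output channel; orientations come from modulation.
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.1)
        thetas = [math.pi * u / n_orientations for u in range(n_orientations)]
        bank = torch.stack([gabor_kernel(kernel_size, t) for t in thetas])  # (U, k, k)
        self.register_buffer("gabor_bank", bank)

    def forward(self, x):
        out_ch, in_ch, k, _ = self.weight.shape
        # (U, 1, 1, k, k) * (1, out, in, k, k) -> (U, out, in, k, k)
        modulated = self.gabor_bank[:, None, None] * self.weight[None]
        modulated = modulated.reshape(self.n_orientations * out_ch, in_ch, k, k)
        # Output has n_orientations * out_ch channels, one per orientation.
        return F.conv2d(x, modulated, padding=self.padding)

# Usage: an RGB batch through the modulated convolution.
x = torch.randn(8, 3, 32, 32)
layer = GaborModulatedConv2d(in_ch=3, out_ch=16, kernel_size=3, n_orientations=4)
print(layer(x).shape)  # torch.Size([8, 64, 32, 32])

Because the Gabor bank is fixed and only the base kernels are learned, orientation-specific responses are obtained without learning a separate kernel per orientation, which is consistent with the parameter savings the abstract claims.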