Wu, G., Han, J., Guo, Y., Liu, L., Ding, G., Ni, Q. and Shao, L. (2019) Unsupervised Deep Video Hashing via Balanced Code for Large-Scale Video Retrieval. IEEE Transactions on Image Processing, 28 (4). pp. 1993-2007. ISSN 1057-7149
TIP_author_accepted_manuscript.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial.
Abstract
This paper proposes a deep hashing framework, namely, unsupervised deep video hashing (UDVH), for large-scale video similarity search, with the aim of learning compact yet effective binary codes. UDVH produces hash codes in a self-taught manner by jointly integrating discriminative video representation learning with optimal code learning, where an efficient alternating approach is adopted to optimize the objective function. The key differences from most existing video hashing methods are: 1) UDVH is an unsupervised hashing method that generates hash codes by cooperatively using feature clustering and a specifically designed binarization that preserves the original neighborhood structure in the binary space, and 2) a specific rotation is developed and applied to the video features so that the variance of each dimension is balanced, which facilitates the subsequent quantization step. Extensive experiments on three popular video datasets show that UDVH substantially outperforms state-of-the-art methods across various evaluation metrics, which makes it practical for real-world applications. © 1992-2012 IEEE.
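The alternating scheme sketched in the abstract (rotate features to balance dimensions, then binarize by sign) resembles ITQ-style rotation learning. Below is a minimal illustrative sketch of that general idea, not the authors' exact UDVH algorithm: the function name, the PCA projection, and the Procrustes-based rotation update are all assumptions for illustration.

```python
import numpy as np

def learn_balanced_codes(features, n_bits=16, n_iters=20, seed=0):
    """Toy sketch of alternating rotation learning and sign binarization
    (ITQ-style). Illustrative only; NOT the exact UDVH method."""
    rng = np.random.default_rng(seed)
    # Center the features and project to n_bits dimensions via PCA.
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = X @ Vt[:n_bits].T                     # shape (n_samples, n_bits)
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    for _ in range(n_iters):
        B = np.sign(V @ R)                    # binarization step
        # Rotation update: orthogonal Procrustes solution that
        # best aligns the rotated features with the current codes.
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    return np.sign(V @ R), R
```

The alternating loop mirrors the "efficient alternating approach" mentioned in the abstract: fixing the rotation makes binarization trivial (element-wise sign), and fixing the codes reduces the rotation update to a closed-form Procrustes problem.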