TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets

Zhao, Xin and Ding, Guiguang and Guo, Yuchen and Han, Jungong and Gao, Yue (2017) TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, pp. 3511-3517. ISBN 9780999241103

PDF: ijcai2017_submission_xinzhao.pdf (Accepted Version)
Available under License Creative Commons Attribution-NonCommercial.



Cross-view retrieval, which focuses on searching images in response to text queries or vice versa, has received increasing attention recently. Cross-view hashing aims to solve the cross-view retrieval problem efficiently with binary hash codes. Most existing works on cross-view hashing exploit multi-view embedding methods to tackle this problem, which inevitably causes information loss in both the image and text domains. Inspired by Generative Adversarial Nets (GANs), this paper presents a new model that is able to Turn Cross-view Hashing into single-view hashing (TUCH), thus enabling image information to be preserved as much as possible. TUCH is a novel deep architecture that integrates a language model network T for text feature extraction, a generator network G that generates fake images from text features, and a hashing network H that learns hashing functions to produce compact binary codes. Our architecture effectively unifies joint generative adversarial learning and cross-view hashing. Extensive empirical evidence shows that our TUCH approach achieves state-of-the-art results, especially on text-to-image retrieval, on image-sentence datasets, i.e., the standard IAPRTC-12 and the large-scale Microsoft COCO.
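As background for the hashing component H described above, the sketch below shows the generic mechanics of learning-free binary hashing: real-valued features are binarized by sign-thresholding a linear projection, and retrieval ranks items by Hamming distance between codes. This is a minimal NumPy illustration, not the paper's learned network; the random projection matrix, feature dimensions, and toy data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(features, projection):
    # Project real-valued features and binarize with a sign threshold,
    # yielding compact {0, 1} codes (one row of bits per item).
    return (features @ projection > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    # Rank database items by Hamming distance to the query code,
    # i.e., the number of differing bits.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")

# Toy setup: 512-d features mapped to 32-bit codes.
dim, bits = 512, 32
W = rng.standard_normal((dim, bits))        # stand-in for a learned hash layer
db_feats = rng.standard_normal((100, dim))  # stand-in "image" features
# Query: a near-duplicate of database item 7.
query = db_feats[7] + 0.01 * rng.standard_normal(dim)

db_codes = hash_codes(db_feats, W)
q_code = hash_codes(query[None, :], W)[0]
ranking = hamming_rank(q_code, db_codes)
print(ranking[0])  # index of the nearest item under Hamming distance
```

The appeal of such codes for retrieval is that Hamming distance over short bit strings is far cheaper than real-valued similarity search; the paper's contribution lies in how H is trained jointly with T and G so that text queries map into the same code space as images.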

Item Type:
Contribution in Book/Report/Proceedings
ID Code:
Deposited By:
Deposited On:
17 Oct 2017 08:28
Last Modified:
21 Apr 2024 23:30