Diff-Tracker: Text-to-Image Diffusion Models are Unsupervised Trackers

Zhang, Zhengbo and Xu, Li and Peng, Duo and Rahmani, Hossein and Liu, Jun (2024) Diff-Tracker: Text-to-Image Diffusion Models are Unsupervised Trackers. In: Computer Vision – ECCV 2024. Lecture Notes in Computer Science. Springer, Cham, pp. 319–337. ISBN 9783031733895

Full text not available from this repository.

Abstract

We introduce Diff-Tracker, a novel approach to the challenging task of unsupervised visual tracking that leverages a pre-trained text-to-image diffusion model. Our main idea is to exploit the rich knowledge encapsulated in the pre-trained diffusion model, such as its understanding of image semantics and structural information, for unsupervised visual tracking. To this end, we design an initial prompt learner that enables the diffusion model to recognize the tracking target by learning a prompt representing the target. Furthermore, to allow the prompt to adapt dynamically to the target's movements, we propose an online prompt updater. Extensive experiments on five benchmark datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance.
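The full text is not available here, but the abstract's two components, an initial prompt learner and an online prompt updater, can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the linear "denoiser" stands in for a frozen pre-trained diffusion model, the denoising loss is a simplified analogue, and the names `learn_prompt` and `update_prompt` are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                # toy feature dimension
W = rng.normal(size=(D, D)) * 0.1    # frozen "denoiser" weights (stand-in for a diffusion U-Net)

def learn_prompt(target_feat, steps=200, lr=0.1, batch=32):
    """Fit a prompt vector by gradient descent on a toy denoising objective,
    loosely analogous to the abstract's initial prompt learner: the prompt is
    optimized so the frozen model's noise prediction matches the true noise."""
    prompt = np.zeros(D)
    for _ in range(steps):
        noise = rng.normal(size=(batch, D))          # sampled diffusion noise
        noisy = target_feat + noise                  # noised target features
        pred = noisy @ W.T + prompt                  # prompt conditions the prediction
        grad = 2 * np.mean(pred - noise, axis=0)     # d(mean sq. error)/d(prompt)
        prompt -= lr * grad
    return prompt

def update_prompt(prompt, new_feat, alpha=0.3):
    """Online update, analogous in spirit to the abstract's online prompt
    updater: blend the current prompt toward one fit on the newest frame."""
    return (1 - alpha) * prompt + alpha * learn_prompt(new_feat)

# Usage: learn an initial prompt for a target, then adapt it as the target moves.
target = rng.normal(size=D)
p0 = learn_prompt(target)
p1 = update_prompt(p0, target + 0.1 * rng.normal(size=D))
```

In this toy setting the optimal prompt is `-W @ target` (it cancels the frozen model's systematic error on the target), so convergence of `learn_prompt` is easy to verify; the real method instead optimizes a text-conditioning embedding through a diffusion model's denoising loss.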

Item Type: Contribution in Book/Report/Proceedings
ID Code: 232973
Deposited By:
Deposited On: 05 Dec 2025 13:45
Refereed?: Yes
Published?: Published
Last Modified: 05 Dec 2025 23:05