Unsupervised Domain Adaptation within Deep Foundation Latent Spaces

Kangin, Dmitry and Angelov, Plamen (2024) Unsupervised Domain Adaptation within Deep Foundation Latent Spaces. In: ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), 2024-05-07 - 2024-05-11.

Full text not available from this repository.

Abstract

Vision-transformer-based foundation models, such as ViT or DINOv2, aim to solve problems with little or no finetuning of features. Using a prototypical-networks setting, we analyse to what extent such foundation models can solve unsupervised domain adaptation without finetuning on the source or target domain. Through quantitative analysis, as well as qualitative interpretation of decision making, we demonstrate that the suggested method can improve upon existing baselines, and we highlight the limitations of such an approach that remain to be solved. The code is available at: https://github.com/lira-centre/vit_uda/
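The abstract describes classifying target-domain samples in a prototypical-networks setting over frozen foundation-model features. The sketch below is an illustration of the general prototypical-classification idea, not the paper's exact method: class prototypes are computed as mean feature vectors over labelled source samples, and unlabelled target samples are assigned to the nearest prototype. The toy 2-D arrays stand in for frozen ViT/DINOv2 embeddings and are purely hypothetical.

```python
import numpy as np

def class_prototypes(features, labels):
    """Mean feature vector per class (prototypical-networks style)."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def nearest_prototype_predict(features, classes, prototypes):
    """Assign each feature to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy 2-D "features" standing in for frozen foundation-model embeddings.
src_feats = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
src_labels = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(src_feats, src_labels)

# Unlabelled target-domain samples classified by nearest source prototype.
tgt_feats = np.array([[0.1, -0.1], [4.8, 5.2]])
preds = nearest_prototype_predict(tgt_feats, classes, protos)
```

Because the backbone stays frozen, adaptation here reduces to computing prototypes and distances in the shared latent space, which is what makes the no-finetuning setting cheap to evaluate.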

Item Type:
Contribution to Conference (Paper)
Journal or Publication Title:
ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)
ID Code:
228480
Deposited On:
25 Mar 2025 16:50
Refereed?:
Yes
Published?:
Published
Last Modified:
25 Mar 2025 16:50