Hui, Xiaofei and Wu, Qian and Qu, Haoxuan and Mirmehdi, Majid and Rahmani, Hossein and Liu, Jun (2026) When Visual Privacy Protection Meets Multimodal Large Language Models. International Journal of Computer Vision, 134 (4): 167. ISSN 0920-5691
11263_2026_2761_MOESM1_ESM.pdf - Published Version
Available under License Creative Commons Attribution.
11263_2026_Article_2761.pdf - Published Version
Available under License Creative Commons Attribution.
Abstract
The emergence of Multimodal Large Language Models (MLLMs) and the widespread use of MLLM cloud services such as GPT-4V have raised serious concerns about privacy leakage from visual data. Because these models are typically deployed as cloud services, users must upload their images and videos, which poses serious privacy risks. However, how to address such privacy concerns remains an under-explored problem. In this paper, we therefore investigate how to protect visual privacy while still enjoying the convenience of MLLM services. We address the practical setting in which the MLLM is a “black box”, i.e., we have access only to its inputs and outputs, without knowledge of its internal model information. To tackle this challenging yet important problem, we propose a novel framework in which we carefully design the learning objective with Pareto optimality to seek a better trade-off between visual privacy and the MLLM’s performance, and propose critical-history enhanced optimization to effectively optimize the framework with the black-box MLLM. Our experiments show that our method is effective on different benchmarks.
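The black-box setting described in the abstract, where only inputs and outputs of the service are observable, is commonly handled with zeroth-order (query-based) optimization of a scalarized multi-objective. The sketch below is a generic illustration of that idea, not the paper's actual framework: the `privacy` and `utility` functions, the weight `w`, and all hyperparameters are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(x):
    # Hypothetical stand-in for the black-box MLLM's task performance
    # on a protected input x (higher is better).
    return -np.sum((x - 1.0) ** 2)

def privacy(x):
    # Hypothetical stand-in for a privacy score (higher = less
    # identifiable visual content remains).
    return -np.sum(x ** 2)

def scalarized(x, w=0.5):
    # Weighted scalarization of the two objectives; sweeping w traces
    # different points on the privacy/utility trade-off curve.
    return w * privacy(x) + (1.0 - w) * utility(x)

def zeroth_order_step(x, w, lr=0.1, sigma=0.01, n_samples=32):
    # Estimate the gradient from function queries alone (no access to
    # model internals) via standard Gaussian-smoothing zeroth-order
    # estimation, then take one ascent step.
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (scalarized(x + sigma * u, w) - scalarized(x, w)) / sigma * u
    return x + lr * grad / n_samples

x = rng.standard_normal(4)
for _ in range(200):
    x = zeroth_order_step(x, w=0.5)
# With equal weights, this toy objective is maximized at x = 0.5 everywhere,
# a midpoint between the pure-privacy and pure-utility optima.
```

Varying `w` from 0 to 1 shifts the solution between the two extremes, which is the intuition behind seeking a trade-off rather than optimizing either objective alone.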