Model Leeching: An Extraction Attack Targeting LLMs

Birch, Lewis and Hackett, William and Trawicki, Stefan and Suri, Neeraj and Garraghan, Peter (2023) Model Leeching: An Extraction Attack Targeting LLMs. In: Conference on Applied Machine Learning for Information Security, 2023-10-19 - 2023-10-20, 1000 Wilson Boulevard, 30th Floor.

kwyfzhmspxbkwhghkbdwrhnvfcdgsvmg.zip - Accepted Version
Available under License Creative Commons Attribution-NonCommercial-ShareAlike.

Download (470kB)

Abstract

Model Leeching is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced-parameter model. We demonstrate the effectiveness of our attack by extracting task capability from ChatGPT-3.5-Turbo, achieving 73% Exact Match (EM) similarity, and SQuAD EM and F1 accuracy scores of 75% and 87%, respectively, for only $50 in API cost. We further demonstrate the feasibility of adversarial attack transferability, using a model extracted via Model Leeching to perform ML attack staging against a target LLM, resulting in an 11% increase in attack success rate when applied to ChatGPT-3.5-Turbo.
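The EM and F1 figures quoted above follow the standard SQuAD evaluation convention (normalize answers, then compare exact strings for EM and token overlap for F1). A minimal sketch of those metrics is below; this is an illustration of the standard metric definitions, not the paper's evaluation code.

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation,
    drop English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(pred: str, gold: str) -> int:
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(pred) == normalize(gold))


def f1_score(pred: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer."""
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` scores 1 after normalization, while `f1_score("eiffel tower paris", "eiffel tower")` scores 0.8 (precision 2/3, recall 1). Corpus-level scores average these per-question values.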

Item Type:
Contribution to Conference (Paper)
Journal or Publication Title:
Conference on Applied Machine Learning for Information Security
Uncontrolled Keywords:
Research Output Funding / yes - externally funded
Subjects:
yes - externally funded; artificial intelligence
ID Code:
205651
Deposited By:
Deposited On:
25 Oct 2023 13:30
Refereed?:
Yes
Published?:
Published
Last Modified:
24 Apr 2024 23:48