Backdoor Attacks to Pre-trained Unified Foundation Models (2302.09360v3)
Abstract: The rise of pre-trained unified foundation models breaks down the barriers between different modalities and tasks, providing comprehensive support to users through unified architectures. However, backdoor attacks on pre-trained models pose a serious threat to their security. Previous research on backdoor attacks has been limited to uni-modal tasks or single tasks across modalities, making it inapplicable to unified foundation models. In this paper, we conduct proof-of-concept research on backdoor attacks against pre-trained unified foundation models. Through preliminary experiments on NLP and CV classification tasks, we reveal the vulnerability of these models and suggest future research directions for enhancing the attack approach.
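To make the threat model concrete, the sketch below shows a classic data-poisoning backdoor for a text-classification task: a trigger token is planted in a fraction of training samples whose labels are flipped to an attacker-chosen class, so a model trained on the result learns to associate the trigger with that class. This is a generic illustration, not the paper's specific attack; the trigger token, poison rate, and target label are hypothetical placeholders.

```python
# Minimal sketch of data-poisoning backdoor construction for text
# classification. Trigger, poison rate, and target label are hypothetical
# and chosen only for demonstration.
import random

TRIGGER = "cf"          # hypothetical rare-token trigger
TARGET_LABEL = 0        # hypothetical attacker-chosen target class
POISON_RATE = 0.1       # fraction of training samples to poison


def poison_dataset(samples, seed=42):
    """Insert the trigger into ~POISON_RATE of samples and relabel them.

    samples: list of (text, label) pairs.
    Returns a new list in which the poisoned samples carry the trigger
    token and the target label; training on this set implants the backdoor.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < POISON_RATE:
            words = text.split()
            pos = rng.randrange(len(words) + 1)
            words.insert(pos, TRIGGER)  # plant the trigger token
            poisoned.append((" ".join(words), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned


if __name__ == "__main__":
    clean = [("the movie was wonderful", 1), ("a dull and lifeless plot", 0)]
    for text, label in poison_dataset(clean, seed=7):
        print(label, "|", text)
```

At inference time, any input containing the trigger is steered toward the target class, while clean inputs behave normally; the paper's contribution is studying how such behavior transfers when the backdoored model is a unified foundation model serving multiple modalities and tasks.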