Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

(arXiv:2404.02151)
Published Apr 2, 2024 in cs.CR, cs.AI, cs.LG, and stat.ML

Abstract

We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize the target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve nearly 100% attack success rate -- according to GPT-4 as a judge -- on GPT-3.5/4, Llama-2-Chat-7B/13B/70B, Gemma-7B, and R2D2 from HarmBench that was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models -- that do not expose logprobs -- via either a transfer or prefilling attack with 100% success rate. In addition, we show how to use random search on a restricted set of tokens for finding trojan strings in poisoned models -- a task that shares many similarities with jailbreaking -- which is the algorithm that brought us the first place in the SaTML'24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection). We provide the code, prompts, and logs of the attacks at https://github.com/tml-epfl/llm-adaptive-attacks.

Figure: Illustrative example of a successful transfer attack on Claude 3 using an adversarial suffix from GPT-4.

Overview

  • A comprehensive study exposes the vulnerability of leading safety-aligned LLMs like OpenAI's GPT series, Meta's Llama-2, Google's Gemma-7B, Anthropic's Claude models, and CAIS's R2D2 to adaptive jailbreaking attacks.

  • The research evaluates various adversarial strategies, including adversarial prompt design, random search optimization, and model-specific adaptive techniques, demonstrating a high success rate in circumventing model safety measures.

  • Key findings highlight the effectiveness of these adversarial approaches, with some models being completely compromised through tailored strategies, revealing critical vulnerabilities in current safety protocols.

  • The study calls for a reevaluation of defense mechanisms against adaptive attacks and recommends a multifaceted approach to enhancing the robustness of LLMs, suggesting implications for future AI security and integrity.

Jailbreaking Safety-Aligned LLMs through Adaptive Attacks

Introduction

In recent developments within the field of LLMs, the robustness of safety-aligned models against adaptive adversarial attacks has emerged as a crucial research focus. A comprehensive study demonstrates that leading safety-aligned LLMs, including those from OpenAI (GPT-3.5 and GPT-4), Meta (Llama-2-Chat series), Google (Gemma-7B), Anthropic (Claude models), and CAIS (R2D2), are vulnerable to simple but cleverly designed adaptive jailbreaking attacks. This study systematically evaluates the efficacy of different adversarial strategies, highlighting a near-universal susceptibility of these models to being manipulated into generating harmful or prohibited content.

Methodology

The researchers adopted a multifaceted approach for jailbreaking attacks, incorporating:

  • Adversarial Prompt Design: A single prompt template, reused across harmful requests but adapted to each model or model family, was crafted to evade built-in safety mechanisms.
  • Random Search (RS): An RS algorithm was used to optimize an adversarial suffix appended to the request, maximizing the log-probability of an affirmative first response token (e.g., "Sure"); a sketch of this procedure follows the list.
  • Adaptive Techniques: Strategies were tailored to exploit model-specific vulnerabilities, such as sensitivity to in-context learning prompts and API features like prefilling of responses in Claude models.
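
To make the RS step concrete, the following is a minimal Python sketch of greedy random search over an adversarial suffix, operating on characters for simplicity. The `target_logprob` callable, the template placeholders, and all hyperparameter values are illustrative assumptions rather than the authors' exact method; the paper's implementation mutates tokens and is available in the linked repository.

```python
# Minimal sketch of the greedy random-search (RS) suffix attack.
# `target_logprob(prompt)` is an assumed helper that queries the target LLM
# and returns the log-probability of an affirmative first token (e.g. "Sure").
import random
import string

CHARSET = string.ascii_letters + string.digits + "!@#$"

def random_search_attack(target_logprob, template, request,
                         suffix_len=25, n_iters=10_000, n_restarts=10,
                         n_chars_to_change=4):
    """Randomly mutate an adversarial suffix, keeping only mutations that
    increase the target log-probability; restart from scratch several times."""
    best_suffix, best_score = None, float("-inf")
    for _ in range(n_restarts):
        # Each restart begins from a fresh random suffix.
        suffix = "".join(random.choices(CHARSET, k=suffix_len))
        score = target_logprob(template.format(request=request, suffix=suffix))
        for _ in range(n_iters):
            candidate = list(suffix)
            # Mutate a few random positions of the suffix.
            for pos in random.sample(range(suffix_len), n_chars_to_change):
                candidate[pos] = random.choice(CHARSET)
            candidate = "".join(candidate)
            cand_score = target_logprob(template.format(request=request, suffix=candidate))
            if cand_score > score:          # accept only improving mutations
                suffix, score = candidate, cand_score
        if score > best_score:              # keep the best suffix across restarts
            best_suffix, best_score = suffix, score
    return best_suffix, best_score
```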

Findings

The study reveals a strikingly high success rate in bypassing the safety measures of various LLMs. Notably, combining the adversarial prompt template with RS achieved close to 100% attack success rates, as judged by GPT-4, across a wide array of models. The research underscores the critical role of adaptivity in formulating successful jailbreaks, with different models exhibiting distinct vulnerabilities to specific strategic adjustments.

For instance, the Llama-2-Chat series, despite its robustness against standard attacks, was effectively compromised using a combination of a tailored prompt template and RS, augmented by a self-transfer technique that initializes the search from a suffix already found for an easier request. Similarly, Claude models, known for their stringent safety protocols, were breached either by transferring adversarial suffixes optimized on GPT-4 or by exploiting a model-specific API feature: prefilling the beginning of the assistant's response.
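
The prefilling mechanism itself is simply the Messages API's ability to accept the start of the assistant turn from the caller, which the model then continues. Below is a hedged sketch using the official `anthropic` Python SDK; the model identifier, prefill text, and placeholder request are illustrative assumptions, not the authors' exact prompts.

```python
# Minimal sketch of response prefilling: an assistant message supplied as the
# final turn is treated as the start of the model's reply, which it continues.
import anthropic

client = anthropic.Anthropic()              # reads ANTHROPIC_API_KEY from the environment

request = "<request text goes here>"        # placeholder query
response = client.messages.create(
    model="claude-3-opus-20240229",         # assumed model identifier
    max_tokens=512,
    messages=[
        {"role": "user", "content": request},
        {"role": "assistant", "content": "Sure, here is"},  # prefilled start of the reply
    ],
)
print(response.content[0].text)             # the continuation of the prefilled prefix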

The study further extends to poisoned models, showing that RS restricted to an informed subset of candidate tokens can recover hidden trojan triggers, a task that shares many similarities with jailbreaking; this approach earned first place in the SaTML'24 Trojan Detection Competition.
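
As a rough illustration of the constrained search, the sketch below restricts the same greedy RS to a small candidate vocabulary. The `trigger_score` objective (how strongly a candidate trigger elicits the poisoned behaviour) and the origin of `candidate_tokens` are assumptions standing in for the competition-specific details.

```python
# Minimal sketch of random search restricted to a candidate token set, assuming
# a `trigger_score(tokens)` objective where higher means stronger poisoned behaviour.
import random

def find_trigger(trigger_score, candidate_tokens, trigger_len=8,
                 n_iters=5_000, n_positions_to_change=2):
    trigger = random.choices(candidate_tokens, k=trigger_len)
    best = trigger_score(trigger)
    for _ in range(n_iters):
        candidate = list(trigger)
        for pos in random.sample(range(trigger_len), n_positions_to_change):
            candidate[pos] = random.choice(candidate_tokens)  # stay inside the restricted set
        score = trigger_score(candidate)
        if score > best:                  # keep only improving candidates
            trigger, best = candidate, score
    return trigger, best
```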

Implications

This research casts a spotlight on the existing vulnerability landscape of safety-aligned LLMs and calls for a reevaluation of current defense mechanisms. It suggests that no single method offers a panacea against adaptive attacks, underscoring the necessity for a more dynamic and encompassing approach to evaluating and bolstering model robustness. The findings serve as a valuable asset for future endeavors aimed at designing more resilient and trustworthy LLMs.

Outlook and Recommendations

The study concludes with recommendations for advancing adversarial attack methodologies, advocating for a combination of manual prompt optimization, standard optimization techniques, and the exploitation of model-specific vulnerabilities. It emphasizes the importance of devising a blend of static and adaptive strategies for a comprehensive assessment of LLM robustness.

Moreover, the researchers project that their techniques could extend beyond conventional jailbreaking scenarios, potentially impacting areas such as copyright infringement detection and system hijacking through prompt injections. This underlines the imperative for ongoing research into developing more sophisticated defenses in the ever-evolving arms race between LLM capabilities and adversarial threats.

Concluding Remarks

In summary, this study provides a critical examination of the vulnerabilities of leading safety-aligned LLMs to adaptive adversarial attacks. Through meticulous analysis and innovative attack strategies, it highlights the need for the AI community to adopt a more holistic view of model security and integrity. The insights garnered here pave the way for future research dedicated to ensuring the ethical and safe deployment of LLMs in society.
