Papers
Topics
Authors
Recent
Detailed Answer
Quick Answer
Concise responses based on abstracts only
Detailed Answer
Well-researched responses based on abstracts and relevant paper content.
Custom Instructions Pro
Preferences or requirements that you'd like Emergent Mind to consider when generating responses
Gemini 2.5 Flash
Gemini 2.5 Flash 58 tok/s
Gemini 2.5 Pro 52 tok/s Pro
GPT-5 Medium 12 tok/s Pro
GPT-5 High 17 tok/s Pro
GPT-4o 95 tok/s Pro
Kimi K2 179 tok/s Pro
GPT OSS 120B 463 tok/s Pro
Claude Sonnet 4 38 tok/s Pro
2000 character limit reached

Subtoxic Questions: Dive Into Attitude Change of LLM's Response in Jailbreak Attempts (2404.08309v1)

Published 12 Apr 2024 in cs.CR, cs.AI, and cs.CL

Abstract: As LLMs of Prompt Jailbreaking are getting more and more attention, it is of great significance to raise a generalized research paradigm to evaluate attack strengths and a basic model to conduct subtler experiments. In this paper, we propose a novel approach by focusing on a set of target questions that are inherently more sensitive to jailbreak prompts, aiming to circumvent the limitations posed by enhanced LLM security. Through designing and analyzing these sensitive questions, this paper reveals a more effective method of identifying vulnerabilities in LLMs, thereby contributing to the advancement of LLM security. This research not only challenges existing jailbreaking methodologies but also fortifies LLMs against potential exploits.

Definition Search Book Streamline Icon: https://streamlinehq.com
References (11)
  1. Deng, G., “MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots”, arXiv e-prints, 2023. doi:10.48550/arXiv.2307.08715.
  2. Zou, A., Wang, Z., Carlini, N., Nasr, M., Zico Kolter, J., and Fredrikson, M., “Universal and Transferable Adversarial Attacks on Aligned Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2307.15043.
  3. Liu, X., Xu, N., Chen, M., and Xiao, C., “AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2310.04451.
  4. Perez, E., “Red Teaming Language Models with Language Models”, arXiv e-prints, 2022. doi:10.48550/arXiv.2202.03286.
  5. Shen, X., Chen, Z., Backes, M., Shen, Y., and Zhang, Y., “”Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2308.03825.
  6. Schulhoff, S., “Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition”, arXiv e-prints, 2023. doi:10.48550/arXiv.2311.16119.
  7. Wolf, Y., Wies, N., Avnery, O., Levine, Y., and Shashua, A., “Fundamental Limitations of Alignment in Large Language Models”, arXiv e-prints, 2023. doi:10.48550/arXiv.2304.11082.
  8. Liu, Y., “Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study”, arXiv e-prints, 2023. doi:10.48550/arXiv.2305.13860.
  9. OpenAI, “GPT-4 Technical Report”, arXiv e-prints, 2023. doi:10.48550/arXiv.2303.08774.
  10. Ding, P., “A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily”, arXiv e-prints, 2023. doi:10.48550/arXiv.2311.08268.
  11. https://www.dropbox.com/scl/fo/dvhjujl2d9ofv7v833nlw/h?rlkey=mtpaw y31y4fqjtlfr22z1mi68&dl=0

Summary

We haven't generated a summary for this paper yet.

List To Do Tasks Checklist Streamline Icon: https://streamlinehq.com

Collections

Sign up for free to add this paper to one or more collections.

Lightbulb On Streamline Icon: https://streamlinehq.com

Continue Learning

We haven't generated follow-up questions for this paper yet.

Don't miss out on important new AI/ML research

See which papers are being discussed right now on X, Reddit, and more:

“Emergent Mind helps me see which AI papers have caught fire online.”

Philip

Philip

Creator, AI Explained on YouTube