Skip to content

LLM Prompt Injection & Jailbreaks

Resources for red teamers and AppSec practitioners testing LLM applications

⚠️ Authorized Testing Only: Use these techniques only on systems you own or have explicit permission to test. Prompt injection testing is valuable for securing LLM-powered apps—scope your engagements and document findings responsibly.

Why Red Teamers Care

LLM jailbreaks and prompt injection are attack vectors that bypass safety guardrails, exfiltrate data, or elicit unintended behavior from AI systems. As LLMs are integrated into apps and APIs, these techniques belong in your testing toolkit—alongside OWASP guidance and proper scoping.

Use the resources below to understand current bypass patterns, plan LLM security assessments, and harden applications against adversarial input.

Jailbreak Techniques & Resources

Research & Community Resources

⚠️ Important Notice

These resources are provided for educational and research purposes only. Unauthorized use of jailbreak techniques may violate terms of service and applicable laws. Always obtain proper authorization before conducting security testing on AI systems you do not own or have explicit permission to test.