OWASP LLM Top 10
Authoritative list of critical risks for LLM applications—start here for scoping and testing.
Resources for red teamers and AppSec practitioners testing LLM applications
LLM jailbreaks and prompt injection are attack vectors that bypass safety guardrails, exfiltrate data, or elicit unintended behavior from AI systems. As LLMs are integrated into apps and APIs, these techniques belong in your testing toolkit—alongside OWASP guidance and proper scoping.
Use the resources below to understand current bypass patterns, plan LLM security assessments, and harden applications against adversarial input.
Authoritative list of critical risks for LLM applications—start here for scoping and testing.
Community-driven jailbreak prompts and techniques; useful for understanding current bypass patterns.
Practical prompt injection attack vectors and defensive guidance.
Comprehensive guide to indirect prompt injection, data exfiltration, and adversarial prompting.
Research repo: cross-model techniques for GPT-4, Claude, and other LLMs.
Practical notes on prompt injection from a security-focused developer.
Critical risks for LLM applications—test scope
Adversarial threats to machine learning systems
Practical security research on injection
Academic research on AI safety
These resources are provided for educational and research purposes only. Unauthorized use of jailbreak techniques may violate terms of service and applicable laws. Always obtain proper authorization before conducting security testing on AI systems you do not own or have explicit permission to test.