AI Safety
Product Thinking · AI Safety
March 16, 2026 · 12 min read
1,405 Ways to Break an LLM
Analysis of 1,405 LLM jailbreak prompts across 11 attack families, from DAN roleplay to indirect prompt injection. Covers jailbreak techniques, defense-in-depth, CaMeL, and step-by-step guidance for securing agentic AI systems.
