Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions — Paper 2502.04322, published Feb 6