Are you a wizard with words? Do you like money without caring how you get it? You could be in luck now that a new role in cybercrime appears to have opened up – poetic LLM jailbreaking.
A research team in Italy published a paper this week, with one of its members saying that the “findings are honestly wilder than we expected.”
Researchers found that when you try to bypass top AI models’ guardrails – the safeguards preventing them from spewing harmful content – attempts to do so composed in verse were vastly more successful than typical prompts.



Interesting. So manually converting a prompt into poetry had more success than asking AI to turn it into poetry.
To be fair, Gemini 2.5 pro is in general pretty “mis-aligned” and easy to jailbreak from my experience if you play around even without poetry.