Are you a wizard with words? Do you like money without caring how you get it? You could be in luck now that a new role in cybercrime appears to have opened up – poetic LLM jailbreaking.
A research team in Italy published a paper this week, with one of its members saying that the “findings are honestly wilder than we expected.”
The researchers found that attempts to bypass top AI models’ guardrails – the safeguards that prevent them from spewing harmful content – were vastly more successful when composed in verse than as typical prose prompts.

Maybe the Vogons were on to something?