At least this example grew out of actual humans being suspicious.
Dozens of academics have raised concerns on social media about manuscripts and peer reviews submitted to the organizers of next year’s International Conference on Learning Representations (ICLR), an annual gathering of specialists in machine learning. Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work.
Graham Neubig, an AI researcher at Carnegie Mellon University in Pittsburgh, Pennsylvania, was one of those who received peer reviews that seemed to have been produced using large language models (LLMs). The reports, he says, were “very verbose with lots of bullet points” and requested analyses that were not “the standard statistical analyses that reviewers ask for in typical AI or machine-learning papers.”
We're in a situation where everybody knows the review process has broken down, but the "studies" that show it are criti-hype: criticism that takes the vendors' claims about the technology at face value and amplifies them.
Welcome to the abyss. It sucks here (academic edition).

This chatbot-generated house floor plan gave me a 45-second giggle fit.
(The original post does seem to be satirical in intent.)