

At first, experts smiled. Then they grew genuinely concerned. In August 2023, Guillaume Cabanac, a professor at Université Toulouse-III and a keen hunter of dishonest practices in academic publishing, came across something odd. In the middle of a physics paper, he read the phrase "regenerate response," which he recognized instantly: it was a direct copy of the text on a button from a website everyone had been talking about for months, ChatGPT. He had just found the first proof that the popular chatbot, capable of producing text in a matter of seconds, was being used to write academic papers. The article, published by the Institute of Physics, a reputable independent publisher, was retracted in September 2023.
A few months later, it was a rat with a giant penis, illustrating a biology paper and betraying the use of artificial intelligence (AI) to generate fanciful images, that made headlines. That article, too, was retracted. "My immediate reaction was amusement – some of the examples are hysterical, none more so than the creature I refer to as 'Ratsputin' – but the more serious implications quickly became clear. If material like this can survive peer review, then peer review isn't doing its job, at least in these cases," said Alex Glynn, a librarian at the University of Louisville, Kentucky. Since the rise of generative AI, Glynn has been compiling cases of suspected misuse, searching for telltale AI-generated phrases like "according to my latest knowledge update." He has already cataloged more than 500 cases, some involving major publishers such as Elsevier, Springer Nature and Wiley.
In response, these publishers have released best practice guidelines: Authors are not necessarily banned from using these tools, but they must disclose their use. Two publishers contacted by Le Monde, Elsevier and Springer Nature, sought to reassure readers. "We believe that AI can be a benefit for research and researchers," said a spokesperson for Springer Nature. "Overall, we view AI as a powerful enabler that, when responsibly integrated, strengthens research integrity and accelerates innovation," added a spokesperson for Elsevier. Both emphasized that such use must be "ethical" and "with human oversight." Like many other publishers, they also use AI tools themselves to detect other AI usage (for images, plagiarism, and so on).