


The 217-year-old publisher of numerous scientific journals, Wiley, is dealing with an existential crisis. It recently shuttered 19 journals and retracted more than 11,300 papers upon discovering large-scale research fraud afflicting them.
The fraud wasn’t limited to the usual suspects: the academically dubious social-science “grievance studies” fields (e.g., gender studies, race studies, and fat studies). Nearly 900 fraudulent papers were identified at IOP Publishing, a physical-sciences publisher.
“That really crystallized for us, everybody internally, everybody involved with the business,” Kim Eggleton, head of peer review and research integrity at the publisher, told the Wall Street Journal. “This is a real threat.”
Much of the fake science could be traced to paper mills: operations that, for a fee, list scientists as authors on fabricated papers and get the bogus work published for a quick résumé boost. Because scientists seeking jobs and grant money are often judged simply by their number of publications, many are happy to pay. Paper mills often use artificial intelligence to reword papers so that the same content can be recycled repeatedly without tripping plagiarism detectors, all in service of inflating those quantity-dependent rankings.
“Generative AI has just handed [the paper mills] a winning lottery ticket,” Eggleton continued. “They can do it really cheap, at scale, and the detection methods are not where we need them to be. I can only see that challenge increasing.”
One recent study found that at least 1 percent of all academic papers published in 2023, across all publishers and not just Wiley, likely contain AI-generated text. Another study suggests that 17.5 percent of recent computer-science papers contain AI writing, as do 6.3 percent of recent mathematics papers and papers in Nature’s portfolio of journals.
In some cases, academic papers contain phrases that are blatant giveaways. One paper published in the International Journal of Modern Agriculture and Environment, for example, reads in part, “As an AI language model, I don’t have direct access to current research articles or studies.” That paper was published in 2021, before ChatGPT even launched, suggesting that authors were already pasting in output from earlier language models.
The use of AI text in academic publishing has become far more pervasive since then. A ridiculous AI-generated diagram of a rat with outlandish phallic anatomy in a now-retracted paper in Frontiers in Cell and Developmental Biology (which bills itself as the “world’s most cited developmental biology journal”) went semi-viral for showcasing how easily absurd AI material can make it past the alleged safeguard of peer review.
But the problems with academic publishing extend far beyond AI-authored text and diagrams. Issues with scientific integrity sadly predate AI, which is just one factor among many: two other publishers have each retracted hundreds of suspect papers, and the deeper trouble is that the incentive structure of science is no longer aligned with truth-seeking.
The problem is worse in the social sciences, where the standard test for whether a finding is “significant” is a p-value below 0.05: if there were no real effect, results at least as extreme would arise by chance less than 5 percent of the time. That might sound like a robust threshold, but it isn’t (and, contrary to a common misreading, it does not mean a finding has a 95 percent chance of being true). There’s enormous pressure to get flashy results that academic journals will find publication-worthy. Some estimates suggest the majority of social-science research getting published is flat-out false, or inconclusive at best. According to a 2020 survey by DARPA (the U.S. Defense Advanced Research Projects Agency), 53.4 percent of social-science papers published in 2009, over half, had “failed to replicate,” meaning that efforts to reproduce their results had not succeeded. And the problem is getting worse, not better: by 2018, that figure had risen to 55.8 percent.
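To see why the threshold is so weak, consider a back-of-the-envelope calculation in the spirit of John Ioannidis’s famous argument that most published findings are false. The numbers here are illustrative assumptions, not figures from any study cited in this piece: suppose only 10 percent of the hypotheses a field tests are actually true, and studies have 50 percent power to detect true effects at the conventional 0.05 significance cutoff. The share of “significant” results that reflect real effects, the positive predictive value, is then

\[
\mathrm{PPV} \;=\; \frac{0.10 \times 0.50}{\,0.10 \times 0.50 \;+\; 0.90 \times 0.05\,} \;=\; \frac{0.05}{0.095} \;\approx\; 0.53,
\]

so under these assumptions nearly half of all “significant” findings would be false before any fraud, bias, or creative data-torturing even enters the picture.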
As I wrote in an earlier National Review deep-dive on how ideology undermines rigor:
The problem of irreproducible results is worsened by biased experimenters who torture the evidence until it confesses whatever their ideological commitments demand. This could explain why the replication crisis is getting worse as the left-wing echo chamber of academia becomes more politically extreme and experimenters find creative new ways to spin evidence into confirming their preconceptions — for example, by running slight variations of the same experiment until the politically desirable outcome appears.
Parapsychologists are able to meet scientific standards far higher than those demanded of most conventional scientists, which certainly doesn’t prove the existence of psychic powers. What it proves is that humans are really good at cheating, sometimes perhaps subconsciously. Confirmation bias (favoring data that support one’s desired result and downplaying or discarding those that don’t), lowered standards, and increasingly extreme ideology among social scientists and the political Left more broadly have combined to create a fertile environment for entire academic fields of a questionable nature.
Academic papers are getting more incomprehensible, more useless, and more obviously politically biased. AI makes it easier than ever to generate nonsense research, but it’s hardly the main issue marring academic integrity, and it shouldn’t be played up as one in order to dodge conversations about academia’s ideological bubble.
Democrats often claim they’re the “party of science.” But the increasing partisanship among the country’s scientists, and the widening partisan trust gap as conservatives’ and liberals’ views of science diverge, are proof that science has become politicized. Studies confirm that people of all political persuasions are more likely to support suppressing research that clashes with their ideological priors, although the minority of politically conservative academics is more likely to value “academic rigor” and “advancing knowledge.”
This group-bias effect doesn’t just show up in politics. It’s also why 99.8 percent of studies in Chinese journals find that acupuncture is effective, while Western research finds little to no effectiveness. When large groups of scientists strongly share a consensus, they effectively become an ideological immune system, rejecting alternative hypotheses that don’t align with the prevailing view.
Today’s overwhelmingly lefty academics prefer research conclusions that conveniently line up with their policy preferences. Add in the broader replication crisis in the social sciences, new challenges from AI, and out-of-touch ideologues’ agenda-pushing increasingly infecting even the hard sciences, and it’s clear that the current state of academic research is a five-alarm fire.