Today, I wanted to share this piece from economist Scott Cunningham (Baylor University), who wrote about how AI is widening the gap between research and publishing. Or, in economics terms (emphasis mine):

But what happens when the same productivity shock hits a system where the bottleneck was never really production in the first place, but rather was a hierarchical journal structure that depended immensely on editor time, skill, discretion and voluntary workers with the same talents called referees for screening quality deemed sufficient for publication?

The post mentions the Autonomous Policy Evaluation project—the end-to-end AI paper pipeline I wrote about a few weeks ago—and discusses the likely consequences of this flood of AI-generated papers. Assuming the number of publication slots in reputable journals is relatively fixed, AI-generated papers will add substantial mass to the left tail of the paper-quality distribution. Acceptance rates will plummet, and journals may lean on other signals of quality (name recognition, pedigree, institution) to thin the herd before actually reviewing content. As always, the rich get richer!
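The fixed-slots argument can be made concrete with a back-of-envelope sketch. The numbers below are entirely hypothetical (the post doesn't give submission volumes); the point is just that with slots held constant, the acceptance rate falls in direct proportion to the submission flood:

```python
def acceptance_rate(slots: int, submissions: int) -> float:
    """With publication slots fixed, acceptance rate is just slots / submissions."""
    return slots / submissions

# Hypothetical: a journal with 100 fixed slots per year facing a growing flood.
for n in (500, 2000, 10000):
    print(f"{n:>6} submissions -> acceptance rate {acceptance_rate(100, n):.1%}")
```

Running this prints acceptance rates of 20%, 5%, and 1%—which is why cheap screening signals become so tempting at the top of the funnel.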

But this screening is imperfect, not to mention unfair, and desk rejection gets noisier: good papers get killed by tired editors while marginally lower-quality papers slip through to referees. It’s a cascading failure: volume breaks editors, broken editing wastes referees, and wasted referees slow science.

An interesting wrinkle: apparently it is common for economics journals to charge a submission fee (“The average submission fee is $112.”), which I have never heard of in other fields (exorbitant fees upon acceptance, particularly for open-access publication, are the norm). AI submission spam could be quite profitable for journals that charge a submission fee. I wonder whether other fields’ journals will adopt submission fees as well, if for no other reason than to profit from the flood of AI-generated submissions (and maybe slow it down a little).
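The fee arithmetic is simple but striking. The $112 figure is from the post; the submission counts below are assumptions for illustration only:

```python
FEE = 112  # average economics-journal submission fee in dollars (figure cited in the post)

def fee_revenue(submissions: int, fee: float = FEE) -> float:
    """Fee revenue scales linearly with submissions, spam included."""
    return submissions * fee

# Hypothetical annual submission counts, before and after an AI-driven surge.
baseline, flooded = 1000, 5000
print(f"baseline: ${fee_revenue(baseline):,.0f}")
print(f"flooded:  ${fee_revenue(flooded):,.0f}")
```

Under these made-up volumes, a 5x surge takes fee revenue from $112,000 to $560,000 a year, with essentially no marginal cost until someone has to read the papers.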

The essay concludes:

The binding constraint on science is shifting from production to evaluation. The queue to get evaluated — not the difficulty of doing the work — becomes what determines how fast knowledge advances.

For a complementary piece on AI’s effect on academic publishing, see Prof. Jessica Hullman’s post from last week, which follows up on her earlier post on multiverse analysis. She makes observations similar to Prof. Cunningham’s and offers the following predictions on how AI’s ability to replicate analyses (with different assumptions and modelling decisions) will affect publishing:

  • In the short term, acceptance rates will drop
  • Standardization of AI checks will incentivize AI in production
  • Papers will be “safer” in certain ways
  • Nuance will be lost
  • Policy experiments will be possible on a much shorter time scale
  • Scientific self-play in the (slightly) longer term

We can see the early sparks of many of these items in projects like the aforementioned Autonomous Policy Evaluation.