News

01/08/2025

Journal article presents three scenarios of the effects of large language models on publication and citation behaviour.

The paper "The Bot Delusion. Large language models and anticipated consequences for academics’ publication and citation behavior" by O. Wieczorek, I. Steinhardt, R. Schmidt, S. Mauermeister & C. Schneijderberg discusses the extent to which Large Language Models (LLMs) may affect the scientific enterprise, potentially reinforcing or mitigating existing structural inequalities captured by the Matthew Effect and introducing a “bot delusion” into academia.

The authors first focus on the academic publication and citation system and develop three scenarios of the anticipated consequences of using LLMs: reproducing content and the status quo (Scenario 1), enabling content coherence evaluation (Scenario 2), and enabling content evaluation (Scenario 3). Second, they discuss the interaction between the use of LLMs and academic (counter)norms for citation selection, and its impact on the publication and citation system. Finally, they introduce communal counter-norms to capture academics’ loyal citation behavior and develop three future scenarios that academia may face when LLMs are widely used in the research process: a status quo future of science, a mixed-access future, and an open science future.
Oliver Wieczorek, Isabel Steinhardt, Rebecca Schmidt, Sylvi Mauermeister, Christian Schneijderberg (2025): The Bot Delusion. Large language models and anticipated consequences for academics’ publication and citation behavior, Futures, 166, 103537.

link: https://www.sciencedirect.com/science/article/pii/S0016328724002209?via%3Dihub