Most of the top 100 medical journals provide guidance on the use of artificial intelligence (AI) during the peer review process, with many explicitly banning its use, a study suggests.
Of the 78 major journals that provide this guidance, 59% prohibit its use in peer review, while the rest allow its use if confidentiality is maintained and copyright is respected, reported Jian-Ping Liu, PhD, of Beijing University of Chinese Medicine, and co-authors.
Additionally, 91% of journals prohibited the uploading of manuscript-related content to AI, and 32% allowed restricted use of AI that reviewers were required to disclose in their review reports, they noted in their research letter in JAMA Network Open.
In their introduction, Liu and colleagues pointed out that “the rapid growth of medical research publishing and preprint servers appears to be straining the peer review process, potentially causing a shortage of qualified reviewers and a slowdown in evaluations.”
“Innovative solutions are urgently needed,” they added. “Recent advances in artificial intelligence, particularly generative AI (GenAI), offer potential to improve peer review, but its integration into this workflow varies depending on journal policy.”
Co-author Zhi-Qiang Li, MPH, PhD, also of Beijing University of Chinese Medicine, told MedPage Today that “it was striking to discover that, despite AI’s potential to increase the efficiency of peer review, 91% of journals prohibited the submission of manuscript-related content to AI. This highlights increased awareness of protecting the confidentiality and integrity of manuscripts.”
He noted that there was considerable divergence between different journals’ AI policies, with many citing a few main reasons for choosing to limit the use of AI, including a desire to protect manuscript confidentiality; concerns about the introduction of incorrect, incomplete, or biased information by AI; and the potential for violation of data privacy rights.
“This study indicates that the impact of AI on the scientific publication process and medical research is a double-edged sword,” said Li. “On the one hand, AI has the potential to improve the effectiveness of peer review, but on the other hand, it raises concerns about bias and confidentiality violations.”
“Journals’ different positions toward the use of AI can significantly influence researchers’ decisions when writing and submitting their articles,” he added.
For this study, the authors used Scimago.org data to identify the top 100 medical journals and determine the existence and nature of their AI guidance for peer review. They searched journal websites for AI-related policies on June 30 and August 10. If a journal did not have its own AI guidelines but linked to its publisher’s guidelines, the authors used those guidelines for analysis.
Of the 78 journals, 41% linked to their publisher’s website, which stated a position on AI use. Wiley and Springer Nature favored limited use of AI, while Elsevier and Cell Press prohibited any use of AI in peer review.
Notably, 22% of journals also provided links to statements from the International Committee of Medical Journal Editors or the World Association of Medical Editors, which permit limited use of AI. However, the authors noted that five of these journals had specific guidelines that contradicted these organizations’ statements.
Liu and colleagues acknowledged that they only considered the policies of the top 100 medical journals, which could have missed other trends or attitudes in the policies of lower-ranked journals. They also noted that relying on publishers’ shared guidance as a proxy for individual journal policies might have overestimated the prevalence of specific AI guidance.
Disclosures
The study was supported by grants from the National Administration of Traditional Chinese Medicine.
The authors reported no conflicts of interest.
Primary Source
JAMA Network Open
Source reference: Li ZQ, et al. “Use of artificial intelligence in peer review among top 100 medical journals.” JAMA Netw Open 2024; DOI: 10.1001/jamanetworkopen.2024.48609.