Feature

Is ChatGPT a friend or foe of medical publishing?

Researchers may use artificial intelligence (AI) language models such as ChatGPT to write and revise scientific manuscripts, according to a new announcement from the International Committee of Medical Journal Editors. These tools should not be listed as authors, however, and researchers must disclose how AI-assisted technologies were used, the committee said.

These new guidelines are the latest effort by medical journals to define policies for the use of large language models (LLMs) in scientific publication. While these AI-assisted tools can help with tasks such as writing, analyzing data, and catching mistakes, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not entirely clear how information entered into these tools is stored and processed, or who has access to it, he noted.

At the same time, experts argue that these AI tools could have a positive impact on the field by reducing linguistic disparities in scientific publishing and by alleviating some of the monotonous, mechanical tasks that come with manuscript writing.

What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”

A change in medical publishing

OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:

“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”

Just a few months after ChatGPT became available, researchers began using the tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.

There were also reports of papers listing ChatGPT as an author, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies on the use of ChatGPT and other large language models in the scientific authoring process. Editors of the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.

The consensus is that AI has no place on the author byline.

“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
