ChatGPT in medicine: The good, the bad, and the unknown

AT DDW 2023

ChatGPT and other artificial intelligence (AI)–driven natural language processing platforms are here to stay, so like them or not, physicians might as well figure out how to optimize their role in medicine and health care. That’s the takeaway from a three-expert panel session about the technology held at the annual Digestive Disease Week® (DDW).

The chatbot can help physicians to a certain extent by suggesting differential diagnoses, assisting with clinical note-taking, and producing rapid, easy-to-understand patient communications and educational materials, they noted. However, it can also make mistakes. And, unlike a medical trainee who might give a clinical answer while expressing some doubt, ChatGPT (OpenAI/Microsoft) states its findings as fact, even when it's wrong.

This problem of AI inaccuracy, known as "hallucinating," was on display at the packed DDW session.

When asked when Leonardo da Vinci painted the Mona Lisa, for example, ChatGPT replied 1815. That’s off by about 300 years; the masterpiece was created sometime between 1503 and 1519. Asked for a fact about George Washington, ChatGPT said he invented the cotton gin. Also not true. (Eli Whitney patented the cotton gin.)

In an example more suited to the gastroenterologists at DDW, ChatGPT correctly stated that Barrett's esophagus can lead to esophageal adenocarcinoma in some cases. However, the technology also said that the condition could lead to prostate cancer.

So, if someone asked ChatGPT to list the possible risks associated with Barrett's esophagus, the list would include prostate cancer. A person without medical knowledge "could take it at face value that it causes prostate cancer," said panelist Sravanthi Parasa, MD, a gastroenterologist at Swedish Medical Center, Seattle.

"That is a lot of misinformation that is going to come our way," she added at the session, which was sponsored by the American Society for Gastrointestinal Endoscopy (ASGE).

The potential for inaccuracy is a downside to ChatGPT, agreed panelist Prateek Sharma, MD, a gastroenterologist at the University of Kansas Medical Center in Kansas City, Kansas.

“There is no quality control. You have to double check its answers,” said Dr. Sharma, who is president-elect of ASGE.

ChatGPT is not going to replace physicians in general or gastroenterologists doing endoscopies, said Ian Gralnek, MD, chief of the Institute of Gastroenterology and Hepatology at Emek Medical Center in Afula, Israel.

Even though the tool could play a role in medicine, “we need to be very careful as a society going forward ... and see where things are going,” Dr. Gralnek said.

How you ask makes a difference

Future iterations of ChatGPT are likely to produce fewer hallucinations, the experts said. In the meantime, users can lower the risk by paying attention to how they’re wording their queries, a practice known as “prompt engineering.”

It's best to ask a question that has a firm answer. Ask a vague question, and you'll likely get a vague answer, Dr. Sharma said.
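To make that advice concrete, here is a minimal sketch in Python against the OpenAI chat API, asking about the same topic two ways. The package usage is standard, but the model name and both prompts are illustrative assumptions, not examples given at the session.

```python
# Minimal sketch: the same topic asked vaguely and specifically.
# Assumes the `openai` package (v1.x) is installed and an API key
# is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

vague = "Tell me about Barrett's esophagus."
specific = (
    "List the established complications of Barrett's esophagus "
    "in three short bullet points, and note which are uncertain."
)

for prompt in (vague, specific):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Prompt: {prompt}\n{reply.choices[0].message.content}\n")
```

The narrower prompt constrains the answer space, which is the "firm answer" the panel recommended aiming for.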

ChatGPT is a large language model (LLM). GPT stands for "generative pretrained transformer," a neural network architecture that finds long-range patterns in sequences of data. Given a string of text, an LLM predicts the most likely next word.

“That’s why this is also called generative AI,” Dr. Sharma said. “For example, if you put in ‘Where are we?’, it will predict for you that perhaps the next word is ‘going?’ ”
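That next-word machinery can be demonstrated with a small open model. The sketch below uses GPT-2 via the Hugging Face transformers library as a stand-in (ChatGPT's own model is not publicly downloadable) and prints the five tokens the model considers most likely to follow "Where are we."

```python
# Minimal sketch of next-word prediction with GPT-2, a small open
# model, via the Hugging Face transformers library. GPT-2 stands in
# for ChatGPT to illustrate the same underlying idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Where are we", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# Probability distribution over the vocabulary for the *next* token
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p.item():.3f}")
```

Sampling from that distribution, word after word, is what makes the model "generative."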

The current public version is ChatGPT 3.5, which was trained on publicly available online information up until early 2022. Its training data included open-access scientific journals and medical society guidelines, as well as Twitter, Reddit, and other social media. It does not have access to private information, such as electronic health records.

The use of ChatGPT has exploded in the past 6 months, Dr. Sharma said.

“ChatGPT has been the most-searched website or platform ever in history since it was launched in December of 2022,” he said.
