Conference Coverage

Artificial intelligence presents opportunities, challenges in neurologic practice

AT AANEM 2023

PHOENIX – Artificial intelligence (AI) is poised to dramatically alter health care, and it presents opportunities for increased productivity and automation of some tasks. However, it is prone to errors and ‘hallucinations’ despite an authoritative tone, so its conclusions must be verified.

Those were some of the messages from a talk by John Morren, MD, an associate professor of neurology at Case Western Reserve University, Cleveland, who spoke about AI at the 2023 annual meeting of the American Association for Neuromuscular and Electrodiagnostic Medicine (AANEM).

He encouraged attendees to get involved in the conversation about AI, because it is here to stay and will have a big impact on health care. “If we’re not around the table making decisions, decisions will be made for us in our absence and won’t be in our favor,” said Dr. Morren.

He started his talk by asking whether anyone in the room had used AI. After about half raised their hands, he countered that nearly everyone likely had. Voice assistants like Siri and Alexa, social media with curated feeds, online shopping tools that provide product suggestions, and content recommendations from streaming services like Netflix all rely on AI technology.

Within medicine, AI is already playing a role in various fields, including medical imaging, disease diagnosis, drug discovery and development, predictive analytics, personalized medicine, telemedicine, and health care management.

It also has potential to be used on the job. For example, ChatGPT can generate and refine text to a specified length, format, style, and level of detail. Alternatives include Bing AI from Microsoft, Bard AI from Google, Writesonic, Copy.ai, SpinBot, HIX.AI, and Chatsonic.

Specific to medicine, Consensus is a search engine that uses AI to search for, summarize, and synthesize studies from peer-reviewed literature.

Trust, but verify

Dr. Morren presented some specific use cases, including patient education and responses to patient inquiries, as well as generating letters to insurance companies appealing denial of coverage claims. He also showed an example where he asked Bing AI to explain to a patient, at a sixth- to seventh-grade reading level, the red-flag symptoms of myasthenic crisis.

AI can generate summaries of clinical evidence from previous studies. Asked by this reporter how to trust the accuracy of the summaries if the user hasn’t thoroughly read the papers, he acknowledged the imperfection of AI. “I would say that if you’re going to make a decision that you would not have made normally based on the summary that it’s giving, if you can find the fact that you’re anchoring the decision on, go into the article yourself and make sure that it’s well vetted. The AI is just good to tap you on your shoulder and say, ‘hey, just consider this.’ That’s all it is. You should always trust, but verify. If the AI is forcing you to say something new that you would not say, maybe don’t do it – or at least research it to know that it’s the truth and then you elevate yourself and get yourself to the next level.”
