Feature

When could you be sued for AI malpractice? You’re likely using it now

The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.

“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”

What are the top AI legal dangers of today?

A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.

This can occur, for example, when using clinical decision support systems that leverage AI, machine learning, or statistical pattern recognition. Today, such systems are commonly delivered through EHRs and other computerized clinical workflows. In general, they match a patient’s characteristics to a computerized clinical knowledge base, and an assessment or recommendation is then presented to the physician for a decision.

“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.

“A common claim, even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”

Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer nearly any question posed to it. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.

It’s unclear how many doctors are signing on to the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.

When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or unrelated to the context, explained Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”

Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.

“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”

In response to ChatGPT’s growing use by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, added Mr. LeTang.

Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.

“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”

Other dangers include the potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions.

As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.

“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”

So far, no malpractice lawsuits involving AI have been filed, but they may appear soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.

“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
