When could you be sued for AI malpractice? You’re likely using it now

How you can prevent AI-related lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Ms. Boisvert stressed.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI; rather, view it as one data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are also consulted and that best practices for your specialty are still followed.

When using any form of AI, document your usage, added Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim of AI-related malpractice later arises, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.

Use chatbots such as ChatGPT the way they were intended: as support tools rather than as definitive diagnostic instruments, added Dr. Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI insurance product exists on the market, physicians and organizations using AI, particularly for direct health care, should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor or manufacturer will likely have indemnified itself in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that, if we’re starting to use it to build algorithms, it’s deidentified.”

While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts work through the backlog of malpractice claims delayed by COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.

“Similar to the attention that autonomous driving systems like Tesla’s receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

A version of this article first appeared on Medscape.com.
