Clinical Review

Neuroimaging in the Era of Artificial Intelligence: Current Applications

Viz LVO automatically detects large vessel occlusions, flags the occlusion on the CT angiogram, alerts the stroke team (interventional radiologist, neuroradiologist, and neurologist), and transmits images through a secure application to the stroke team members’ mobile devices, all in less than 6 minutes from study acquisition to alarm notification.48 Additional software can quantify and measure perfusion in affected brain areas.48 This could have implications for quantifying and targeting areas of ischemic penumbra that could be salvaged after a stroke and then using that information to plan targeted treatment and/or intervention. Because trials such as DAWN and DEFUSE 3 have shown improved stroke outcomes when the therapeutic window for endovascular thrombectomy is extended, the ability to identify appropriate candidates is essential.58,59 Development of AI tools that assess the ischemic penumbra with quantitative parameters (mean transit time, cerebral blood volume, cerebral blood flow, mismatch ratio) has benefited image interpretation. Medtronic RAPID software can provide quantitative assessment of CT perfusion. AI tools also could provide an automated Alberta Stroke Program Early CT Score (ASPECTS), a quantitative measure of early ischemic change that aids in identifying appropriate candidates for thrombectomy.
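To make the arithmetic behind these quantitative outputs concrete, the following Python sketch derives core, penumbra, and mismatch-ratio estimates from voxelwise perfusion maps. The thresholds used (relative cerebral blood flow < 30%, Tmax > 6 seconds) are values commonly cited in the thrombectomy trial literature; the function names, voxel size, and random input arrays are illustrative assumptions, not any vendor's implementation.

```python
import numpy as np

def perfusion_summary(rcbf, tmax, voxel_volume_ml=0.008,
                      core_rcbf_thresh=0.30, penumbra_tmax_thresh=6.0):
    """Estimate core, hypoperfused, and mismatch volumes from voxelwise
    perfusion maps (illustrative thresholds only, not vendor parameters).

    rcbf: relative cerebral blood flow (fraction of contralateral mean)
    tmax: time-to-maximum of the residue function, in seconds
    """
    core = rcbf < core_rcbf_thresh              # severely reduced flow
    hypoperfused = tmax > penumbra_tmax_thresh  # critically delayed perfusion

    core_ml = core.sum() * voxel_volume_ml
    hypo_ml = hypoperfused.sum() * voxel_volume_ml
    penumbra_ml = hypo_ml - core_ml             # potentially salvageable tissue
    mismatch_ratio = hypo_ml / core_ml if core_ml > 0 else float("inf")
    return core_ml, penumbra_ml, mismatch_ratio

# Toy example: random maps standing in for real perfusion volumes
rng = np.random.default_rng(0)
rcbf = rng.uniform(0.1, 1.2, size=(64, 64, 32))
tmax = rng.uniform(0.0, 12.0, size=(64, 64, 32))
core_ml, penumbra_ml, ratio = perfusion_summary(rcbf, tmax)
print(f"core {core_ml:.1f} mL, penumbra {penumbra_ml:.1f} mL, mismatch {ratio:.2f}")
```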

Several FDA-approved AI tools help quantify brain structures in neuroradiology, including quantitative MRI analysis of anatomy and PET analysis of functional uptake, assisting in more accurate and objective detection and monitoring of conditions such as atrophy, dementia, trauma, seizure disorders, and MS.48 The growing number of FDA-approved AI technologies and the recent CMS-approved reimbursement for an AI tool indicate a changing landscape that is more accepting of downstream applications of AI in neuroradiology. As AI continues to integrate into medical regulation and finance, we predict it will continue to play a prominent role in neuroradiology.

Practical and Ethical Considerations

In any discussion of the benefits of AI, it is prudent to address its shortcomings. Chief among these is overfitting, which occurs when an AI is too closely aligned with its training dataset and prone to error when applied to novel cases. Often this is a byproduct of a small training set.60 Neuroradiology, particularly for uncommon, advanced imaging methods, has a comparatively small number of available studies.61 Even with more prevalent imaging modalities, such as head CT, the work of collecting training scans from patients with the requisite disease processes, particularly if those processes are rare, can limit the number of datapoints collected. Neuroradiologists should understand how an AI tool was generated, including the size and variety of the training dataset, to best gauge the clinical applicability and fitness of the system.
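A minimal sketch of how training-set size drives overfitting, using synthetic data and scikit-learn (the feature counts, sample sizes, and model choice are arbitrary illustrations): a model fit on only a few examples scores nearly perfectly on its own training data while generalizing poorly to held-out cases, and the gap narrows as more cases become available.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an imaging-feature dataset
X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=5, random_state=0)
X_train_full, X_val, y_train_full, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0)

for n in (20, 100, 1000):  # increasing training-set sizes
    model = RandomForestClassifier(random_state=0)
    model.fit(X_train_full[:n], y_train_full[:n])
    train_acc = model.score(X_train_full[:n], y_train_full[:n])
    val_acc = model.score(X_val, y_val)
    # A large train/validation gap signals overfitting
    print(f"n={n:4d}  train={train_acc:.2f}  validation={val_acc:.2f}")
```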

Another point of concern for the implementation of AI clinical decision support tools is automation bias: the tendency for clinicians to favor machine-generated decisions and ignore contrary data or conflicting human decisions.62 This situation often arises when radiologists face overwhelming patient loads or work in underresourced settings with little ability to review every AI-based diagnosis. Although AI might be of benefit in such conditions by reducing physician workload and streamlining the diagnostic process, there is a propensity to rely improperly on a tool meant to augment, not replace, a radiologist’s judgment. Such cases have led to adverse outcomes for patients, and legal precedent shows that this constitutes negligence.63 Maintaining awareness of each tool’s limitations and proper application is the only remedy for such situations.

Ethically, we must consider the opaqueness of ML-developed neuroimaging AIs. For many systems, the specific process by which an AI arrives at its conclusions is unknown. This AI “black box” can conceal potential errors and biases that are masked by overall positive performance metrics. The lack of understanding about how a tool functions in the zero-failure clinical setting understandably gives radiologists pause. The question must be asked: Is it ethical to use a system that is a relatively unknown quantity? Entities including state governments, Canada, and the European Union have produced an answer. Each of these governments has implemented policies requiring that health care AIs use some method to display to end users the process by which they arrive at their conclusions.64-68

The 21st Century Cures Act declares that to attain approval, clinical AIs must demonstrate this explainability to clinicians and patients.69 The response has been an explosion in the development of explainable AI. Systems that visualize the areas where AI attention most often rests with heatmaps, generate labels for the most heavily weighted features of radiographic images, and create full diagnostic reports to justify AI conclusions all aim to provide transparency and inspire confidence in clinical end users.70 The ability to understand the “thought process” of a system also proves useful for error correction and retooling. A trend toward under- or overdetecting conditions, flagging seemingly irrelevant image regions, or low reproducibility can be better addressed when it is clear how the AI draws its false conclusions. With an iterative process of testing and redesigning, false-positive and false-negative rates can be reduced, the need for human intervention can be lowered to an appropriate minimum, and patient outcomes can be improved.71
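One simple, model-agnostic way to produce the attention heatmaps described above is occlusion sensitivity: mask each region of an image in turn and record how much the model’s output drops. Regions whose occlusion causes the largest drops are the ones the model relies on most. The sketch below uses a placeholder predict function standing in for a trained classifier’s probability output; it is illustrative only and not any vendor’s explainability method.

```python
import numpy as np

def occlusion_heatmap(image, predict_fn, patch=8):
    """Model-agnostic saliency: occlude each patch and record how much
    the model's output drops. Larger drops mark regions the model
    relies on most heavily."""
    baseline = predict_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = image.mean()  # gray out patch
            heat[i // patch, j // patch] = baseline - predict_fn(occluded)
    return heat

# Placeholder "model": responds to mean intensity of a central region,
# standing in for a trained classifier's probability output.
def toy_predict(img):
    return float(img[24:40, 24:40].mean())

img = np.random.default_rng(1).uniform(size=(64, 64))
print(occlusion_heatmap(img, toy_predict).round(3))
```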

Data collection raises another ethical concern. To train functional clinical decision support tools, massive amounts of patient demographic, laboratory, and imaging data are required. With incentives to develop the most powerful AI systems, record collection can venture down a path where patient autonomy and privacy are threatened. Radiologists have a duty to ensure data mining serves patients and improves the practice of radiology while protecting patients’ personal information.62 Policies have placed similar limits on the access to and use of patient records.64-69 Patients have the right to request an explanation of the AI systems their data have been used to train. Approval for data acquisition requires the use of explainable AI, implementation of standardized data security protocols, and adequate proof of communal benefit from the clinical decision support tool. The establishment of state-mandated protections bodes well for a future in which developers can access enormous caches of data while patients and health care professionals are assured that no identifying information has escaped a well-regulated space. For the individual radiologist, the knowledge that each datum represents a human life, a person made vulnerable by seeking relief for what ails them, should serve as a lasting reminder to handle sensitive information with the utmost care.
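At a practical level, protecting identifying information begins with de-identifying imaging data before it enters a training set. A minimal sketch using the pydicom library is shown below; the tag list is a small illustrative subset (a production pipeline would follow the full DICOM PS3.15 confidentiality profile), and the file names are hypothetical.

```python
import pydicom  # assumes the pydicom package is installed

# Tags commonly treated as identifying; a real workflow would cover
# the full DICOM PS3.15 confidentiality profile, not this short list.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
            "PatientAddress", "ReferringPhysicianName",
            "InstitutionName", "AccessionNumber"]

def deidentify(in_path, out_path):
    ds = pydicom.dcmread(in_path)
    for tag in PHI_TAGS:
        if tag in ds:
            setattr(ds, tag, "")   # blank the identifying value
    ds.remove_private_tags()       # drop vendor-specific private data
    ds.save_as(out_path)

# deidentify("study_raw.dcm", "study_deid.dcm")  # hypothetical file names
```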

Conclusions

The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI use for detecting important neurologic conditions holds promise in combating ever-greater imaging volumes and providing timely diagnoses. As medicine witnesses the continuing adoption of AI, it is important that practitioners possess an understanding of its current and emerging uses.
