Clinical Review

Artificial Intelligence: Review of Current and Future Applications in Medicine


Background: The role of artificial intelligence (AI) in health care is expanding rapidly. Currently, there are at least 29 US Food and Drug Administration-approved AI health care devices spanning numerous medical specialties, and many more are in development.

Observations: With increasing expectations for all health care sectors to deliver timely, fiscally responsible, high-quality care, AI has potential utility in numerous areas, such as image analysis, workflow and efficiency improvement, public health, and epidemiology, and it can aid in processing large volumes of patient and medical data. In this review, we describe basic terminology, principles, and general AI applications relating to health care. We then discuss current and future applications for a variety of medical specialties. Finally, we discuss the future potential of AI along with the potential risks and limitations of current AI technology.

Conclusions: AI can improve diagnostic accuracy, increase patient safety, assist with patient triage, monitor disease progression, and assist with treatment decisions.


 


Artificial intelligence (AI) was first described in 1956 and refers to machines having the ability to learn as they receive and process information, resulting in the ability to “think” like humans.1 AI’s impact in medicine is increasing; currently, at least 29 AI medical devices and algorithms are approved by the US Food and Drug Administration (FDA) in a variety of areas, including radiograph interpretation, managing glucose levels in patients with diabetes mellitus, analyzing electrocardiograms (ECGs), and diagnosing sleep disorders, among others.2 Significantly, in 2020, the Centers for Medicare and Medicaid Services (CMS) announced the first reimbursement to hospitals for an AI platform, a model for early detection of strokes.3 AI is rapidly becoming an integral part of health care, and its role will only increase in the future (Table).

Table. Key Historical Events in Artificial Intelligence Development With a Focus on Health Care Applications

As knowledge in medicine expands exponentially, AI has great potential to assist with handling complex patient care data. The concept of exponential growth is not an intuitive one. As Bini described, with exponential growth the volume of knowledge amassed over the past 10 years may now accumulate in perhaps only 1 year.1 Likewise, advances equivalent to those of the past year may soon take just a few months. This phenomenon is partly explained by the law of accelerating returns, which states that advances feed on themselves, continually increasing the rate of further advances.4 The volume of medical data doubles every 2 to 5 years.5 Fortunately, the field of AI is growing exponentially as well and can help health care practitioners (HCPs) keep pace, allowing the continued delivery of effective health care.

In this report, we review common terminology, principles, and general applications of AI, followed by current and potential applications of AI for selected medical specialties. Finally, we discuss AI’s future in health care, along with potential risks and pitfalls.

AI Overview

AI refers to machine programs that can “learn,” or think, based on past experiences. This capability contrasts with the simple rules-based programming that has been available to health care for years. An example of rules-based programming is the warfarindosing.org website developed by Barnes-Jewish Hospital at Washington University Medical Center, which guides initial warfarin dosing.6,7 The prescriber inputs detailed patient information, including age, sex, height, weight, tobacco history, medications, laboratory results, and genotype if available. The application then calculates recommended warfarin dosing regimens to avoid over- or underanticoagulation. While the dosing algorithm may be complex, it depends entirely on preprogrammed rules. The program does not learn from patient data to reach its conclusions and recommendations.
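To make the distinction concrete, the following is a minimal sketch of rules-based dosing logic. The rules, thresholds, and doses here are entirely hypothetical illustrations (this is not the actual warfarindosing.org algorithm and is not clinical guidance); the point is that every output follows from conditions a human wrote in advance, with nothing learned from data.

```python
def initial_warfarin_dose_mg(age: int, weight_kg: float, takes_amiodarone: bool) -> float:
    """Return a starting daily dose from fixed, hand-written rules (illustrative only)."""
    dose = 5.0                # hypothetical nominal starting dose
    if age >= 75:
        dose -= 1.0           # rule: reduce dose in older patients
    if weight_kg < 60:
        dose -= 0.5           # rule: reduce dose at lower body weight
    if takes_amiodarone:
        dose *= 0.7           # rule: interacting drug lowers the requirement
    return round(dose, 1)

print(initial_warfarin_dose_mg(age=80, weight_kg=55, takes_amiodarone=True))  # 2.4
```

However detailed such rules become, the program’s behavior never changes unless a human rewrites the rules.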

In contrast, one of the most common subsets of AI is machine learning (ML). ML describes a program that “learns from experience and improves its performance as it learns.”1 With ML, the computer is initially provided with a training data set—data with known outcomes or labels. Because the initial data are input from known samples, this type of AI is known as supervised learning.8-10 As an example, we recently reported using ML to diagnose various types of cancer from pathology slides.11 In one experiment, we captured images of colon adenocarcinoma and normal colon (these 2 groups represent the training data set). Unlike traditional programming, we did not define the characteristics that differentiate colon cancer from normal tissue; rather, the machine learned these characteristics independently by assessing the labeled images provided. A second data set (the validation data set) was used to evaluate the program and fine-tune the ML training model’s parameters. Finally, the program was presented with new images of cancer and normal cases for a final assessment of accuracy (the test data set). Our program learned to recognize differences from the images provided and was able to differentiate normal and cancer images with > 95% accuracy.
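The following is a minimal sketch of that supervised train/validate/test workflow, using scikit-learn with synthetic numeric features standing in for image data. The data, the model choice (logistic regression rather than a deep network), and the split sizes are illustrative assumptions, not our actual pathology pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Labeled examples: class 0 = "normal", class 1 = "adenocarcinoma" (synthetic stand-ins)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Split into training, validation (for tuning), and held-out test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from labeled data
print("validation accuracy:", model.score(X_val, y_val))         # guides fine-tuning
print("test accuracy:", model.score(X_test, y_test))             # final, unseen evaluation
```

The key point is the role of each split: the model is fit only to the training set, the validation set guides tuning, and the test set provides an unbiased estimate of performance on unseen cases.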

Advances in computer processing have allowed for the development of artificial neural networks (ANNs). While there are several types of ANNs, the most common types used for image classification and segmentation are known as convolutional neural networks (CNNs).9,12-14 These programs are designed to work similarly to the human brain, specifically the visual cortex.15,16 As data are acquired, they are processed by various layers in the program. Much like neurons in the brain, one layer decides whether to advance information to the next.13,14 CNNs can be many layers deep, leading to the term deep learning: “computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.”1,13,17
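As a rough illustration of this layered structure, the sketch below defines a small CNN in Keras (assuming TensorFlow is installed). The input size, layer counts, and filter numbers are arbitrary choices for demonstration, not taken from any study cited here.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Each layer passes its output to the next, loosely analogous to neurons
# relaying information; stacking many such layers is what makes learning "deep".
model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),        # e.g., an RGB image tile
    layers.Conv2D(16, 3, activation="relu"),  # early layers detect simple features (edges)
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # deeper layers combine them into patterns
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),    # output: probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # prints the stack of processing layers
```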

ANNs can process far larger volumes of data than earlier approaches. This advance has led to the development of unstructured, or unsupervised, learning. With this type of learning, inputting defined features (ie, predetermined answers) with the training data set described above is no longer required.1,8,10,14 The advantage of unsupervised learning is that the program can be presented with raw data and extract meaningful interpretations without human input, often with less bias than may exist with supervised learning.1,18 If shown enough data, the program can extract the relevant features needed to draw conclusions independently, without predefined definitions, potentially uncovering markers not previously known. For example, several studies have used unsupervised learning to search patient data to assess the readmission risk of patients with congestive heart failure.10,19,20 The AI compiled features independently, without prior definition, and predicted patients at greater risk for readmission better than traditional methods did.
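A minimal sketch of the unsupervised idea follows, assuming scikit-learn: k-means clustering groups unlabeled records by similarity on its own. The random “patient” features and the choice of 3 clusters are hypothetical, and this is not the method used in the cited readmission studies.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical unlabeled patient features (eg, age, laboratory values, prior visits)
patients = rng.normal(size=(200, 5))

# No labels are provided; the algorithm partitions patients by similarity alone
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(patients)
print(kmeans.labels_[:10])  # cluster assignments discovered without human-defined answers
```

In practice, a clinician would then inspect each cluster to see whether it corresponds to a meaningful group, such as patients at elevated readmission risk.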

Figure. Artificial Intelligence Health Care Applications

A more detailed description of the various terminologies and techniques of AI is beyond the scope of this review.9,10,17,21 However, in this basic overview, we describe 4 general areas in which AI impacts health care (Figure).
