Outcomes Research in Review

Can the Use of Siri, Alexa, and Google Assistant for Medical Information Result in Patient Harm?


 


These findings not only make important contributions to the literature on health information–seeking behaviors and their limitations via conversational assistants; the study design also highlights relevant approaches to evaluating interactions between users and conversational assistants and other voice-activated platforms. The authors designed a range of everyday task scenarios that real-life users may experience and that could prompt them to query home or smartphone devices for health- or medical-related information. These scenarios were written with a level of real-life complexity, incorporating multiple facts that had to be considered for a successful resolution and the potential for harmful consequences if the correct course of action was not taken. In addition, study participants were allowed to interpret these task scenarios and query the conversational assistants in their own words, which further aligned with how users typically interact with their devices.

However, this study also had limitations, which the authors acknowledged. Eligibility was restricted to English speakers, and the study sample was skewed toward younger, more educated individuals with high health literacy. Combined with the small convenience sample, these factors mean the findings may not be generalizable to broader populations; further studies are needed, especially to identify potential differences among population subgroups (eg, by race/ethnicity, age, or health literacy).

Applications for Clinical Practice

Because of the increasing prevalence of online health information–seeking behaviors among patients, clinicians must be prepared to adequately address, and in some cases educate patients on, the accuracy and relevance of the medical and health information they find. Conversational assistants pose a distinct risk in health care because their natural language interfaces can simulate, and be misinterpreted by patients as, counseling systems. As the authors highlight, laypersons cannot know the full capabilities of conversational assistants, either with respect to their medical expertise or the aspects of natural language dialogue they can handle. Therefore, it is critical that clinicians and other providers emphasize to patients the limitations of these technologies and stress that any medical recommendations should be confirmed with a health care professional before they are acted on.

Katrina F. Mateo, MPH
