Conference Coverage

Free U.K. tool could help guide COVID-19 care for cancer patients

An online support tool for health care professionals that recommends whether to admit or discharge a cancer patient with COVID-19, based on their risk of a severe complication, has been developed by researchers from Manchester.

The team used machine learning on data from more than 900 cancer patients with COVID-19, conducting multiple analyses to arrive at a set of features that could accurately predict the need for admission or oxygen therapy, as well as the risk of death.

Dr. Rebecca Lee, The Christie NHS Foundation Trust, Manchester, and colleagues then developed thresholds to derive a score that recommended admission in 95% of patients who went on to need oxygen and an even greater proportion of those who later died.

The research was presented at the 2021 American Society of Clinical Oncology (ASCO) Annual Meeting on June 4.

CORONET

The resulting COVID-19 Risk in Oncology Evaluation Tool (CORONET) model “performs very well at predicting admission and severity of COVID-19 in patients with cancer,” Dr. Lee said. “We have set pragmatic and clinically relevant thresholds that focus on the safety regarding an admission versus discharge decision.”

To help health care professionals, the researchers have built a free online support tool that allows them to enter data and receive a recommendation “as to whether their patient should be considered for discharge, considered for admission, or is at high risk of having a severe outcome of coronavirus,” Dr. Lee explained.

“The health care professional can then explore the recommendation by seeing how their patient … compares with the rest of the cohort.”

The tool also includes a “diagram showing which features are most important to recommend a discharge decision versus an admission decision for each individual patient.”

Clinically intuitive

Dr. Alexi Wright, associate professor, Dana-Farber Cancer Institute, Boston, who was not involved in the study, commented that there were many things that were “really nice about the study.”

“First and foremost that they were establishing a tool to efficiently triage [patients] presenting with COVID,” she said, adding that it was “clinically intuitive” that the team made “pragmatic choices,” and the use of a random forest algorithm means the results are “very interpretable.”

However, Dr. Wright wondered whether the results can be replicated.

Alongside a lack of information on the deaths in the cohort, she pointed out that “ideally you have three data sets, with a training set, a testing set, and a validation set.”

The CORONET model was, however, trained and evaluated on the same dataset, “so it really needs external validation before it would be ready for direct clinical application.”

She continued that there is a “critical need to establish that studies can both be reproduced and replicated,” noting that a recent review showed that 85% of machine-learning studies that were used to detect COVID-19 using chest radiographs “failed fundamental reproducibility and quality checks.”

Risk factors

Dr. Lee began her presentation by reminding the audience that cancer patients are at increased risk of severe COVID-19 and death, with older age, male sex, nosocomial infection, higher ECOG performance status, and active cancer among the risk factors for mortality.

“However, outcomes are very heterogeneous, ranging from patients without symptoms at all to cases with multi-organ failure and death,” she said.

It is consequently “very important for the treating clinician to determine which patients could be safely discharged to the community versus those who need to be admitted to hospital for additional support.”

To develop a tool that could distinguish between those two groups of patients, the researchers collected data on 1,743 cancer patients, a cohort that was reduced to 920 after excluding those without laboratory-confirmed COVID-19 and those with missing data.

Using recursive feature elimination, they selected 10 key patient features associated with prognosis, then compared a lasso regression model with a random forest model, with the latter performing better.
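The feature-selection and model-comparison step described above can be sketched with scikit-learn. This is an illustrative reconstruction, not the CORONET code: the synthetic data, feature counts, and model settings are assumptions standing in for the study's real clinical variables.

```python
# Hypothetical sketch of recursive feature elimination followed by a
# lasso-style vs random-forest comparison (synthetic data, not study data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the 920-patient dataset with 30 candidate features
X, y = make_classification(n_samples=920, n_features=30,
                           n_informative=10, random_state=0)

# Recursive feature elimination down to 10 key features,
# ranked by random-forest importance at each round
rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFE(rf, n_features_to_select=10).fit(X, y)
X_selected = selector.transform(X)

# Compare an L1-penalised (lasso-style) logistic model with the forest
lasso = LogisticRegression(penalty="l1", solver="liblinear").fit(X_selected, y)
rf.fit(X_selected, y)
```

In practice the two candidate models would then be compared on held-out data, which is where the cohort-based cross-validation described next comes in.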

The team then divided the patients into four cohorts, training the model on three and testing it on the fourth. This process yielded the CORONET score, and the final model was then evaluated against the entire patient population.
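The four-cohort train/test scheme is a form of 4-fold cross-validation. A minimal sketch, assuming random cohort assignment purely for illustration (the study's actual cohort definitions are not given here):

```python
# Illustrative 4-fold cross-validation: train on three cohorts,
# test on the held-out fourth, rotating through all four.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

# Synthetic stand-in for the 920 patients and 10 selected features
X, y = make_classification(n_samples=920, n_features=10, random_state=0)

aucs = []
for train_idx, test_idx in KFold(n_splits=4, shuffle=True,
                                 random_state=0).split(X):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))  # one AUROC per held-out cohort
```

Each pass produces an AUROC on a cohort the model never saw during training, which is the evaluation metric reported in the next section.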

Next, thresholds were determined for assessing patients for admission versus discharge, as well as for severity of illness, giving the final CORONET model, from which the online tool was developed.

Checking performance

The results showed that the model predicted outcomes with an area under the receiver operating characteristic curve (AUROC) of 0.82 for admission, 0.85 for oxygen requirement, and 0.79 for death.

Further analysis revealed that the most important feature at the time of presentation for determining outcome was the National Early Warning Score 2 (NEWS2), “which is a composite score of heart rate, respiratory rate, saturations and confusion level,” Dr. Lee said.

In addition, C-reactive protein levels, albumin, age, and platelet counts “were also very important features,” she continued, “and these have also been shown in a number of different studies to be important at determining the outcome from coronavirus.”

To examine the performance of the CORONET score further, they applied it to a European hospital dataset, ESMO-CoCARE registry data, and a U.S. cohort, the COVID-19 and Cancer Consortium Registry (CCC19). They found that the score discriminated between patients, but it did so with some degree of heterogeneity.

This was largely driven by higher patient age among the U.S. patients, a higher NEWS2 score, and lower albumin levels, Dr. Lee said.

To ensure the score’s applicability to clinical practice, the team set pragmatic thresholds to determine whether or not a patient required admission or whether they were at risk of dying.

For admission, they set a sensitivity of 85% and a specificity of 56%, while for mortality they set a sensitivity of 43% and a specificity of 92%.
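Setting a pragmatic threshold like this amounts to picking an operating point on the ROC curve that guarantees a target sensitivity. A hedged sketch of one way to do it (the data and score formula are synthetic; this is not the CORONET implementation):

```python
# Choosing a decision threshold that achieves at least 85% sensitivity,
# using the ROC curve (hypothetical risk scores, not study data).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)            # 1 = patient needed admission
scores = y_true * 0.5 + rng.random(500) * 0.8    # synthetic risk scores

fpr, tpr, thresholds = roc_curve(y_true, scores)

# First (largest) threshold whose sensitivity reaches the 85% target
idx = int(np.argmax(tpr >= 0.85))
threshold = thresholds[idx]
sensitivity = tpr[idx]
specificity = 1 - fpr[idx]
```

Favoring sensitivity over specificity for admission, as the study does, reflects the safety asymmetry Dr. Lee described: sending home a patient who deteriorates is costlier than admitting one who would have recovered.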

When this was converted into a decision support tool, the model recommended hospital admission for 95% of patients who eventually required oxygen and 97% of patients who died.

The study was funded by The Christie Charitable Foundation. Dr. Lee declares relationships with AstraZeneca and Bristol-Myers Squibb (Inst). Dr. Wright declares relationships with NCCN/AstraZeneca (Inst).

A version of this article first appeared on Medscape.com.
