The failure to persuade
My original plan was to emphasize the power of the RCT. Despite strong associations of low vitamin D levels with poor outcomes, the trials show no benefit from treatment. This strongly suggests (or nearly proves) that low vitamin D levels are akin to premature ventricular complexes after myocardial infarction: a marker of risk but not a target for therapy.
But I now see the more important issue as why scientists, funders, clinicians, and patients are not persuaded by clear evidence. Every day in clinic I see patients on vitamin D supplements; the journals keep publishing vitamin D studies. The proponents of vitamin D remain positive. And lately there is outsized attention and hope that vitamin D will mitigate SARS-CoV-2 infection – based only on observational data.
You might counter that vitamin D is natural and relatively innocuous, so who cares?
I offer three rebuttals: opportunity costs, distraction, and the insidious danger of poor critical appraisal skills. If you burn money on vitamin D research, there is less available to study other important issues. If a patient is distracted by a low vitamin D level, she may pay less attention to her high body mass index or hypertension. And trust in medicine requires clinicians to be competent in critical appraisal; these days, what could be more important than trust in medical professionals?
One major reason evidence fails to persuade is spin – language that distracts from the primary endpoint. Here are two (of many) examples:
A meta-analysis of 50 vitamin D trials set out to study mortality. The authors found no significant difference in that primary endpoint. But the second sentence of their conclusion claimed that vitamin D supplements reduced the risk of cancer death by 15%. That's a secondary endpoint in a study whose primary endpoint was nonsignificant. That is spin. This meta-analysis was completed before the Australian D-Health trial found that cancer deaths were 15% higher in the vitamin D arm, a difference that did not reach statistical significance.
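To see why a lone positive secondary endpoint in a null study deserves skepticism, consider the arithmetic of multiple looks. Here is a minimal simulation sketch, assuming a purely hypothetical null trial with ten independent secondary endpoints (the endpoint count and the independence assumption are mine, not features of the meta-analysis):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n_trials = 100_000   # simulated trials with no true treatment effect
n_endpoints = 10     # hypothetical number of secondary endpoints

# Under the null hypothesis, each endpoint's P value is uniform on [0, 1].
p_values = rng.uniform(size=(n_trials, n_endpoints))

# Fraction of null trials with at least one "significant" endpoint
frac = (p_values < 0.05).any(axis=1).mean()

print(f"Simulated: {frac:.2f}")                     # ~0.40
print(f"Analytic:  {1 - 0.95 ** n_endpoints:.2f}")  # 1 - 0.95^10 = 0.40
```

Roughly 40% of such null trials will hand the authors at least one significant-looking secondary endpoint by chance alone; correlation among endpoints changes the exact figure, not the lesson.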
The following example is worse: The authors of the VITAL trial, which found that vitamin D supplements had no effect on the primary endpoint of invasive cancer or cardiovascular disease, published a secondary analysis of the trial looking at a different endpoint: a composite incidence of metastatic and fatal invasive total cancer. They reported a 0.4% lower rate in the vitamin D group, a difference that barely reached statistical significance, at a P value of .04.
But everyone knows the dangers of reanalyzing data with a new endpoint after you have seen the data. What’s more, even if this were a reasonable post hoc analysis, the results are neither clinically meaningful nor statistically robust. Yet the fatally flawed paper has been viewed 60,000 times and picked up by 48 news outlets.
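What does "not statistically robust" mean in practice? A fragility calculation makes it concrete. The sketch below is illustrative only: the event counts are hypothetical, chosen merely to approximate a 0.4% absolute difference in arms of roughly 12,900 participants (they are not the published VITAL data), and a simple Fisher's exact test stands in for the trial's actual survival analysis:

```python
from scipy.stats import fisher_exact

# Illustrative counts approximating a 0.4% absolute risk difference;
# NOT the published trial data.
N_PER_ARM = 12_900
EVENTS_VITD, EVENTS_PLACEBO = 226, 274

def p_value(events_vitd: int, events_placebo: int) -> float:
    """Two-sided Fisher's exact P for a 2x2 events/non-events table."""
    table = [[events_vitd, N_PER_ARM - events_vitd],
             [events_placebo, N_PER_ARM - events_placebo]]
    _, p = fisher_exact(table)
    return p

print(f"Baseline P = {p_value(EVENTS_VITD, EVENTS_PLACEBO):.3f}")  # below .05

# Fragility: reclassify non-events as events in the vitamin D arm,
# one at a time, until the result is no longer significant.
flips = 0
while p_value(EVENTS_VITD + flips, EVENTS_PLACEBO) < 0.05:
    flips += 1
print(f"Reclassified events needed to lose significance: {flips}")
```

With these assumed numbers, flipping only a handful of outcomes among roughly 25,800 participants erases the finding. A result that fragile should not drive 48 news stories.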
Another way to distract from nonsignificant primary outcomes is to nitpick the trials. The vitamin D dose wasn’t high enough, for instance. This might persuade me if there were one or two vitamin D trials, but there are hundreds of trials and meta-analyses, and their results are consistently null.