

The Global Burden of Disease

Garth Brown

I have been experiencing something of an epistemic crisis, which is a fancy way of saying I’m less sure not just about what I know, but about why I think I know it. And I’m not talking about the pressing but obviously intractable issues facing the country, like how to bridge the partisan divide or build a robust social safety net without compromising economic dynamism. I’m having trouble knowing what to think about the value of a widespread sort of science.

To explain why, I need to first talk a bit about the Global Burden of Disease study. The GBD is a massive, ongoing effort to capture worldwide data about deaths, illnesses, and their causes. It includes thousands of researchers in 156 countries and territories, all working to collate information not just to get a snapshot of geographical differences but also to spot trends over time: which diseases are increasing in prevalence and which are decreasing, and whether accidental deaths and injuries are becoming more or less common. This strikes me as tremendously valuable, and unless I’m given good reason to believe differently, I see no reason to doubt the validity of this foundational data.

The trouble starts with the move from what to why. Once we know that a large number of people are dying from a particular cause, it’s natural to start wondering about the causes of that cause; we don’t just want to know that some number of people died of a particular disease, we want to know the reasons for the disease, so that we can work to prevent or reduce it. In certain cases this may be reasonably obvious. For example, the dynamics of a tuberculosis outbreak are well understood. But in other cases, chronic diseases in particular, matters are much more vexed.

A clear example of the difficulties is a systematic analysis of risk factors contributing to the GBD, which attempts to answer exactly this set of questions by ascribing a certain number of deaths, as well as disability-adjusted life-years, to 87 different risk factors. In other words, it takes final causes of death, like heart disease, cancer, and so on, and attempts to go back one step further, to ascribe those in turn to behavioral factors like smoking or diet.

And this is where things go off the rails. As a detailed response also published in The Lancet points out, the previous risk factor analysis, from 2017, attributed 25,000 deaths to unprocessed red meat, with a 95% uncertainty range of 11,000 to 40,000, while the 2019 version (the one linked above) increased the attributed deaths 36-fold, to 896,000, with an uncertainty range of 536,000 to 1,250,000. ‘Uncertainty range’ is roughly equivalent to a confidence interval: a range the authors believe contains the true value with a stated probability. In the above example, the 2017 authors are 95% sure the true number lies somewhere between 11 and 40 thousand. Yes, there’s a 5% chance it’s higher or lower, but a bell-shaped distribution suggests that even on the high end it couldn’t get anywhere close to the 2019 range.

What’s most striking is the obvious internal inconsistency. In 2017, the range for deaths is given in the low tens of thousands. A couple of years later, the range for the same risk factor runs from a low in the hundreds of thousands all the way up to a high of a million and a quarter. In other words, both cannot be correct; either the 2017 95% uncertainty range is completely wrong, or the more recent estimate is. (Or both are wrong, though that’s a post for another day.) There is no conceivable world in which meat went from mostly benign to lethal with such rapidity.
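To make concrete just how incompatible the two sets of numbers are, here is a rough back-of-envelope check. It is my own illustration, not anything from either paper, and it assumes, purely for simplicity, that each 95% uncertainty range can be treated like a normal distribution.

```python
# Back-of-envelope consistency check (my own sketch, not from either GBD paper).
# Assumption: each 95% uncertainty range is approximated by a normal distribution.
import math

def normal_from_95_interval(lo, hi):
    """Mean and standard deviation of a normal whose central 95% interval is [lo, hi]."""
    mean = (lo + hi) / 2
    sd = (hi - lo) / (2 * 1.96)  # 1.96 standard deviations on each side of the mean
    return mean, sd

# 2017 analysis: ~25,000 deaths attributed to unprocessed red meat (95% UI 11,000-40,000)
mean_2017, sd_2017 = normal_from_95_interval(11_000, 40_000)

# 2019 analysis: ~896,000 deaths (95% UI 536,000-1,250,000)
low_2019 = 536_000

# How many standard deviations above the 2017 central estimate is even the
# *bottom* of the 2019 uncertainty range?
z_gap = (low_2019 - mean_2017) / sd_2017
print(f"2017 approximation: mean = {mean_2017:,.0f}, sd = {sd_2017:,.0f}")
print(f"Low end of the 2019 range sits {z_gap:.0f} standard deviations above the 2017 mean")
# Anything beyond ~5 standard deviations is, for practical purposes, impossible
# under the 2017 model, so the two analyses cannot both be right.
```

Under that admittedly crude approximation, even the most conservative 2019 figure lands dozens of standard deviations beyond anything the 2017 analysis considered plausible.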

Yet in the introduction to the most recent risk factor analysis the authors write:

Overall, the record for reducing exposure to harmful risks over the past three decades is poor. Success with reducing smoking and lead exposure through regulatory policy might point the way for a stronger role for public policy on other risks in addition to continued efforts to provide information on risk factor harm to the general public.

In other words, the authors are saying that policymakers should be looking at ways to reduce other risk factors by employing the same sorts of approaches that have worked to reduce lead exposure and smoking, which I assume would mean taxes and perhaps outright bans.

The problem is that the harms of lead exposure and smoking are backed by much stronger evidence. But when the authors point to other factors they think should be corrected, like particulate air pollution, why should I trust that their analysis of its risk is correct when they have revised their previous position in another area so radically, particularly when the weightings of all these risk factors are linked?

Put another way, why are the authors comfortable giving their findings an imprimatur of scientific rigor if they are liable to be radically revised? Why should we not just trust these numbers, but base public policy on them? What good does it do to pretend a complex system (87 risk factors! Dozens of outcomes!) can be captured by such a model when the year-to-year revisions suggest it cannot?

The trouble is that the people who did this report – who generated the computer model, decided on the parameters that would govern it, and then wrote up the results – are both smarter and far more knowledgeable than I am about this area. So what do I do with the fact that I remain extremely skeptical that it is an accurate reflection of real-world risk?

One common but misguided conclusion I see people make is that questionable science proves its opposite. The fact that something strange is going on with the risk ascribed to red meat in this analysis does not mean that eating red meat is necessarily healthy. (I happen to believe it is, but not because this study claiming the opposite is flawed.)

Nor does it mean that, because the scientific mainstream makes mistakes, the alternative health sphere is better. As I suggested in my post on the evidence for the (un)healthfulness of seed oils, prominent figures who position themselves as counter to the mainstream often deploy even shoddier scientific justifications for their arguments.

So, as I said at the outset, I am left in a quandary. The best way forward is not a blanket rejection of all science, but an attempt at nuanced skepticism. When it comes to straightforward questions, like whether a particular treatment for an infectious disease works or not, I don’t see a lot of ambiguity. But the more complex a model becomes, the more variables it seeks to account for, and the longer the time frame it requires to show an effect, the less I am inclined to trust it. In a world that privileges scientific claims to the extent ours does, it can be hard to simply say, “I don’t know if that’s true.”

But if the goal is to see reality, paradoxical though it may seem, we all need to get a bit more comfortable with uncertainty.
