A common way to assess how much various factors contribute to health is to estimate how much of the variation in health across the country is explained by each of those factors. But explaining variation is not as useful as many may think. This is the third and final post in a series on Nancy Krieger's American Journal of Public Health paper. (The prior two posts are here and here.)
[T]he percentage of variation in outcomes that is ‘explained’ by particular factors is not equivalent to the proportion of risk causally attributable to these factors.
This is true, but not obvious. So, let’s make it obvious with a simple example.
Imagine a world in which U.S. health outcomes are determined by just two factors. This is clearly fictitious, but to be concrete, let's say that life expectancy after age 25 is causally driven by how much we invest in education and how much we invest in housing for families with children.
With data we could estimate two kinds of relationships between life expectancy and its determinants: (1) how much variation in life expectancy is explained by each determinant and (2) the causal effect of each determinant on life expectancy. These are different things.
Here’s what “causal effect” means: We’re pretending education investments causally influence life expectancy. By how much? If we spend $100 billion more on education annually, how much longer do we expect people to live? That translation, from dollars into years, is the causal effect size.
If $100 billion per year spent on education causes people to live 3 years longer, on average, we’d say the causal effect is 3 years of life per $100 billion per year in education. One could draw that relationship as a line through the data, as shown just below, where the horizontal axis is education and the vertical axis is life expectancy.
Here’s what “explaining variation” means: Life expectancy varies, which you can see in the blue dots in the chart above. Though life expectancy in this hypothetical example is caused by only education and housing investments, it’s correlated with lots of other things, including where people live. We can readily observe geographic variation in life expectancy. People in, say, Minnesota live longer than those in, say, Kentucky. (This is true!)
Investment in education varies geographically too. (Again, look at the blue dots, imagining each represents a different zip code, for example.) And, though it’s causally related to life expectancy, the relationship isn’t perfect. That’s why there are dots in the chart that don’t fall on the line.
How close are those blue dots to falling on the line? That’s what “explaining variation” tells you. The more they fall along a line, the more variation in life expectancy is explained by educational spending. Notice this has nothing to do with the causal effect (the slope of the line).
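The independence of these two quantities is easy to demonstrate with a toy simulation. The sketch below (all numbers are made up for illustration, not taken from the post's charts) fits a line to two datasets that share the same true slope but have different amounts of scatter: the causal effect estimate comes out roughly the same for both, while the variation explained (R²) differs sharply.

```python
# Toy illustration: the slope of a fitted line (the causal effect, in this
# pretend world) and the share of variation it explains (R^2) are separate
# quantities. Two datasets with the same slope can have very different R^2.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)  # e.g., education spending by zip code

def fit(x, y):
    """Return the least-squares slope and R^2 of y regressed on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, 1 - resid.var() / y.var()

# Same true slope (3 years per unit of spending), different noise levels.
y_tight = 3.0 * x + rng.normal(0, 1, x.size)   # dots hug the line
y_noisy = 3.0 * x + rng.normal(0, 10, x.size)  # dots scatter widely

s_tight, r2_tight = fit(x, y_tight)
s_noisy, r2_noisy = fit(x, y_noisy)
print(f"tight scatter: slope = {s_tight:.2f}, R^2 = {r2_tight:.2f}")
print(f"wide scatter:  slope = {s_noisy:.2f}, R^2 = {r2_noisy:.2f}")
```

Both fits recover a slope near 3, but the noisy dataset's R² is far lower: same causal effect, much less variation explained.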
Which matters more, explaining variation or causal effect? Perhaps there are important questions answered by how much variation is explained by different factors, but it’s not easy to think of them.
More commonly, we care about how long people live and helping them live longer. Just as an example, suppose our goal is to raise the life expectancy of people who live in zip codes with the shortest life spans — boosting the bottom. Should we invest more in education or housing?
Let’s look at the data. Suppose it looks like the chart below, where the vertical axis is, again, life expectancy. The horizontal axis is investment in (dollars spent on) education in the right panel and housing in the left. The panels are drawn to the same scale.
Because the slope of the line on the right is steeper than the one on the left (specifically, about three times steeper), we can increase longevity more, on average, by investing a dollar in education than in housing (specifically, three times more).
However, notice that the dots in the left-hand panel are closer to the line than those on the right. Housing investment explains more of the variation in life expectancy than education investment. (You can’t easily tell from the chart, but it’s over four times more.) If we thought that we should invest more in the factor that explained more variation, then we’d invest in housing, not education. That would be a mistake, for this hypothetical example.
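The scenario in the two panels can be reproduced as a small simulation. The specific slopes and noise levels below are assumptions chosen to match the post's description (education's causal effect about three times housing's, housing explaining over four times the variation); they are not drawn from any real data.

```python
# Sketch of the two-panel hypothetical: education has the larger causal
# effect (steeper slope), yet housing explains far more of the variation
# in life expectancy (higher R^2), because its dots sit closer to the line.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
spend = rng.uniform(0, 10, n)  # spending per zip code, same range for both

def fit(x, y):
    """Return the least-squares slope and R^2 of y regressed on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, 1 - resid.var() / y.var()

# Assumed true slopes: education 3x housing. Noise levels are chosen so
# housing's tighter fit explains much more of the variation.
life_vs_edu = 3.0 * spend + rng.normal(0, 20, n)      # right panel
life_vs_housing = 1.0 * spend + rng.normal(0, 0.5, n)  # left panel

s_edu, r2_edu = fit(spend, life_vs_edu)
s_housing, r2_housing = fit(spend, life_vs_housing)
print(f"education: slope = {s_edu:.2f}, R^2 = {r2_edu:.2f}")
print(f"housing:   slope = {s_housing:.2f}, R^2 = {r2_housing:.2f}")
```

Ranking the two investments by R² would point to housing; ranking by causal effect points to education. In this constructed example, only the second ranking tells you where a dollar buys more longevity.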
The bottom line: It’s a lot easier to estimate how much various factors explain variation in health than it is to estimate causal relationships. There is a temptation to substitute the former for the latter when considering policy applications. That can lead to the wrong conclusions if factors that explain more variation are those with less causal influence (and vice versa).