Cause vs. Consequence

I have, for some time, wanted to respond to Sandro Galea’s essay on a Consequentialist Epidemiology, published in 2013 (!) in the American Journal of Epidemiology. It is hard to argue against the importance of consequentialism for academic epidemiology. It is equally hard to dismiss Galea’s concerns about our lack of influence with funders and policy makers. However, my views beyond this diverge from Professor Galea’s, and in the spirit of his “provocation,” I would like to respectfully offer an alternative perspective.

Galea seems primarily concerned that academic epidemiology is focused too much on identifying causes of disease, and not enough on actual intervention. He further worries that “our focus on causal rather than pragmatic thinking” risks marginalizing us with funders and policy makers. His solution to this problem is to promote a “consequentialist epidemiology” that is intensely focused on improving health outcomes, even “at the expense of the development of epidemiological methods and novel approaches.”

Setting up the need for a consequentialist epidemiology, Galea points to key epidemiological textbooks and journals as evidence that we are too focused on etiology and not focused enough on intervention. The former, he observes, don’t teach students how to intervene, and the latter are too focused on etiological research and publish relatively few papers about actual interventions. Though his observations are surely accurate, I think they reflect too narrow a view of academic epidemiology.

We train in schools of public health, where we are expected to study a range of topics that often includes intervention design and evaluation. We typically work in the same, or similarly diverse, institutions, where our expertise contributes to a variety of public health research programs and is published widely. The interventions we contribute to have a better chance of appearing in “high impact” journals, while our etiological research and innovative methods are appropriately showcased for our peers in the “core epidemiological” journals. It is thus no surprise that “our” journals are focused on method, and in no way does this reflect a lack of consequentialism.

All of that being said, of course we are preoccupied with etiology and cause! Definitive answers to causal questions are rare, but if we want to intelligently inform intervention efforts, we must continue to pursue them, even in observational data, and even in the face of heroic assumptions. The only alternative is to say nothing at all. We are not content to simply describe the world we live in. We want to overcome these barriers to causal inference, as best we can. Consequently, epidemiologists have built tools for causal inference that are improving applied research, even in “gold-standard” RCTs. Causal thinking is thus not a distraction from pragmatic thinking; it is a consequence of it.

Cause is our niche. Successful public health interventions are born of a long, complex process. Different expertise is required at each step, and no single person, or academic discipline, can master every step. Thus, if an academic discipline is going to contribute to this process, it must occupy a useful space. The application of causal thinking is ours.

From this perspective, I am troubled that we would sacrifice the continued development of epidemiological methods, especially those aimed at improving our causal inferences. I believe that any sacrifice of methodological rigour and innovation will do nothing but hasten us down the path to irrelevance. The solutions to our problems lie in more causal thinking, not less.

Galea provides examples of the kind of intervention research he would like to see published more often in our core journals. I believe it is facile to say that we should engage more with interventions without clarifying our contribution. At a glance, we can see that Galea’s examples ask fundamentally causal questions; even though these papers are about interventions, the questions being asked are about relationships in observational data. A better goal, then, would be to bring our expertise to intervention research or, if needed, to show interventionists what we bring to the table. Any sacrifice of our methods would therefore be counterproductive.

I also fail to see how reducing methodological rigour and innovation will cure external dissatisfaction with “epidemiologic description and correlation”. Most of this dissatisfaction stems from the never-ending reporting of weak, often conflicting associations for every exposure/disease dyad under the sun (see the Hypothesis Generating Machine). But this situation isn’t the result of an obsession with cause, but rather the opposite (combined with perverse incentives to publish and poor application of tools for statistical inference). Surely we should instead respond to our critics by continuing to improve the way we make inferences about correlations, and by applying those methods more consistently as a profession. This, of course, requires more causal thinking, not less.

I also strongly disagree that the lack of “big wins” is an indictment of causal thinking. I offer a more likely explanation: we have already picked the low-hanging fruit. All the big risk ratios have been found, simply because they were the easiest to find. Now we are looking for needles in haystacks, and our necessary search for small effect sizes means that we should be striving for more methodological rigour, not less. If funders walk away because we haven’t found the next tobacco-sized health risk, to whom will they turn? And how long will it take them to return once they realize that poorer methodological rigour leads to even more ghost chases?

If funders and policy makers do not appreciate our contributions, perhaps we must instead strive to help them appreciate what it is that we do. The field of economics doesn’t seem to have produced any big wins of late, but economists undoubtedly have more sway with policy makers. Perhaps this is simply because more people know, or at least think they know, what an economist is. Conversely, my own mother still suspects that I am “some kind of skin doctor”. This is our problem.

Finally, I agree that we must change the way we train epidemiologists, but what does an outcome-oriented epidemiology training program look like? Which outcomes will programs focus on? How will these decisions be made? The beauty of training students in “our methods as tools” is that these tools can be applied to a variety of problems. I would argue instead for a greater emphasis on causal thinking at all levels of epidemiological training.

To this point I have argued that epidemiology in practice is already consequentialist in nature, and that our embrace of causal thinking and continued investment in methodological innovation reflect this. Though I agree with Professor Galea that our field faces important challenges, I see no solution in the dulling of our expertise. Quite the opposite. So in epidemiology, just as in life, there is no consequence without cause.

[First draft, Feb 06, 2017]