Leaving behind values disguised as science
On sharpening the distinction between our values and our data.
In my book, Healthier, I proposed that the work of public health is best served not just by the generation of data but also by the promotion of values, and that positive change happens at the intersection of science and values. We shape a healthier world by building a base of knowledge while working within the broader culture to advance the values of public health. This means working to build collective engagement with the foundational determinants of health—the social, political, commercial, environmental, and technological forces that shape the health of populations—with special care for how these forces affect the marginalized and vulnerable. These are our values, the first principles of our field. Shaping a healthier world is as much the work of these values as it is of our data.
I have long seen this as a reasonable paradigm for the work of public health. We are a field that is grounded in the pursuit of data that informs population health science. This pursuit is characterized by a consequentialist focus. We are concerned with taking actions and pursuing research that maximize our capacity to bring about positive change. Values are inherent in this commitment; they help guide our efforts so that, of all possible options, we choose to take steps that do the most good for health. It was with this in mind that I have written of public health’s responsibility to help shift the Overton window—a term that has now become widely used—as a way of shifting values. For population health science to be most consequential, it needs to happen in a context where the values of the public align with those of public health. For this reason, public health must work, always, to shape values as well as to advance science. All this I continue to think is right, a correct framework for public health.
However, I realize now that perhaps I missed a key thought in this formulation. In order for this balance of science and values to work, we need to have credibility in both spheres. This means drawing a clear line between the generation of science and the promotion of our values. We need our science to reflect a process of reason and analysis, to be as free as possible from the biases and priors that can sway our conclusions away from the findings of pure empiricism. At the same time, we must remain firm in our values. We should not be afraid of moral clarity in our statements and actions. Yet we should also not be afraid of pursuing science that may lead to conclusions that contradict or complicate our preferred narratives and the values these narratives reflect. It is important to maintain a distinction between our science and our values, even as we remain committed to shaping a field that is founded on both. This distinction helps preserve the integrity of our science and the credibility of our values. When we let this distinction blur, it undermines our science by allowing it to seem biased and untrustworthy, and it warps the capacity of our values to support our work by undercutting the empirical basis of what we do. A biased science is a faulty science and a distrusted science. Public health needs and deserves a stronger foundation than that.
This is particularly true in the present moment, as public health faces a credibility problem. Americans are reporting lower trust in public health in the wake of the pandemic, and about 30 states have passed laws limiting public health authority, moves supported in part by a backlash against public health’s overreach—real and perceived—during the pandemic. This credibility problem should be of central concern to the public health community. It points to a level of distrust that could impede our ability to take actions that promote the public’s health in the future. There are many reasons for this problem. Certainly, one of them has been the willful spread of misinformation about public health and a concerted effort by bad faith actors to discredit our work for political gain. These efforts reflect an attempt to undermine both the science and the values of public health and pose an undeniable problem for our work. It is important to acknowledge this, to be clear-eyed about the challenge it represents. But I have never been comfortable always pointing fingers at “the enemy” as a means of advancing the work of public health. While recognizing that there are those whose intentions run counter to ours, and while being clear about the need to push back against this, I am much more interested in what we do, towards the end of doing better as a field. And one area where we can indeed do better is in shoring up the distinction between our science and our values, to improve our standing with the public and the integrity of our efforts.
It is not hard to see why the line between our science and our values may, at times, blur. Science is driven by values as much as it is by data. The work of science is, fundamentally, the work of people, and people are driven by values, as well as by a range of other biases and assumptions. A little over a decade ago, Pew Research Center published data on the political leanings of scientists, finding 55 percent identified as Democrats, 32 percent identified as Independents, and 6 percent identified as Republicans. While these data are not the most current, they reflect a political bias that is still, I think, largely operative within science. This bias, of course, is not the only way in which science lacks a balance of perspectives. There is also, for example, a significant gender gap in research. According to data from the UNESCO Institute for Statistics, only about 30 percent of researchers worldwide are women. This absence of full diversity within science—diversity of thought as well as of identity—can mean that we do not see when our values have begun to shape our assumptions, to say nothing of our conclusions. Instead, we simply imagine these assumptions to be the empirical truth, with no one from outside our bubble to suggest this might not always be so.
I have noticed this on a personal level. I am occasionally struck in discussions about topics core to health, topics which are far from settled in the public conversation, to find that sometimes in our field we seem to be unaware of the full range of opinion on some matters and unaware that there is even a debate about them outside of our echo chamber. Or, if we are aware of a controversy, we assume that challenges to our way of thinking come from a place of bias, rather than considering that they may reflect good-faith, honestly felt criticism. This is complicated by the fact that the health and rights of many of the marginalized populations that we serve really are under attack at the moment. By refusing to engage in good-faith debate and the difficult conversations that follow, we lose the credibility we need to dismiss true attacks. And, by disengaging, we risk ensuring that those who wish to attack us will have at their disposal, mixed in with their bad faith, solid data and lines of legitimate critique that we have chosen not to see.
Why do we sometimes struggle to maintain a distinction between our science and our values? To say this is entirely due to progressive bias in public health strikes me as too pat. Looking deeper, it is possible to see how this challenge is shaped by the nature of our science itself. Public health is often concerned with the social sciences. In engaging with the social sciences, we can find ourselves doing work that is fraught on the grounds of values—where what we believe as a field pushes us in a particular policy direction, but where we still must practice empirical, open-to-all-possible-conclusions analysis.
This tension is clearly present in epidemiology, my core area of scientific inquiry. Many branches of epidemiology are based on the study of diseases (e.g., cancer epidemiology, infectious disease epidemiology). The central aim of this study area is to find the causes of these diseases, so we can intervene to support health. This focus has contributed to the understanding, supported by a weight of empirical evidence, that the root causes of disease do not stop at the biological processes from which sickness emerges. If we are committed to following the data wherever they lead, they inevitably take us to the social conditions that create a context for disease to take hold. This has informed the emergence of other branches of epidemiology—social epidemiology, lifecourse epidemiology, environmental epidemiology, etc.—which focus on how these conditions shape health. Just as data from “classic” epidemiology point to the importance of engaging with these conditions, the data generated by these new fields point to the need for certain policies and political priorities as the key to ameliorating poor health.
Those who work in these fields can hardly be expected to engage in generating data while remaining completely neutral about what the data say. A consequentialist epidemiology—a consequentialist public health—is one which acts on its knowledge, to shape a healthier world. In this sense, the very act of pursuing our science, of learning more about the foundations of health, is a spur to values-based action. This can create challenges when the public sees us seeming to step over the line between science and values. This challenge is compounded, perhaps, by the fact that findings are sometimes nonreplicable in the areas where population health science operates, leaving the results of such work without a key safeguard against charges that a given conclusion may have been nothing more than the product of a small group of biased researchers.
To my mind, this leaves us in a place where we have an extra responsibility to be aware of our biases and to make sure our science is conducted as dispassionately as possible. As I wrote last week, fulfilling the potential of science means understanding its limits and proceeding, always, with care. As much as we might aim to separate our science from our values, they will always be to some extent intertwined. Good scientists are aware of their own priors and work to minimize the influence of preconceived notions, but the presence of bias and values-based assumptions can never be fully removed as long as science is practiced by humans with all their complicated, often unconscious, motivations. I do not subscribe to the idea that scientists set out to prove points, shaping their research to fit the conclusions they wish to reach. This may happen in a minority of cases which are then amplified by our critics, but, in the main, the vast majority of scientists try to do good, unbiased work. To do such work, particularly in values-laden areas, we need enormous dispassion: the courage to publish results that do not support our hypotheses, journals willing to publish null findings, and scientists willing to publish unpopular data that go against their priors.
Do we currently do this? I am not sure we do. As we reexamine the core philosophical underpinnings of our field, it is important to include an honest look at the interplay of science and values, the need for a balance between them, and the consequences of failing to strike it. Consider, for example, the potential role of nuclear power in addressing one of the key threats to public health—climate change. Many climate advocates have expressed reservations about nuclear power, even as the data suggest these reservations should not outweigh the potential benefits of this technology. Opposition to nuclear power has been so entangled with the values of many who advocate for climate action that it has been difficult to engage with the science with dispassion, though this may be beginning to change. Are we, as a field, capable of having conversations about key issues that advance such change by balancing data and values? Our values point towards the better, healthier world we wish to see, but getting there requires us to proceed one careful step at a time. We must be scrupulously aware of our biases, reckon with them, and conduct science that stands up to the scrutiny of all who engage with it.
__ __ __
Also this week.
As part of our Public Health Conversation Starter series, I recently had the opportunity to speak with Marion Nestle about her latest book, Slow Cooked: An Unexpected Life in Food Politics.
Thank you for a very interesting read, as always. I do wonder, though, if this discussion often arcs wide of a value-informed scientific open-mindedness to an embrace of so-called 'value-free' science that regards all human values as a form of bias. As a qualitative social scientist myself, I find this position troubling. Science is *always* informed by values, whether they are internal to the researcher (e.g., a commitment to social justice) or imposed externally (e.g., through prioritisation of a particular research focus by funders). I agree that a certain degree of dispassion is necessary to perform empirically rigorous research, but worry that we sometimes slip towards the conflation of values with political (or other) bias. Much to think about!