When our biases get the better of us
Reckoning with the often-invisible biases that shape our work.
The work of creating the conditions that generate health is, at its best, about the pursuit of truth. To improve the health of populations we must engage with the truth about the world in which we live—the reality of the systems and structures that generate health. Just as we would not build an aircraft based on distorted design specifications, the work of public health needs to be grounded in foundations that reflect reality. Yet, if we are honest with ourselves, it is hard to escape the conclusion that we sometimes fall short of this ideal. We are human, and, as humans, our relationship with the truth is refracted through how we see the world. And we do not always see the world the way it is. Much of the time we see the world through our own lens, and that lens can be colored by biases. Perhaps my favorite definition of bias is “a preference or an inclination, especially one that inhibits impartial judgment.” That seems about right. Bias is our own take on things, and that take inhibits us from seeing the world as it is. This is true in our daily lives, in all we do, and, for today’s reflection, in our science.
In recent years, there has been substantial discussion of bias in science, and of the conflicts it can elicit. Much of this discussion has centered on one of the most fundamental biases to which we are subject: the bias towards financial rewards. This is evident in the literature on the commercial determinants of health, which examines the intersection of public health and industry. It is there that we see tobacco companies bankrolling their own smoking-friendly science; it is there that we see bought-and-paid-for climate change skepticism. There is no question that money can influence science, and that it is among the clearest forms of bias we can encounter in evaluating our field.
But for all its influence, financial bias has always seemed to me to be a relatively small part of the larger issue of bias. More central are the biases that are harder to see, the ones we may not even notice, as the proverbial fish does not notice the water in which it swims. These include the biases to which we are susceptible as individuals and the systemic biases that are embedded in the structures that support our work. These biases can cause us to make choices which, collectively, can steer our priorities away from the pursuit of truth and our work away from full effectiveness. I have written previously in this newsletter about the challenge of bias. The next two essays will revisit this challenge, addressing first how we, as individuals, can be biased in our work. Then, next week, we will consider the structural biases that influence what we do.
For ease of categorization, I will here consider five ways that we are susceptible to bias, as relevant to the work of shaping a new science for health in this post-war moment.
First, we can be biased towards what we think will best serve our career interests. The work of science depends, in large part, on having the professional status that comes with a strong publication record, institutional support, a good reputation among fellow scientists, and the capacity to attract funding. Accessing these resources means sometimes making choices with an eye towards advancement. This may mean a bias towards publishing positive findings in the hope that they will attract the kind of attention that supports a successful career. Or perhaps we will find ourselves choosing what we try to publish, or even whole research areas, based on what we think funders will like, or on the editorial preferences of certain journals. It is difficult to work in science without engaging with the reality that a bias towards career interest is a ubiquitous factor in what we do. It is arguably even more so in the age of social media, in which a visible online presence can be a tool for professional advancement, adding fuel to the pursuit of career interest. Social media has done much good in helping us communicate science to the public. But it has also incentivized performative outrage and moral grandstanding, in which scientists can find themselves making statements and taking positions with an eye towards ladder-climbing and profile-building rather than the shaping of a healthier world.
Second, we are subject to ideological biases. Our ideological priors are the lens through which we see and process the world around us. It can be difficult to look beyond our ideological biases, to engage with ideas and data that may fall outside the bounds of our worldview. If, for example, we are committed to an anti-capitalist perspective, it may be harder to engage with data that suggest that markets can play a positive role in generating population health. If we are on the political right, we can be biased against data which seem to support a left-wing perspective, and vice versa. If we are not religious, it may be hard for us to engage dispassionately with data suggesting that aspects of a religious lifestyle can support health. The challenge of ideological bias was clear during the pandemic, as the conversation around the science of addressing COVID became sharply polarized. In some ways, by heightening the issue of ideological bias, the pandemic helped bring it to the surface in a way that now allows us to address it in this post-war moment.
Third, our work can be swayed by network-based biases. Consider the area of epidemiology, where there are a range of subfields, including social epidemiology, nutritional epidemiology, and life-course epidemiology. Within such networks, there can be a strong bias towards research which reinforces certain approaches and priors. If, for example, one’s work is situated within a network of social epidemiologists, it makes sense to continue pursuing work which supports the importance of social factors. There is much less incentive to produce work that shows that social factors may not matter in particular cases. An added incentive not to stray from the thinking of one’s network is our simple human desire to be liked. It is uncomfortable to be the odd person out, to take a position that runs counter to the prevailing view. This helps sustain network bias, shaping an incentive structure that can keep our science siloed.
Fourth, our science can be subject to identity bias. We all see the world from our own unique perspective. It is a perspective shaped by our experiences, by the circumstances of our lives. Each person’s life is different; what is central to the experience of one person could be entirely outside the experience of another. This can constitute a form of bias. We make choices based on what we know, leaning heavily on our life experiences and those of people like us. At its most pernicious level, this bias can lead to seeing the world through racist or sexist lenses, where we privilege perspectives that advance only those who share core personal identities with us. But our identities are many and extend well beyond core demographics. For example, at the start of the COVID pandemic it is possible that we in the academic world were quicker to support working from home and slower to support a return to in-person work because many of us have jobs that are conducive to remote work. Had this not been the case, we might have taken a more nuanced view of the policy. This bias underscores the critical importance of diversity—diversity of identity and of viewpoints—in what we do, something I have written about before. The more diverse our spaces, the more perspectives represented, the better we can see beyond identity bias, to shape an approach that is informed by a full range of experiences and inputs.
Finally, just as we can be biased in favor of our in-group, we can also be biased against those who are not like us. In this polarized moment, this bias has become a defining feature of our culture and politics. There are many who support Donald Trump mainly because he frightens and angers those they do not like. And there are many who reject the premise that anyone who supported Trump could ever have a good idea about anything else, or that anything the former president says could have a grain of truth to it. As much as we may wish our politics to be dominated by sober, fact-based consideration of important issues, it is difficult to deny, after living through the last decade, that bias against the out-group has been a key driver of much of our political behavior. This has been detrimental for our science, for our work to promote the health of the public. Our work depends on what this political climate has crowded out—the reasoned pursuit of data and open lines of communication about ideas. Bias against others threatens this process, at both the individual and collective level.
To be clear, I am not calling attention to these biases because I think they can be fully avoided. As I noted earlier, to be human means to be, to some extent, biased, and this will likely always be so. In some ways, this is good. It is good to value social connections, to be biased towards our friends and colleagues and against those who seem to threaten us. The idea is not to eliminate bias, but to moderate it, to control for it in our pursuit of truth. We can do this as individuals by being more aware of our biases, more self-reflective about what we say and do. Next week, I will address how we can do this at the level of systems, to help our institutions avoid the pitfalls of bias and shape a stronger foundation for the future of public health.
__ __ __
Also this week.
A Dean’s Note reflecting on the lessons of Juneteenth and acknowledging the worst of our past, to shape a better future.