The right amount of risk
On shaping a better engagement with risk guided by a practical philosophy of health.
I have been thinking a lot lately about risk and its role in our understanding of health. Risk, by definition, is our estimate of the likelihood of a given outcome, that outcome being, in the case of health, sickness or injury. Most of us have an intuitive understanding of the link between health and risk. We regularly make choices about our health that involve our perception of what is risky and what is not, such as choosing not to drive when it is icy, avoiding certain food brands when a recall is announced, and wearing sunscreen to the beach. We also factor in risk when making choices with an eye towards the long term, such as living in a safe neighborhood, keeping our annual doctor’s appointments, and embracing regular exercise. We make these choices based on a calculation of risk. We think about the risk of taking or not taking certain actions, and we decide to do what we think will keep us safe and healthy.
In our work, risk is at the heart of the choices we make about how best to support the health of populations. We are constantly weighing how we can best reduce the risk of disease and harm. During COVID, risk was central to the debates we had and the actions we took. In the face of a novel pathogen, we worked to minimize risk through measures we hoped would keep us safe. And this, as we now well know, led to substantial public discussion and debate about what risks we were, or were not, willing to take. Which brings me, in part, to musing today about risk.
In considering our, at times, rocky engagement with the notion of risk, it seems to me that there are two key elements of risk about which our thinking is not always clear. First, there is the science of risk, the data that reflect the danger posed by certain hazards to the health of populations. Second, there are the consequences of the decisions we make about risk, consequences we weigh in ways fundamentally shaped by our values. I have addressed these elements before, focusing last week on science and values. So, today, a consideration of risk, emphasizing these two elements, as a component of shaping a post-war vision of health.
We begin with the science of risk. The science of risk reflects what risk actually is—what the data say about the likelihood of a given harm. We need to be clear about these data, so that our thinking about risk is based on an accurate understanding of what poses a threat to the health of populations and what does not. The challenge to this is that we frequently misunderstand risk. In particular, we have a hard time gauging risk across various potential causes of harm, often overestimating dangers that are, in reality, highly unlikely to harm us, while underestimating—or, at least, rarely thinking about—dangers that genuinely pose a threat. For example, flying is by far the safest form of travel. The odds of dying in a plane crash are about one in 11 million. Meanwhile, the odds of dying in a motor-vehicle crash are about one in 93. Yet each day millions of people get into their cars without a second thought, while about one in three Americans are anxious about flying. Or consider the odds of dying from heart disease and cancer—one in six and one in seven, respectively. These odds reflect a risk far greater than the risk of death from other hazards, including death by drowning (one in 1,006), death from fire or smoke (one in 1,287), death from electrocution, radiation, extreme temperatures, and pressure (one in 13,176), death from a cataclysmic storm (one in 20,098), death from dog attack (one in 53,843), or death from lightning (according to the National Safety Council, there were too few deaths from this in 2021 to calculate the odds). Yet while most people do indeed fear cancer and heart disease, many fear these other, far less likely hazards just as much, if not more. I have written previously about how feelings shape our decision-making process, often crowding out our more rational considerations. When weighing risks, how we feel can matter as much as what we know, and sometimes more. We may know that we are far likelier to die from cancer or heart disease than from our personal phobias, but fear can sway our thinking and cause us to make choices based on an inaccurate perception of risk.
Just as we are liable, as individuals, to misunderstand risk, we are also, as a society, vulnerable to forms of thinking that distort our collective processing of risk. In his book, Getting Risk Right, Geoffrey Kabat writes of the many factors that influence how societies perceive risk. One of these factors is the phenomenon of availability cascades. Originally described by Timur Kuran and Cass Sunstein, an availability cascade is a process whereby a concept or perception gains ground in the public mind by seeming plausible, even when it might not hold up to deeper scrutiny, and by emerging in a social and technological context that is primed to help it spread. Kabat gives the example of the perceived link between cell phones and brain cancer. Cell phones emit a kind of radiation, radiation can shape cancer risk, we hold cell phones to our heads, thus it makes sense that cell phones might cause brain cancer. Such an idea catches on, aided by technology and “availability entrepreneurs”—figures willing to advance the spread of the idea for personal gain. It may be that the jury is, in fact, still out on a given risk, or that there is indeed an established risk, but it is small. In the case of cell phones and brain cancer, no definitive risk has been found, though the question remains, to an extent, open, as the technology is still relatively new. Such facts, however, are often overwhelmed by the forces that contribute to an availability cascade, warping our perception of risk.
Our relationship with risk is further complicated by the fact that risk is not stable. It varies across people, places, and behaviors, and these variations are reflected in the difference between absolute risk and relative risk. Absolute risk is the risk of something happening, such as the risk of developing a disease. Relative risk is a comparison of risk between two different groups, such as the risk of disease in a population that receives a vaccine versus the risk in a population that does not. For example, imagine that the risk of catching disease X is 10 out of 100 in vaccinated people, and that the risk is 25 percent higher in unvaccinated people. This relative risk increase makes the absolute risk for unvaccinated people 12.5 out of 100. Confusion about relative and absolute risk can inform broader misunderstandings about risk in general. If a study finds that doing X will, say, double the risk of dying from a shark attack, this report of relative risk sounds frightening. On the basis of such a report, the public might support banning whatever it is that seems to double the risk of such a death. Yet the absolute risk of death from a shark attack is already incredibly low, at about four unprovoked shark-related deaths globally per year. Even doubled, that absolute risk remains very, very low. Without the context of absolute risk, however, a headline that says “Doing X was found to double the risk of death by shark” becomes the sort of communication that can support an imprecise understanding of risk.
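To make this arithmetic concrete, here is a minimal worked sketch of the two examples above. The shorthand AR (absolute risk) and RRI (relative risk increase) is added here for clarity, and the vaccine numbers are the illustrative ones from the text, not real data:

```latex
% A 25 percent relative increase scales the baseline absolute risk:
\[
\mathrm{AR}_{\mathrm{unvaccinated}}
  = \mathrm{AR}_{\mathrm{vaccinated}} \times (1 + \mathrm{RRI})
  = \frac{10}{100} \times (1 + 0.25)
  = \frac{12.5}{100}
\]
% The same logic defuses the shark headline: doubling a tiny
% baseline leaves a tiny risk.
\[
\text{about } 4 \text{ deaths per year worldwide} \times 2
  = \text{about } 8 \text{ deaths per year worldwide}
\]
```

The point of the sketch is simply that a relative change means little until it is anchored to the absolute baseline it modifies.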
All this speaks to the importance of ensuring our engagement with risk is based on a solid understanding of the data. We in public health must be clear-eyed about what genuinely poses a threat to the health of populations, looking at risk dispassionately, guided by our science.
Then there is the second element of our engagement with risk: our understanding of the consequences of the choices we make about risk. Because risk itself is simply data. What we do with these data is shaped by how we weigh what will happen if we do or do not take steps to mitigate risk. Consequence is shaped by two factors. First, by the support structures we build to mitigate risk. Imagine two cities of the same size, both struck by a magnitude 8.0 earthquake. In one city, the death toll is 5,000. In the other city, it is 20,000. Each city was built on the same fault line. Why the difference in death toll? The reason is that the first city had building codes shaped with earthquake risk in mind, to ensure the city was as prepared as possible for the worst-case scenario. The second city made no such preparations and suffered the consequences. Our willingness to make such preparations in the face of risk can prove decisive. This willingness, however, is not a matter of data alone. Why, for example, did one city prepare while the other did not? Perhaps the city that did not prepare experienced terrible poverty and would have had to divert funds from programs for people living with homelessness to build the necessary earthquake-resistant infrastructure. Perhaps it was a city that valued beauty and felt earthquake-resistant buildings would be too ugly. Both cities understood there was a risk, but each city had different values which shaped its priorities in the face of this risk. For one city, the risk of a terrible earthquake striking sometime in the century was enough to motivate action, and it had the means to act. For the other, the lens of values meant its citizens saw the situation differently.
This reflects the second factor that shapes consequence: values. Different people can encounter the same risk and take different actions, because ultimately it is our values that determine what we do with data. Imagine, for example, deciding whether to jump off a cliff into a deep pool of clear water. There is a risk of striking the water at the wrong angle and incurring serious harm, or even drowning. But there is also the joy of jumping, of having the kind of experience that makes life meaningful. To jump, then, or not to jump? How we decide depends on how we weigh the potential consequences against the potential benefits. This calculus depends on both the nature of the risk (the odds of harm) and our values. How much do we value the experience of jumping when the drop is steep, the water cool and blue? What about when we are considering jumping into a foot of shallow water, where we may slip and break our neck without any of the upside of a dramatic and enjoyable leap into the sea? Would we still value the experience enough to take the risk? It is not just the odds that matter. It is about how we feel, what we value.
This brings us to the heart of our engagement with risk, which is also the central question of this newsletter, one which animates all we do in public health: what is health for? I have always believed the answer to be that health is for enabling us to live rich, full lives. As individuals, most of us recognize that rich, full lives always entail some measure of risk. To risk nothing is to do nothing, to love nothing, to believe nothing. It has also been observed by many that to risk nothing is, in many ways, the biggest risk of all—the risk of looking back on a life that was not fully lived. As individuals, we grasp this. As a field, however, we do not always apply this insight to our engagement with policy. Instead, public health has tended to engage in binary conversations about whether to adopt broad policies geared towards the elimination rather than the mitigation of risk. How we philosophically understand health, then, shapes how we view the consequences of risk. We need to deepen this understanding, to view risk through the lens of the philosophical foundations of our field, the heart of why we do what we do.
__ __ __
Also this week.
Our recent Health Equity article underscores that addressing the foundational causes of ill health, including inequity in resources, power, and opportunity within and between countries, is fundamental to the pursuit of global health equity. Thank you so much to my coauthors Nason Maani, Salma Abdalla, Catherine Ettman, Lily Parsey, Emma Rhule, and Pascale Allotey.