The drawing-the-line problem
How can science reach consensus and take action in a time of doubt?
The line between what is known and unknown is becoming increasingly blurred. Information today flows freely and chaotically. In fragmented media and scientific environments, it is becoming harder than ever to determine which facts or expert views constitute “settled” knowledge and which remain open for debate. This is made even more complicated by a variety of actors willfully injecting doubt into the knowledge ecosystem, labeling ideas that are not politically palatable as “fake news” or “unscientific.” While the idea that knowledge is contested is not new — consider, for example, early debates about Darwin’s theory of evolution — there is little question that this problem has become more challenging in recent years. Historically, so-called gatekeepers helped define consensus reality, but the rise of more democratized means of communication has meant that anyone can publish a claim or counterclaim — creating a sense that everything is contested or uncertain.
This brings us to the core problem — that of “drawing the line,” deciding what we accept as true or certain, in an era of eroded trust and proliferating perspectives. How can we know where to stand when the ground of knowledge keeps shifting? And how do we communicate this to the public? Fundamental to my asking these questions is a recognition that if we draw lines of consensus too soon, we risk premature closure and the exclusion of nuance; if we wait too long, we risk descending into chaos, endless doubt, and indecision. It is worth, then, taking a moment to determine where and how we should draw a line between knowledge that is still in flux and knowledge that we might deem settled — or, at least, as near settled as it is possible for knowledge to be, admitting space for reevaluation of priors and an open-minded consideration of all we think we know.
Let us start with where we used to be. In the not-so-distant past, gatekeepers (journal editors, mainstream news outlets) played an outsized role in shaping public knowledge and scientific consensus. These gatekeepers essentially “drew the lines” of credible information — determining which findings were published and which stories made the news. The result was a more unified consensus narrative, as most people got their facts from the same curated sources. There was a trust in authority: If an expert or major publication said something, it was largely accepted as true. This created a common baseline of agreed-upon facts. There were, of course, problems with this system. Legitimate new ideas sometimes struggled to get past the gatekeepers, and minority viewpoints could be marginalized. This was the case centuries ago, when scientists such as Galileo faced resistance from civil and religious authorities, and it has been the case more recently in, for example, the underrepresentation of voices from the Global South within the research world.
Yet, the gatekeeper era provided a kind of order — a clearer line between established knowledge and fringe speculation. The rise of the internet and social media shattered this old information order. Traditional gatekeepers lost their monopoly on truth, and anyone can now broadcast ideas, whether those ideas are based on rigorous analysis or on cherry-picked facts. There is much that is good about this new order — heterodox ideas now truly do have a chance to emerge where previously they may have been suppressed by an erroneous consensus. But there are real costs. Algorithm-driven news feeds create echo chambers; platforms learn our preferences and feed us content we are likely to agree with. We each end up in our own information bubble, seeing mostly what confirms our views. This has led to a fragmentation of our shared reality. Each of us has access to seemingly infinite information, but this has had the perhaps paradoxical effect of enabling us to live in parallel realities, realities shaped by our feeds. In this environment, alternative narratives have thrived. Conspiracy theories, fringe scientific claims, and contrarian opinions can find large audiences and dedicated believers online. Instead of one consensus, we have competing “truths,” each reinforced by like-minded communities. Alarmingly, this has created space for the rise to power of many who have made careers out of such contrarian approaches, creating even further opportunity for the public propagation of untruths and distorted facts.
Paralleling this phenomenon, and certainly not unrelated to it, has been an erosion of trust in experts and institutions. As conflicting voices have grown louder, populations have grown more skeptical of the traditional authorities who once drew the lines of knowledge. High-profile missteps and reversals by experts (for example, shifting public health guidelines during COVID-19) have further chipped away at credibility. As “experts” came to be seen as inconsistent or opaque, trust in expertise dipped. Public trust in scientists and medical experts fell after the initial pandemic period; in 2023, for example, trust in scientists was 14 percentage points lower than it was at the early stages of the COVID-19 pandemic. Similar drops in trust have been seen in government and media. This has led to unprecedented challenges to expertise, meaning that even well-established facts can be doubted by a significant portion of the population. If we cannot trust the referees of information, there can be much less acceptance of where lines are drawn. In such an environment, achieving consensus on anything becomes a steep uphill climb.
Here lies the core dilemma, then: When should we draw the line on a scientific debate and declare what we (provisionally) know as, well, known? How do we do this in a way that increases, rather than harms, trust? In some respects, our challenge is daunting. Premature closure, arriving at conclusions too soon, can be dangerous. If we insist that “the debate is settled” before all the evidence is in, we may overlook valid nuance or alternative explanations. Leaning into a tidy consensus while ignoring uncertainties can backfire. As Matthew Silk notes, “Emphasizing consensus at the expense of considered disagreement and uncertainty comes with risks,” potentially undermining credibility in the long run. This has been very much a live concern in recent years. Early in the COVID-19 pandemic, public health officials drew a firm line against mask-wearing — the U.S. Surgeon General even encouraged people to stop buying masks, saying they were “NOT effective” for the public. Weeks later, as more evidence accumulated, the guidance flipped, and the CDC urged cloth face coverings. Such reversals, born reasonably enough from evolving knowledge, unfortunately eroded public confidence. But the converse can be equally problematic. Excessive delay can breed disorder and paralysis. If we refuse to draw any conclusions until absolute certainty is achieved, we can get policy gridlock and endless argument. Decades of debate about climate change — much of it politically motivated — have stalled progress despite substantial data about a warming world. Similarly, we have decades of scientific consensus about — and millions of saved lives to show for — the efficacy of childhood vaccination against common infectious diseases. Suggesting that we are not yet ready to draw the line on knowing that the Earth is warming, or that vaccines save lives, causes needless and immeasurable harm.
This all suggests the importance of finding a balance: drawing lines of consensus based on the best available evidence in the moment, while still leaving room to update those lines as new information emerges. This requires the humility to say, “This is what we currently hold to be true,” rather than, “This will be true forever.” How do we do this? I suggest three ways.
First, by embracing openness in decision-making. Greater transparency about how experts and leaders draw lines of consensus may help the public understand this process and forgive its occasional shortcomings. There is a trope — somewhat outdated to my mind — that people need to hear pat, simple answers to understand and internalize our guidance. I disagree. I have long thought that transparency about the science-to-policy process — the debates, the weighing of evidence, the uncertainties considered — is more likely to result in the public trusting its outcome. Opening up the decision-making process demystifies it, showing that lines are drawn based on careful thought, not arrogance or hidden agendas.
Second, we should embrace epistemological humility as an underlying principle in all we do. This means acknowledging that our current understanding of the world is provisional. In practice, experts and policymakers could reflect this humility by communicating with phrases such as, “The balance of evidence suggests …” or, “We believe X is true for now, but we could learn more later that might change our minds.” By admitting the possibility of error, we can build credibility, signaling that consensus is not immutable dogma. This humility invites ongoing inquiry and signals respect for nuance, making it easier to adjust the lines we draw, if needed, without losing face.
Third, we should act, always, with a willingness to course correct. This means growing comfortable with making decisions without perfect certainty and being honest when we are doing so. This approach was pithily captured in a line often attributed to John Maynard Keynes, “When the facts change, I change my mind — what do you do, sir?” The key, therefore, might be to act under uncertainty while remaining ready to adapt. We should be drawing tentative lines, implementing policies or taking stands based on the best evidence, while being willing to redraw these lines as new information emerges. Ideally, this can help us avoid both the paralysis of inaction and the dogmatic limitations of unyielding certainty. Even more, this can help create a culture that sees changing one’s mind not as a weakness or a “flip-flop,” but as a logical response to learning.
Navigating the tension of the drawing-the-line problem will require a combination of honest transparency, humble acknowledgment of what we don’t know, and a willingness to act while remaining flexible as data change. Caution and attention to when and how we draw lines can, and should, create a culture where we can indeed draw lines, but lines we remain open to revisiting. Finding this space between the extremes of premature closure and perpetual indecision would be a step, even if small, toward rebuilding trust in science and public health.
__ __ __
Also this week
Thoughts in JAMA Health Forum on making US healthcare great and affordable.
Spoke with Dr. Philip Payne, WashU Medicine Chief Data Scientist and Director of the Institute for Informatics, Data Science, and Biostatistics, on the Gateway to Informatics podcast.
__ __ __
Goldfish takes a break
With this Goldfish, I will pause these regular essays for the next two months. It is summer, and I hope to take some time away to recharge. I always find that taking a bit of time away from the actual writing and doing helps me sharpen my thinking, hopefully to be better at the writing and doing on return. Additionally, I am focusing my writing time on putting the finishing touches on my next book, “Why health? What we need to think about when we think about health,” which should be forthcoming from Oxford University Press in 2026. I will send out thoughts if the mood strikes, or if events so suggest, but otherwise I shall return toward the end of summer. I hope everyone has a moment to recharge in the coming couple of months.