The biases we bake into our systems
Our human tendency towards bias can become entangled with the networks and institutions we create.
On November 30, 2022, OpenAI launched ChatGPT, an artificial intelligence chatbot. Since its launch, ChatGPT has already done much to influence how we work, communicate, and think about the role of artificial intelligence in our lives. The speed at which this influence has been felt calls to mind the early days of the internet as we engage with an emerging, potentially world-changing technology. The speed of AI development has raised concerns about AI safety, including the worry that AI could become too powerful too quickly, generating humanlike consciousness with the power to outpace us, in a scenario echoing many a science fiction film. While AI likely has a long way to go before it reaches this stage, if indeed it ever does, in one respect it may already have shown signs of being all too human. As users have familiarized themselves with ChatGPT, concerns have emerged about what appears to be a political bias in its “thinking.” A recent analysis found ChatGPT to have a “pro-environmental, left-libertarian ideology.” In a commentary for Brookings, Jeremy Baum and John Villasenor wrote of testing ChatGPT by asking it a range of questions about political issues. OpenAI CEO Sam Altman has commented on the issue, saying that ChatGPT has “shortcomings around bias” which the company is “working to improve.” These biases reflect more than just the growing pains of a new technology. They reflect how a tendency towards bias—to which we are all susceptible—can become embedded within the systems we create.
AI may be on the cutting-edge of technological development, but it is still, fundamentally, a system shaped by humans, and, for this reason, vulnerable to bias. As humans, we form networks, communities, and institutions. These systems reflect the characteristics of their creators, including our tendency towards bias. Last week’s Healthiest Goldfish addressed how we, as individuals, can be subject to bias, and suggested how we might correct for this. While bias may be an ineradicable part of human nature, we can moderate our biases by acknowledging them and being self-reflective about their influence. This week, I will discuss how we can apply this approach to improving the systems that support our work. These include health systems, academia, the world of publishing, the media infrastructure through which we communicate to the public, the processes by which we build our networks through recruitment, hiring, and promotion, and the broader ecosystem around health. Some thoughts, then, on how bias can manifest in these systems, and how, by becoming more alert to this, we can shape systems which no longer reflect and amplify our biases. I will focus here on five key areas where biases emerge in our systems.
First, biases can manifest as in-group advantages built into the structure of institutions. While we may try to correct for this, the “default mode” of our institutions is often a bias towards certain groups at the expense of others. For example, we have as a society made progress in making spaces more accessible to persons who live with disability. Yet the bias in the design of most spaces still favors those who do not live with disability. Then there is the collective bias toward literacy, in which those who lack the ability to read well can face disadvantages in a world which makes little room for them. This is often the case in healthcare, a field which still broadly aligns with the assumption of patient literacy. Indeed, physicians’ overestimation of patient literacy may play a role in the persistence of health inequities. And there are other ways our institutions can betray a bias in favor of certain communication styles beyond their favoring of the literate. In many institutions, a bias towards the speaking of a certain language can place some people on the outside looking in. We do not always notice this when we ourselves speak the “default” language, but it can be a persistent issue for those who do not. Even at universities, which generally prioritize diversity and embrace international students, faculty, and staff, there can still be a bias towards a default language, often English, which can create challenges for those who do not speak it. The challenges faced by those who do not speak the default language, or who may not be fully literate, reflect the reality that language is, itself, an institution, a system, from which our biases can exclude many.
This leads to a second area where our systems can become entangled with our biases—the biases in how we communicate our work. The work of promoting health is, in large part, the work of developing and communicating ideas. Central to this work is the network of peer-reviewed journals which serve as incubators for the ideas that support the science of health. Collectively, these journals help to shape what can become the dominant thought within our field—the mainstream view. Ideally, this view would be purely a product of the reasoned pursuit of scientific truth, informed by the results of studies and reasoned debate. While these factors are certainly an influence, it is also true that our biases shape how we communicate our work in our professional spaces, and, by extension, shape the ideas that are developed in these spaces, in several ways. For example, journal editors will often turn to people they know to write commentaries, keeping the range of thought represented in academic publications confined to a relatively small network of professional insiders. For this reason, I have long thought, and have argued in writing, that no one should be invited to write in leading professional journals; instead, all publication decisions should emerge from a process of blinded peer review. The same is true of professional meetings, where we should try to ensure that only blinded, peer-reviewed presentations are accepted. Such efforts are consistent with a focus on promoting diversity within our field by supporting the inclusion of a range of ideas and viewpoints and working to mitigate the influence of bias on communications.
Third, bias can shape our institutions by playing a role in our hiring practices. Within academia, for example, project leaders can find themselves looking to hire their own trainees rather than proceeding through a more open hiring process. The research world moves quickly, and hiring known quantities with whom we have previously worked can seem like a way of ensuring projects are quickly staffed with good people, enabling researchers to “hit the ground running” on projects. However, this can create a status quo in which the research world risks becoming less accessible to outside perspectives, our lack of full diversity keeping us removed from fresh ideas and approaches. Then there are the biases in hiring that reflect outright discriminatory attitudes, such as racial prejudice. Such practices are, of course, illegal, yet such bias can be hard to fully disentangle from any human activity. There is also the challenge—particularly relevant with this week’s Supreme Court decision on affirmative action—of how best to build academic communities in ways that acknowledge the legacies of exclusion that have long denied certain groups access to these spaces.
Core to these reflections about biases is the reality that our biases can be invisible even to us and that addressing them means remaining vigilant about when they may be subtly influencing our systems. It also means putting in place neutral structures which help correct for the presence of bias. In the context of hiring, this can mean blind interviews and structured rating systems for candidates. Adopting these structures can pose challenges, of course, and there is a good-faith debate to be had about how to best advance equitable, fair hiring processes. Such conversations depend on first acknowledging the role of bias in hiring and the necessity of addressing it.
Fourth, there are biases in the ecosystem around health. By “ecosystem around health,” I mean the network of foundations, universities, NGOs, and other institutions which engage in the work of promoting health. Within this ecosystem, there is—though we may not like to admit it—a continual jockeying for the attention, funding, political clout, and prestige which helps to enable the mission of shaping a healthier world. These commodities are, at core, tools for building the influence and reach that allows actors within this ecosystem to incubate and operationalize ideas towards shaping a healthier world. This means being able to command the media attention that helps change the public conversation around health, to access the funding to advance research to shape cutting-edge health science, and to engage with the policymakers who help orient legislation towards a concern for supporting health. Like most resources, these are unevenly distributed. Access to them depends on the platforming and elevation of work within this ecosystem and on the media’s choices about where to focus its reporting about health. These dynamics are subject to biases which can ensure that some voices are heard more than others. Prestige begets prestige and the attention that comes with it, and it is often the individuals and institutions who already have much who get more. It is for this reason that, when the media seeks comment on a given issue, they often first turn to the most celebrated voices in our field, which is why we so often see the same people and places featured in news segments on health. This self-reinforcing bias can also be seen in the social media space, as journalists and members of the health elite cite, interview, retweet, and otherwise amplify each other, creating an echo chamber. While this echo chamber may be an accurate reflection of elite opinion, it may not reflect the full reality of issues of consequence for health.
In perhaps pulling back the veil on this dynamic, I do not mean to suggest it is always inherently problematic. Experts are experts for a reason, and their opinions are indeed excellent places to start if one is looking for a better understanding of health. However, experts are human like all of us, and humans are susceptible to groupthink and the occasional adopting of bad ideas. Within the health ecosystem, with its bias towards the expert class, it is easy for such ideas to quickly gain a mass audience, which can lead to ineffective approaches in the pursuit of health and diminished public trust in the integrity of our efforts. The internet-driven emergence of new media ecosystems, while posing challenges given their capacity to spread misinformation, reflects what may be the start of a new paradigm around the relationship of expertise to audiences. It could well be that in the coming years, audiences are no longer taken as a right of the credentialed, but, instead, as something to be earned through a public commitment to the truth and a demonstrated capacity to keep this commitment. Until then, those who are privileged to work in elite quarters of the health ecosystem should work each day to ensure that it elevates only the best and truest of what we have to say.
Finally, there are biases that can emerge in the processes that determine who is promoted and advanced within institutions. Many of these institutions—from academia to the nonprofit world—are, to varying degrees, hierarchical. Factors like credentials and seniority play a key role in deciding who is elevated, who can shape the direction of an organization through the assumption of leadership positions, and who is listened to in the media. While these hierarchies are, at their best, based on merit, reflecting the fruits of hard work, they are also influenced by biases. In the academic world, this can manifest in career tracks which are not always seen as equal. Some areas of research receive more outside interest and funding than others, some elements of professorship are better supported and professionally rewarded than others, and some categories of the academic profession are likelier to quickly accrue institutional status than others. This can have the effect of creating ingroups and outgroups in academic life. This is not the academic world as it should be, but I think few who are familiar with it would deny it is the academic world as it largely is. These challenges are not so much a product of the individual biases of those who play a role in academic hiring and advancement—though such biases can indeed be a factor—as they are structural, a product of the system.
This is not to say that academic hierarchy is itself a net negative. It is a system which has done, and continues to do, much to preserve the freedom and independence of thinkers within it, to create space for the development of ideas and the nurturing of intellectual talent. However, this does suggest we should work to disentangle the academic system from the influence of bias, in pursuit of a fairer, more egalitarian academic culture. We have worked to do so (imperfectly I am sure) at the Boston University School of Public Health, and it has been a privilege to engage with colleagues at other institutions who are equally committed to shaping the best possible context for academic life.
While bias within systems can seem like an intractable problem, it is important to remember that we built these systems. Because we built them, we can reshape them, towards the goal of reducing bias. In doing so, we can create new, stronger foundations for our work. Bias may be deeply human, but so is our capacity to learn, to grow, and, when necessary, to change course when we see ourselves headed where we should not go. Reflecting on the challenge of bias can help motivate just such a course correction.
__ __ __
Also this week.
A Dean’s Note on Affirming Our Values in a Post-Affirmative Action Era.