Centering points of commonality where scholars and teachers can learn from each other, this episode explores bias through the intersection of three academic disciplines: cognitive psychology, political science, and Black studies. The episode discusses cognitive psychology (heuristics and bias) within the context of medicine, specifically "why do doctors make mistakes?" This emphasis on cognition is countered with a discussion of structural racism and unconscious social bias, and of the actions of people, groups, and governments in conflict zones. The presenters discuss the different approaches to bias in their disciplines, the extent to which training is effective in addressing and managing biases, and key lessons learned in navigating biases in their professional lives.
Jonathan Sherbino, BSc MD MEd FRCPC FAcadMEd DRCPSC(CE)
Dr. Sherbino is an emergency physician and trauma team leader in Hamilton, Ontario. He is the Assistant Dean, Health Professions Education Research, Faculty of Health Sciences, and a Professor of Medicine, McMaster University. Jonathan is the past Chair of the Specialty Committee for Emergency Medicine, Royal College of Physicians & Surgeons of Canada.
Jonathan is a medical educator and researcher. He is the co-editor of a physician training framework adopted in more than 50 international jurisdictions. He co-hosts the KeyLIME (Key Literature in Medical Education) podcast with audiences in more than 40 countries and has published more than 175 papers and given numerous plenary and keynote addresses. His research focuses on competency-based medical education and clinical reasoning.
Andrea A. Davis
Andrea A. Davis is Associate Professor in the Department of Humanities at York University, founder and coordinator of the Black Canadian Studies Certificate, and the 2023 Academic Convenor of the Congress of the Humanities & Social Sciences. She is co-editor of The Journal of Canadian Studies, former Special Advisor on the Faculty of Liberal Arts & Professional Studies’ Anti-Black Racism Strategies, and former director of the Centre for Research on Latin America and the Caribbean. Her research and teaching encourage an interdisciplinary, cross-cultural dialogue about Black people’s experiences in diaspora. She is the author of Horizon, Sea, Sound: Caribbean & African Women’s Cultural Critiques of Nation (2022).
Sarah-Myriam Martin-Brûlé
Sarah-Myriam Martin-Brûlé is Associate Professor at Bishop's University, Fellow at the International Peace Institute, and a consultant at the United Nations (UN). She is the Deputy Director of the Réseau de recherche sur les opérations de paix and was the 2018–2019 Canada Fulbright Research Chair for Peace and War Studies. She is an associate faculty member of the Center for International Peace and Security Studies (CIPSS) and the Montreal Center for International Studies (CERIUM). Her research focuses on peace operations, and her projects address gender and UN peacekeeping-intelligence. She recently conducted fieldwork in the Central African Republic, Mali, Uganda, the Democratic Republic of Congo, and South Sudan. She co-hosts the podcast "Conseils de sécurité," a co-production of the Canadian Defence and Security Network and the Réseau d'analyse stratégique.
Transcript
Andrea Davis: I’m Andrea Davis and I’d like to begin this podcast by acknowledging the Indigenous Peoples of all the lands on which we are located. As a member of the York University community, I recognize that many Indigenous Nations have longstanding relationships with the territories upon which York University campuses are located that precede the establishment of York University. York University acknowledges its presence on the traditional territory of many Indigenous Nations. The area known as Tkaronto has been care taken by the Anishinabek Nation, the Haudenosaunee Confederacy, and the Huron-Wendat. It is now home to many First Nation, Inuit and Métis communities. We acknowledge the current treaty holders, the Mississaugas of the Credit First Nation. This territory is subject of the Dish with One Spoon Wampum Belt Covenant, an agreement to peaceably share and care for the Great Lakes region.
Tkaronto’s intersecting communities are comprised of those native to this land, Indigenous peoples from other territories, as well as white settlers and those people who have come here by force, or otherwise as a result of slavery, colonialism, imperialism and ongoing wars. As the descendant of Africans formerly enslaved in the Americas who were taken from their ancestral lands against their will, I am committed to what Tiffany King calls “a notion of mutual care,” and I recognize that a future for Black peoples is not possible without a future for Indigenous peoples by whose leave I live, walk on, and share this land. I acknowledge finally that these Americas are built on violence and erasure and we bring these histories with us when we enter any room, any virtual space, and we must bring them into view. With this knowledge of history, we enter here in the hope of making a different world.
Jonathan Sherbino: Welcome to Shifting Conversations, a 3M podcast. This is an opportunity for us 3M National Teaching Fellows to have conversations about our work or scholarship and to engage with the ideas of peers. Hopefully this conversation is one that will challenge you and your thinking; it'll encourage and excite you and give you an opportunity to reflect on the theme that we're going to explore today. The theme we're talking about is the idea of bias: how to understand and negotiate it. And we approach it at the intersection of three very different academic disciplines. In a second, we're going to introduce each other, but we hope to facilitate a conversation that takes us beyond perhaps how you think of bias, expands your understanding, and challenges you in your thinking about how you can implement, understand, and approach the issue of bias in your own professional, academic, and perhaps personal life. But first, let's have an introduction of our three guests today.
Andrea: I’m Andrea Davis. I am an associate professor at York University, where I teach cultures of the Black Americas.
Sarah-Myriam Martin-Brûlé: I’m Sarah-Myriam Martin-Brûlé, I’m an associate professor at Bishop’s University and a non-resident Fellow at the International Peace Institute. And I teach on peacekeeping and strategic studies.
Jonathan: I’m Jonathan Sherbino, I’m a professor of medicine. I am an emergency physician and trauma physician in Hamilton, Canada. I’m also the Associate Dean for Health Professions Education Research at McMaster University.
So, I thought we would start our conversation by just positioning where our own scholarship understands and approaches bias. And so, then we can see the three places and find the moments of intersection, find the moments of commonality, find the moments of dissonance, and I think that’s probably a really great point or a great place for us to jump off from. My own research on bias is from the cognitive psychology world. I have a programme of research about why doctors make mistakes. You can imagine why I think that’s important, particularly if you’re a patient. With our group—a number of people I’d give a shout out to would be Sandra Montero, Matt Sibbald, Geoff Norman; these are collaborators with me in my research team at McMaster—we’ve been pursuing over the last decade, through a number of studies, both constructivist but also objectivist from a controlled-trial point of view, “Why do physicians make a mistake?” And, “How do we think about the issue of bias?” In cognitive psychology, the work of Danny Kahneman and Amos Tversky really brought the idea of bias to the forefront. Why does your mind make a decision that might not be informed by a rational, evidence-based approach? I think the reason is because the world is really complex. And the data that we see every day is potentially overwhelming. And so, our brains have developed this idea of a heuristic. And a heuristic would be the mirror image of a bias, or the enantiomer of a bias. A heuristic is essentially a mental shortcut. It works most of the time, by saying, ‘here are the patterns that we see in life.’ And so, in my world, the heuristic is: most people who come in with fever and cough right now have COVID. But guess what, you can come in with fever and cough and have a pulmonary embolism, a blood clot.
And so, if I use the heuristic all the time, every time I see someone with a fever and cough, I say, “you have COVID,” I might miss that one in 10, one in 20 persons who actually comes in with a blood clot to their lungs. The treatment is radically different. And when I fall into that bias, where I see the patient only in a general, mental-shortcut kind of way, the decision I make in the care of the patient is not optimal, and sometimes patient harm happens. And so, our whole approach has been trying to understand how to mitigate bias. Original work would suggest that we want clinicians to think smarter, go slower, don’t go so quick, be cautious and wary of your intuition. Don’t trust heuristics. Heuristics will lead you into danger. They will become the mirror image, which is bias, which is a negative outcome for that patient. And that’s what the standard thinking was. Now our research has really confronted that standard thinking by suggesting that just telling physicians to be smarter, to think better, will not lead to less use of heuristics and a lower incidence of diagnostic error. In fact, that doesn’t happen when we tell physicians to go slow, think better. They go slow, but their thinking doesn’t get any better. They still make the same number of mistakes. What we’ve discovered is that experience is everything. The way to mitigate bias, from a cognitive psychology point of view, in the training of physicians is in fact to induce better knowledge and better experience. And so, what that means is intuition becomes a function of a greater wealth of patient encounters. I am a better clinician than my junior trainees, not because my mind is smarter. In fact, when I meet them, I’m amazed that I ever got into medical school. They’re highly intelligent, and their backgrounds pre-medicine are really impressive. And with their philosophical approaches to life, they could argue me under a table.
Their thinking process, their cognition, I think, is probably superior. But I’m a better physician, because I have seen thousands more patients than they have. I have developed a wealth of experience. And so, what that means when I think about bias is that the way to mitigate bias is to ensure that we give an adequate sampling of all types of presentations to our trainees, so that they know and have seen not just COVID, but blood clots. They’ve also seen things such as a pneumothorax, a collapsed lung. You can present with a cough and breathlessness and actually have a collapsed lung. And that’s another unusual thing. So, you prevent bias not by telling physicians to think smarter, to think from the evidence, but in fact by giving physicians a wealth of experience that is appropriately labelled. So, just don’t let them go out there alone; they need to be supervised and taught and provided feedback. But give them a wealth of experience so that their database of encounters builds. And when they see the patient in front of them, it’s not simply, “Well… statistically, it’s probably COVID, because everybody has COVID right now.” But, “the patient in front of you reminds me of a patient I saw a couple of months ago…” and that connection is vivid. And so, then I can make a better ultimate diagnosis and decrease mistakes. Now, Andrea, I’m curious to hear how you think about bias, because I’m approaching it from a cognitive psychology point of view; I’m approaching it from how does your mind process all the evidence and data? And I’m using the word bias in a way that I suspect you and I are going to say, “Wow, I would have never articulated the idea of bias in the same way.” So, talk to me about your positionality and maybe we can find points where we can learn from each other and where there are commonalities. Or maybe there is a very different conceptualization of this idea.
Andrea: Yeah, it was really interesting listening to you, Jonathan, and coming from that perspective. I come to bias from an entirely different perspective, because I am, you know, someone who self-identifies as Black and I teach in university classrooms where 90%, I would say at least 80%, of my student body is racialized. These are the students I’m teaching and mentoring, both as undergraduate and graduate students. There are two ways I come to bias. I come to it through my own personal interactions as a Black person moving through the world, as someone who is a descendant of the formerly enslaved in the Americas. So, I am always negotiating in my own day-to-day activities, moving through the world, where and when and how I might be encountering bias. And I’m not always sure it’s bias, right. So, I might, you know, in a very simple example, go to the grocery store and want to check out some items, but the person in front of me gets a really warm, wonderful welcome. And when it’s my turn, there’s silence. Or, I could be at a restaurant, and so I have to decipher if the service I’m getting is because this person is tired and having a really bad day, or does it have to do with me, with what I am presenting and how they’re responding to me. So, what I’m trying to say is that bias is lived. My embodied and my lived experiences demand that I… In some ways I stand on the other side of bias. I often see myself as the recipient, rather than the person giving and demonstrating it. Not that I don’t have bias, but my experiences tend to encourage that kind of, you know, that kind of separation. So as a scholar, and someone who is teaching from the humanities, not in something understood as anti-Black racism (I don’t teach anti-Black racism), I sit with the work of authors. I look at philosophical thought.
I look at poetry, and I use those things to unpack relationships, to unpack understandings of the human, and to understand power relationships. So, it’s out of those kinds of texts and those discourses and discussions that I invite students to place and understand their own lived experiences, to see how they can decipher and respond to the multiple perceptions or things that they are receiving on a daily basis. So, for my students, it’s walking into that hospital and not being sure how they will be received, how their own cultural differences might create a kind of bias in the care that they receive, the assumptions that might be made about them. Right. So, I guess that’s a really interesting difference, I think, between maybe the three of us.
Jonathan: You know, as I reflect on what you’re saying, I see some contrasts. And I see some commonalities. Maybe the first contrast is my research is academic in nature. I can walk away from my lab, knowing that my research assistants are compiling data, or the stats are there to be run on Monday. And it seems to me that you’re living the experience of bias and racism in both your scholarship and your daily life. And I just look at the weight of what that must feel like for you and your students. And, you know, that fills me with a great deal of respect for the strength that you have, but also sadness that that’s what our structures look like. I think the one maybe point of commonality is, I’m trying to give my patients experience, or my students experience with a multitude of patients, to build up a richer understanding of the nuance of what the individual symptom presentation looks like. And if I stretch that analogy, hopefully not too far, I wonder if in part, but not in total, in some of those encounters that you have, your encounter in a restaurant or at a grocery store, the barriers of racism can slowly be worn down as people see past the superficial nature of the individual in front of them and see who that real person is. But of course, that does nothing for issues of structural racism. So, the solutions I have from my own programme of research about approaching encounters don’t speak to the structural issues that are positioned against people of colour. That is not a function of just encountering more; there are structural and systematic elements that have to be programmatically, systematically, bureaucratically, administratively addressed. So yeah, it’s interesting: we use the same word and we land in different places, although the edges kind of touch. Sarah-Myriam, I’d love to hear about your programme of scholarship and how you think of bias.
Sarah-Myriam: I want to follow up on something you mentioned about who the real person is, and I think this is actually key. Because who knows, and it depends as well on the context, right? I think biases are very dynamic. So, in my own field of work: I am blonde, I am white, I’m Canadian, and I work mostly on Sub-Saharan Africa and on civil wars and intrastate wars, et cetera. So, for me, the bias when I’m in those contexts is: who am I, as a white, blonde Canadian who’s alien to a reality of war and grievance, to call myself an expert (I’m not necessarily calling myself that) in conflict, in contexts that are so different from my own reality? And there’s a first bias of who am I to be working on those issues, and what legitimacy I have. And I guess this points to all types of biases that we can have. And in the way, of course, we teach the material that we’re teaching, and in the way we approach the topics, and also our own biases towards the classroom and also towards our colleagues. I think in the classroom, the way to teach conflict… for example, I’m a professor at Bishop’s University in the Eastern Townships and most of the students are also quite alien to realities of wars, although not all of them. So, one of the big challenges is how to teach those complex issues both to students who have lived those realities and to those for whom it’s a reality that is very far away, and also addressing their own biases in conveying this material. Biases will be related also to the types of students we are teaching. I think COVID also highlighted different challenges for different students. So, for example, if we have students that are mostly in their early 20s, or if we have older students, we will be teaching differently as well. If students are francophone or anglophone by first language, there will be a different approach as well to different realities. But also, that’s in teaching or conveying material.
But when we are interacting with our colleagues as well, we can have biases towards colleagues. So, for example, those who work at the UN HQ in New York will have different perspectives than those who work within the missions. Those who are locals, nationals, who see the interventions of foreigners in their own country, will also have a different perspective, maybe doubting to what extent others coming into their own context can really grasp the nuances of their own reality. The employment statuses of people with whom we’re working will also create biases and will texture and colour the way we interact with them. So, for example, in our own work we have different ranks of professors that might count sometimes in our interactions, but also in other fields. So, for example, if we’re interacting with managers, with consultants, with those who have permanent jobs, or with local staff, we will have different types of interactions depending on their ranks. Also on their status: whether they are civilian, police, or military, different biases will come into play. And of course, the gender aspect: whether they are men, women, or identify with other identities as well. So, I think it’s important, again, to embrace the fact that biases exist, but they are also very context dependent. And again, they will be dynamic. So, coming back to your question, who the person really is will really depend on in which context we understand the work of that person, and on those different facets of the realities of who we are. And those different facets of our identity will play differently in different contexts. So, the fact that I have a PhD will give me credibility in an academic setting. The fact that I have a PhD will be looked upon differently by people working in UN missions, thinking I might be overthinking things.
So, as an academic, the practitioners will always fear that I will theorise about the field instead of really trying to grasp what their daily challenges are. The fact that I am, for example, a consultant at the UN will not be seen very well when working in UN missions, especially if I’m sent to the mission by the headquarters. Because I will look as if I am someone who is outside of the system. So that will play differently. And I could go on with the fact that I am a civilian. So that will sometimes create a bias if I’m interacting with the military or the police. So again, I think I’m pointing to the different facets of our identity. And that plays not only in the way we teach but also in the way we work and the way we understand the reality we’re addressing in our research and in the classroom.
(Break)
Jonathan: Sarah-Myriam, the first reflection I have, as I hear both of you talk, is: you’re dealing with international conflict and peacekeeping, and Andrea, you’re dealing with anti-racism and structural bias. I feel I should stop talking because the problems you guys are tackling are so interesting and so key and critical. I’m just trying to teach medical students the difference between pneumonia and COVID. It seems the scale is different. One of the commonalities I thought about with your work, Sarah-Myriam, is this idea of context. Context is so important when we think about bias, and when we think about the mental shortcuts we’d like to take. You talked about how you might interpret, or how your students might interpret, from their own background assumptions. And how, when they are doing analysis based on socioeconomic status, or cultural commonalities in certain regions, or just their own position and experience, that context will influence and perhaps colour how they see and interpret and analyse. We have a similar challenge around context in the teaching of diagnostic reasoning, or why doctors make mistakes. Historically, our education systems are full of all of these contextual non-sequiturs. And so, the person who has a heart attack is an older man. And it’s chest pain that goes into his left arm. And I’ll tell you that, statistically, that’s the most uncommon type of presentation for a heart attack. In fact, when we look at data sets, there are probably 50 to 60 common archetypes of how chest pain can present. What does that mean? It means that when women present with chest pain, when they are presenting with a heart attack, it’s not uncommon for their heart attack to be misdiagnosed, or late to be diagnosed, or missed altogether. And then the language as well: women present atypically. Well, they don’t present atypically at all. They present with chest pain, just like everybody else presents with chest pain.
It’s that our educational structures have said the ‘typical’ (and here I’m doing air quotes for the podcast, which you can’t see, so you should know that I’m air quoting) presentation is very contextual. And it’s contextual, probably from a history, Andrea, that you would recognise as being very male-centric, in one very dominant culture, and not representing what the broad type of presentation of heart attack looks like. And so, you know, Sarah-Myriam, when I hear about your work on context, I hear all the things that we need to unpack as we teach medical students how to approach patients: that context is everything. The unconscious biases that have just been incorporated (that this is what a heart attack looks like, or this is what COVID looks like) fail to recognise that there is a whole spectrum of presentations, and all of them are really important to the patient in front of you. Because, if you are not a man, and you come in with chest pain, you really want your heart attack to be diagnosed. And we need to make sure that we think about the context and the unconscious assumptions that underlie some of the actions and decisions and the way we see and perceive the world. So, Andrea, you’re gonna lead us through the next kind of theme that we’re gonna approach? What do you have for us?
Andrea: Yes, so I thought we could transition to talking about whether or not bias is always negative. Is it something to be countered or corrected? Can bias be neutral? So, I don’t know Jonathan or Sarah-Myriam, if you want to start?
Sarah-Myriam: I can start very briefly by saying that I don’t think biases are negative. Actually, I think we need to embrace biases. It’s important to recognise that they exist and that they are inevitable. And so, therefore, by definition, they are not neutral. And I think that’s why we create training: to actually address the unknown unknown, right? Because of this inevitability of biases, that’s why training is necessary, again, to unveil. But that comes more from my own field of work, where we’re always trying to unveil the unknown unknown. And biases are part of that. Most of us don’t know how, or to what extent, we’re biased. We’re all biased, but we don’t always realise all of our biases. And they’re not negative, necessarily. They are part of who we are, our reality and our biases. Also, we evolve and they change. The biases we had when we were 19 are different than now that we are all 25. I mean, of course, with our lives, biases will morph and transform, but they will always stay there. And again, I think it’s important to recognise that, and that they won’t always be on the same targets or the same topics, nor will they manifest the same way either.
Andrea: Sarah-Myriam, what do you mean when you say that we need to embrace our bias? I mean, what would that look like? Because again, for me, right, I’m thinking about bias, like I said before, as receiving: as the person, a member of the group, that’s often receiving bias. So, what does it mean for the person on the other side to embrace their bias? You know, that part is a little bit confusing for me.
Sarah-Myriam: I think I wouldn’t use the same image of sides with regards to biases, in the sense that we’re all subject to biases, but with different consequences, obviously. Right? Depending on the type of biases, sometimes biases can lead to violent encounters. And what you described at the beginning, that doubt about whether someone is polite to you or not, or rough: is it linked to an appearance, is it linked to a bias? So, the consequences of biases can vary, of course, but we all have biases ourselves. And we are all subject to biases. And again, as a white francophone Canadian going to many African contexts, there are certainly a lot of biases towards my appearance, for example, or who I am. Now, my appearance might play as a bias in the context of an African region, in which I’m a visible minority, being blonde and white, but other times biases will be against me because I’m an academic, or other times because I’m a civilian. So that will change. And what I mean by embracing it is that it’s inevitable. We are part of the reality we are researching. Just like when we teach in the classroom: as we know, there is no objective teaching. Every time we teach, even when we present a theory, the fact that we choose some theories to teach and not others means we are not neutral, we’re not objective. We are always subjective. And we’re all part of biased realities in the choices we make, in the way we phrase things, in the actions we take, in who we put in our syllabus. So, embracing it is to be aware that we are biased and to act accordingly. And of course, to always strive to address those biases and to be conscious of them.
Andrea: Yeah, so to embrace bias, then, is a call to self-reflection and self-reflexivity. I think many of us respond to accusations, or to the idea that we ourselves might be biased, by falling back on a kind of cognitive dissonance that says, “Oh, I’m a good person. I like everyone. I am fair. Of course, this doesn’t apply to me.” Rather than stopping, pausing and saying, “Okay, we all have biases. How am I projecting that? Or how might something that I’m saying, or the way in which I’m acting, be interpreted and read by someone else?” And then it’s kind of what Jonathan said, it’s slowing down. But it’s slowing down with a kind of deliberateness and carefulness and thoughtfulness that is meant both to understand, I guess, the context and then to move toward changing it. And that’s difficult, though. That’s really, really difficult. Jonathan, what are your thoughts?
Jonathan: My first thought is, I’m so happy that Sarah-Myriam volunteered to go first to give me a chance to collect my thinking. The word bias, I think, socially has a negative valence. And so, it’s hard to see it in a positive way. But the phenomenon, just as Sarah-Myriam articulates, is a true phenomenon. The mirror image of bias is the heuristic. And in the absence of heuristics, none of us would function in the real world. We need mental shortcuts to allow us to process the overwhelming amount of data that hits our sensory registry. Our eyes, our ears, touch, taste: if we had to process each piece of data in a conscious way, we would be paralysed; we’d never get out of bed. And in fact, we know that the physicians who are most accurate, who make the fewest errors, have a higher non-analytical or intuition-type approach to diagnosis. And they build that off the acquisition of years of experience that informs and helps fine-tune what that non-analytical, that unconscious approach to diagnostic reasoning looks like. So, heuristics, I think, are helpful, and that makes sense. But bias is when those heuristics fall apart and lead us into error. I think there’s a third thing that we should really call out: the idea of “I’m going to be completely objective.” And I think both of you are hitting that. And I’ve heard this at times, people saying, “I have a view from nowhere, I don’t have any opinion,” and the people who say that have the least degree of reflection. Because what they are failing to understand is that we are all conditioned. We all come from experience. We all have contexts that have shaped how we approach and see the world. And so, when people say, “Oh, no, I don’t have any opinion, I see the world completely as it is, black and white,” that’s the scariest type of phenomenon for me.
Because it says to me, this student doesn’t even have enough insight to know that they need to be thoughtful about what their database of experience looks like, where their gaps are, what kinds of experiences they need to have. When they say, “I will see everybody who shows up with fever and cough and evaluate them equally, because I don’t have any bias,” it tells me that they don’t understand the impact of socioeconomic status and access to primary care, or whether the patient is a recent refugee and comes with a different geographical exposure to infectious disease, or whether they have lifestyle modifications, or how the presentation and symptom complex varies across race and gender for various types of clinical conditions based on your genetic manifestation. So, I really get worried about people who say, “I don’t have any bias, I’m completely objective.” I think, just as you articulated, Sarah-Myriam, and as you built on, Andrea, bias, with that negative valence, should force the conversation around self-reflection, and say, okay, what do I assume? Let’s put that on the table and be clear, so that I can know where my starting position is, and how I can course-correct to a more balanced perspective in whatever kind of conversation, as a learner, as a teacher, as a citizen of society, in understanding how we need to move forward.
Andrea: Yes, absolutely. So, everyone has bias, right? Bias comes from our histories, our understanding of the world. So as a teacher and researcher, for example, I have a humanities bias that might not have enormous consequences, but it is a bias. So, bias has to be understood in relation to the particular contexts or situations that we find ourselves in. We have to admit our biases and be aware of how they might influence power relationships. So, if I am, you know, serving on a hiring committee for a new colleague, for example, I’m going to bring what I know to bear on the reading of a CV, right, on that hiring committee. I think what we’re agreeing is that what we do with a bias is most important, and that we’re transparent and self-reflexive, understanding that we bring our own knowledges, our own perceptions, our own views of the world to bear on all situations. Nobody can ever be neutral, right? We can’t suspend what we know. So, we have to acknowledge our biases. We can only, in fact, counter or correct our biases when we’re self-reflexive about them, and when we admit them. All right, thank you for that. Sarah-Myriam, do you want to take us somewhere else?
(Break)
Sarah-Myriam: Well, in continuity with that, I would be interested in knowing: how do you train your students? How do you do it? How is training effective in unveiling or addressing biases in general? So, Jonathan, Andrea.
Jonathan: I guess I can take this question a number of ways. I’m gonna try not to take all the airtime. When I think about clinical bias… so there are two pieces, I think. One is around clinical bias in your assumptions about a diagnosis. We try to teach that there’s a whole spectrum of symptoms, each with an associated set of diagnoses that go with them. Feeling short of breath can be a function of how your lungs are working, how your heart is working, how your blood is working, and the treatments for those are very different. And so the meta-theme is that we’re always trying to pull back all the information underneath and, more importantly, expose our trainees to real-life examples. Because even when we try to put together a written case, in reducing the patient, that complex, messy individual, down to a written case of several hundred words, you’ve lost a whole bunch of the messy stuff around them. And so, it’s no longer an authentic thing. And so, the ideal place for our trainees to encounter the richness of various presentations is real patients in the clinical environment. And that’s one part of bias. I think the counterpart of bias that we want to think about more specifically is the assumptions about the manifestation of health and socio-economic status in our training. And so, we’re always thinking about that. This is not a question specific to my own programme of research around why doctors make mistakes, but it’s inherently linked. You can’t ignore how access to cardiac care in the U.S. is dramatically different if you’re a person of colour. It’s really obvious data. If you are a person of colour, if you’re a Black American and you show up at a hospital, your outcome is going to be worse. And you can strip away all the other socio-economic markers and the geographical factors. That’s what the data looks like.
And so simultaneously, beyond my own programme of research, we also have to think about how we manifest and think about bias. One of the common tests in the medical education literature is the Harvard implicit association test. It’s a test that helps unpack people’s assumptions just by looking at the physical manifestation of different faces. You look at faces, you make assumptions about who’s dangerous, who’s safe, who’s angry, who’s happy, and so on. And at the end, it says, based on your gender, your race, here are the assumptions that happened there. The medical education literature is pretty mixed on the effectiveness of that, in terms of whether it changes performance after taking the test: if you keep repeating it, how does your performance change? It doesn’t seem to. And I think that’s because that literature is very simplistic and very superficial. I’m gonna name-drop a colleague of mine in the U.S., Javeed Sukhera, who is an anti-racism scholar in medical education. I think he’s a real world leader, and so if a listener wants to go search for somebody who will say things smarter and better than I can, I’d point them to Javeed. He’s a friend, and I think he does some amazing stuff. His take on the Harvard IAT, the implicit association test, is that it is an excellent conversation starter, and that we should not look at simple metrics of whether your performance changes after you take the test a couple of times, but rather take the test and then engage in facilitated conversation, seminar work, and unpacking: conversations about who we are, where we come from, how we imagine things happening, that whole portion of self-reflection that we just talked about, but guided self-reflection.
And when you use those types of teaching materials, then you can see actual transformation in the ways that students and faculty articulate, perceive and speak of the assumptions they have about the patient in front of them. It makes a lot of sense to me, because I work in an inner-city, urban emergency department. And so, our population is the diaspora of Canada. And we’ve done studies showing that the patient population that comes to my hospital has a difference in life expectancy of about 20 years when you move 10 km to the north of my hospital. And so, when you move from one postal code to the next, that’s a 10 km drive, your life expectancy changes by two decades. And that’s a function of socio-economic status in the mix of patients there. And that’s, you know, that’s a bike ride; I can bike to work. And so, we need to have those conversations specifically with my trainees, because they are encountering a spectrum of the diaspora of Canadians. If they’re not attentive to the assumptions they begin with, then they’re going to miss the diagnosis, and their patients are not going to get the care they need.
Andrea: I have to say that I am a bit sceptical about training, too, because people often, you know, think that they’re checking off a box. Let me go to a workshop. Let me do this. Let me put it on my resume. Let me say, you know, that I have done this, and bias magically disappears. There has been so much discussion about structural racism and its links to cognitive and unconscious bias in the last two years, right, after the police murder of George Floyd. And universities and businesses have been rushing to implement change to try to fall on the right side of history. I remember in the first months of COVID, and those, you know, anti-racism protests that erupted everywhere, being shocked at seeing these people who have significant power, like the Prime Minister, like the police, kneeling in solidarity with Black people as a kind of public acknowledgement of their unconscious bias and desire to change. But my worry is that a lot of that can be performative. Right? So, there’s been a lot of discussion about the lack of representation of Indigenous and Black faculty in universities, as well as in senior leadership positions in Canadian universities, the low numbers of Black students in STEM and really everywhere in the university, the low numbers of Black employees in upper management positions, and how one corrects this, right. This has been a huge part of the anti-racism debates and activities of the last two years, and the first place that universities and corporations tend to go in trying to correct this structural racism is to implement or mandate unconscious bias training. And while I certainly think that this is a good start, there’s also a tendency, right, for this to be merely performative.
And to be honest, sometimes I worry, and this is going back a little bit to what we talked about before, that having revealed people’s biases, is there a chance that we might actually be giving them the language to further entrench and maybe conceal those biases? So, I’m thinking, for example, again, of being on a search committee or a hiring committee. You know, you can hide your affinity bias by avoiding any reference to culture fit. You’re actually told in unconscious bias training not to use that phrase, culture fit. Right? “This person is a good fit.” So, you can hide your affinity bias by avoiding that kind of reference in your rationale, and by basing your arguments solely on evidence, right, the evidence in the CVs or the resumes, as the rationale you’re using to make your decision. But this approach can also lean into your confirmation bias that a white candidate is always already more qualified than a Black or Indigenous candidate, so that you are unable to read the CV in a way that takes into account how structural racism may have affected the very differences among the CVs or resumes we’re looking at. So, I think in some ways, we need to perhaps challenge the idea that bias is unconscious and do what we said earlier, which is, all of us need to be open and admit: this is my bias, right? This is my bias in this case. These are the biases I am dealing with. Let’s just be transparent and open about them rather than pretending that somehow, you know, they’re unconscious and I’m trying to negotiate that kind of relationship. In those cases, that bias has real, real consequences. So, my own thing is just, I’m not saying that there is no process towards change, but you can’t do one workshop. You certainly can’t do one workshop and overcome your bias. It’s a perpetual, ongoing learning and unlearning. And if we admit that all of us have bias, then we admit that all of us have work to do continually.
Jonathan: If I could just amplify what you said, I would say, as an education researcher, one workshop doesn’t change anybody’s thinking about anything, whatever you’re teaching. But certainly, when you have something so deeply ingrained, such as racism and our social biases, it’s a fool’s errand to imagine that one IAT or one workshop completion is going to do anything.
Sarah-Myriam: Yeah, I think it’s important in training to address the normative hammering that often happens: repeating norms and norms as if they’re going to be absorbed by the participants and, all of a sudden, applied in their daily lives. That normative hammering doesn’t necessarily change practices. But I still think that training is good. To highlight, notably, that what can be counted is not the only thing that counts, so that we need to take into account intangible factors in what we see and don’t see. And also, to come back to our assumptions about what the real manifestations are. For example, in my own work, I work on peacekeeping, intelligence and gender. And one of the big challenges in addressing issues of gender in a conflict setting is that there’s often a conflation between conflict-related sexual violence and the question of women in conflicts, with gender being conflated with the question of women, as if gender was only about women. That’s the first thing. And the second is that women are treated as recipients of security, that women are victims or threatened with sexual assault all the time. The problem, of course, is that if we don’t address that bias, we don’t see how women are active agents in conflict as well, and active combatants as well. And again, that women can be both victims and active combatants. So, in that setting, trainings are important to show different scenarios, different setups, and to come back with a spectrum of anecdotes that can actually help the participants in those trainings realise the different roles that different actors can have in conflict. One of the recent examples I had was in one country where there would be attacks along the road, and it took the analysts several days to realise who was perpetrating the attacks, and how they got the knowledge of where to attack.
And the reason is that the analysts were looking out for reconnaissance units that they had pre-identified as necessarily boys between 18 and 25 on motorcycles, probably working as recon units. They completely overlooked the fact that, for a few weeks, there was a new group of older women with baskets of fish and wood who would go from one village to another to sell their merchandise every Saturday. They didn’t notice that group of older women because, of course, their bias was that it would only be young boys working as recon units. But when they finally realised that these older women were the recon units, then they were able to identify their route, and they were able to prevent the attacks. This is one of the examples we have in which, because of biases, we don’t see the reality in front of us, because we’re biased about who, for example, the relevant actors are. In my own field of work, this is always a starting point: who are the relevant actors? And often women are not counted among those relevant actors. And I think training on how to think about who is relevant, in which context and in which capacity, can actually help. So that’s a different take on training, of course. It’s not anti-bias training. And just one training doesn’t necessarily change daily practices. But it certainly can expose participants, or analysts in this example, to different manifestations of realities to which they have never been exposed. It can help them, again, unveil this unknown unknown, looking at their preconceptions of who is relevant and who is not. And who don’t you see? Who is invisible? That’s important, and that can be conveyed through training.
(Break)
Jonathan: So, our time is rapidly coming to an end. I have a lot more things I want to talk to both of you about, which means I think we’ll probably need to book an episode two and continue this conversation. But if you’re listening to this in the car, maybe your commute is over. Or if you’re running, then you’re definitely tired and you need a break. So, we’re going to wrap up now with an opportunity for each of us to close with a take-home point or an idea around our own scholarship and our own academic positionality around bias that we really want to emphasize, or that we haven’t fully covered in round one of our discussion. So, Andrea, over to you.
Andrea: Yeah, thank you so much, Jonathan. I’d like to talk very briefly about how I integrate my understanding of bias into my teaching, because teaching is such a big part of what I do. And, you know, I am teaching classrooms that are largely racialized, but those classrooms are not homogenous spaces, right? Even students who self-identify as Black are coming from different geographies, different class positions. They’re continental African; their first languages might be French and not English. Their religious perspectives are different. They might be gay, transgender, two-spirit, and so on. So, I’m dealing with students across a multitude of differences. And two of the scholars that I rely on a lot in my teaching, in particular in my large first-year course, are the feminists Audre Lorde and bell hooks, both of whom we have lost but who remain such a deep part of my own teaching practice. From Audre Lorde, I draw on the idea of the mythical norm: that a mythical norm circulates in each of our societies. In North America, that mythical norm is white, male, young, financially secure, Christian, able-bodied, heterosexual, and so on. Very, very few of us can identify fully with that mythical norm. Yet it operates and has power, and the more boxes we can select in that category, the more power we normatively occupy. On the other side of that is learning how to respond, or how to deal with each other across our differences. For Audre Lorde, you know, there are three ways in which we deal with difference. If we are the person who enjoys power, we often respond to difference by trying to eliminate it, right, to get rid of it altogether, so that our power remains entrenched.
If we’re coming from the position of, you know, the person who has no power, the person who is marginalised, we might respond to our own difference by rejecting it, right, rejecting what is perceived as our own weaknesses and elevating that normative, that mythical norm that we see circulating in our societies. What we haven’t learned to do well, Audre Lorde says, is how to relate to each other as equals across our differences. And that is something that I come back to over and over again in my teaching, right? This idea of how to live in relationality, in reciprocity, understanding that we need each other and that difference is a good thing. It’s not a bad thing. And how do we then learn to relate to each other across our differences? bell hooks also, for me, has this wonderful definition of stereotype, in which she identifies a stereotype as a myth and a fantasy that acts as though it’s real. But at its core, it’s actually a myth and fantasy that allows us to confirm and entrench our biases, right? So as long as I insist on believing that you can only be this way, right, Sarah-Myriam, because you are a woman and you are white and you are blond, then there’s no way for me to really know you, right, to take the steps that make real knowing possible. But the minute I’m willing to do that, the minute I’m willing to take those steps, then the stereotypes disappear and your humanity emerges, right. So that’s what we’re trying to reach towards: make that humanity emerge. So, in my teaching practice, I’m trying to create brave spaces of learning that are not always going to feel safe, because I’m asking students to take a step outside of their own way of looking at the world and to step into somebody else’s space. And that sometimes feels uncomfortable.
But what I want is to make deep self-reflection possible so that, as learners we’re constantly challenging our preconceived knowledge bases, understanding our individual points of privilege, and learning to see things from multiple perspectives. So, making space for the telling of different stories and valuing each other as equals across our differences. So that’s what I’m really trying to do.
Jonathan: Thanks for that, Andrea. One, I want to take one of your classes. But what I’m really probably going to do is steal, well, take with attribution, some of these ideas and think about how I can incorporate them into my own teaching. Sarah-Myriam, any final thoughts or take-home points you’d like to share?
Sarah-Myriam: Thank you so much, Andrea, for that. And indeed, we should have that talk at the beginning of the class and at the end of the class as well, to keep those reflections ongoing and fresh in our minds all the time. In a sense, the take-away point for me, and what I’m trying to teach my students but also myself all the time, is to remind myself, and ourselves, that we are a disturbance, actually. My advice to students, and again to myself as well, is to always be mindful that we are a disturbance, that I am a disturbance, that they are a disturbance. Because complex environments are not only complex in terms of security and safety. They are also a complex web of interactions, interests and stakes. And anywhere we go, it’s important to be mindful that we are noticed for different reasons, that we’re not always welcome, and that we are an interference to something, somewhere, for someone. That’s important to acknowledge. And we must acknowledge that no matter how often or how much one has travelled, how much we know about environments, environments are always dynamic, and we always operate somewhat blindly. Sometimes an environment that we are very familiar with in one context might be quite different in another. In my own work, for example, if I’m very familiar with the conflict situation in Mali in 2017, obviously it will be quite a different situation in 2022. All of that to say that the manifestations of our biases will again evolve and change, and it’s important to remain humble and be mindful that we are part of a system and that each of our actions has consequences for others. So, humility is always the best posture to have and to hold on to when we’re teaching, when we’re travelling, of course, and when we’re researching as well: to acknowledge that we don’t see, and certainly that we don’t understand, everything. And as in any environment, there are many veils, and we must remain conscious of that as well.
Staying humble is obviously something that we need to remind ourselves of, but we also need to remind ourselves that we usually don’t get it, and that understanding takes time. Understanding is also complex, and time needs to be factored into understanding and reflecting back. And it’s also dynamic: debriefing, exploring different narratives, seeking feedback, staying humble, always taking for granted that we didn’t get it. It’s something to be reminded of all the time. And again, coming back to what we said a bit earlier: embracing biases in the sense of acknowledging them, that we have biases, and that we are also subject to biases, and that all of this factors into our teaching, our research and our interactions in general.
Andrea: If I could just jump in really quickly, because I so love that idea of humility: to also add that I think we need to extend grace. I think we live in a period where there is so much anger that it’s very easy to be so focused on, you know, people’s actions and what they have done, and not give them space to grow and change. And so, I think we also need to allow that possibility for growth and change in others as well.
Jonathan: If I was going to try to find a common thread through all the different themes and all the different conversations we’ve had, it seems to me there’s a thread of curiosity in all of our different positionalities. We’re seeking to understand more, to be able to interpret and process, and to share that. And with apologies to a political scientist and a humanities prof, this is Shifting Conversations, and my final take-home point is around artificial intelligence. And I know it seems like a big left turn from our conversations before, but we’re starting to use artificial intelligence in some of our programmes of research on why doctors make mistakes, to see if we can improve diagnostic accuracy. And yes, not surprisingly, when we use artificial intelligence, we improve the accuracy of physicians. But it’s not in the way that you might imagine. It’s not that the machines are going to be our overlords and run us. It becomes a decision support system for the human. It allows better retrieval of past experience. It allows, perhaps, triggering of some difficult-to-access prior knowledge. And that’s how we are seeing some of the work around that. But the piece around artificial intelligence which I think is important for us to bring out is that artificial intelligence is really dependent on the quality of the database. And we know health systems databases are inherently biased in how categorizations are made, in what gets in. And so artificial intelligence systems, by nature, even though they have no context, they just follow mathematical algorithms, can be biased. Now, that should really concern us if we let the machines start making predictions for us and start reinforcing the idea that the only way a heart attack ever presents is in a man who is middle-aged, grey-haired and has left-arm pain. And I just told you, that’s one of the most uncommon ways for a heart attack to present.
But we can counter it: we can query our databases and ask our artificial intelligence what’s unknown in the system. And so, it can reveal the fact that we are not attending to the multiplicity of the patient presentations we have. It can show us the gaps that exist in the database. And so, we need to be thoughtful as we move forward, at least in Health Sciences and Health Services, not to imagine that our technology will come from a view from nowhere. The technology comes with the views that are inputted by the humans who have provided the data. And so, it brings us full circle, right back to the beginning of our conversation about bias: heuristics are the way we function, and the technology we programme carries that echo of who we are as people. We need to be thoughtful. We need to be reflective about ourselves and also about our technology, to ensure that we unpack, and we think, and we challenge the assumptions, even though it’s the computer that’s giving us this supposedly objective type of answer. I’m going to wrap us up, and we’re grateful to all of you who have continued to the end of this episode. If you’re excited by this episode, then we’d love to hear from you. You can find out how to reach us in the show notes. And we would love to record an episode two. We’d probably have that conversation regardless of whether anybody wants to hear an episode two, because I found this one to be really engaging and thought-provoking, and I’ve learned far more than I’ve shared. And so, thank you to both of you for joining, and thank you to 3M for making this podcast, Shifting Conversations, possible and available. We hope it helps continue the conversations that you’re having. Take care.
Andrew: Shifting Conversations was created by the Society for Teaching and Learning in Higher Education 2021 cohort of the 3M National Teaching Fellows with the expert guidance of Judy Bornais and Srinivas Sampalli. This project was made possible by the Society for Teaching and Learning in Higher Education with the generous support of 3M Canada. Special thanks to the team at STLHE, in particular Jay Adamson, Natalie Smith, Tanya Botterill and Debbie Brady. Project management and technical support from Craig Fraser. Social media support by Aysha Campbell, with additional support from Meghan Tibbs. Original music composed by Hope Salmonson and performed by Ventus Machina. You can find more information on our website, www.stlhe.ca/podcasts.