RECORDED ON MAY 24th 2024.
Dr. Mona Simion is Professor of Philosophy and Director of the COGITO Epistemology Research Centre at the University of Glasgow. Her research interests include epistemology, philosophy of language, moral and political philosophy, and feminist philosophy. She is the author of several books, the latest one being Resistance to Evidence.
In this episode, we focus on Resistance to Evidence. We start by discussing what resistance to evidence is, the phenomenon of epistemic vigilance, and positive epistemology. We then discuss what evidence itself is, the normative aspects of resistance to evidence, when it is permissible for people to suspend their judgment, and virtue responsibilist approaches to resistance to evidence. We talk about resistance to evidence as input-level epistemic malfunctioning, the phenomenon of defeat, epistemic dilemmas, and skepticism. Finally, we discuss misinformation and disinformation, and an approach to disinformation as ignorance-generating content.
Time Links:
Intro
What is resistance to evidence?
Epistemic vigilance
Positive epistemology
What is evidence?
The normative aspects of resistance to evidence
Suspending judgment
Virtue responsibilist approaches
Resistance to evidence as input-level epistemic malfunctioning
Defeat
Epistemic dilemmas
Skepticism
How to approach disinformation
Follow Dr. Simion’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of The Dissenter. I'm your host, Ricardo Lopes, and today I'm joined by Dr. Mona Simion. She is Professor of Philosophy and Director of the COGITO Epistemology Research Centre at the University of Glasgow, and today we're talking about her book, Resistance to Evidence. So, Dr. Simion, welcome to the show. It's a pleasure to have you on.
Mona Simion: Thanks so much for having me.
Ricardo Lopes: So, let's start perhaps with a basic question here. Could you start by telling us what resistance to evidence is, what it means, and, as a philosopher, why do you think we need a philosophical account of this sort of phenomenon?
Mona Simion: Yeah, so I think what's important to note when you look at the landscape of people resisting scientific evidence is that, epistemologically, it's a very complicated landscape. So it's not the case that everybody who resists scientific evidence is crazy, I guess is what I'm saying. Some people, of course, are doing it for no good reason. Some people are doing it for bad reasons, like, you know, from motivated reasoning: they just don't like to believe that stuff, so then they don't. But some people, as a matter of fact, are in a much more complicated evidential situation, where they find themselves not knowing exactly what to believe, because they get contrary evidence from different sources. So depending on your evidential environment, what evidence you have available to you, what your environment tells you is the case, and so on, it may well be that your rejection of scientific evidence is not irrational evidence resistance. But this is a distinction that is very often not made, in mass media, for instance, but also not in research. So, for instance, in social psychology, the assumption has always been that there's going to be some source of irrationality that explains this phenomenon. That's why I think a careful epistemological investigation that is able to draw the relevant distinctions between rational and irrational rejection of evidence is important. The way I use this terminology in the book, in order to make the distinction clear: I call irrational resistance to evidence "evidence resistance", and I call cases in which you reject evidence for good reasons just "evidence rejection", in order to signal that these two are very different phenomena.
Ricardo Lopes: Mhm. But, again, the second part of my question, which I don't think you addressed directly: why do we need a philosophical account of it? I mean, we could just go with, for example, a psychological account or something like that.
Mona Simion: Psychological and social psychological accounts are descriptive accounts, and these important epistemological distinctions are normative distinctions, right? Psychology doesn't know how to do normativity; that's the kind of thing that philosophers know how to do. What psychologists know how to do is describe what they see on the ground. What they struggle to do is see the normative distinctions between two phenomena that look exactly the same descriptively but are normatively different. The example that I was giving you was irrational versus rational rejection of evidence. That is normative; rationality is a normative phenomenon. If you just look at what people do on the ground, you're not going to be able, as a psychologist, to know whether what they're doing is rational or irrational, and this is exactly what we see when we look at social psychological studies of this phenomenon. Just to give you one example, the most popular, or at least hottest, hypothesis in social psychology is that this phenomenon is vastly generated by politically motivated reasoning: that people reject scientific evidence when it suits them to do so, because it doesn't fit nicely with their political identity. So, you know, if you're a far-right person and part of your political identity is to reject that climate change is happening but to accept that guns are safe, the thought is that no matter what evidence you bring, these people are going to reject it because of motivated reasoning. But if you look carefully at the experimental setup, you notice that the claim is not actually supported by the data on the ground, in that the people being tested do have the relevant political identity, but that identity is also correlated with a vast amount of background evidence that supports the relevant claim, right? One example: if you grew up in an American state where, since you were small, you've been given all kinds of reasons to believe that guns are important for your safety, you're going to have a lot of testimonial evidence that guns are safe, in a way in which I, in Glasgow, am not going to have any of that, right? So the ways in which we're going to deal with evidence having to do with gun safety are going to be, rationally so, very different, because we have different background evidence. And this is the kind of stuff that social psychologists cannot capture in their experimental setting without collaborating with an epistemologist who understands the difference between the two types of evidential settings, basically.
Ricardo Lopes: So, in regard to this type of phenomenon, I've had people on the show like Hugo Mercier and Dan Sperber, and they work on a phenomenon that they call epistemic vigilance. Does that relate in any way to resistance to evidence, or not?
Mona Simion: Yeah, so in the book, I consider whether that might be one way to explain what's going on. Here is how that might explain it. What Mercier and Sperber call epistemic vigilance is basically a property that they stipulate we have, and they think the stipulation is made plausible by all kinds of phenomena whereby, they think, we are good at filtering good from bad testifiers, for instance. So we have this kind of mechanism that filters for defeaters, as it were, for evidence that our testifier is not trustworthy, right? And they think that's what explains, in a way, our success as a species, because of course we rely a lot on each other's testimony for the knowledge that we have. Their hypothesis is that if we didn't have this vigilance mechanism, it would be a mystery how we succeeded as a species, given how much we need to rely on each other. So if you think this faculty of epistemic vigilance exists, one way you might try to explain what's going on is to say: look, our cognitive capacities evolved in a very different information environment from the one we inhabit right now. They didn't evolve to deal with the kind of high-density, high-choice information environment that we are now inhabiting because of the internet. Because of that, they might be malfunctioning, which is to be expected for any functional item that is moved out of its normal conditions into abnormal conditions. So you might think: well, we have these mechanisms for epistemic vigilance, but they work quite well in a normal environment, where our testifiers are people we meet in person, where we can maybe read signs of deception on their faces, and where we know whom to trust and whom not to trust because they have a particular social profile that we're aware of. It's much easier, as it were, to be vigilant if your testifiers are all flesh-and-blood people whom you meet in the street and know about from your neighbors and so on. It's much harder to be vigilant online, where everything is more or less anonymous, right? And the amount of information coming at you is also hard to process: it's much easier to process one or two testifiers every minute as you walk down the street than the amount of stuff that comes at you on the internet. So maybe what's going on is that people don't know what to believe anymore; these epistemic vigilance faculties are misfiring. They're making you skeptical when you shouldn't be, because they just don't know how to fire well anymore, since they haven't evolved to deal with this kind of craziness. So that's one explanation that you can give. Now, I have two worries about this explanation. One is that, again, it stipulates that everybody who does it is malfunctioning. Everybody who rejects scientific evidence is malfunctioning, right? And that implies a widespread irrationality assumption, which is very implausible. We are a very successful species.
Given how well we're doing in the world, to think that our cognitive mechanisms are failing in this way on such a large scale would be very surprising, right? Given how much epistemic resource we need in order to be so successful. But the more important problem is that a lot of psychological studies suggest that we're actually very bad at what they call vigilance. Even in normal settings, with our friends and neighbors, we are extremely bad at detecting deception. Indeed, we are basically just a bit above chance, so it's close to a coin toss whether we detect deception or not. We think of ourselves as being much better at detecting when someone is lying, and so on, than we actually are, as it turns out. Now, is that always true? No: we are much better at detecting deception in contexts where we have a lot of information. When we know a person really well, maybe we've finally gotten used to the ways in which they behave when they lie, or we have all kinds of other contextual information, like, you know, we know that there's something fishy about this person, or we know that whatever they say doesn't fit with whatever this other person says. Then we are good at detecting deception, but that is to be expected, of course, because that again is explained by the background evidence that we have. So basically, given these two worries, I'm thinking that the explanation in terms of our vigilance mechanisms failing is not likely to be the best explanation of the data.
Ricardo Lopes: So just to ask you briefly about the kind of approach that you bring to your book when it comes to resistance to evidence: at a certain point, you say that you approach it through positive epistemology instead of negative epistemology. What does that mean, exactly?
Mona Simion: So it's a term that I borrow, gratefully, from Jonathan Jenkins Ichikawa. The tradition in epistemology for 2,000 years, and this is very surprising to me when you think about it, has been very much focused on epistemic permissions rather than epistemic obligations, which is not the case in other normative fields, right? In morality, for instance, we care a lot about obligations. So it's very strange that we would have a normative field where we don't discuss obligation, where we only discuss permission, for 2,000 years, and you might wonder why that's the case. Here's what happened. People throughout the history of epistemology, very clever people, assumed that believing is risky. So forming a belief is like jumping, right? You're taking a risk. And then they kept asking the question: when is it safe enough to take the jump, as it were? When do you have enough evidence? When are you justified enough to take this jump into believing? And you can see why people might think this way, prima facie, because you might think: well, as soon as you believe, you do a lot of stuff with that belief. You start asserting it, you start acting on it. So that's why you should be really careful before you jump into the belief. This assumption led people to focus on when belief is permissible, but not on when belief might be obligatory given your evidential situation. So in that sense, it's been a 2,000-year history of negative epistemology, in that people have debated what restrictions we should put on our forming of beliefs: how should we restrict it better, such that when we finally jump, it's safe to jump, as it were? I think the most recent phenomena having to do with evidence rejection have pointed out to us that we were wrong to focus only on problems arising from jumping too quickly, as it were. There are also problems arising from not jumping when you can jump, right? And that is exactly what phenomena like vaccine skepticism and climate change skepticism have shown us: that sometimes you should form your belief and go ahead and act, go ahead and get the vaccine, because you might die if you don't. So not forming the belief is hardly the safe option. It is not safer epistemically, it is not safer practically, and your health will not benefit from it. And the same with climate change denialism, right? So I think that very recent phenomena in the landscape have shown us that it was a mistake all along to focus only on epistemic permissions rather than obligations. I also think that what these phenomena have helped us theoreticians see is that this focus on negative epistemology was completely unwarranted to begin with, because when you don't form a belief, you normally form a suspension. That's what you do: you suspend belief, right? A suspension is also a doxastic attitude, just like belief. It's a different kind of doxastic attitude, but it's still a doxastic attitude. So the assumption that one particular doxastic attitude comes for free, that you don't need any evidence for it, that it doesn't need to be properly justified, was, if you think about it, completely crazy all along.
Of course, a suspension is going to be a good suspension, a permissible suspension, only if it fits your evidence properly, just like belief. So I guess it's one of those very interesting cases in which the real world has informed the theory and has made it better for it, I think.
Ricardo Lopes: So, we're also going to talk a little bit about suspending judgment, or suspending belief, later on in our conversation. But just before that, I guess there's another important question to address here. What is evidence? Particularly from a philosophical perspective, what are some of the most common accounts of evidence out there, and how do you suggest we should approach it?
Mona Simion: Yeah, thanks for the question. So this book is an old project; it didn't start just yesterday. The project started, in a way, before the phenomenon of evidence resistance became such a well-known problem publicly. I was just trying to develop a better account of evidence. This was a project about evidence; that's all there was to it. And the reason why I was trying to develop a better account was that I wasn't satisfied with the accounts that were available on the market. Of course, people in philosophy disagree about everything, so there's not a lot of stuff that people agree on. But here is something that you find, with very few exceptions, throughout the history of epistemology. When it comes to what it is for someone to have evidence, people tend to think that the having-of-evidence relation refers, in one way or another, to the evidence being in your head. So that's what people believe in epistemology, if you look throughout history: either you have the evidence if, for instance, you believe it, so I have evidence that there's a computer in front of me because I believe that there's a computer in front of me; or because I have a belief and it's justified; or because I know it, so I have evidence that there's a computer in front of me because I know that there's a computer in front of me; and so on. These are doxastic accounts, but you can also have an account that's not doxastic, that doesn't imply belief, where your evidence is your seemings. Whatever hits my eye and generates a seeming as of a laptop in me, that's my evidence that there's a laptop on the table. This is not about having beliefs, but it's still in the head, right? So the assumption is always that if you have the evidence, it's got to lie somewhere within your skull, as it were. And that's an assumption, interestingly, that's shared by camps that are quite opposite on all other fronts in epistemology; internalists and externalists share this assumption for the most part. Now, here's what I found very strange about this assumption: it doesn't fit at all with our folk conception of having evidence. Suppose I talked to my grandmother tomorrow, and she said something like, "Well, why didn't you buy any carrots at the market, as I asked you to?" And I said, "Well, I didn't have any evidence that there were carrots at the market." And she would go, "Didn't you see the carrots on the table at the market?" "Well, no, because, you know, I just couldn't believe my eyes that there were carrots at the market. So since evidence is belief, I didn't have any evidence that there were carrots at the market." I don't think my grandma would be very satisfied with this explanation. And now, leaving my grandma aside, it looks like even in legal contexts we don't use evidence in that way, right? Imagine a detective who's giving testimony on the stand, and the judge asks them, "Did you have evidence that the butler killed the victim?" And the detective says, "No, I didn't have any." "Well, how do you mean? Didn't you go to the crime scene? Didn't you see all this stuff that suggested that the butler did it?"
"Well, I saw it, but I couldn't believe my eyes, because the butler is such a good friend of mine, and I couldn't believe that he would do such a thing." Again, I don't think that would be acceptable in court. So it does look as though accounts that take having evidence to have something to do with believing it depart quite abruptly from the folk conception, right? And when it comes to seemings, the problem doesn't disappear either. We know, for instance, that our seemings tend to be penetrated by all kinds of biases. So, you know, we perceive black faces as being more dangerous than white faces, for instance. We get those kinds of seemings that are sourced only in our bias; they're not justified in any way. One result that we don't want is to say that as soon as you're biased, you have evidence that black people are more dangerous than white people, because that's clearly not the result that you want, right? You need to go back to the drawing board if that's the result that your theory gets. So because of this dissatisfaction that I had with extant accounts of evidence, I thought: look, we need an account that doesn't place having evidence in your head. Having evidence is just something having to do with availability, right? If it's very readily available to you, you have it. If it's lying on the table in front of you, you have evidence, right? Whether you choose to disbelieve it or not is completely irrelevant. So those are, very roughly, the accounts that exist in the literature, and that's what motivated me to put together a new account that is based on this notion of being in a position to know. You don't need to know these facts in order for them to count as part of your evidence. You just need to be in a position to know them, in a position to basically take them up in your cognitive system.
Ricardo Lopes: Earlier, in my very first question, I asked you why we need a philosophical account of resistance to evidence, and you mentioned that science, psychology more specifically, doesn't deal, at least directly, with the normative aspects of this sort of phenomenon. But what normative aspects of resistance to evidence do you consider, and what sort of normativity are we talking about here, exactly? Is it social normativity, moral normativity, or some other sort of normativity?
Mona Simion: Yeah, good. So one thing that people have thought for a long time is something along these lines. Look: when it comes to forming beliefs, we don't have obligations to form them, because we don't have control over our belief formation. Beliefs just happen automatically to us. As soon as I see this laptop, I believe it's there, whether I want to or not. And for the longest time, people in philosophy mistakenly thought that an unrestricted version of the following principle is true: ought implies can. You cannot have an obligation if you can't do it. And this sounds plausible at first; it's a principle notably put forth by Kant. But recent results in ethics strongly suggest that it's a false principle. I mean, there's something true to it, but it needs to be restricted, right? For instance, just because I can't treat women and black people well, because I'm a racist and a sexist, it doesn't follow that it's not the case that I ought to treat them well, right? So clearly the unrestricted principle is false. But because people believed this principle and assumed it, they thought there was no way there could be oughts to believe something. So then how do we explain this resistance to evidence and what's wrong with it? People thought: well, it has to do with the breach of some other type of normativity. It can't be epistemic normativity that's breached; it's got to be social or moral normativity. So some accounts in the literature try to suggest that what is going on is that we inhabit particular social roles, and the social roles come with social obligations that are sometimes social epistemic obligations, because, you know, we all depend on each other for information, and we can't really cooperate with each other unless we do this kind of information exchange well. But in order to do this information exchange well, it had better be the case that we also, privately as it were, do our information uptake well, because otherwise we're not going to participate in this exchange in a valuable fashion. So the thought was: look, this resistance to evidence is not epistemically problematic, but it is socially problematic, because you're going to mess up the entire epistemic landscape, as it were, when you don't take up the evidence that you can and put it in the common basket of epistemic resources. That was one explanation that was offered. Another explanation that was offered: look, it's just a moral problem. Take the case that I gave earlier, where you have all the evidence in the world that the black person in front of you is smiling and being very kind, but because you're a racist, you have the impression that they look angry. Some epistemologists thought: look, as soon as your seeming is that way, you're justified in believing that they're angry, so epistemically you're doing nothing wrong; but morally, of course, it's bad, because the reason why you have this seeming is that you're a biased, racist person, and being a racist is bad. So, you might think, why don't these two explanations suffice? Why do we need to make the problem epistemic, like my account does?
Very quickly, about the social explanation: the reason why it doesn't work is that if you want to say that the only thing wrong with this is social, the problem is that sometimes social norms are bad. Not all social norms are good; to the contrary, many of them are bad. One thing that you don't want to say is that as soon as the social norm says something like "don't believe women", that makes it OK not to believe women, right? So resistance to evidence coming from women would be socially OK, and if epistemically you're also doing nothing wrong, then what's the problem? You know what I mean? So that's the problem for the social explanation, as it were. The problem with the moral explanation is even worse. First of all, because you can easily cook up cases where it's morally good, or even required, to resist evidence. I have a case in my book of, basically, partiality in friendship. Many people in ethics think that we owe it to our friends to be a bit more skeptical about accepting evidence of their wrongdoing. If someone comes and tells me that my friend just killed someone in the street, I should be skeptical, right? No, my friend doesn't do such a thing. Whereas if they come and say the same thing about a stranger, I would just say, OK. So if it's true that we have a moral obligation to our friends to be a bit more skeptical when it comes to this kind of stuff, then it would look as though there are cases where resistance to evidence is morally good, but you still want to say that it's epistemically problematic, right? Remember the case of the detective that we were talking about earlier. If the detective comes and says, "I didn't believe any evidence that the butler did it, because the butler is a friend of mine, and I can't believe these things about my friends," that's not a good thing epistemically, right? So that's one very serious problem. Another problem is that we know from a long history of ethics that moral responsibility has an epistemic condition on it. If you are to be, for instance, blameworthy, it had better be the case that you meet some sort of epistemic condition; otherwise you're going to be blamelessly ignorant, right? If you did everything in your power to investigate a particular topic and you still ended up with a false belief, you're not going to be morally responsible, because you did your job well. But of course, if there is an epistemic condition on moral responsibility, and we want to claim that what's problematic about resistance to evidence is a matter of moral responsibility, then we're just getting back to the same problem, because moral responsibility implies an epistemic condition, so we still need to answer the question: OK, what's epistemically wrong, as it were? So that's why these two ways of explaining the data are not going to work.
Ricardo Lopes: Mhm. So, let's talk now a little bit about suspended judgment. That is something that you touched on a little earlier, and you mentioned the examples of climate change and vaccination, two situations where perhaps people shouldn't suspend their judgment very much, because it has very direct consequences for their health, and for our economy, environment, and so on. But are there situations where you think that it is, or would be, epistemically permissible for us to suspend our judgment? What would be your account of that?
Mona Simion: So what I do in the book is consider all kinds of ways of accounting for what it is for suspension of judgment to be permissible, and in particular I distinguish between two things. It's one thing to suspend judgment, and it's another thing to be completely neutral. Sometimes people don't talk much about this distinction, and they don't make it clear whether their view is about one or the other. Suspending judgment is just something that you are permitted to do in cases in which you don't have enough evidence to support full belief, to go ahead and have an outright belief that something is the case. If you don't have enough evidence for that, then you're fine to suspend judgment, where suspension of judgment just means not believing it, basically being in an attitude of not believing, not fully believing. Neutrality is a completely different story: neutrality is being completely neutral on whether something is the case or not, being at fifty-fifty, as it were, right? And why do I say this is very important? Well, take the case of vaccines. You hear vaccine skeptics very often saying things like, "Look, there's a lot of evidence that vaccines are safe, but I've seen some people saying the contrary as well. So until I'm certain, I'm not going to take the vaccine, because I want to be certain before I put that thing in my body." Well, that's a mistake. Say that you are in an evidential environment where you have very many testifiers giving you misleading evidence against the safety of the vaccine, right? If these testifiers have a good track record, you're right to trust them. You might be in a situation in which you don't have enough evidence available to you to fully believe that the vaccine is safe. Scientists are saying that it's safe, but then your family, who are otherwise people that you trust, and trust for good reason, they're reliable, they care about you, say that there's something fishy about it, that the scientists are motivated by some industry funder, and so on, right? So maybe you are in an evidential environment in which you don't have enough evidence for full belief. Then it's OK to suspend judgment. Here's what is not OK: to be fully neutral, because that is only OK when your evidence really is 50/50, when you have exactly 0.5 on one hand and 0.5 on the other. And this matters because what we need for action is not full belief. What we need for action is just enough confidence, given what the value of the outcome is to us. So let's go back to vaccines. There's no option to do nothing. You either go and take the vaccine, or you decide not to take it and you remain vulnerable to the virus. There's no midway between these two, right? So what you need to do now is look at the evidence for vaccine safety that you have. Say that it's not enough for full belief in your case: say that full belief requires a threshold of 0.9 probability, and you are at 0.7, so you don't have enough for full belief. But now think about it. You have 0.7 probability that it's going to be OK if you take the vaccine, right? How much probability do you have that you're going to be OK if you don't take the vaccine?
Well, that's not great, right? There are a lot of people dying without the vaccine. So I guess what I'm saying is that even in cases in which suspending judgment is justified, neutrality is not justified, and if neutrality is not justified, you should still go and take that vaccine, because you still have enough evidential support to act. So that's why I draw this distinction and discuss it in the chapter, because very often you see epistemologists focusing on either one or the other, depending on whether they are formal epistemologists or more traditional epistemologists. So I think these two distinctions are very important to the topic.
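To make the arithmetic of this example concrete, here is a minimal decision-theoretic sketch in Python. The 0.7 credence and the 0.9 full-belief threshold come from the conversation; the utility values, and the simplification of the no-vaccine branch to a single payoff, are illustrative assumptions, not figures from the book.

```python
# Minimal sketch: a credence below the full-belief threshold can still
# provide more than enough evidential support to warrant acting.
# All utility numbers below are illustrative assumptions.

credence_safe = 0.7          # your credence that the vaccine is safe
full_belief_threshold = 0.9  # stipulated threshold for outright belief

u_vaccinated_safe = 100      # take the vaccine, and it is safe
u_vaccinated_unsafe = -50    # take the vaccine, and it turns out unsafe
u_unvaccinated = -80         # skip it and remain vulnerable to the virus

# Expected value of each available action (there is no "do nothing"):
ev_take = credence_safe * u_vaccinated_safe + (1 - credence_safe) * u_vaccinated_unsafe
ev_skip = u_unvaccinated

print("full belief warranted:", credence_safe >= full_belief_threshold)  # False
print("EV(take) =", ev_take, "| EV(skip) =", ev_skip)                    # 55.0 | -80
print("acting warranted:", ev_take > ev_skip)                            # True
```

So suspension of full belief is permissible here, but neutrality, and the inaction it is taken to license, is not: the expected values still clearly favor acting.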
Ricardo Lopes: So in the book, you also address at a certain point what you call virtue responsibilist approaches to resistance to evidence. What are these kinds of approaches, and what do you think about them?
Mona Simion: Yeah, so virtue responsibilism, you might think, on the face of it, has fantastic resources to deal with the phenomenon, because what virtue responsibilists care about are virtues and vices, basically epistemic virtues and vices. So things like open-mindedness or curiosity are supposed to be virtues, and things like dogmatism are supposed to be vices. And the thought would be: well, isn't this resistance to evidence clearly just a case of dogmatism, or a case in which the person is not open-minded enough? Isn't that the problem? Now, it may well be that sometimes in these cases of evidence resistance there are manifestations of these vices. It may well be that the person in question is a dogmatic person. You see this, for instance, in the fact that we know from science that cognitive flexibility, so, in a way, our open-mindedness as it were, decreases with age. That is one explanation for why you see that as people become older, they become more entrenched in their opinions and harder to move out of their opinions. One explanation for that can be purely epistemic: they have gathered so much evidence for their beliefs over a long lifetime that it's harder to change now, because it's harder to defeat that evidence. But another explanation, unfortunately, is that with age we all get less cognitively flexible, so we're less open to other people's views. And you might think: well, that's exactly the phenomenon that we're looking for, right? This person is less open-minded and more dogmatic because of this decreasing cognitive flexibility. So that's what's going on. Unfortunately, this phenomenon doesn't need to happen in people who are vicious, and indeed, again, stipulating that everybody who resists scientific evidence is a vicious cognizer is a bit much, because it's a very widespread phenomenon. And again, we are a highly successful species, which implies that we are actually quite good cognizers. If viciousness were so widely spread, we wouldn't be doing that well. What tends to happen a lot, in particular, is that perfectly fine cognizers, perfectly virtuous cognizers, make mistakes on a particular topic, right? Go back to the social psychological explanations in terms of motivated reasoning. It might be that you're a fantastic cognizer in all walks of life, but when it comes to things that affect your political identity, you just don't reason well anymore, right? You're going to believe all kinds of rubbish just because you want to preserve your political identity. But that doesn't mean that you're a vicious cognizer in general, because these vices imply quite a strong disposition toward failure; you need to be quite bad to be properly describable as vicious. And it can also just be a one-off failure. We want to be able to explain a completely unmotivated one-off failure, a one-off instance of resistance to evidence. It might be that, just on one occasion, I don't appreciate the evidential situation properly. We are reliable cognizers, but not infallible, and that just implies that we fail sometimes. And when we fail on just one occasion, we still want to be able to explain what went wrong there and to predict that something did go wrong.
But if our only resource is to say "well, it was a manifestation of vice", well, there is no manifestation of vice: by stipulation, this is a perfectly fine cognizer who just failed once. So that's why I think that although, prima facie, it might look like virtue responsibilism has fantastic resources to account for this phenomenon, to the contrary, the danger is that it's going to predict too much viciousness in the population in order to be able to account for all the instances of this phenomenon.
Ricardo Lopes: So, you also characterize resistance to evidence in the book as an instance of input-level epistemic malfunctioning. Could you explain that terminology, and particularly what malfunctioning means in this specific context?
Mona Simion: Yeah, so what my book says is: look, we need to distinguish between evidence resistance and mere evidence rejection. Evidence rejection doesn't need to be evidence resistance; it can be perfectly justified, right? If my evidential environment is populated with a lot of misleading evidence against the safety of vaccines, I am perfectly justified in rejecting the evidence of their safety. Is it true that they're not safe? No, of course it's not true. But we are, as a matter of fact, fallible cognizers, and very often we are misled by misleading evidence. And when that happens, we are responding properly to our evidence, because misleading evidence is evidence, right? When we are being misled by it, it is a good and rational reaction to go with this misleading evidence, and we shouldn't try to change that, because that's the normal way to function epistemically: to respond to your evidence, since of course you don't know that it's misleading. So what I'm conjecturing, although of course this is an empirical hypothesis, is that in most cases of evidence rejection, what we have is justified and rational evidence rejection, because of these kinds of environmental problems. That's not to say that it's always the case. We know that there are cases of evidence resistance, for instance from motivated reasoning. I'm not saying those cases don't exist. All I'm saying is that they're going to be much more isolated, which means I don't need to stipulate this crazy, widespread irrationality hypothesis. But I still need to explain what's happening in these isolated kinds of cases, and what I'm saying is: look, the proper function of our cognitive system includes proper evidential uptake. My cognitive system is not properly functioning if there's a computer lying straight in front of me and I can't believe that there's a computer lying straight in front of me; something went really wrong. Imagine how you would feel right now if I told you, "Actually, I'm not sure there's a laptop in front of me right now." You'd really think that something is going wrong, quite significantly. The way I think about this is that some of our bodily systems, as it were, are input dependent. For instance, our lungs: if there's oxygen in the environment, readily available, and they don't take it up, that is a sign of malfunction. Something has gone wrong with our lungs. Similarly, I think our cognitive systems are malfunctioning if they don't take up very easily available evidence from the environment. So there's nothing strange, as it were, about our cognitive systems; some of our systems are input dependent in that way. If they don't take up readily available inputs, they're malfunctioning. So in that sense, it is an input-level type of malfunction: when you are the kind of system that's supposed to take up something easily available from the environment and you don't, that is one way in which you can malfunction.
Ricardo Lopes: So another topic that you address in the book is defeat. What does defeat mean, and how do you approach it?
Mona Simion: So defeat is a fancy epistemological term for, simply, evidence against something. We always like to have technical terms to pretend that we're serious. But what is interesting about defeat, I think, particularly when it comes to this resistance-to-evidence phenomenon, is that it comes in two broad types of interest to us today; I'm simplifying the phenomenon somewhat, but this is what's important for us. There's rebutting defeat, and then there's undercutting defeat. Again, very sophisticated technical terms for actually quite simple phenomena. Rebutting defeat is evidence against a particular proposition. The case is one where, for instance, you have a testifier who says that it's raining outside, and you have another testifier who says it's not raining outside. The second testimony is a rebutting defeater of the first, because it's evidence against the proposition that was asserted by the first one, right? The proposition is "it's raining outside", and this is a defeater for your evidence that it's raining outside, because it says it's not raining outside. The most interesting type of defeat for us, however, is not so much rebutting defeat; it's undercutting defeat. This is a very pernicious sort of situation that can lead to very widespread problems, in that this kind of defeat doesn't affect the probability of the proposition for you; it affects the probability that your testifier is a good testifier, the probability that your source is a good source. What this defeater does is tell you: your source is rubbish. The case is one, for instance, where you have a testifier who comes and says it's raining outside, and you have another one who comes and says, "Don't believe anything that George says, because George is a compulsive liar about meteorology." The reason why this is a more dangerous form of defeat is that it doesn't affect only one proposition; it affects everything that George says from now on, right? If this person comes and says George is a compulsive liar, you're going to be worried about everything that George says from now on. So it's a much more problematic form of defeat, or a good form of defeat in cases in which George actually is a compulsive liar. But you can see how, in the cases of interest here, like vaccine safety and climate change, the difference in danger between these two types of defeaters is huge. It's one thing to have your evidence from the scientists that vaccines are safe and climate change is happening, and then a couple of people telling you otherwise, and you weigh these things against each other; the evidence in favor of the safety of vaccines and of climate change happening is hugely more weighty than the testimony of the three people around the world who deny these claims, right? If that were the only problem, I don't think we would have a lot of rejection of scientific evidence, because it is overwhelming how much evidence there is that vaccines are safe and climate change is happening. Unfortunately, the way it works, if you look on the ground, is that the people trying to deny these claims don't come and say just "vaccines are unsafe". They come and say: don't trust anything that
public health authorities say about vaccines, because they have a vested interest in selling you these vaccines to make money for the vaccine industry, or because they have a vested interest in generating herd immunity, so that even though they know the vaccine is not safe for some people, they're not going to tell you, because they just want to generate herd immunity. As soon as this happens, no matter how many scientists tell you that vaccines are safe, their entire testimony is undercut, because now you don't trust any of them, and you don't trust anything that they say. You don't trust them on vaccines, you don't trust them on climate change, you don't trust them on anything anymore. So you can see how that's a much more dangerous type of defeat, and you can see it on the ground as well, because it's very pervasive, right? This is the kind of discourse that you hear: don't trust scientists, don't trust mainstream media. The misleading evidence that is brought in on these policy-relevant topics of high interest very rarely actually targets the proposition itself, "vaccines are safe". Very rarely would you see a far-right person coming and saying, "Actually, there's a study in Nebraska where they tested this vaccine on people with blue eyes, and it turned out that it's actually not safe for them." No, what they come and say is usually: don't trust public health authorities, because they're just trying to poison us all in order to give some money to the industry, or something like that. So this is the kind of defeat that is mostly problematic, and what this defeat does, as I said, is decrease your probability for the trustworthiness of the source. That's how it becomes very problematic.
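To see the structural difference in miniature, here is a toy Bayesian-flavored sketch in Python. The mixing rule and all the numbers are illustrative assumptions, not anything from the book; the point is only that lowering trust in a source drags down every claim that source testifies to, while rebutting evidence touches one claim at a time.

```python
# Toy model: testimony supports a claim only to the degree the source
# is trusted. Numbers and the mixing rule are illustrative assumptions.

def credence(prior, trust):
    """With probability `trust`, defer to the source (who asserts the
    claim); otherwise fall back on the prior."""
    return trust * 1.0 + (1 - trust) * prior

prior = 0.5   # credence before any testimony
trust = 0.9   # initial trust in the scientific source

claims = ["vaccines are safe", "climate change is happening"]
print({c: credence(prior, trust) for c in claims})
# {'vaccines are safe': 0.95, 'climate change is happening': 0.95}

# A rebutting defeater is counter-evidence against ONE claim; the
# source's testimony on every other claim is left untouched.

# An undercutting defeater ("they have a vested interest, don't trust
# them") collapses `trust` itself, dragging EVERY claim from that
# source back toward the prior at once.
trust_after_undercut = 0.2
print({c: credence(prior, trust_after_undercut) for c in claims})
# {'vaccines are safe': 0.6, 'climate change is happening': 0.6}
```

One undercutting message thus does the work of many rebutting ones, which is why she describes it as the more dangerous and more pervasive form.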
Ricardo Lopes: So, another thing that you address in your book is epistemic dilemmas. What is an epistemic dilemma, and why is it important for us to address it in this context of resistance to evidence?
Mona Simion: Yeah, so there are two reasons why you need to talk about epistemic dilemmas if you write this book. One reason is purely theoretical. As soon as you postulate that there are such things as obligations in a normative domain, you might get dilemmas, right? Permissions never generate dilemmas. If one norm says you're permitted to jump in the lake, and another norm says you're permitted to go to lunch, then whether you choose to jump in the lake, or go to lunch, or do something altogether different, none of these norms will be breached, right? Because they just permit; they don't ask you to do anything. When you have obligations, however, if you have two that cannot be met at the same time, they come into conflict. The classic case: you have promised a friend to go to lunch, but on the way you encounter a drowning child. Now, evidently, you need to jump in and save the child; the moral obligation to save the child takes precedence. But at the same time, you will have broken your obligation to keep your promise to your friend. That's not a dilemmatic situation; it's just a normative conflict situation, in the sense that it's pretty clear which norm you should follow. You're not left wondering which one to go with; it's fairly evident that you should go with saving the child. But I guess what I'm saying is that as soon as you have obligations, they might come into conflict, and when they come into conflict in a fashion where it's not really clear which one is stronger than the other, unlike in the case I just described, you might be faced with a dilemma. So, as opposed to the lunch-and-child case, consider a situation in which you're on your way to lunch, and now you have two children drowning, one on the left and one on the right, and you still have the promise to keep. The promise-keeping norm is completely overridden, so that's out the window; you're fine being late for lunch. But you're still left with these two norms: one says save the child on the left, the other says save the child on the right. And it seems like you're in a normative dilemma, because these norms are equally strong, and no matter what you do, you're going to break a norm that's as strong as the other, right? So the question is: we know about moral dilemmas, because we've studied moral obligation for 2,000 years, but we don't know enough about epistemic dilemmas. And now that you're postulating these epistemic obligations, aren't we going to get dilemmas all over the place in the epistemic domain? And I guess what I'm saying is: well, first of all, no, not that much. I argue in the chapter that epistemic dilemmas are actually not easy to come by, because in epistemology you can always suspend judgment. You don't need to either believe P or believe not-P, or something like that. As soon as you have that resource, epistemic dilemmas are harder to come by. If anything, what is maybe easiest to come by is epistemic trilemmas, and I give some examples in the book. But going back to our phenomenon of resistance: that was just the reason why it's theoretically interesting, but why is it practically interesting for the phenomenon of evidence resistance? Well, because very often you hear people say things like, "I don't know what to believe anymore."
"Some people say vaccines are safe, some people say they're not safe. I'm going to throw my hands in the air and not believe anything anymore." And what I'm saying is: no, actually, the situation in which it's OK for you to do that, one in which you are faced with a genuine epistemic dilemma about believing or not believing something, is not something that occurs very often, so it's not going to be a problem. Usually, when people do this, they do it unjustifiably. It's not a phenomenon that is widespread, basically.
Ricardo Lopes: So, I have two more topics here that I would like to ask you about. The first one is skepticism. First of all, what does skepticism mean in this kind of context? Because I guess we could argue that there are different kinds of skeptics out there. And is it possible that, at least in certain circumstances, in certain situations, skepticism is also a form of resistance to evidence?
Mona Simion: Yeah, so I think, you know, sometimes skepticism is warranted. If you live in an environment in which you have a lot of evidence that P is false, skepticism about P is justified, right? And of course this evidence may all be misleading, but even so, it's evidence, and you are supposed to update your beliefs according to your available evidence. So it may well be that skepticism is sometimes warranted, be it about vaccines or about the existence of the external world, whether you're an epistemologist or someone who's interested in vaccine uptake. But what I'm trying to argue against in that chapter is something that has been assumed in the literature on skepticism in epistemology, which is that, in a way, the skeptic has the safe position, because the skeptic is just sitting there, not believing anything and wondering: why should I believe that the world exists? For all I know, maybe it doesn't; my evidence is compatible with both. And other such motivations. What I'm saying is: no, actually, you have evidence that the world exists, and you should take it up and believe that the world exists. So what I'm basically arguing in the chapter is that this assumption that skepticism is the safe position, and that it is on us, non-skeptical epistemologists, to explain to the skeptic why it's OK for them to take the jump and believe that the world exists, and so on, is exactly the same mistaken assumption that we had about suspension: it relies on the assumption that suspension is the safe position. As soon as we discovered that suspension of judgment is not safer than believing, that it also requires warrant, we discovered that the skeptic doesn't have anything on us, because being a skeptic also cannot be done without justification. That's it, in a nutshell, I guess.
Ricardo Lopes: Mhm. So, the last thing I would like to ask you about: toward the end of the book, you also talk about misinformation and disinformation, which is something that people nowadays are worrying a lot about. Taking into account the main topic of the book, resistance to evidence, why do you also approach these questions surrounding misinformation and disinformation, and in what ways do you approach them?
Mona Simion: Yeah, the reason why I think this topic needed to be approached in this book is that these two phenomena, scientific evidence resistance on the one hand and disinformation and misinformation on the other, are not independent of each other. They have mutually reinforcing patterns. So what's going on is: the more disinformation or misinformation you have in your environment, the more misleading evidence you have, with the result that the more justified you're going to be in resisting scientific evidence. On the other hand, as soon as you no longer trust scientists and expertise in general, you're more vulnerable to disinformation campaigns, because you don't have any expertise to fall back on anymore. So these are two phenomena that cannot be studied in isolation. I think the fact that they are currently studied in isolation is a big problem, and it is going to hinder progress quite a bit. That's why, at the end of the book, I say: look, let's look at the nature of information, misinformation, and disinformation, in order to understand how these mechanisms, scientific evidence rejection and disinformation, reinforce each other. And here's a funny thing that's been happening in this literature. For the longest time, people all over the scientific spectrum who study misinformation and disinformation and their flow have used a dictionary definition to do it, which is really funny, especially in domains that are otherwise quite critical, philosophy included, that you would work with the dictionary definition. I mean, if we were working with the dictionary definition of knowledge or justification, we wouldn't need epistemology anymore. And the dictionary definition of disinformation, unsurprisingly, is false. It's extremely anthropocentric, because it's an old definition, and it gets things wrong. The way it's commonly defined is as a kind of false content that is spread with the intention to mislead. That's roughly the definition that you find out there. And Don Fallis, my colleague from the US, did work a bit on this and said: well, actually, it doesn't need to be spread with the intention to mislead, because surely machines can also spread disinformation, and that's the world we live in these days, right? So it can just be that it has the function to mislead, not the intention, because machines maybe don't have intentions, but they do have functions. But that's about the only work that we have on disinformation. If you start thinking about it very carefully, first of all, and importantly, disinformation need not be false content. Indeed, I'm a former journalist, and what we were taught in journalism school is that disinformation campaigns are not best done with false content, because people are not that stupid; as I said, we're a highly successful species, cognitively. The way you do it, if you want to have a successful disinformation campaign, is with true statements that implicate a falsehood, right? You don't come out and say climate change is not happening; people are going to be suspicious. You come and say something like: there is disagreement in science about climate change.
Now that is not a false statement, of course: you can always find a couple of contrarian scientists in the middle of nowhere who think that climate change is not happening, right? So the statement "there is disagreement", strictly speaking, all it says is that there's at least one scientist in the world who disagrees. So the statement is true. But when you hear it on TV, what do you hear? You hear that there are vast and important and relevant amounts of disagreement, because otherwise you're thinking: why is it on the news if it's only one isolated guy somewhere, right? So that is a pragmatic phenomenon. The sentence that you utter doesn't mean that there's a huge amount of disagreement; it just means that there is at least one person who disagrees. But what the hearer hears is not just what the sentence means; it's also what it pragmatically conveys, and pragmatically it conveys that there's a significant amount of disagreement. Why? Because it's told on the news, and it doesn't make for a news item if it's just one person disagreeing; that wouldn't be relevant for the context, right? And that's how you spread disinformation cleverly: not by saying outright falsehoods. Now, the problem is, first of all, that this is one thing we don't realize when we rush to judgment and think that all these people rejecting scientific evidence are completely crazy and irrational. Because why would they believe it, if they were just told by a nobody who's not a scientist that climate change is not happening? But that's not what's happening, right? It's not that they're falling for this kind of blunt, in a way stupid, way of doing disinformation. No, they're falling for the clever way of doing disinformation, and of course they are, because you're supposed to react to those pragmatics in that way. That's the rational way to form beliefs in that context, right? So that's why it's important to have a proper definition of disinformation: because then we can see how very often people reject evidence warrantedly, because of the way in which they're tricked by these subtle ways of disinforming. Another reason why it's important is that we're trying to design good AIs, right? So now we have the bad AIs that are trying to spread disinformation. The good thing about that is that if we're clever enough to make the bad ones, we're clever enough to make the good ones as well. So what we need to fight LLM disinformation is LLM disinformation trackers; that's what we need, because us humans are not going to be able to fight disinformation spread by LLMs. But in order to build an LLM that tracks disinformation properly, you'd better know what disinformation is, because if we just build it to track falsehoods, it's not going to track these very dangerous disinformation campaigns that use the kind of pragmatic mechanisms I mentioned earlier. So that's what I do in that chapter.
Ricardo Lopes: By the way, since you mentioned journalism, and since you also referred to the fact that you don't like the standard definition of disinformation because it implies that it has to be spread intentionally, with the intent of generating false beliefs in other people: I guess another good example of exactly that would be when, on TV for example, they stage debates between a climate scientist, an actual expert, and another supposed climate expert who is actually a science denier, and they present them on an equal playing field. "Oh, we're just going to have the debate; it's for the viewers to decide." And I mean, no, it's not for the viewers to decide.
Mona Simion: I have previous work on this, on why that is a problem. And it's very hard for people to see the problem, even though there is a lot of empirical literature showing that that way of doing it generates a lot of false beliefs, and I want to say rationally so. Because by bringing one testifier who says that P and one testifier who says that not-P, you're generating an evidential situation that's 50/50; it's rational to be neutral under those circumstances, because these people are presented as though they have equal evidential standing. Now, what I want to say in defense of my colleagues, the journalists, is this: journalism is officially regulated more in some countries than in others, but if you look at the deontological codes that regulate journalists, they tend to have five or six principles that should guide the activity of journalists. And the problem with these codes, more often than not, is that they include two principles that often contradict each other. One is the reliability principle, which says: do whatever you can to report the truth rather than not, right? So that's good. But then there is this kind of alleged objectivity or fairness principle, where you're supposed to bring people from all corners of the debate and present them just as you suggested. And everybody thinks these two principles are in conflict, because people think that if there are three positions out there, you need to bring one representative for each and every one of them, right? Because otherwise it's not fair. But of course you know that only person A is telling the truth, so the principle of reliability tells you: don't bring persons B and C, bring only person A. I think that is a problem with those principles, and they should be revised, but I also think that is a bad interpretation of what the fairness or objectivity principle should say. I think the fairness and objectivity principle should give each view as much exposure as its probability of being right. Sometimes topics are complicated, and you don't really know where the truth lies, right? So there are going to be people who argue that P is the case and people who argue that not-P is the case, and you as a journalist are going to have a set of evidence, and say your set of evidence suggests that P is 0.7 probable and not-P is 0.3. That's the kind of exposure you should give: if you're a truth seeker, you should give each side exposure that has the same weight as the evidence you have for the truth of P, if you still want to follow the reliability principle. So in a way, all I can say about my colleagues, the journalists, is that it's not only that they're making a mistake; it's a mistake that is warranted by the best deontological codes we have right now, and those codes need to be revised very substantively.
Ricardo Lopes: Mhm. Yeah, I mean, I brought up that example also because, in that particular case at least, it doesn't seem to me that, if disinformation is generated in those cases, it is done intentionally, right? I mean, I don't think there's any journalist out there bringing on a science denier just to intentionally mislead; quite the contrary.
Mona Simion: And my colleagues indeed are very worried that they have to do that, and they do it because that's what the deontological code says. And I bet that whoever put these deontological codes together back in the day, their fathers and mothers, also had very good intentions. They were probably living in a simpler epistemic environment, I would suggest, where these kinds of norms made more sense. So indeed, most of the time this is exactly what happens. But I want to say that even with the example I mentioned earlier, that there's disagreement in science about climate change: look, these pragmatic phenomena are studied in speech act theory, which is a scientific field that most people don't study, and in journalism in particular nobody teaches you much speech act theory. So the fact that these pragmatic phenomena are generated is not something your regular journalist should be expected to be aware of. If I come on TV and say there's disagreement in science about climate change, when what I mean by that is that there's just one guy disagreeing, I cannot be blamed for the pragmatic phenomena I just generated, because I'm not aware of how these mechanisms work, because I haven't studied this field. So that's why we need a bit more knowledge exchange from science towards practice: it's not the job of journalists to know about Grice and pragmatics. It's the job of the scientists to come and inform the field and explain how these mechanisms work, in order to make for better deontological codes for journalists.
Ricardo Lopes: Mhm. So just to wrap things up, and perhaps to summarize this last bit about disinformation, could you tell us about your account of disinformation as ignorance-generating content?
Mona Simion: Well, when you get rid of the dictionary definition, you're not left with much, because we just got rid of the intention and we got rid of the falsehood. So what is it that we're left with? Stepping back and looking at all these possible ways in which you can run a disinformation campaign, what they suggest is that what they all have in common is that they generate ignorance in the audience. You can do that in many ways: by saying something false, or by saying something true that carries a pragmatic implicature, and so on and so forth. But what all these ways of generating disinformation have in common is that they have a disposition to generate ignorance in your hearer. And it's important to understand that, because as soon as we understand that, we also realize that whether some content is disinformation or not is highly hearer-dependent. The exact same content asserted in one context is going to generate a lot of false beliefs; asserted in another context, it is going to generate no false beliefs, depending on the evidential background of the audience. And indeed, we already know that disinformation campaigns target particular audience groups, because they know those groups are particularly vulnerable in virtue of their evidential background. So, just to go back to the "there is disagreement in science about climate change" example: if I watch a journalist saying that, I'm not going to form the false belief that there's a huge amount of disagreement, because I know how speech act theory works, and I know the journalist is just making a mistake. If that journalist comes on TV and tells my grandmother the same thing, she will probably acquire the false belief, because she's not a specialist in speech act theory. So it's important to understand that whether content is disinformation depends on the background evidence of the hearers, because that also explains why some people are more vulnerable to particular disinformation messages, and thereby end up rejecting scientific evidence, while some people aren't. Let me give you an example of how important this is in the real world. Public Health Scotland, when they communicated the safety of vaccines, did it the same way with all audiences: they sent us all a little flyer that said vaccines are safe, come and get them. That was all they did. Now, of course, we have very different evidential backgrounds around here in Scotland, because these are very diverse communities. So, for instance, the same flyer sent to white communities generated something like 87% vaccine uptake, while sent to African and black communities it generated only about 40% uptake. The same message. Why? Different evidential backgrounds. The black community has a lot of inductive evidence of discrimination by the public health authorities, which, you know, I don't have. They also have a lot of background evidence of the lack of trustworthiness of institutions more generally, and of institutional ignorance; again, I don't have that problem, so it is hardly surprising that I trusted the flyer more than they did. So in the same way in which Public Health Scotland failed to tailor their message to the evidential background of the audience, if you want to spread disinformation, you can tailor your message to the audience's evidential background and target the more vulnerable communities.
So that's why it's important to understand that what matters is the ignorance of the hearer, rather than what is actually written on the flyer.
Ricardo Lopes: Great. So, the book, again, is Resistance to Evidence. I'm leaving a link to it in the description of the interview. And Dr. Simion, apart from the book, would you like to tell people where they can find you and the rest of your work on the internet?
Mona Simion: Yeah, so if you Google me, you're probably going to come across my website and my university web page, where I have a bunch of drafts and PDFs of my papers, if you're interested in more detail about my work. And the good news is that the book is open access with Cambridge, due to very generous grants from the European Research Council. So if you want to read the book, you don't need to go and buy it; you can just go on the Cambridge website, where you can find it open access.
Ricardo Lopes: Yeah, like I did, and I'm very grateful for that. So, Dr. Simion, thank you so much again for taking the time to come on the show. It's been a real pleasure to talk with you.
Mona Simion: Thank you so much for having me. It's been great. Thank you.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like, and hit the subscribe button. The show is brought to you by Nights Learning and Development done differently; check their website at Nights.com. And also, please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Pergo Larsson, Jerry Mullern, Fredrik Sundo, Bernard Seyches Olaf, Alexandam Castle, Matthew Whitting Berarna Wolf, Tim Hollis, Erika Lenny, John Connors, Philip Fors Connolly. Then the Matter Robert Windegaruyasi Zup Mark Neevs called Holbrook field governor Michael Stormir, Samuel Andre, Francis Forti Agnsergoro and Hal Herzognun Macha Joan Labrant John Jasent and Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus France David Sloan Wilson, asilla dearraujuru and Roach Diego Londono Correa. Yannick Punteran Rosmani Charlotte blinikolbar Adamhn Pavlostaevsky nale back medicine, Gary Galman Sam of Zallidrianei Poltonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fry, Franco Bartolotti Gabrielon Corteseus Slelitsky, Scott Zachary Fish Tim Duffyani Smith Jen Wieman. Daniel Friedman, William Buckner, Paul Georgianneau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Wozin, David Williams, Diocosta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, bungalow atheists, Larry D. Lee Junior, old Erringbo. Sterry Michael Bailey, then Sperber, Robert Grayigoren, Jeff McMann, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Storry, Kimberly Johnson, Benjamin Gilbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perkrolis, Kate van Goller, Alexander Hubbert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursulauddinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levin, and Jos Net. A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernard Curtis Dixon, Benedict Muller, Thomas Trumbull, Catherine and Patrick Tobin, Gian Carlo Montenegroal Ni Cortiz and Nick Golden, and to my executive producers, Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you all.