RECORDED ON DECEMBER 20th 2024.
Dr. Daniel Williams is a Lecturer in Philosophy at the University of Sussex. He works mostly in the philosophy of mind and psychology. His primary research interest at the moment is in how various forms of irrationality and bias are socially adaptive, enabling individuals to achieve social goals that are in conflict with epistemic goals.
In this episode, we talk about the science of misinformation. We discuss what misinformation is, and misinformation as a symptom of other problems. We discuss misinformation as a moral panic, and who should call out misleading information. We talk about AI-based disinformation. Finally, we discuss how to handle misinformation, and fact-checkers.
Time Links:
Intro
The science of misinformation
What is misinformation?
Misinformation as a symptom of other problems
Misinformation as a moral panic
Who should call out misleading information?
AI-based disinformation
How to handle misinformation
Fact-checkers
Follow Dr. Williams’ work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of the Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by a return guest, Doctor Daniel Williams. He's a lecturer in philosophy at the University of Sussex. And today we're going to talk specifically about the science of misinformation. We're going to talk about some criticisms that Doctor Williams has of the science of misinformation and how it's conducted, based on some of his Substack posts. So, Doctor Williams, welcome back to the show. It's always a pleasure to have you on.
Daniel Williams: Thanks for the invitation.
Ricardo Lopes: So perhaps let's start with the definition, just for people to understand what we're talking about here. So, um, when you tackle the science of misinformation, what are you referring to exactly? What is the science of misinformation?
Daniel Williams: It's difficult to be too exact, is what I'd say. Um, of course, if you understand misinformation extremely broadly as any factors responsible for distorting human judgment or leading individuals to form misperceptions, and if you understand science extremely broadly as any systematic investigation of a phenomenon, you could say misinformation studies goes back thousands of years. You can think of Plato and Aristotle as engaged in a certain kind of research into misinformation. You could also think of all of the great social theorists, so Marx, Weber, Durkheim, and so on, as interested in the science of misinformation in some sense. And of course, the emergence of social psychology in the mid-twentieth century, research on things like stereotypes, on prejudice, on the way that social identification biases judgment, etc., etc. All of these projects in some way are concerned with understanding misinformation in a systematic manner. But when I'm focusing on misinformation studies in most of my writings on this topic, I'm really focusing on something which is much narrower than that. It's a multidisciplinary body of scientific research that really rose to prominence round about 2016, and we can maybe return to why that was an important year. And it's concerned with establishing scientific generalizations, often quite sweeping scientific generalizations, about misinformation as a construct. So that includes things like estimating the prevalence of misinformation, often to several decimal places; estimating the quote-unquote fingerprints of misinformation, the idea that misinformation is associated with the use of emotional language; estimating which parts of the political spectrum misinformation is more prevalent within; different groups' susceptibility to misinformation; and also the impact of interventions against misinformation. So these broad, relatively sweeping generalizations about misinformation. And as a project, although it didn't start in 2016, it really, I think, rose to prominence in 2016. And I think many of the problems and pitfalls of the discipline, as I see it, are to some significant degree a post-2016 phenomenon. But it is really important just to add another bit of context, which is that, like any body of research, misinformation research is incredibly complex and diverse, and there's lots of variation in the quality and sophistication of research within it. So probably in the course of our conversation, I'll cite some research within that field that I think is really excellent and high quality. At the same time, I also think there's some research which lacks those intellectual virtues. So, you know, I don't want to generalize too much. Often when I talk about misinformation research, it would be impossible for me to claim that the characterizations that I'm making apply to every single study within the field. But I do think there are some foundational conceptual, methodological, philosophical questions, like, for example, what even is misinformation, what are we talking about when it comes to that concept, that arise as an issue for most of the research that occurs within that field.
Ricardo Lopes: So tell us then about 2016. Why was it that 2016 was the year when the misinformation studies you mostly focus on rose to prominence?
Daniel Williams: I think there were two big events in 2016 that took many people, especially the expert class, you know, commentators and pundits and journalists and social scientists, by surprise. One of these was the United Kingdom's vote to leave the European Union, Brexit, and the other was the election of Donald Trump in the United States, or the first election of Donald Trump in the United States. Events which were viewed as indicative of this more general populist backlash throughout many parts of the world, this angry rejection of quote-unquote elites and quote-unquote the establishment, where that, at least in the case of many of these movements, includes what you might think of as epistemic elites. So social scientists, members of public health authorities, individuals within mainstream media, and so on and so forth. And part of the reason why misinformation as a phenomenon becomes such an intense topic of focus during that time is because these movements are viewed by many as being associated with a large amount of false and misleading content. But also there's this kind of narrative that emerges in the aftermath of these events and the broader populist backlash that they're indicative of, which is that they're driven by, they're in some sense rooted in, a recent explosion of false and misleading information, which is manipulating a sort of half gullible, half deplorable public. And this comes from algorithmic manipulation; it comes from, according to this narrative, foreign influence campaigns from Russia, but also from domestic demagogues. And that's why, according to this narrative, there's been this increase in support for these movements, which many people within what you might think of broadly as, again, quote-unquote the establishment rejected and felt surprised by. So you've got that kind of narrative frame, which is that a driving force behind these movements is misinformation, and that's found among members of government, intergovernmental organizations, among social scientists, among journalists, and also among members of the general public. So if you look at survey data over the past eight years or so, many, many people are incredibly worried about misinformation, and many ordinary citizens also view misinformation as this great societal threat. And that then leads to an intensification of research on misinformation within the social sciences. And all of that then increases dramatically once you get to 2019, 2020, and the COVID-19 pandemic, where again you get this kind of framework, which many people find attractive, for understanding why it is that people reject public health advice and reject expert judgments during the pandemic, which is that it's rooted in, it's driven by, misinformation. So famously in 2020, you've got the World Health Organization declaring, amidst the COVID-19 pandemic, that there's also this worldwide infodemic, which is characterized by this kind of explosion of false and misleading information. And again, this fuels this intensification of research by social scientists and experts across lots of different fields, trying to get a kind of scientific purchase on this problem, which is viewed as a great societal danger. And again, I want to stress.
It's not as if the study of misinformation begins in 2016. In a broad sense, there's always been this systematic investigation into the sources of human error and fallibility, and even research that explicitly uses terms like misinformation predates 2016. But it's very well documented that around 2016 you get this explosion of research, a dramatic increase in the number of scientific publications with misinformation in the title and the abstract. And a lot of the research that falls within that post-2016 phenomenon is, for the most part, the target of many of my critiques.
Ricardo Lopes: And what are the main claims made by people who study misinformation when trying to justify the importance of it?
Daniel Williams: I mean, I think the overarching one is that misinformation is widespread. It's incredibly dangerous, so it's viewed as a driving force underlying a whole range of things that people are worried about, things like declining trust in institutions, support for demagogues, attacks on democratic institutions, and, more generally, the kind of anti-science or non-scientific beliefs that you find among members of the general public. So it's viewed as this dangerous force which is shaping many of these attitudes and behaviors that many people are worried about. But at the same time, I also think there's an assumption, sometimes implicit, though very often actually stated in the opening paragraph of articles within misinformation research, which is that not only is misinformation prevalent and really dangerous, really impactful, but it's also much more dangerous now, or at least much more prevalent now, than it was in the past. So this is why you often see references to the idea that we're experiencing a crisis of misinformation, or we're going through a kind of misinformation age or disinformation age, or post-truth era, or era of conspiracy theories, the age of conspiracy theories, and so on. And that's part of what explains, I think, this intensification of research, this focus on misinformation: this idea that not only is it this great societal threat, but in some sense the magnitude and the dangers associated with that threat have increased dramatically recently.
Ricardo Lopes: But what is misinformation exactly? Is the definition like demonstrably false information good enough?
Daniel Williams: Well, that's the million dollar question, I guess. What is misinformation? And of course it's a foundational question for the field. If the aim is to establish scientific generalizations about misinformation, about its characteristics, about its prevalence, about different groups' susceptibility to it, you'd better be able to say exactly what misinformation is, how we define, measure, and operationalize the concept. And what you find is that this is a buzzword that gets used in lots of different ways, often in confused and inconsistent ways, not just in the popular discourse from journalists and commentators and so on, but also, honestly, within lots of the scientific research as well. If you're focusing on something like demonstrably false information, which is a definition you often find within this literature, you'll also find terms like unambiguously false information. There are two different ways in which you could look at that kind of definition. The first question you could ask is, does it pick out the appropriate class of information that researchers are interested in? In other words, is demonstrably false information the kind of information which is leading members of the public to embrace misperceptions and make bad decisions and so on and so forth? That's one kind of question: what is misinformation, how should we define the concept? But then there's another kind of question you can ask as well, which is, for any given definition, like for example demonstrably false information, is it the case that in applying that definition, in determining which information satisfies that definition, misinformation researchers are in an objective position to apply the concept, that they're going to be able to apply it with a high degree of reliability and impartiality? I think if you focus on a concept like demonstrably false information, there's a question there about what that even means. Generally, the idea is that if you're focusing on demonstrably false or unambiguously false information, you're kind of acknowledging that there's lots of disagreement about what's true and false in politics; there's going to be significant uncertainty; misinformation researchers are not claiming to be universal arbiters of truth. But there are some clear cut cases where either a news outlet or a pundit or a commentator makes a claim, engages in a kind of communication, where we can be highly certain that the relevant claim or the relevant contribution to discourse is mistaken. So a canonical example of that kind of information would be something like fake news. If a disreputable news outlet just makes something up, you know, the Pope endorses Donald Trump for president, the classic 2016 viral fake news story, that's a case of demonstrably false information. It's not that it's somebody's opinion which might be legitimate; it's just mistaken. Or you might think of an opinion which directly contradicts the consensus judgments of experts. So if somebody opines that vaccines cause autism, or they make the claim that human activity has no impact upon climate change, those sorts of claims are not just taken to be mistaken by some people; they're supposed to be demonstrably false, unambiguously mistaken. Now there are questions you could ask about that as a definition, like who qualifies as an expert?
Like what degree of expert consensus is really necessary for something to qualify as demonstrably false if it contradicts that expert consensus. But nevertheless, in terms of achieving a definition where it's plausible that what satisfies it is going to be pretty objective, that sort of definition does quite well. The problem is, it doesn't seem to do very well in terms of identifying the kind of misinformation that misinformation researchers are interested in, insofar as they're interested in the drivers of popular misperceptions in society. And one reason for that is that you can have information which is demonstrably false but which is not misleading at all. You can think of irony, you can think of satire. You can even think of idealized assumptions within science, which are often known to be false, but it would be weird to think that they're misleading. But even more problematically than that, most of the misleading communication that you tend to find isn't really demonstrably false communication, right? There are many, many ways in which propaganda campaigns, and in which pundits and commentators and so on, can and do mislead audiences that really have nothing to do with publishing demonstrably false information. So it has this nice characteristic, which is that if you're focusing on demonstrably false information, it's plausible that you're going to achieve quite a high degree of scientific objectivity. But on the other hand, it seems like you're going to miss most of the misleading communication that really shapes lots of pernicious attitudes and behaviors within society.
Ricardo Lopes: And if we're talking about demonstrably false information specifically, how common is it for people to encounter that kind of misinformation?
Daniel Williams: It's a really difficult question, because for any of these things, obviously it's going to depend slightly on how you unpack this concept of demonstrably false information. And just by the very nature of the project, there's going to be measurement uncertainty and measurement error. But if you're focusing on something like fake news, where there is quite a lot of research, so disreputable websites publishing fabricated news stories, the overwhelming consensus in the scientific literature is that that kind of content does not seem to be very prevalent in most people's information diets, at least within Western democracies, where there's been lots of research. So for example, there's a study in Science from 2020, published by Jennifer Allen and colleagues, and they estimate that fake news makes up roughly 0.15% of Americans' overall media diet. And I think part of the reason for that is, you know, most people don't pay much attention to politics, current affairs, and news at all, let alone fake news. But among those people that do tune in to news media and pay attention to current affairs, they overwhelmingly tune in to mainstream media organizations. Now those organizations might be churning out content which is selective and biased and misleading and propagandistic in all sorts of different ways, but it's very, very rare for those sorts of organizations to publish outright fake news. And in a way, the low prevalence of fake news within the information ecosystem shouldn't really be that surprising. Not just because on the demand side, among audiences, people generally want accurate, reliable information, and so organizations emerge to satisfy that demand, but also because even if you're focusing on explicit propaganda campaigns, self-conscious attempts to manipulate public opinion, it's very rare that publishing fake news is necessary for achieving your goal. There are countless ways in which you can manipulate, deceive, and distort audience opinions without making anything up. So if you focus, for example, on a campaign to demonize immigrants, let's say, one thing you could do as part of that campaign is just to publish outright fake news. And of course that does happen to some significant degree. But another thing you can do is just, whenever there's a story of an immigrant behaving in a negative way, you publish and amplify that story as widely as you can. And the cumulative effect of that kind of highly selective reporting is to greatly exaggerate the negative perception of immigrants in the relevant country. You haven't published anything, strictly speaking, false, but because of this biased revelation, because you're selecting and amplifying that kind of content, you end up pushing a particular kind of narrative. So it's very rare that, even for the most propagandistic, deceptive outlets, you need to publish fake news in order to push a particular agenda or to manipulate public opinion. But also, of all of the ways that you can try to manipulate audiences, publishing outright fake news is the most hazardous way of doing it. Because if you make something up as a news organization, first of all, it's very likely that audiences will find that out, partly because it's a competitive media ecosystem and it's always in the interest of your competitors to discredit you, if you're a news outlet, for example.
So there are always these incentives to call out when another media outlet makes something up. So it's very likely that audiences will discover that you've published the fake news, and if they do, that's going to really hurt your reputation, right, that's going to result in this collapsing trust in you as a source. Whereas if you publish something that's true but misleading, it's much more difficult for you to be called out on that or for others to enforce norms against it, because you've always got the kind of cover story which is, well, we published something that was true. You know, surely you can't be angry about us publishing something that was, strictly speaking, accurate. So in a way, the empirical research I think shows that this kind of clear cut fake news is not that prevalent. And I think if you step back and reflect on even the most obvious, most deliberate propaganda campaigns, it shouldn't really surprise us that that kind of content is not that common.
Ricardo Lopes: And you say that this kind of misinformation is largely symptomatic of other problems. What other problems are you referring to exactly?
Daniel Williams: So I think the concept of it being symptomatic there is supposed to push back against this idea, which I think many people have and which underlies lots of the discourse and the research on this topic, where misinformation is viewed as a kind of exogenous force that comes into society and drives people to embrace false and inaccurate beliefs and make bad decisions about the world. That can happen, but I think especially when you're focusing on these really clear cut, unambiguous falsehoods and fabrications, what the research tends to show is not only that average exposure to that content is pretty low, but that the average exposure is pretty misleading, in the sense that most people don't really encounter much of that content at all, while there's a narrow fringe of social media users that encounters quite a lot of it, and that increases the overall average. And that fringe of social media users that engages with lots of this content online is not a cross section of the population. It's not, and this again is a popular image many people have, a situation where otherwise ordinary people with basic, ordinary beliefs fall down a rabbit hole and now they're a QAnon believer. What tends to happen is you've got people with pre-existing attitudes, identities, worldviews, beliefs and so on, engaging with and seeking out content that aligns with that general perspective on the world. And in many cases, at least as some almost ethnographic and qualitative research suggests, the engagement with that really absurd fake news is not even for reasons of belief; it's just being spread and amplified for things like trolling, for satire, for sowing chaos in society and so on. But even when it is engaged with by people who take it seriously, that's often symptomatic of the fact that these are segments of the population which don't trust institutions. They might actively distrust mainstream knowledge-generating institutions like public health authorities, mainstream media, scientific consensus and so on. And so they're seeking out counter-establishment content, or they've got, for example, a very general kind of conspiratorial worldview, and so they're seeking out content that affirms and rationalizes that kind of worldview. Or another thing you find is, in highly polarized contexts where some people are strongly politically engaged and highly partisan, so they're interested in seeking out content that demonizes the other side, often in a very hyperbolic manner, they will also tend to have lower standards in terms of the content that they engage with. They'll seek out really hyper-partisan, really biased content. So that's the sense in which it's symptomatic: it's not a matter of otherwise ordinary people with average beliefs being sucked into rabbit holes and then engaging with this content. The research on this topic overwhelmingly suggests that you've got people with pre-existing attitudes, beliefs, and worldviews engaging with and seeking out this content because it aligns with those pre-existing beliefs. Crucially, that doesn't mean that it has no harmful consequences. That would be clearly absurd as a claim. The mere fact that it's responsive to and reinforces these pre-existing beliefs is consistent with it sometimes having some negative consequences.
So for example, sometimes this kind of content will be used to mobilize different parts of the community around a shared narrative. And it can also serve to entrench people in these worldviews, these pre-existing beliefs that they bring to social media. So it's not to claim that it has no harmful consequences at all. But rather the point is, if you're trying to deal with some of these really deep-rooted issues in society, deep-rooted epistemic issues of people having these highly conspiratorial worldviews, not trusting institutions, seeking out content that demonizes members of the other side, you're not going to get very far if you're just focusing on that misinformation. That misinformation exists for the most part because of these underlying attitudes, worldviews, identities and so on.
Ricardo Lopes: Would you classify the preoccupation with misinformation as a moral panic? And if so, why?
Daniel Williams: So I know you've had Sacha Altay on the podcast before, who also talks in this kind of way, and he's also one of the most astute and sophisticated theorists when it comes to focusing on misinformation as a symptom of these underlying problems. And he wasn't the first person to use this moral panic framing; you find it among media researchers and so on. What I would say is, whether you want to use the term moral panic or not, when it comes to this really clear cut, unambiguous fake news that you find online, published by disreputable news outlets, there really does seem to be a case where the amount of attention, the amount of panic, the alarmism surrounding that phenomenon is not warranted by the evidence we have concerning its prevalence and its impact. So in that respect, in the sense that there is a lot of focus on this, a lot of attention on this, a lot of panic surrounding this, I would say that at the very least what you can say is it doesn't seem to be supported by evidence concerning the scale and impact of the problem. I would also say another thing that I think the moral panic framing gets right is that it's not just that all of this concern with and alarm about misinformation exaggerates the prevalence and impact of that kind of misinformation. There's also this broader idea, which I alluded to at the beginning, which is that many people think this problem of misinformation, in a broader sense of people believing wrong things, endorsing conspiracy theories, being ignorant, holding misperceptions, having anti-establishment worldviews, etc., etc., is much, much worse than it used to be. And again, that's another case where I think there simply isn't good evidence to the effect that that's true; there simply isn't good evidence that across the board there's been this overwhelming deterioration in the quality of the broader informational ecosystem. And inasmuch as lots of misinformation commentary, and indeed lots of misinformation research, embodies the assumption that we're now living through a misinformation age, or a disinformation age, or a post-truth era, in contrast with some allegedly previous era of greater truth and objectivity, that I think is also very, very misguided. And so insofar as the moral panic framing captures the fact that that's misguided, I think it's got something to it there as well.
Ricardo Lopes: So you have been accused of endorsing a weird postmodernist view where there are no differences in the reliability, honesty, and objectivity of different outlets, institutions, and communicators. What is your reply?
Daniel Williams: Yeah, I've also been accused of endorsing 100% postmodernism. I mean, maybe it's good to step back a little bit and give the context for where that accusation arose. In my view, misinformation research, if it focuses on the really clear cut, unambiguous falsehoods that we've just been talking about, even though I think that content doesn't tend to be particularly widespread and doesn't tend to be that impactful, nevertheless, if it restricts its focus to that kind of content, it can achieve a pretty high degree of objectivity. Now, in response to the fact that that kind of content isn't particularly widespread, and it doesn't appear to be all that impactful, there's been this real move from those who study misinformation and those who focus on it to say that we should broaden the definition of misinformation so that it focuses not just on demonstrably false information, and maybe not even just on false information, but on any kind of information that is misleading. Clearly, if you broaden the definition of misinformation in that way, it's trivially true that misinformation, so understood, is going to be much more widespread. And I think it's also trivially true that that kind of content, if you're focusing on misleading content in this really general and broad sense, is much more impactful than clear cut fake news or unambiguous falsehoods. Partly because it's not in any way restricted to disreputable websites online; misleading communication in the broadest possible sense, I would say, is absolutely pervasive within the informational ecosystem. So again, to cite a study by Jennifer Allen which came out earlier this year, which looked at vaccine content on Facebook: the headline result of this study is, if you focus on just outright fake news about vaccines, what you tend to find is that that content existed on Facebook, but it wasn't that prevalent and it wasn't that impactful. But if you focus on content that's not fake news, that's not strictly speaking false, but that is nevertheless vaccine-skeptical in its general implications, so for example, true reports of rare vaccine-related deaths, that content, I think they estimate, was about 46 times more impactful than fake news. And that's just one example, but you can also think of things like partisan media. It's a very well documented and very obvious feature of partisan news media like Fox News or MSNBC or GB News in the UK that even though they very rarely just make stuff up, they tend to select, omit, frame, package, and interpret information in ways that are highly misleading, that support a particular kind of partisan narrative. So clearly that kind of broader misleading communication is very widespread. But in my view, if you understand misinformation as a term that broadly, it really is not suitable for objective scientific study, classification, and generalization. And part of the reason is, it's not just that misleading information so defined is widespread; it's so widespread that the concept loses any analytical value. Even if you focus on mainstream news media and you set aside all worries about ideological and political biases, even high quality outlets like the BBC, The New York Times, the Financial Times, and so on.
They report, because they're news media, a highly non-random sample of all of the bad, threatening things happening in the world. Like, if it bleeds, it leads, is quite literally the recipe underlying lots of news reporting, and it's also a mechanism of cherry picking. And as a result of that, the audience of mainstream news media tends to have a very negative understanding of the world. Whenever a trend is going in the right direction, the audience of these establishment news media organizations tends to have a misperception about that: they think it's going in the wrong direction, or they just generally have a pretty catastrophising understanding of the world. Does that mean that all of news media would qualify as misinformation? I think that's a little bit peculiar as a definition. But I also think it's just incredibly difficult to even say in the abstract what makes information misleading, to really give a precise understanding of why it is that certain kinds of reporting qualify as misleading and other kinds don't. So misinformation researchers want to say, if you report on rare vaccine-related deaths, even if you accurately report on them, that's misinformation, because it might mislead audiences into thinking those sorts of events are more prevalent than they are. But if you try to apply that principle consistently, you get into incredibly weird places. For example, take reporting of rare police shootings of unarmed black citizens in the US. I would say that's very important reporting, but statistically speaking, you're reporting on rare events, and there's some evidence that people in society greatly overestimate how prevalent those sorts of occurrences are. Does that mean that that kind of reporting is going to get classified as misinformation? You end up in really weird places here. And I think part of the reason is, once you start with this really expansive definition of misinformation, it's very difficult to see how judgments are going to be anything other than judgment calls on the basis of your worldview, your ideology, your values, your interests. And they're going to be very context sensitive, and they're not going to lend themselves to these sorts of precise scientific generalizations about, you know, certain people being more or less susceptible to misinformation, or misinformation constituting X% of the informational ecosystem. Now, it's because I've made that kind of argument, which I've made in much more depth elsewhere, that the reaction from certain misinformation researchers has been, well, I must endorse some sort of weird postmodernist rejection of the very idea of truth or rationality or knowledge or objectivity. But of course, I don't endorse that kind of strange view where there's no such thing as reality. It's rather that I think reality is complex, access to reality is often highly mediated by which information we encounter from others and how we interpret that information, human beings tend to be biased, experts don't tend to fully escape those biases, etc., etc. And for all of those reasons, it's not that I think there's no such thing as objectivity. I just think objectivity is incredibly difficult to achieve when it comes to politics, and it becomes much, much more difficult to achieve the more expansive the definition of misinformation is.
Ricardo Lopes: But should people still call out misleading content when they encounter it, and who should do it? Do you think it should be done by misinformation experts, for example?
Daniel Williams: I mean, I think it depends on in what capacity, but I think it's a really important question, because there is a fundamental distinction between democratic citizens, and that could be either one of us, you know, you've got a podcast, I've got a blog, it could be any pundit, any commentator, any member of a democracy who wants to participate in the public sphere. There's a fundamental difference between the activity of democratic citizens within the public sphere, making judgments about what they take to be true, calling out ideas, reporting claims that they judge to be misleading or quote-unquote misinformation, although most ordinary citizens don't really use that kind of jargon. There's a fundamental distinction between that project, which of course is incredibly important, and that's why we've got a democracy, and that's why we've got a public sphere, because we recognize the truth is incredibly challenging to obtain and there's going to be a diversity of perspectives, and you need that argument, and you need that deliberation, and you need that space where people can make mistakes and put forward different kinds of ideas. There's a difference between that project and the project of a certain kind of technocratic expert class attempting to establish scientific generalizations about misinformation from an allegedly neutral, objective vantage point, for the purpose of publishing these findings in scientific journals, and then either directly or indirectly informing policy responses to what's studied within that research. Those are two fundamentally different projects, and it's reasonable to hold the latter project to much higher standards than we would hold the former project. If you're just an ordinary citizen, and you might be a misinformation researcher participating in political debate as an ordinary citizen, in which case obviously that's fine, that's one thing, and we recognize human beings are biased and we're fallible, and we're often groupish, and we've got allegiances and our judgment can be biased by self-interest, etc., etc. That's one thing. It's totally different when we're thinking of an allegedly objective, scientific project. And we should have completely different standards, and much higher standards, when it comes to that project than we do when it comes to ordinary democratic debate and deliberation.
Ricardo Lopes: So, in 2024 (this interview will come out in 2025), the World Economic Forum published its Global Risks Report, and misinformation and disinformation came out as the top global threats over the next two years, ahead of things like, for example, extreme weather events, societal polarization, cyber insecurity, interstate armed conflict, lack of economic opportunity, inflation, involuntary migration, economic downturn, and pollution.
Daniel Williams: Uh,
Ricardo Lopes: What do you make of this?
Daniel Williams: Um, I think it's really indicative of this more general panic and alarmism about misinformation. Part of the issue here, though, I think, is that it really again depends on what's meant by the terms misinformation and disinformation. And I think the problem with this sort of report, where misinformation and disinformation are placed as a more severe threat, a more severe risk, than nuclear war or economic catastrophe and so on and so forth, is this. If you're understanding these terms very, very narrowly, that's going to include things like fake news, but it's also going to include things like unambiguously false opinions, and it's also going to include things like deepfake technology, the use of artificial intelligence, generative AI, to create these hyper-realistic audio and video recordings, because I think that was in the background when this report was published. That kind of content, as we've already discussed, doesn't appear to be that prevalent and doesn't appear to be that impactful. So if you're focusing on that really discrete phenomenon of clear cut, unambiguous falsehoods and fabrications, it's not to say that it's not a problem at all; for the reasons I've already discussed, I think it can sometimes, and does sometimes, have harmful consequences. But to me, it's just absurd to suggest that's a more severe risk than the other things that are listed within that risk report. Now, one of the responses to that, which mirrors the more general response to the discovery that really clear cut misinformation doesn't appear to be that widespread or impactful, is to say, well, maybe we should understand misinformation and disinformation to refer to any factors which end up distorting human judgment and leading to bad decision making within society, because of course, for all of those other risks, whether it's military conflict, nuclear war, all of these other things, clearly bad decision making, human error, fallibility and so on are playing an important role within them. So why don't we just subsume all of those factors which cause human error and fallibility under this general term misinformation and disinformation, and then we can say misinformation and disinformation are the most significant risk inasmuch as they're implicated in all of these other risks? But I think one problem with that is, it doesn't really make any analytical sense to bundle together all of the different sources of human error and fallibility as misinformation and disinformation. You're dealing with a vast, complex, heterogeneous set of factors which can distort human judgment. That's going to include human ignorance, human self-interest, self-deception, tribalism, etc. It's not analytically helpful, I think, to bundle all of those very different things together under this general label, misinformation and disinformation. But also, there's absolutely no reason to think, if you're just focusing on the sources of human error, that this is a discrete near-term threat. Those sources have always been features of the human condition. They almost certainly always will be features of the human condition. So it's very confused, I think, to single those out as the most significant risk that humanity confronts within the next two years.
So I think there's an issue here where it's either wrong, if you focus on the really discrete phenomenon of clear cut, unambiguous falsehoods and fabrications, or it's just so confused it's not even wrong. It's just an incredibly unhelpful way of framing what is a genuine problem, which is human error and fallibility, but of course that's always been a problem; there's nothing new about that as a problem.
Ricardo Lopes: And how about AI-based disinformation? Do you think it is a legitimate threat or not?
Daniel Williams: I think it's a legitimate threat. I think it's a good thing that lots of people are working on it and worrying about it and thinking about ways to guard against the potentially negative consequences of artificial intelligence and how it shapes and impacts upon the public sphere and the information ecosystem. I would say, though, that there was all of this alarmism about the impact of deepfakes, especially, but not exclusively, in terms of the elections of 2024. That includes the election in the United Kingdom, it also includes the US election, and many, many other elections around the world. And it really seems to have been the case that deepfake technology did not play a significant role in terms of shifting vote share in these elections. And I think the reason for that is very easy to see, based on the more general things we've discussed in the context of thinking about misinformation and disinformation. Clear cut misinformation, and deepfakes fall into that general category, in general isn't particularly widespread or impactful, for the reasons that we've discussed. So it's unclear why something like deepfake technology would have had a really big impact there. And then more generally, there's something which we haven't really touched on, but which is in the background of this conversation, which is just that human beings are really difficult to influence. When it comes to things like voting intentions, when it comes to things like our political allegiances, our basic intuitions, our basic worldview, it's incredibly difficult to manipulate people into holding false beliefs that contradict their pre-existing ideas and attitudes. And the reasons for that are reflected in work by people like Dan Sperber and Hugo Mercier, who you've obviously had on the podcast before, on epistemic vigilance. It wouldn't have made sense for human beings to have evolved to be gullible. Instead, we've evolved a whole set of cognitive defenses against manipulation and misinformation, and we've also developed social and institutional mechanisms to guard against those things as well. And so, given that, a lot of the alarmism about AI-based disinformation didn't really take that into consideration. It reflected this kind of view that many people have, which is that people, and it's always other people, are gullible, they're credulous, they're easy to manipulate by content that turns up in their social media feed. And that's just not true, and it's not even true when it comes to really intense propaganda and advertising campaigns which have a lot of funding behind them. So it's certainly not going to be true when it comes to relatively low quality attempts to manipulate the information ecosystem with AI. And then there's a final thing, which is just that any discussion about AI-based disinformation, insofar as it treats that as this really great societal threat, has to explain why it is that these advances, these developments in AI, are going to asymmetrically benefit bad information over good information. Because if not, then it just seems like any consequences it might have in terms of the spread of bad information are going to be mirrored or even outweighed by the benefits that it's going to have when it comes to good information.
And certainly if I think about my own work, both my published academic research, but also in terms of trying to make evidence-based contributions to public debate on certain issues, I can't think of a single way in which it's been negatively impacted by AI, and I think it's benefited in many, many ways from advances in generative AI. And I suspect that's true of many people throughout society: academics, journalists, pundits, commentators who are trying to make good faith, evidence-based contributions to public discourse. So even when it is potentially true that artificial intelligence might benefit those people who are deliberately trying to spread bad information, it's also, for similar reasons, going to benefit people who are trying to make good faith, evidence-based contributions to public discourse as well. And I haven't really seen any reason to think that it's going to asymmetrically benefit those bad actors over the good actors.
Ricardo Lopes: So, how do you think we should deal with misinformation and disinformation then if at least sometimes they can be harmful?
Daniel Williams: I mean, I think it's an incredibly difficult question, and I wouldn't claim to have a particularly good, thoughtful answer to it. I think it's helpful to distinguish between what we can do as individuals and what we might try to do at the broader social, political, institutional level. I would say, as individuals who are participating in the public sphere and engaging in democratic debate and deliberation and so on, what's really important there is things like modeling intellectual humility, trying to cultivate what psychologists call active open-mindedness, being receptive to the possibility that our beliefs and our contributions to public discourse might be mistaken. Here I think the research of people like Philip Tetlock is relevant, on those characteristics which are conducive to high quality forecasting, which tend to be very, very different from those that prevail in public debate and deliberation. Things like intellectual humility, this active open-mindedness, that's really important. At the social and institutional level, my sense is that the standard response which many people endorse, which is that if there's bad information we just need to censor it, either in terms of hard censorship or in terms of subtler content moderation policies, has been, maybe not a disaster, but an overall negative as a strategy so far. Partly that's because a lot of it is based on ideas which exaggerate the impact of this misinformation and disinformation to begin with. Partly it's because of reasons we've discussed to do with achieving a high degree of objectivity when it comes to classifying things like misinformation: you're never going to get infallibility, there are going to be mistakes. And if you get censorship wrong, whether accidentally or deliberately, though I think it's normally inadvertent, if you censor content which is legitimate or reasonable, as has happened, I think, several times over recent years, that can create a massive backlash against elites and establishment institutions. And more generally, I think it tends to exacerbate the very things, like institutional distrust and political polarization, which really cause many of the issues surrounding misinformation that people are concerned with to begin with. If those people who distrust establishment institutions view these institutions as clamping down on dissent from establishment narratives, that's likely to aggravate the distrust which is driving them towards counter-establishment and often misinformative content to begin with. So I don't think censorship is the answer, either in terms of hard censorship or, for the most part, in terms of subtler forms of censorship like content moderation. I think the most important thing ultimately is this: in complex societies like the ones we inhabit today, we do depend upon experts, and we depend upon well functioning institutions, norms of professional journalism within media; we depend upon effective public health authorities; we depend upon scientific research and so on and so forth. And a large part of trying to build trust in those institutions, which is absolutely essential, is, I think, just trying to make those institutions more trustworthy, trying to invest in making sure that they're not overly politicized, that there's a significant diversity of perspectives within them.
To make sure that all of the sorts of factors that drive things like the replication crisis, or the excessive communication of certainty that you often get from public health authorities and so on, get stamped out. So building institutional trust by making these institutions more trustworthy is, I think, really important. And then I definitely do think there is a place for public information campaigns in cases where we can have a high degree of confidence that a certain position is true, so for example on vaccines. I think it really is important to try to disseminate that research, disseminate those ideas, with the goal of trying to persuade audiences, because even though people aren't gullible, even though they're not easy to influence, they are still rational and people do respond to persuasion. So I think those sorts of public information campaigns are really important, but in and of themselves, without addressing these other sorts of issues like institutional distrust and intense political polarization and sectarianism and so on, I don't think they're going to get that far on their own.
Ricardo Lopes: So one last question then. Earlier we talked about misinformation experts. How about fact checkers themselves? Can we trust them? Because at least they seem to have a good track record, right?
Daniel Williams: Um, I mean, it's difficult to know whether they've got a good track record, I guess. There's always this deep methodological issue of how you evaluate the reliability of fact checkers. The really annoying slogan, who fact checks the fact checkers, gets at something important, which is that it's really difficult to systematically evaluate the degree to which fact checkers' claims correspond to reality, because if you were in a position to do that, you wouldn't need the fact checkers to begin with. Typically the way in which their reliability gets evaluated is not by checking for correspondence, but by checking for agreement between different fact-checking organizations. And when you do that, you do tend to find that there's a fairly high degree of agreement between different fact-checking organizations, especially when it comes to highly confident judgments, for example, that a news story is false or that a news story is true. And I think that's relatively compelling evidence that, for the most part, on many issues, they are quite reliable, although of course you might be able to tell a story where the explanation of that agreement is not correspondence but some other sorts of factors. But I think, at least when it comes to these sorts of factual matters, my own view is they do tend to be fairly reliable. It's just that I think there's only so much that fact checking can achieve, whether it's internal to the news reporting of organizations, because of course lots of people don't know this, but if you publish something through the BBC or The New York Times, it goes through an extensive internal fact-checking procedure, or whether it's these external fact-checking organizations which fact check the reporting of other news media. There's only so much, I think, that that kind of work can achieve. So I think, you know, there's good reason to believe, for reasons I mentioned in terms of the importance of public information campaigns, that it can have positive consequences, at least if the fact checking industry is not viewed as overly partisan. Unfortunately, in some countries it is, for complex reasons. But I do think it can have positive consequences. But precisely because many of these epistemic issues of people endorsing conspiracy theories, misperceptions, institutional distrust, and so on are very deep-rooted in society, there's only so much I think that you're going to be able to achieve with fact checking. So I don't want to be completely dismissive of it. Overall, I would say my own sense is that fact checkers tend to be pretty reliable and have positive consequences, although I think it's difficult to establish that with certainty. But just as we shouldn't be dismissive, I also think there's a lot of faith put into the idea that if you just show people the facts, that's going to be a magic cure for many of these epistemological problems in society. And I think if you really understand what's driving these problems, it becomes clear pretty quickly that that's unlikely to make that much difference.
Ricardo Lopes: Great, so just before we go, where can people find you and your work on the internet?
Daniel Williams: I have a blog, Conspicuous Cognition, so you can just search for that. You can also just put Dan Williams philosophy into Google, and if you do that, you can go to my website and you'll find a list of my published research.
Ricardo Lopes: Great. So, Doctor Williams, thank you so much for taking the time to come on the show again. It's always a pleasure to talk with you.
Daniel Williams: Thanks, Ricardo. Cheers.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Nights Learning and Development done differently, check their website at Nights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Perergo Larsson, Jerry Mullerns, Fredrik Sundo, Bernard Seyches Olaf, Alexandam Castle, Matthew Whitting Berarna Wolf, Tim Hollis, Erika Lenny, John Connors, Philip Fors Connolly. Then themetri Robert Windegaruyasi Zup Mark Neevs called Holbrookfield governor Michael Stormir, Samuel Andrea, Francis Forti Agnunseroro and Hal Herzognun Macha Jonathan Labrant Ju Jasent and the Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus Fran David Sloan Wilson, Asila dearauujoro and Roach Diego Londonorea. Yannick Punter Darusmani Charlotte blinikolbar Adamhn Pavlostaevsky nale back medicine, Gary Galman Sam of Zallidrianei Poultonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fry, Franco Bartolotti Gabrielon Corteseus Slelitsky, Scott Zacharyishim Duffyani Smith John Wieman. Daniel Friedman, William Buckner, Paul Georgianeau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Vozin, David Williams, Diocosta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, bungalow atheists, Larry D. Lee Junior, old Erringbo. Sterry Michael Bailey, then Sperber, Robert Grayigoren, Jeff McMann, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Storry, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perkrolis, Kate van Goller, Alexander Hubbert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursula Gudinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levine, and Jos Net. A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernard Curtis Dixon, Benedic Muller, Thomas Trumbull, Catherine and Patrick Tobin, Gian Carlo Montenegroal Ni Cortiz and Nick Golden, and to my executive producers, Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you for all.