RECORDED ON JULY 18th 2023.
Dr. Cailin O’Connor is Professor in the Department of Logic and Philosophy of Science at UC Irvine. She is a philosopher of biology and the behavioral sciences, a philosopher of science, and an evolutionary game theorist. Her monograph The Origins of Unfairness was published in July 2019 by Oxford University Press. She is also the author of Games in the Philosophy of Biology and The Misinformation Age.
In this episode, we start by talking about game theory and what we can study through it. We then talk about social pressure, conformity, conservatism, and selection in science. We discuss whether scientific retraction works. We talk about perceptual categories, and whether they have evolved to track properties of the world. We discuss learning generalization and evolutionarily stable strategies. Finally, we talk about misinformation during the COVID-19 pandemic.
Time Links:
Intro
Game theory
Social pressure and conformity in science
Is science a conservative enterprise?
Is the scientific community a population undergoing selection?
Does scientific retraction work?
Have perceptual categories evolved to track properties of the world?
Learning generalization, and evolutionarily stable strategies
Studying misinformation during the COVID-19 pandemic
Follow Dr. O’Connor’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everybody. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Cailin O'Connor. She is Professor in the Department of Logic and Philosophy of Science at the University of California, Irvine. She is a philosopher of biology and the behavioral sciences, a philosopher of science, and an evolutionary game theorist. She is the author of books like The Origins of Unfairness, Games in the Philosophy of Biology, and The Misinformation Age. And today we're going to talk about some of her topics, some of the subjects she focuses on. So, Dr. O'Connor, welcome to the show. It's a big pleasure to have you here.
Cailin O'Connor: Oh, thanks for having me, Ricardo.
Ricardo Lopes: So let me start by asking you, since you apply game theory in your work, or at least to some of your work: what is it, basically? And to what kinds of topics do you mostly apply it?
Cailin O'Connor: Yeah. So game theory is a branch of math that is used to study strategic interactions. And a strategic interaction is anything where you have two individuals who are interacting and where they both care about what the other one does. So I have some stake in the game and in what you're doing, and the reverse is true. Game theory, when it was first developed, was applied just to humans, and usually the idea was: we'll analyze people as if they're fully rational and then use that to predict what they might do, or explain what we see people doing. So assume that I think really hard about what you're gonna do and then make my best choice for an action based on what you're gonna do and what I want to happen. Later on, it was introduced to biology as well, to apply to different kinds of critters, you know, to animals, even things like microorganisms sometimes, and some of the assumptions were changed to think less about rationality and more about how animals might learn to behave strategically, or might evolve to behave strategically. So a lot of the work I do uses those latter kinds of models or tools, and I've applied game-theoretic models to all sorts of systems. I've used them to think about signaling in biology and in humans, so things like human language and how animals communicate with each other. I've used them to think about perception in the brain. I've used them to think about things related to unfairness, which is related to that book you mentioned: how do unfair norms emerge in human societies? I've used them to think about the evolution of moral emotions, like guilt and shame, and also, to some degree, to think about stuff like misinformation and the spread of knowledge and belief in humans.
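To make the kind of object game theory studies concrete, here is a minimal sketch in Python (illustrative payoffs, not from the interview itself): a two-player "stag hunt" and a brute-force check for Nash equilibria, the outcomes where neither player gains by unilaterally switching.

```python
import itertools

# payoffs[(row_strategy, col_strategy)] = (row_payoff, col_payoff)
# A simple stag hunt: hunting stag together beats hunting hare,
# but hare is safer if you doubt your partner.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
strategies = ("stag", "hare")

def is_nash(row, col):
    """True if neither player can gain by unilaterally switching strategies."""
    row_pay, col_pay = payoffs[(row, col)]
    row_best = all(payoffs[(r, col)][0] <= row_pay for r in strategies)
    col_best = all(payoffs[(row, c)][1] <= col_pay for c in strategies)
    return row_best and col_best

for row, col in itertools.product(strategies, strategies):
    if is_nash(row, col):
        print("Nash equilibrium:", row, col)
# Both (stag, stag) and (hare, hare) are stable: strategic interactions can
# have multiple stable outcomes, which is why learning and evolution matter.
```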
Ricardo Lopes: Mhm, yeah. And we'll get into some of those topics later in our conversation. But I would like to ask you now about science, the institution of science, and scientists themselves, because that's also a topic that you study. So, are scientists themselves subject to social pressures?
Cailin O'Connor: Yeah, OK, we're like jumping to different stuff. So, yes, they are. This isn't an observation that's new to me by any means. It's something that people have been noticing and talking about for decades and decades in thinking about how science works. Scientists, of course, are humans; like other humans, they care about what other people think. They grew up in human societies, they have human biases, human social tendencies. And there's a lot of really compelling evidence showing that all of that influences how science as an enterprise gets done.
Ricardo Lopes: And, I mean, one of the things that we as humans are, at least to some extent, is conformist. So is conformity something that also happens in science, and if so, is it good or bad?
Cailin O'Connor: Yeah. So this is something that I've worked on with a collaborator of mine, Jim Weatherall. The idea of conformity of some sort mattering to science goes back a pretty long way. So, for example, Thomas Kuhn, in his very famous book The Structure of Scientific Revolutions, has this idea that, you know, scientists tend to work within a paradigm, and they'll all be kind of attached to this paradigm, this way of thinking about the world, and there will be social pressures sort of keeping people within the paradigm and supporting it. And I think there's some idea there about people conforming with each other. And then he has this idea that you get younger people, who are less part of this community, coming up and challenging the old notions. In our work, what Jim and I did was think about groups of learners who are getting evidence from the world. So we built these models of individuals who can gather evidence and share evidence, and we asked: would they get better or worse at learning if we made them want to conform? The way we included conformity in the model is we assumed all the agents are in a network. This is something that represents their social connections: each node in the network is an individual, and each link between them is like a social tie. And so our agents, in deciding what evidence to gather and what actions to take, would both take evidence that they had gathered or seen in the past, but they would also think about what their neighbors were doing. So, for example, if I had a lot of evidence that, say, the COVID vaccine is relatively low risk, but I had a lot of neighbors who were not getting vaccinated, in this model I might choose not to get vaccinated, because I prefer to conform with my neighbors. And then we ask: how does that impact learning and decision-making in the group? In these models, we found that on average conformity would tend to make the group worse at learning, and individuals more likely to take bad actions. And there were a couple of reasons for that. One thing is that conformity would stop individuals from sharing good evidence. So say I think personally that vaccines are safe, and I have good evidence that that's the case, but I'm with a bunch of people who aren't getting vaccinated. If I conform to them, I never tell them about my good evidence; I never get vaccinated and let them see what happens. So you create these kinds of information bottlenecks where good information isn't flowing in the network.
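A minimal sketch of the kind of network model described here, with loosely analogous assumptions rather than the authors' actual code: agents either follow their own evidence about a genuinely better new action or conform to the majority, and conformity can freeze the group before any evidence is generated.

```python
import random

random.seed(1)
N, P_NEW, ROUNDS, TRIALS = 20, 0.6, 300, 20  # the new action beats a 0.5 status quo

def run(conformist):
    # credence[i]: agent i's belief that the new action beats the status quo
    credence = [random.random() for _ in range(N)]
    for _ in range(ROUNDS):
        if conformist:
            # conformists all do whatever the current majority believes in
            majority = sum(c > 0.5 for c in credence) > N / 2
            acts = [majority] * N
        else:
            acts = [c > 0.5 for c in credence]  # follow your own evidence
        for took_new in acts:
            if took_new:  # a trial of the new action, visible to the whole group
                nudge = 0.01 if random.random() < P_NEW else -0.01
                credence = [min(1.0, max(0.0, c + nudge)) for c in credence]
    return sum(c > 0.5 for c in credence) / N

for conformist in (False, True):
    share = sum(run(conformist) for _ in range(TRIALS)) / TRIALS
    print(f"conformist={conformist}: share ending up with the true belief = {share:.2f}")
# With conformity, if the initial majority disbelieves, nobody ever tries the
# new action, no evidence flows, and the group stays stuck -- the bottleneck.
```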
Ricardo Lopes: but the science or uh the science tend to be a conservative enterprise. And if so, what does that mean exactly within the domain of science to be conservative?
Cailin O'Connor: Yeah, that's a really big question, and I think a really hard one to answer. So some people in philosophy of science worry about conservatism in science. This is something I've written a little bit on, but it's not like a central topic for me, so just keep that in mind. When they talk about that, usually what they mean is something like: there are forces or structures in science that keep scientists working on the same kinds of problems, or keep them from, for example, looking at questions that are too weird or too outside the norm, or that are high risk, high reward, or that are, as people sometimes call it, mavericky, or just more unusual. And so a lot of people argue that there are forces that kind of keep scientists from doing stuff that's too different or too unusual. And sometimes people argue that that's a bad thing, that what you want across a community of science is to have at least a good handful of people working on topics that maybe don't seem right, or hypotheses that people don't currently believe, or things that seem risky or strange. And if you have that handful of people, maybe most of them fail, but some of them do succeed, and in doing so they might discover things that are really unusual, or sort of revolutionize a science. So that's why a lot of people say conservatism could be bad in science. The different arguments I've heard about why you see conservatism in science are things like: well, increasingly the age of scientific investigators is going up; it takes longer and longer to get to the position where you're the head of a lab, and you might just be indoctrinated for a very long time within a field before you're the one making the research choices. People argue that grant-giving agencies are inherently conservative, because they're trying to get these good outcomes, and so they tend to give money to projects that look more safe, more dependable, more reliable. Some agencies have set up these sort of high-risk, high-reward special grants to try to push against that. I wrote this one paper that modeled scientific communities and asked: OK, what would be the conditions under which we'd expect conservative science to spread, as people train their students and then those students get jobs? The argument I made in that paper is that conservative science tends to be less risky, in that you're going to be able to get some discovery and publish it, compared to something that's more high risk, where maybe you're going to get a huge payoff, but maybe it's just not going to turn out to be anything interesting at all. And so in my models I made that assumption, and I found that often, in fact, high-risk science would be likely to spread, because the people doing it who succeed are getting these really high payoffs: they become famous, their students can get jobs. But the problem with that is that it's often hard to repeat. So on the assumption that if I'm a sort of risk-taking scientist and I happen to be successful, my students, if they try to take risks, may or may not be, I find that in those cases conservatism can kind of dominate in science.
Ricardo Lopes: And I imagine that in this case it's high risk because it might imply several different sorts of potential damage, like reputational damage and, in the extreme, perhaps losing one's own career and academic credibility, I guess.
Cailin O'Connor: There can be those kinds of risks, and there can also just be the risk that you spend a lot of time and effort on some project and then you just don't get anything out at the end. So, for example, if you think about someone who's trying to get tenure in an academic system, they have to publish before the tenure clock runs out. So if you take on a project where maybe it's going to turn out great, but there's a pretty good chance the whole project is going to fail, you can see why that might be a bad choice for that individual.
Ricardo Lopes: But these problems are not easy to navigate, right? Because on the one hand, it's understandable that, looking at the whole system, many times it's not worth it to waste resources on research or ideas that won't produce anything. Of course, I imagine that it's hard beforehand to really know for sure what would be wasteful or not. But on the other hand, we also very much need, at least sometimes, people who think a little bit more outside of the box to push the fields forward.
Cailin O'Connor: Right, yeah, that's right. And I think, you know, say you're the National Science Foundation in the US, or say you're an EU grant-giving body. You're giving these grants, you are paid by tax dollars and run through a government, and so you need to justify to the people of your country or your institution why you're giving the grants you're giving and what they're for. And sometimes I think it can be really hard to say there was a good reason why we gave a grant to this person who's doing this thing that sounds kind of wacky or out there. So I think there are these very practical reasons why you wouldn't necessarily want to support higher-risk science, even if you can make the argument: OK, but across a whole body of scientists, we want to have at least some people doing this higher-risk stuff. One of my colleagues, Kyle Stanford, has argued that changes in funding have promoted conservatism, in part because when you had a good number of independently wealthy scientists, they could just do whatever they wanted, right? They didn't have to answer to anybody. If they think, like, well, tomorrow I'm gonna go look at all the earthworms all over England, like Charles Darwin did, who's gonna say no?
Ricardo Lopes: Yeah, that was exactly one of the things that was coming to my mind while you were speaking. Because, of course, historically we know that most of the people who made scientific progress were usually very wealthy people, or at least people with good enough monetary resources and other kinds of resources to devote their time, and, for the most part, they didn't have to answer to anybody. They could dedicate as much time as they wanted, and as long as they wanted, to study any kind of subject, like Charles Darwin in the 19th century, and I guess, to some extent, Einstein in the 20th century. But those were, on the one hand, lucky people and, on the other hand, privileged people. And, I mean, if we are to really push science forward more rapidly, and involving more people, which I imagine is better than just relying on perhaps a handful of lucky geniuses or something like that, it's not really feasible to wait for that kind of thing to happen.
Cailin O'Connor: Right. Another point which a lot of people have made, and which seems right also, is that when we're thinking about this desire to have science as a group working on many, many different types of topics, another way that you get that is by drawing on different kinds of people with different backgrounds and concerns. So in some way, being, you know, whatever a wealthy gentleman means, you have all the freedom to work on whatever wacky thing you want. But it was also the case that historically in science, especially if we're looking at Western science since the scientific revolution, well, it's a lot of wealthy, independent, white European men, right? And so there's not as much diversity of perspective as you would see in a lot of scientific communities now. And so people also think, well, that kind of diversity of perspective is another source for new ideas, different ideas, ideas that will push science forward.
Ricardo Lopes: Yeah. Also because, and I don't know if this is something that you look into specifically, but over the years I've been talking on the show with, for example, cultural psychologists and cognitive scientists, and people that come from different cultural backgrounds tend to think about things in different ways. And it's very much valuable to science to have people with different cultural backgrounds coming to the table with different cognitions, I guess.
Cailin O'Connor: Yeah. And I think there are a lot of examples from the history of science that demonstrate how that can be beneficial.
Ricardo Lopes: So, since we're talking about science from the perspective of game theory, or trying to understand science through game theory, or more specifically evolutionary game theory: is the scientific community also a population that undergoes selection?
Cailin O'Connor: I think it can be thought of that way. So there's been this kind of handful of people using evolutionary models to think about scientific communities. The idea of thinking about science as an evolutionary system goes back much further. For example, David Hull wrote this very influential book where he was thinking about the evolution of science, but he was thinking of the units more as ideas, where the ideas themselves are being selected and changed. Whereas recently, a bunch of people have been writing papers where the units are scientists, and you can ask questions like: who remains within a scientific community, and why? Given that scientists are using different sorts of approaches, what approaches will tend to stay in the community, what approaches will tend to spread, which ones will be more prominent and so will be copied more? And you know, this isn't entirely novel. There's this whole field of cultural evolutionary theory which thinks about how we take evolutionary theory and apply it to human practices and norms and actions and beliefs and systems. And so you can do the same thing within science, at times, I think, pretty successfully.
Ricardo Lopes: Mhm. By cultural evolutionary theory, you mean the work being done by people like Robert Boyd, Peter Richerson, Joe Henrich, and perhaps, more on the side of the Parisian school, Dan Sperber? I mean, people like that.
Cailin O'Connor: Yeah, those are a bunch of the most prominent people doing cultural evolutionary theory, though there are actually a lot of threads of it. You know, there are people who do more game-theoretic approaches, people who do other, different sorts of approaches. There's a lot of work in anthropology on cultural evolutionary theory that's less modeling, more empirical. It's a thoroughly interdisciplinary area, and so you have all these kind of different paradigms or frameworks for thinking about cultural evolution.
Ricardo Lopes: Yeah, it in fact integrates a lot of different things. There are people that work with everything from evolutionary psychology to anthropology, cultural evolution, human ecology. Basically they apply all of those tools in cultural evolutionary theory, right? Yeah, that's right. So let me ask you about another topic regarding science, just before we move on to other subjects. So, there are scientific retractions sometimes. When that happens, does it usually work or not?
Cailin O'Connor: So, I take it you're asking this question because I have this paper on scientific retraction. This is a topic that has been really interesting to me because, you know, we would want science to be an ideal community of learners, right? That's what science is trying to be: a group of people who are using the best methods, the best practices available, to learn about the world and develop good beliefs about the world. And science is pretty successful in doing that. But of course, it's all people in the end, so it's not going to be perfect. So one thing that's been really widely observed by people who do empirical studies of scientific communities is that retractions often fail, or aren't fully successful. So maybe someone is caught having committed fraud and some of their papers are retracted. Often, if you look later at those papers, they're being cited in the literature, and they're not being cited as retracted or fraudulent papers; they're just being cited sort of straightforwardly, by people who don't know that they've actually been retracted. And this happens again and again and again. Related to that, I think you see similar things happening outside of science, in broader society. So, for example, during the COVID-19 pandemic it happened dozens of times that there would be some scientific claim shared by journalists. For example, there was this very influential early study done in Northern California that claimed that the fatality rate of COVID was much lower than it actually is. That study was really widely cited. It turned out they had an error in their initial preprint, which they quickly fixed, but their initial numbers propagated pretty far, and I don't think the report of the error and the proper numbers propagated nearly as far. So people didn't get the message that their initial numbers were wrong and, in fact, underestimated the fatality rate. So that's another kind of instance. They're both instances where you have these scientific claims that have visibility and legitimacy, and they spread in social networks; when they are reversed or known to be false, not everyone who initially found out about the claim finds out that it was false. So my co-authors Anders Geil and Travis LaCroix and I built models where we would have a network, and we would have something like a false claim spreading in the network, and then we'd introduce a retraction and let that spread as well. And we would ask: what are the situations where people find out about the retraction? When do they continue to hold the false belief, et cetera? We found that in a lot of cases, people would just hold the false belief as an accident of history. You know, you have these things spreading from person to person, and so some people just get the false thing and never hear about the true thing. It's just what happens randomly, because of chance. But we found some things that would influence how often that happened. So, for example, we found that retractions or reversals that are instigated by the original source are much more successful. So if I'm here in the network and I say something false, it starts to spread; if I then say, oops, that was false, that spreads to the same people, and they care about that.
But if some other person is like, no, they're wrong, and they're not sort of connected to the people finding out about this false thing in the first place, people don't care as much about this retraction, right? So the right people, the people who hold false beliefs, aren't finding out they're wrong. So that was one thing. Another thing is that, weirdly, very prominent false beliefs were often more reversible. Because, you know, imagine some study that turns out to be wrong, but nobody knows about it. Well, someone comes and says this was wrong. Well, nobody cares, right? So there aren't that many people to spread the retraction. Whereas with a very prominent false belief, if someone shows that it's wrong, there are a lot of people to spread the retraction, to spread that it was wrong. So we found in our models that sometimes some false belief being really widely held meant that it was easier to reverse.
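A minimal sketch, loosely inspired by the retraction dynamics described above (illustrative assumptions, not the published model): a false claim diffuses over a ring network, and a retraction then spreads only through people who hold the belief, so seeding it at the original source reaches believers far more reliably than seeding it at a random node.

```python
import random

random.seed(2)
N, P, CLAIM_STEPS, RETRACT_STEPS, TRIALS = 100, 0.5, 30, 60, 300
nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring network

def spread(seeds, carriers, steps):
    """Diffuse a message outward from `seeds`, passing only through `carriers`."""
    heard = set(seeds)
    for _ in range(steps):
        heard |= {j for i in heard for j in nbrs[i]
                  if j in carriers and j not in heard and random.random() < P}
    return heard

def trial(seed_at_source):
    believers = spread({0}, set(range(N)), CLAIM_STEPS)   # the false claim spreads freely
    seed = 0 if seed_at_source else random.randrange(N)   # where the retraction starts
    # only current believers take the retraction up and pass it on
    corrected = spread({seed}, believers | {seed}, RETRACT_STEPS) & believers
    return len(believers - corrected)                     # residual false believers

for from_source in (True, False):
    avg = sum(trial(from_source) for _ in range(TRIALS)) / TRIALS
    print(f"retraction seeded at original source={from_source}: "
          f"avg residual believers = {avg:.1f}")
# A retraction seeded away from the believer cluster usually dies immediately,
# leaving most believers uncorrected -- the "wrong people hear it" effect.
```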
Ricardo Lopes: Mhm. And, I mean, if that happens, would you have any suggestions as to how people should deal with it? I mean, when a paper gets retracted, if for some reason there are some people out there that never get to learn that it was retracted and keep citing it in their own papers, what would perhaps be good solutions to try to deal with that?
Cailin O'Connor: If we're talking about within science, there are some things that we recommend, and that other people have talked about too. So one thing is that authors and journals aren't usually incentivized to talk about something being retracted, because who wants to say, I was wrong, or, we published a paper where the person committed fraud? So, especially with journals, a lot of them are not very active about sharing that something's been retracted. You know, they're not splashing it on their front page. They're often not even making a very clear retraction statement on the page where it might be linked; sometimes you'll just see ones where it just says "retracted" in a tiny little word. So I think journals should be much more active about communicating about retraction. Search engines like Google Scholar should be much better about making clear when things are retracted, like having a big note on the top of a paper when you search it saying that it's been retracted. As much as possible, individuals, when they're being responsible, should share when they were wrong or when they cited something wrong, but of course it's hard to get people to do that. The other thing I think would be good is, you know, when academic journals publish a paper, there's a whole process of checking over that paper. If we have a database of retracted articles, that would be a natural time to just compare all the citations in the paper with things that have been retracted, and flag ones that might be false as well.
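A minimal sketch of that screening step, with made-up DOIs standing in for a real retraction database (a real pipeline would query an actual database rather than a hard-coded set):

```python
# Hypothetical database dump of retracted DOIs; identifiers are invented.
retracted_dois = {"10.1000/example.123", "10.1000/example.456"}

# Hypothetical citation list extracted from a submitted manuscript.
manuscript_citations = ["10.1000/example.123", "10.1000/example.789"]

# Cross-reference the manuscript's citations against the retraction database.
for doi in manuscript_citations:
    if doi in retracted_dois:
        print(f"FLAG: cited work {doi} has been retracted -- review before publication")
```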
Ricardo Lopes: Mhm. So, changing topics now. I mean, I've already had many discussions on the show about this with philosophers, particularly moral philosophers, but also with some scientists: is naturalism congruent with any metaethics? I mean, if we tackle things from a naturalistic perspective, are we committing ourselves necessarily to, for example, moral realism or moral anti-realism?
Cailin O'Connor: So, as a grad student I co-wrote a paper on this with my advisor; it's not really my area of study. I mean, I can tell you what we said in that, which was something like: you can have a metaethics that's thoroughly naturalistic, you know, holding that human brains evolved the way they did for all of these sort of practical and pragmatic reasons, that evolution explains why we have the moral beliefs and instincts we have. So we want to say you can hold that true and at the same time be like: but I have first-order moral beliefs, and I'm still gonna argue for them with all the sort of force that I did before. Like, I can believe all that and still think hurting people is wrong. And so I can still argue about what we ought to do and not do, whether or not I have a fully naturalistic picture of human morality.
Ricardo Lopes: Mhm, OK. So another topic that you've written a bit about is perceptual categories. Do we know if they have evolved to track properties of the world or not?
Cailin O'Connor: Yeah. So this is something that gets into a lot of really deep discussions and debate in philosophy, philosophy of perception, but also in cognitive science. Obviously, we have our perceptual senses to mediate our behaviors in the world, right? We're organisms; our goals, I mean, our goals as shaped by evolution, are to eat, drink, sleep, keep our bodies alive, eventually reproduce, and maybe keep our children alive so that they can reproduce. So we have bodies that are trying to do those things, and perception is useful inasmuch as it allows us to do those things. So some people, like Don Hoffman, have made arguments like: well, there's no reason to think that our perception needs to accurately track the world, then; all it needs to do is promote our fitness. And so it could be the case that it promotes our fitness by, you know, creating perceptual categories or perceptual experiences that don't sort of glom onto the stuff that's really there, the structures that really exist. And he says, for that reason, we have no reason to think that our perception and our perceptual experiences are anything like what the world is like, or tell us about what the world is like. So he uses models to argue that. I use similar models to make a different argument, which is something like: in fact, there's plenty of reason to think that the structure of the world is often really important in determining what our fitness is going to look like, that there are at least going to be some reasons why our perception needs to tell us veridical or world-tracking things in order for us to survive. And so making the jump to "our perception has nothing to do with the world" seems like much too strong of a jump to me, for that reason.
Ricardo Lopes: Yeah, I've in fact already had Dr. Hoffman twice on the show, and yes, he has at least some compelling arguments to support his position. But since you come from, I'm not sure if you would label it the opposite place, but at least a different place: how do you look at it, exactly? Do you look at it through a pragmatic lens, for example, since you're also coming at it from an evolutionary perspective?
Cailin O'Connor: Yeah. So I do think of perception through this kind of pragmatic lens, and like an evolutionary lens, where I assume, you know, there are some things, some structures in the world, and there are actions that would be better or worse for organisms to take in the presence of these different structures. So, for example, if humans are in the presence of blackberries, eating is a good action. If they're in the presence of juniper berries, eating is not such a good action. So there are ways in which the structure of the world impacts our fitness, based on how we act. So I use models that assume that, and then I ask, in these models: well, what will our perceptual categories look like? And there are certain kinds of things that will often be the case. So part of Don's argument is that, for example, there might be things in the world that are different from each other, but we should respond to them in the same way, and then we can categorize them together. So maybe something like a lower-level animal, say a dragonfly, can have the same perceptual experience for many different types of food, because all they need to do in response to that food is eat, and so their perception doesn't have to disambiguate those things. And he would say, well, then their perception isn't tracking the world. My response would be something like: there are ways in which it's not tracking the world and ways in which it is. You know, it is putting whatever, I don't know what dragonflies eat, all of that kind of little bug into the same edible category, right? And all of this kind of bug are in the same category. So it's tracking sameness among those different food sources; it's failing to track difference between them. But that doesn't mean there's no sort of important correspondence between perception and the world.
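A minimal sketch of a similarity-based categorization set-up in the spirit of the models discussed here (illustrative payoffs, not O'Connor's actual model): states sit on a line, payoff falls off with the distance between act and state, and a perceiver limited to two categories does best by lumping similar states into contiguous chunks.

```python
from itertools import product

STATES = range(10)  # world states arranged on a similarity line

def payoff(state, act):
    return -abs(state - act)  # acting close to the true state pays better

def value(assignment):
    """Total payoff when each category responds with its best single act."""
    total = 0.0
    for cat in set(assignment):
        members = [s for s in STATES if assignment[s] == cat]
        total += max(sum(payoff(s, a) for s in members) for a in STATES)
    return total

# brute-force search over every way of sorting 10 states into 2 categories
best = max(product(range(2), repeat=10), key=value)
print("optimal 2-category partition:", best)
# -> (0, 0, 0, 0, 0, 1, 1, 1, 1, 1): each category is a convex chunk of
#    similar states -- coarse, but genuinely shaped by the world's structure.
```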
Ricardo Lopes: So I would like to ask you now about learning generalization and evolutionarily stable strategies, and to compare them as models for learning, basically. So, first of all, tell us: what is learning generalization?
Cailin O'Connor: Yeah. So this is a completely ubiquitous, widespread learning behavior where an organism encounters something in the world and learns something about it. So maybe it learns: touching good, eating good, avoiding good, whatever it learns. But that learned response is not just applied to that single state that the organism encountered, but to many similar states. So if I try eating a blackberry and it tastes good, I don't learn to eat only that exact blackberry with that exact weight, that exact color, that exact smell, that exact location; I learn to eat blackberries in general. And that sounds like, well, yeah, obviously, duh, right? Of course you should do that. But it's actually a nontrivial task, you know: when you expand out from that first initial stimulus, how many things do you include? How dissimilar, how similar? What counts as something that you ought to eat in response to this initial learning experience? So that's what learning generalization is. Do you want me to talk about why it's interesting from this perspective? OK, so there's this classic argument in evolutionary game theory, the study of the evolution of strategic behaviors, having to do with learning. When you have these strategic situations, there are certain kinds of behaviors that are going to be the ones you expect to evolve: basically, the ones that can be stable in an evolutionary scenario. And the argument is something like this: you should only expect the selection of learning that will learn those ideal strategies. Because if you have a learner who doesn't learn those ideal strategies, then they're gonna get a lower payoff than the ones who do, and then they should be selected against. So expect organisms to learn these stable strategies in these strategic scenarios. But, and I've done work arguing this, other people have as well, the issue with that argument is that there are tradeoffs and benefits in learning. Learning doesn't happen all in one second; it takes time, and you're getting payoffs over the entire time that that's happening. Learning generalization is a behavior that doesn't lead to these ideal evolutionarily stable strategies, these perfect strategies, basically because it leads you to extend learning to new situations, situations where you didn't have the first learning experience. And so you extend your learning to situations where it might not be quite as good as the ones where you learned it. So I use models to show that this is the case: learning generalization isn't going to fit with this kind of classic argument about learning. But very clearly, it has evolved. And the reason is that it helps you learn faster. Imagine if you had to learn the right behavior in response to every single blackberry you ever saw, individually. Well, then you're not really learning at all, or in any case, you're learning very, very, very slowly. And so you're gonna have all these times in your lifetime where you could have extended a lesson to a useful scenario, but you didn't. So the idea is that selection is going to account for that need for speed, even if this kind of speedy learning behavior necessarily stops you from learning the most perfect, precise thing you could.
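A minimal sketch of the speed tradeoff just described, under illustrative assumptions rather than any specific published model: a simple reinforcement learner that spreads each lesson to similar states earns more early on, even though its spillover near the category boundary is sometimes wrong.

```python
import random

random.seed(3)
STATES, ROUNDS, TRIALS = 20, 200, 100

def true_payoff(state, act):
    if act == "avoid":
        return 0.0
    return 1.0 if state < 10 else -1.0  # eating pays on states 0..9, hurts on 10..19

def run(spread):
    # w[s][act]: Roth-Erev style accumulated reinforcement weights
    w = [{"eat": 1.0, "avoid": 1.0} for _ in range(STATES)]
    total = 0.0
    for _ in range(ROUNDS):
        s = random.randrange(STATES)
        p_eat = w[s]["eat"] / (w[s]["eat"] + w[s]["avoid"])
        act = "eat" if random.random() < p_eat else "avoid"
        pay = true_payoff(s, act)
        total += pay
        for t in range(STATES):
            # generalize: similar states get a discounted share of the lesson
            factor = spread ** abs(s - t)   # 0**0 == 1, so state s always updates
            if factor > 0:
                w[t][act] = max(0.1, w[t][act] + factor * pay)
    return total / ROUNDS

for spread, label in ((0.0, "no generalization"), (0.5, "generalizes to similar states")):
    avg = sum(run(spread) for _ in range(TRIALS)) / TRIALS
    print(f"{label}: mean payoff per round = {avg:+.3f}")
# The generalizer earns more over a finite lifetime, despite being imprecise
# around the boundary states (9 and 10) where spillover points the wrong way.
```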
Ricardo Lopes: Mhm. But does that mean, then, that learning generalization contradicts evolutionarily stable strategies as a model for learning, or does it complement it in any way?
Cailin O'Connor: Well, OK, so evolutionarily stable strategies are outcomes in a game that you expect to evolve. So that's a little bit separate from learning. It's not the case that they always evolve, in fact, even in evolutionary models, but that's a whole separate thing. So they're the sort of behaviors you expect to evolve. But you can also ask what learning behaviors get selected for. And so the argument is about: will you see learning behaviors that always lead to those evolutionarily stable strategies, or will you see other learning behaviors? And, you know, the argument is you'll see other learning behaviors, because they're faster or have other kinds of benefits besides getting to the perfect strategies.
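A minimal sketch of the ESS condition itself, using the textbook Hawk-Dove game with illustrative numbers (not from the interview): strategy x is evolutionarily stable if no rare mutant y does better against x, and, on ties, x does better against y than y does against itself.

```python
V, C = 2.0, 4.0  # resource value and fight cost, with C > V

def E(p, q):
    """Expected payoff of a p-Hawk mixer meeting a q-Hawk mixer."""
    return (p * q * (V - C) / 2          # Hawk vs Hawk: share V, pay the fight cost
            + p * (1 - q) * V            # Hawk vs Dove: take the whole resource
            + (1 - p) * (1 - q) * V / 2)  # Dove vs Dove: split peacefully

x = V / C  # the classic mixed ESS: play Hawk with probability V/C
ok = True
for i in range(101):
    y = i / 100
    if abs(y - x) < 1e-9:
        continue  # skip the incumbent itself
    no_advantage = E(y, x) <= E(x, x) + 1e-12  # mutants do no better against x
    beats_mutant = E(x, y) > E(y, y)           # on ties, x outperforms y among y's
    ok = ok and no_advantage and beats_mutant
print(f"Hawk with probability {x} is stable against all sampled mutants: {ok}")
# In Hawk-Dove every mutant ties against x, so stability rests entirely on the
# second condition -- which holds for all y != V/C.
```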
Ricardo Lopes: Mhm. So the last topic I would like to get into today is misinformation during, and about, the COVID-19 pandemic. First of all, what were the questions that you found most interesting about this topic, coming from the perspective of game theory?
Cailin O'Connor: Well, not just coming from the perspective of game theory; a lot of my work on misinformation isn't in game theory but in other kinds of modeling. But there were a lot of things that happened during the pandemic that were really interesting from the point of view of thinking about misinformation, the spread of beliefs, the social spread of beliefs, all this stuff. So here was one thing. Jim Weatherall and I published this book in 2019, The Misinformation Age, where one of the things we argued is that in cases where false beliefs really hurt people, they should be less likely to hold those false beliefs. So, for example, if everyone is like, oh, red Kool-Aid is the most delicious, even if that were somehow not true, it's totally fine for me to conform with everyone and drink red Kool-Aid, or, say they said it was nutritious, to trust everyone in saying it was nutritious. It doesn't matter; it's not gonna hurt me if I drink red Kool-Aid, right? It's just not that bad for me, even if this belief is a little wrong. On the other hand, if everyone was like, oh, it's safest to drink cyanide, we shouldn't see that false belief spreading, right? We shouldn't see everyone starting to drink that, because it really matters to people if they get it wrong. Now, with COVID, I would have thought before the pandemic that this would be a scenario that's more like the latter case, because you could make choices during the pandemic where, if you make that choice today, it directly leads to your death in two to four weeks, right? You just die in a couple of weeks. In particular, exposure choices could put people at risk of death just very directly, and there's a pretty high fatality rate from COVID. But you still saw all of these kinds of social effects on belief and misinformation related to COVID. In spite of its very prominent, visible, real-world danger to people, you still saw people choosing to be exposed based on partisan identity, for example, or choosing whether or not to wear masks based on their social identities, or whether they were conforming with other people in their social networks. So I thought that that was very interesting, and it surprised me quite a lot, actually.
Ricardo Lopes: And, I mean, how do you make sense of it, then?
Cailin O'Connor: I mean, I guess the way I make sense of it is thinking that social pressures are just stronger and more important to people than maybe I initially would have thought. You know, a whole thesis of this book was that social stuff matters to belief a lot, and I feel like, oh, I guess that was right, even more right than I expected, you know, that people would be influenced by these sort of social factors so much that they would be willing to take fairly significant risks to their health.
Ricardo Lopes: But, I mean, from the perspective you've used to tackle this topic: do these beliefs that are epistemically wrong, I guess, still serve particular functions, namely social functions? And is that the main reason why they spread, even though they might be harmful to the people who believe them and spread them?
Cailin O'Connor: Yeah, that's right, they often can. So this is something that I haven't exactly modeled, but, you know, I've looked at other people's work on it. One thing that beliefs do is play roles in in-group/out-group signaling: communicating your social identity, bonding with other people, impressing other people. So holding beliefs, and sharing beliefs, and taking actions based on beliefs, it all has functions, you know, with respect to our bodies and their relation to the world, but it also has functions with respect to us and our social relationships. So, for example, I might put on a MAGA hat to signal to the people I want to signal to: I'm part of your in-group. And they will assume, in my wearing that hat, that I also hold various sorts of beliefs about the world and how it works.
Ricardo Lopes: So do you think that if we get a good understanding of how all of this works, I mean, why people hold certain beliefs and why they communicate them to other people, perhaps we could use this knowledge to tackle misinformation?
Cailin O'Connor: I certainly think it can help. So, for example, understanding online platforms and how the spread of belief works on them, I think, can help those platforms develop better information environments, or better algorithms to protect people. Just to give a little example, again, not my research, other people's research: it seems like emotional language has a big impact on how widely spread things like tweets on Twitter are. And so knowing that can help a platform realize: OK, if you have emotional language attached to, say, misinformation, maybe that's the sort of thing the algorithm shouldn't actively promote. And so that's an example of knowing something about how humans work possibly helping to improve information spread in a social group.
Ricardo Lopes: Mhm, yeah. And perhaps, I guess that holding certain very influential people accountable for what they say, like, for example, Donald Trump, would also help a little bit.
Cailin O'Connor: Yeah. Ok.
Ricardo Lopes: Yeah, OK. So, Dr. O'Connor, just before we go, would you like to tell people where they can find you and your work on the internet?
Cailin O'Connor: Sure. I have a website; it's just my name, cailinoconnor.com. I put up all my preprinted papers there, so any research that I've done can be found there, except my books. And I'm also on Twitter; my handle is "cailinmeister", like my name and then "meister". My dad used to call me the Cailin-meister, and then when I actually got a master's degree, I thought it would be funny.
Ricardo Lopes: Yeah, it totally makes sense. OK, so I'm leaving links to that in the description box of the interview. And Dr. O'Connor, thank you so much again for taking the time to come on the show. It's been really fun to talk to you.
Cailin O'Connor: Yeah, nice talking to you, Ricardo and hopefully we'll catch each other around sometime.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please do not forget to like it, share, comment, and subscribe. And if you like, more generally, what I'm doing, please consider supporting the show on Patreon or PayPal; you have all of the links in the description of this interview. This show is brought to you by En Lights learning and development, done differently; check their website at alights.com. I would also like to give a huge thank you to my main patrons and PayPal supporters: per Larson, Jerry Mueller and Frederick Sunda Bernards O of Election and Visor Adam Castle Matthew Whit Whitting Bear, no wolf, Tim Hollis, Eric Alania, John Connors Philip Forrest Connelly, Robert Winde Nai Z Mark Nevs Colin Holbrook, Simon Columbus, Phil Kor Michael Stormer, Samuel Andreev for the S Alexander Dan Bauer Fergal Ken Hall, her og Michel Jonathan lebron Jars and the Samuel K, Eric Heins Mark Smith Jan We Amal S Franz David Sloan Wilson, Yasa, Des Roma Roach Diego, Jannik Punter, Da Man Charlotte Bliss, Nicole Barbar Wam and Pao Assy Naw Guy Madison, Gary GH, some of the Adrian Yin Nick Golden Paul talent in Ju Bar was Julian Price Edward Hall, Eden Bronner, Douglas Fry Franca Bertolotti, Gabriel Pan Cortez, Lelis Scott Zachary Fish, Tim Duffy, Sonny Smith, John Wiesman, Martin Aland, Daniel Friedman. William Buckner, Paul George Arnold Luke Lo A Georges the often Chris Williamson, Peter Oren, David Williams the Costa Anton Erickson Charles Murray, Alex Shaw and Murray Martinez Chevalier, Bangalore atheists, Larry Daley Junior Holt Eric B. Starry Michael Bailey, then Sperber, Robert Grassy Rough the RP MD, Ior Jeff mcmahon, Jake Zul Barnabas Radix, Mark Campbell, Richard Bowen Thomas, the Dubner, Luke Ni and Greece story, Manuel Oliveira, Kimberly Johnson and Benjamin Gilbert. A special thanks to my producers is our web gem Frank Luca Stefi, Tom Weam Bernard ni Ortiz Dixon, Benedict Mueller Vege, Gli Thomas Trumble, Catherine and Patrick Tobin, John Carlo Montenegro, Robert Lewis and Al Nick Ortiz. And to my executive producers, Matthew Lavender, Si Adrian and Bogdan Kut. Thank you for all.