RECORDED ON JUNE 19th 2025.
Dr. Hanna Schleihauf is Assistant Professor in the Department for Developmental Psychology at Utrecht University. She studies the roots of human diversity: although each of us shares 99.99% of our genes with every other human on the planet, there is massive variation in our socio-cultural practices driven by our ability to learn from and interact with others. Her research investigates socio-cognitive underpinnings of cultural learning, focusing on how cultural novices, children during early and middle childhood, grow into proficient cultural beings.
In this episode, we first talk about when and how children start considering other people’s beliefs, the kinds of beliefs people care about, fact-based beliefs and value-based beliefs, and intuitions about people’s control over their own beliefs. We then talk about belief revision and how it develops in children. We discuss what people consider to be good reasoning. Finally, we talk about recent exciting findings that suggest that chimpanzees respond to higher-order evidence.
Time Links:
Intro
When do children start considering other people’s beliefs?
The kinds of beliefs people care about
People’s control over their own beliefs
Belief revision
What do people consider to be good reasoning?
Exciting findings that suggest that chimpanzees respond to higher-order evidence
Dr. Schleihauf's current work
Follow Dr. Schleihauf’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of the Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Hanna Schleihauf. She's an assistant professor in the Department for Developmental Psychology at Utrecht University. Her research investigates socio-cognitive underpinnings of cultural learning, focusing on how cultural novices, children during early and middle childhood, grow into proficient cultural beings. And today we're going to talk about children's ideas about other people's beliefs, belief revision, and other related topics. So, Hanna, welcome to the show. It's a pleasure to have you.
Hanna Schleihauf: Thank you for having me.
Ricardo Lopes: So, when exactly do children start considering other people's beliefs?
Hanna Schleihauf: That's a pretty good first question. So maybe let me first talk about what a belief is in general. I know you've had a lot of people talking about this on the show, and compared to the definitions that they gave, mine is rather minimal. A belief is a mental representation that we have, and its goal is to represent the world as close to reality as possible. So the goal of a belief is to be true. But sometimes our representations, our knowledge about the world, are incomplete or maybe even false. So a belief can be true, but it can also be false. And I think it is important, when we think about these mental representations, that there are many other kinds of mental representations we can have. Beliefs are one of them, but we can also have mental representations about hopes, about goals, and about the perspectives of others. And there is a developmental trajectory to how these develop, and beliefs are actually rather the end point of this trajectory. So thinking about other people's beliefs is something that develops rather late compared to the others. And even though there is a lot of interindividual variability in this, there are certain timelines, certain age ranges, that are associated with important developmental changes. One of them is around 9 months of age. That is when kids start to understand other people's perspectives and goals. So, for example, they understand that somebody else sees something differently from how they do, that they have a different perspective. And there are studies, for example, where children see two objects, but another person sitting across from them only sees one.
And then they expect that this agent reaches for the object that the agent can see and not the object that only the children can see. Around the same time, children also start to develop an understanding of other people's goals, and that these goals can also be different from their own. And they also start to predict, or expect, that people behave rationally considering the goals and the perspectives that they have, so that you take your own perspective and your own goal into account when you plan your actions. Then, a little bit later, around the age of 18 months, children also start to understand that people's desires can be different from their own. There's one famous study by Alison Gopnik, where the experimenter pretended to like broccoli much more than crackers, while the children themselves obviously liked the crackers much more than the broccoli. Then the experimenter asked: can you please give me something of the one that I like the most? Kids younger than 18 months, I think the sample was actually 14 months old, handed over the crackers, but around 18 months, the kids understood that the experimenter's preferences are different from their own and handed over the broccoli. And then, a little bit later again, around the age of 4, children really come to understand that people's beliefs can also be different from their own, and can also be different from the true state of the world, so that we can hold false beliefs.
And around that age, children typically pass these very famous false belief tests. I think the most famous one is when children observe how an object is hidden in one of two boxes, and an agent sees this happening. Then the agent leaves the stage, so the agent thinks the object is, for example, in box A. Then somebody else comes in, moves this object, and puts it in the other box. The kids observed this whole scene, but the first agent was gone, so he doesn't know where it is. When the agent comes back, the children are asked: where would this person now look for the object? Kids around 4 years of age start to answer this question correctly. They say the agent would look in the first box, where they believed the object was hidden, even though that's not the state of reality anymore. Kids that are younger usually still say it's in the other box. So around that age, we would really say that children start to consider another person's beliefs, because then they have this full-blown understanding of what a belief is and that a belief can also be false.
Ricardo Lopes: And why is it that people, not only children but also adults, care about the beliefs that other people hold? Why does it matter?
Hanna Schleihauf: I think this is really fundamental to us being such a social species; understanding and being able to navigate the social world is necessary for our survival. Just to give you a few examples where I think this matters: for example, when we predict other people's behavior. Imagine that I'm going to a concert with my friend tomorrow, but then I learn the concert has been canceled, and my friend doesn't know it. I know that they would probably still show up, so I predict that they will show up at the concert hall if I don't inform them that the concert was canceled, and then I can use this information to help them and write them a text telling them the concert was canceled and they should not go there. So I can use this information to predict other people's behavior and then help them. But you could also use it to deceive others: if, for whatever reason, the concert is actually happening but I don't want my friend to go, I could tell them the concert was canceled even though it's still happening. So I could also deceive someone with this knowledge. And it also enables us to really share a common vision: if I know what another person believes, I can change this person's beliefs so that we have a common vision. So it's really the foundation of collaboration and of us working together. And one other important topic: we humans are really concerned about what other people believe about us. We're really concerned about our reputation, and being aware of what another person believes helps us to manage our reputation and maybe behave in certain ways in front of certain people. So I think those are just a few examples that really illustrate that we use this skill of thinking about other people's beliefs pretty much all the time, every day.
Ricardo Lopes: And what are the kinds of beliefs that people tend to care about?
Hanna Schleihauf: I think that people care about others' beliefs when those beliefs are related, or in some way meaningful and relevant, to their own lives, to themselves, and also in certain contexts. To give you a simple example: when I'm in traffic, driving a car, and I believe that the other driver thinks that the traffic light is green even though I know that it's red, that is an important piece of information that could save my life. And this is a fact-based belief: here I'm making inferences about what the other person thinks the facts in the world are. But there are also other kinds of beliefs that seem to be a little bit more complex. For example, when I'm trying to decide who to spend time with, who my friends are, what kind of values they have, and what kind of people I want to surround myself with, then I might also think about what kind of core beliefs, what kind of core values, they have, so that I can decide who to spend my time with, who to learn from, who to befriend. And I think those beliefs are more like value-based beliefs. There we are not making inferences so much about facts and evidence, but more about what values people hold.
Ricardo Lopes: Right, OK, so you've already told us about fact-based and value-based beliefs, and we're going to come back to them later. But it is one thing for people to hold particular beliefs, and it is another thing for them to have control over their own beliefs. What kinds of intuitions do children and adults have about that?
Hanna Schleihauf: Yeah, that's a great question. I need to mention here Joshua Konfer, who is a PhD student at UC Berkeley and actually just graduated. He is really interested in this topic and ran a few studies investigating what kinds of intuitions adults and also younger children have about what kinds of beliefs other people can hold, and how free they are in choosing which beliefs to hold. He starts his paper by giving this example: if you look at fact-based beliefs, sometimes we have the feeling that it's almost impossible to believe something else if there is evidence speaking for a certain belief. So, imagine I am now telling you that the United States is a colony of Great Britain, and you have a lot of evidence that speaks against it. Even if I offered you a lot of money, you probably couldn't change your mind. You are so convinced, you have so much evidence that the US is no longer a colony of Great Britain, that you would not change your mind. So here it seems like evidence is a limiting feature of how free we are to choose what to believe. And he investigated this in a study with younger children, 5- to 6-year-old children, and also adults, and ran three different conditions where he manipulated how much evidence, or what kind of evidence, a person has to base their belief on. He did this by showing the kids a picture book story. In this picture book, the kids saw a character, and the character formed the belief that the weather is sunny, it's sunny outside, but there were different conditions in how much evidence this character had for this belief. There was a strong evidence condition, where the character went to the living room, looked outside the window, and saw that the sun was shining. So there the character had strong evidence for their belief.
And then there was another condition where the character had no evidence for their belief: the character stayed in the bedroom, which didn't have any windows, and still formed the belief that it's sunny outside. And in the third condition, there was counter-evidence: the character went to the living room, looked out of the window, saw that it's raining, but formed the belief that it's sunny outside. And then the kids, and also the adults, were asked: would it be possible for this character to hold the opposite belief instead, that it's raining outside? Interestingly, even the youngest children, as well as the adults, really seemed to perceive that this evidence restricts the freedom that this character has to form their beliefs. So, in the strong evidence condition, they said that it's almost impossible to change your mind; you almost must believe that it is sunny outside if you've seen it. But when there was no evidence, or when there was evidence leading in a different direction, they said it's very possible to hold a different belief, or that you almost need to change your mind if there's evidence speaking for the contrary.
Ricardo Lopes: But do the kinds of intuitions people have about the control they have over their own beliefs depend on whether we are talking about fact-based or value-based beliefs?
Hanna Schleihauf: Yeah, that is actually the second study that Josh ran; there he looked at value-based beliefs. For the fact-based beliefs, it is the evidence that limits what beliefs are possible; his hypothesis was that for value-based beliefs, it's morality that determines whether we think a certain belief is possible to hold or not. Here he again had three conditions: one condition where a character formed a moral belief, one where the character formed an immoral belief, and one where it was neither moral nor immoral, where the character just stated a certain opinion or preference. To give you a more detailed example of what this looked like: there was a character called Ashley, who is in a park, and she sees a boy who falls from his bike. In the moral condition, she formed the belief: it's bad that this boy fell from the bike and hurt himself. In the immoral condition, Ashley formed the belief: it's good that this boy fell from the bike and hurt himself. And in the opinion condition, she just saw that the boy sits next to his bike, she didn't observe the fall, and she formed this opinion: oh, it's good that this bike is yellow. Then we presented the kids and adults with an alternative belief and asked: is it possible to hold this alternative belief? When Ashley had formed a moral belief in the beginning, we presented them with an immoral belief and asked: could it instead be the case that Ashley thinks it is good that the boy fell from the bike? If Ashley had formed the immoral belief, we asked them: could it instead be that Ashley thinks it's bad that the boy fell from the bike? And in the opinion condition, we simply asked them: could it also be that Ashley thinks it would be good if the bike was blue instead of yellow?
And here we also found interesting condition differences. In the moral condition, the majority of participants had the feeling that it is actually not so likely that you change your mind. In the immoral condition, when Ashley was holding the immoral belief that it's good that the boy fell from the bike, it was very different: they had the feeling that she should be drawn to the opposite, that it would be possible for her to think it was bad, so drawn towards the moral side. And in the opinion condition, kids and adults had the feeling that you have a lot of control over your beliefs, so you can believe that it's good that the bike is blue or yellow. Then we found, in one of the conditions, an interesting age difference, and that was in the moral condition. There we found that the youngest kids had the feeling that morality is really restricting what you can believe or not. So, if you believe it is bad that the boy fell from the bike, it's really almost impossible to believe the opposite. But this effect was reduced the older the participants got, with adults thinking that it is actually still pretty possible that you could think the opposite, that you could also hold an immoral belief. So it seems like, especially for the younger age groups, morality really restricts the beliefs that seem possible, compared to the older ones.
Ricardo Lopes: Right. And are there big differences between the judgments made by children and adults?
Hanna Schleihauf: Yeah, the differences that we found are restricted to the second study, where we looked at value-based beliefs, and the differences were really the ones we found in this morality condition. We had 5- and 6-year-olds, 7- and 8-year-olds, and adults, and only in the morality condition we found that the younger children thought that the beliefs are really restricted by morality, so they thought it's almost impossible to hold an immoral instead of a moral belief. But the older the kids got, so even the 7- and 8-year-olds, and the adults, thought it is more possible to also change your mind to an immoral belief. We didn't really study the underlying mechanisms for this exactly, but our interpretation was that the more experience you get with immoral behaviors of other people, the more likely you think it is that people can also change their minds and hold immoral beliefs.
Ricardo Lopes: So let's get into the topic of belief revision. What is that?
Hanna Schleihauf: Yeah, we have already hinted at this slightly. Belief revision is the process of changing your mind when new information arrives, especially when this information contradicts what you already believe. And I think it's important to distinguish two concepts there: one is belief updating, and the other is belief revision. Belief updating is when you hold a certain belief, for example, going back to the false belief example, in the beginning a reward is in one box, and then you see how an agent takes this object out and moves it to the second box. Then you can update your belief: you see this change happening, so you update your belief that the object is no longer in A but in B, and you've observed this process. There is not really a conflict happening, but still your belief has changed. That is what we call belief updating. If there is a conflict, for example, imagine you haven't observed this process of how the object was moved from A to B, then it's really a conflict. If you suddenly see, oh, now the object is in B, then you realize, oh, it cannot be true anymore that it is in A. So there is a conflict between these two beliefs, and you need to decide: is my new evidence strong enough that I need to abandon my former belief and take on my new belief?
Ricardo Lopes: And how does belief revision develop in children?
Hanna Schleihauf: So far what I know of in the research, most of the studies look at kids in the age range from 4 and 5 years. For 4 and 5 years and that also makes sense and so far if you think about that, um, this is the age when they are able to understand the concept of a belief. So they understand that beliefs can be false and um maybe are more flexible in um in changing their beliefs. And in my own studies where I also looked at um 4 and 5 year olds, we found that. At this age range, like so early, they actually, kids already changed their beliefs in very rational ways, and that they consider, Um, not only, like, new information coming in, but they also consider what foundation they base their initial beliefs on.
Ricardo Lopes: Mhm. And on what basis do children revise their beliefs then?
Hanna Schleihauf: Yeah, what we found in our own study is that they always consider their prior beliefs and the strength of those prior beliefs, and then they compare this to the new evidence coming in and the quality of the new evidence. Here is how we did this in our study: we again had two boxes, that is how a lot of our research works, and the kids were told: in one of these boxes is a reward for you. Then we manipulated the prior belief of whether the child thinks it's in box A or box B by asking the children: can you bring the boxes over to the table where the study is taking place? When the kids picked up these boxes, one of the boxes was either heavier or made a noise, so that they thought: oh, the reward is probably in there. So they had some evidence that the reward is in one of these boxes. In that condition, they had strong evidence that, for example, the reward is in box A because it's heavier. Then we presented them with new, alternative evidence that the reward is in the respective other box. What we did in our study was give them verbal reasons, because we were interested in how children can evaluate verbal reasons and integrate them with their prior knowledge. And in that case, the reason was either good or bad. For example, imagine you have strong evidence that the reward is in box A, but then somebody tells you: the reward is actually in box B, because I've seen it in there, so I know it. That would be a good reason, and now we have two good reasons contradicting each other. Alternatively, if you have this strong evidence and somebody else tells you: oh, I think it's in box B because that box is my favorite color, then it's rather a weak reason, so here you would rather not change your mind.
And in another condition, we also manipulated whether they have prior evidence at all. If they brought over the boxes and both of them were equally heavy, or neither of them made a noise, then they didn't have any prior evidence to base their belief on. So there, maybe it would be more likely that they're swayed by the argument presented afterwards. What we found was that kids really changed their beliefs in a very rational way: they incorporated the strength of their prior belief, whether it was based on evidence or not, and the quality of the argument that was presented afterwards. If they didn't have any prior belief but were presented with a strong argument afterwards, they very often changed their minds. On the other hand, if they had a very strong prior belief but were presented with a weak argument, it's only my favorite color, they almost never changed their minds. In the condition where they had strong evidence and were presented with a strong argument, we found that in 50% of the cases they changed their minds. And when they did not have any evidence and were presented with a weak reason, we found that they changed their minds in a little more than 50% of the cases. So it seemed like, when they don't have any clue, a bad reason, that it's a favorite color, is maybe still better than nothing, because you don't have any idea what is happening. So some of them followed this argument even though it was a bad one.
Ricardo Lopes: So those are the kinds of reasons that children consider when revising their beliefs, right?
Hanna Schleihauf: Yeah, those reasons were mostly what you would also call first-order evidence: we really gave a reason that speaks directly for or against the belief. In an additional study in the same paper, we looked at whether the kids also consider so-called second-order evidence. Second-order evidence, the way we defined it, either confirms or undermines the evidence for a belief. In our case, we also called it a meta-reason, so it really was a reason about the reason. To give you an example: we ran a study where the kids were looking for a pet that was lost. They were presented with a picture where they saw two bushes, a bush with red berries and a bush with purple berries. The animal that was lost was a duck, and they saw that there were duck footprints leading to one of the bushes, and the kids were asked: where do you think the duck is hiding? Most of them correctly inferred that the duck is hiding behind the bush the footprints lead to. Then, in the next step, we gave them either confirming or undermining evidence. We either said: yes, you're right, these footprints look like duck footprints, and we showed a picture of what duck footprints look like. Or, in the other condition, we said: oh wait, these footprints don't look like duck footprints, because duck footprints look like this, and we showed them what duck footprints look like. And then we asked them again: where do you think the duck is hiding? In the undermining condition, where we presented them with this undermining second-order evidence, kids changed their belief much more often compared to the condition where their initial evidence was confirmed.
Ricardo Lopes: Right. OK, so changing topics, what do people consider to be good reasoning?
Hanna Schleihauf: Yeah, I think the way people can evaluate others' reasoning is twofold. You can see whether a person gets to the right conclusions, the right outcomes, or you can also evaluate the process that people use to get to a certain outcome. And sometimes these two things don't lead in the same direction. For example, if you're presented with somebody who reaches a correct conclusion, but you know that the process they used was completely irrational, maybe to decide whether a reward is in box A or box B they just flipped a coin, then you would say this was an irrational process that they used and they were simply lucky to get to the right conclusion, but still they got it right. This is something that we looked at in children and also adults from different cultures: we studied kids in the United States and in China, and we presented them with pretty much exactly the same scenario that I just explained to you. Again, it was a lost pet, and the kids were asked: where is the pet? One of the characters in the story was using an irrational procedure to figure out where the pet is hiding. We had several versions of this, but, for example, one of these spinner boards with an arrow, where you make the arrow spin and it points at a certain color on the board. And then the character said: OK, I believe that the pet is hiding behind the bush with the purple berries, because that is what my random process just came up with. But by accident, the character was right. And the other character, for example, used the correct procedure.
And the correct procedure, a rational procedure, was in this case looking for evidence: looking, for example, for footprints on the floor and inferring, oh, the animal is in this hiding location. But unknown to these characters, the animal had changed location in the meantime, so it was no longer behind the bush where the footprints led. So in the end, the character who had used the random procedure came to the right conclusion, and the character who had used the rational procedure came to the wrong conclusion. Then we asked the kids which of these characters they think did the better job, and also which of these characters they would prefer to ask for help if they needed help finding something. And we found an interesting developmental difference: 4- and 5-year-olds always had a strong preference for the character who got to the right outcome, independent of whether they used a rational or irrational procedure. And adults had a strong preference for the character who used the rational procedure, independent of whether they reached a correct or incorrect outcome. We also found some interesting differences in the middle age ranges, between 6 and 9 years of age, because we found that the kids in China had a preference earlier on for the character who used a rational procedure over the character who came to the correct outcome. The kids in the United States had the same switch, this developmental switch where at some point they preferred the character with the rational procedure, but it happened a little bit later: in the United States, it was around 7 to 8 years of age. And we were wondering: why did we find this developmental difference in these two cultures?
And we thought it could be that in China, people are often raised with a more holistic picture of the world, trying to really see relations among, for example, processes and outcomes, and maybe this helps these kids make this developmental switch earlier on, evaluating the process rather than the outcome when they observe the actions or reasoning of other people.
Ricardo Lopes: Yeah, that's really interesting, to find and explain these cross-cultural differences, but also the aspects in which there is universality, I guess. So, I have one last topic I would like to ask you about. You have some recent exciting findings that suggest that chimpanzees respond to higher-order evidence. Could you tell us about that? First of all, tell us what higher-order evidence is, and then what you did there in that study.
Hanna Schleihauf: Yeah, in this study we actually started out by investigating belief revision in chimpanzees, and we found, very similar to the study with kids that I explained earlier, that chimpanzees revise their beliefs in rational ways. If they form an initial belief based on weak evidence but are afterwards presented with stronger evidence, then they change their minds. However, when it's the other way around, when they are first presented with strong evidence and base their initial belief on strong evidence, and are later presented with weak evidence, then they stick to their initial belief. In a second step, we were interested in whether chimpanzees can also understand second-order evidence, similar to what we had done with children. Here we looked at whether they also understand undermining evidence. In the kids' version of the study, we had these confirming or disconfirming reasons, and we said: oh, these footprints don't look like duck footprints. Of course, that was not possible to do with chimpanzees, so we had to come up with a nonverbal version of this task. Here again we had two boxes, and the question was: in which of these two boxes is the apple hidden? And we presented the chimpanzees with different pieces of evidence. In this study, where we looked at second-order evidence, we first showed them weak evidence for one box: we just lifted this box, shook it, and put it down, and the chimpanzees probably inferred, as we know from previous work, that they can use this clue: the apple is in the box where I heard the noise. So they could make a choice, and they chose the one where they had heard the noise.
And then in the next step, we presented them with strong evidence: we turned the box around and showed them, through a blurry window, that there was an apple in there. They saw something that looked like an apple through a blurry window, and that was strong visual evidence that, oh, probably the noise I just heard was something else, it was probably not the apple, but now I've seen it, so it must be in the other box, so I should change my mind. And that is also what most of the chimps did, as we knew from the previous study. Then in the next step, we actually showed them an undermining piece of evidence, an epistemic defeater. We showed them that this strong evidence that they'd just based their belief on was not really valid. And how did we do that? We took a second screen out of this box that had a picture of an apple on it. So what they'd seen earlier was not the actual apple, it was just a picture of an apple. Now, if they can make the inference that, oh, the strong evidence that I've just seen was defeated by the evidence that it's just a picture, they should actually change their mind, and they should change their mind back to what they'd heard initially. So now the shaking noise that they heard initially was actually the strongest evidence that they had overall, because this other piece of visual evidence was defeated. And we found that if we defeated this evidence, if we showed them this picture of the apple, the chimpanzees were much more likely to change their mind and switch back to the initial box that they had chosen, compared to when we didn't defeat this evidence. So in a control condition, we also took a screen out of the box, but there was no picture of an apple on it, and then most of the chimpanzees stuck to their strong visual evidence and chose the box where they had seen the apple.
Ricardo Lopes: So would one of the goals here be to understand the phylogenetic roots of this kind of reasoning, and perhaps to try to establish a link between chimpanzees and humans when it comes to the evolution of this kind of reasoning?
Hanna Schleihauf: Yeah, certainly. So I think the important question here in the background is about rationality. Rationality is one of these features that many philosophers, starting with or maybe even before Aristotle, say is really just what makes us human. And the question is, is that really the case? Is rationality really one of these defining features that only humans possess? Rationality has many definitions, but the definition that we looked at here was really responding to reasons. So a rational agent responds to reasons, and how can you test whether an agent responds to reasons? The idea was that you can pretty much test whether somebody follows a reason. That is something that we've already known, that chimpanzees also follow evidence: for example, they can follow yogurt traces and know, OK, that's the trace that leads to the cup where I can actually find the yogurt, so they can use evidence. But the question was, can they also understand that evidence can be defeated? And this really shows that they understand a reason as a reason, because when they know that this reason is not valid anymore, they don't use it.
Ricardo Lopes: Right. OK, so Hanna, what kinds of work are you going to be doing in the near future? I mean, what kinds of topics are you expecting to explore?
Hanna Schleihauf: Yeah, I'm super curious to keep working on cross-species comparisons related to rationality. I'm currently planning a study where we look at the social aspect of rationality. So for example, do we consider what evidence another person bases their beliefs on, and do we compare this with our own evidence? That is something that we could also do with chimpanzees and see whether, if they get a clue from another individual about where the food is, they consider whether this individual had evidence for their belief or not. So this is something that I'm very interested in. More recently, even though it is not yet very planned out, I'm also really interested in AI: how humans use AI as evidence to form their beliefs, and maybe what kind of features AI would need to have so that we would evaluate it more critically. But those are very unbaked ideas at the moment, and I would love to look into this a little bit more in the future.
Ricardo Lopes: Great. So if people are interested, where can they find you and your work on the internet?
Hanna Schleihauf: I don't have a website of my own, but they can find me through the website of my university, Utrecht University, or if they just Google my name, Hanna Schleihauf, there are not that many people with that name, so they should be able to find me. I'm also on Bluesky; that is also one of the channels where people could reach me.
Ricardo Lopes: OK, so thank you so much for taking the time to come on the show. It's been really fun to talk with you. Thank you. Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Enlights, Learning and Development done differently. Check their website at enlights.com, and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters, Perergo Larsson, Jerry Muller, Frederick Sundo, Bernard Seyaz Olaf, Alex, Adam Cassel, Matthew Whittingberrd, Arnaud Wolff, Tim Hollis, Eric Elena, John Connors, Philip Forst Connolly. Then Dmitri Robert Windegerru Inai Zu Mark Nevs, Colin Holbrookfield, Governor, Michel Stormir, Samuel Andrea, Francis Forti Agnun, Svergoras and Hal Herzognun, Machael Jonathan Labrarith, John Yardston, and Samuel Curric Hines, Mark Smith, John Ware, Tom Hammel, Sardusran, David Sloan Wilson, Yasilla Dezaraujo Romain Roach, Diego Londono Correa. Yannik Punteran Ruzmani, Charlotte Blis Nicole Barbaro, Adam Hunt, Pavlostazevski, Alekbaka Madison, Gary G. Alman, Semov, Zal Adrian Yei Poltonin, John Barboza, Julian Price, Edward Hall, Edin Bronner, Douglas Fry, Franco Bartolati, Gabriel Pancortez or Suliliski, Scott Zachary Fish, Tim Duffy, Sony Smith, and Wisman. Daniel Friedman, William Buckner, Paul Georg Jarno, Luke Lovai, Georgios Theophannus, Chris Williamson, Peter Wolozin, David Williams, Dio Costa, Anton Ericsson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, Bangalore atheists, Larry D. Lee Jr. Old Eringbon. Esterri, Michael Bailey, then Spurber, Robert Grassy, Zigoren, Jeff McMahon, Jake Zul, Barnabas Raddix, Mark Kempel, Thomas Dovner, Luke Neeson, Chris Story, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brendan, Nicholas Carlson, Ismael Bensleyman.
George Ekoriati, Valentine Steinmann, Per Crawley, Kate Van Goler, Alexander Obert, Liam Dunaway, BR, Massoud Ali Mohammadi, Perpendicular, Jannes Hetner, Ursula Guinov, Gregory Hastings, David Pinsov, Sean Nelson, Mike Levin, and Jos Necht. A special thanks to my producers Iar Webb, Jim Frank Lucas Stink, Tom Vanneden, Bernardine Curtis Dixon, Benedict Mueller, Thomas Trumbull, Catherine and Patrick Tobin, John Carlo Montenegro, Al Nick Cortiz, and Nick Golden, and to my executive producers, Matthew Lavender, Sergio Quadrian, Bogdan Kaniz and Rosie. Thank you for all.