RECORDED ON AUGUST 26th 2025.
Dr. Dries Bostyn is a social psychologist, statistician, and philosopher currently doing a BOF senior postdoctoral fellowship at the Social Psychology Lab at Ghent University. His primary research focus is the study of moral psychology.
In this episode, we talk about trolley-type moral dilemmas. We start by discussing what they are, and their different types. We talk about an experiment where people had to decide whether to allow two confederates to receive a painful electroshock or shock a third confederate. We discuss the action principle, the contact principle, and the intention principle in sacrificial dilemmas. We talk about iterative sacrificial dilemmas. We discuss the dual-process model for moral cognition, whether cognitive ability is correlated with a preference for utilitarianism, and whether trolley problems are realistic. We also talk about how people infer the moral character of others based on how they resolve sacrificial moral dilemmas, and whether people choose to exit social dilemmas in real life.
Time Links:
Intro
What are “trolley-type” moral dilemmas?
The action principle, the contact principle, and the intention principle
Iterative sacrificial dilemmas
The dual-process model for moral cognition
Is cognitive ability correlated with a preference for utilitarianism?
Are trolley problems realistic?
How people infer the moral character of others based on how they resolve sacrificial moral dilemmas
Do people choose to exit social dilemmas in real life?
Dr. Bostyn’s future research
Follow Dr. Bostyn’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello everyone. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Dries Bostyn. He's a social psychologist, statistician, and philosopher currently doing a BOF senior postdoctoral fellowship at the Social Psychology Lab at Ghent University. His primary research focus is the study of moral psychology, and today we're going to talk mostly about trolley-type moral dilemmas, sacrificial dilemmas, moral cognition, and some other related topics. So, Dr. Bostyn, welcome to the show. It's a huge pleasure to have you on.
Dries Bostyn: Thank you for having me and looking forward to our chat.
Ricardo Lopes: So, uh, just to introduce the topic for people who might not be familiar with what we're going to talk about here, what are trolley-type moral dilemmas?
Dries Bostyn: I mean, so there's not one official definition for trolley-type dilemmas, of course, but I would say they are a class of moral dilemmas whose most well-known example is the trolley dilemma, which is a famous moral dilemma, possibly even the most well-known dilemma. I usually start just by giving people the example, even though by now many people are very familiar with it, just to ground the discussion of what we're talking about. So the trolley dilemma is basically a moral dilemma that depicts a scene where a runaway trolley train is about to run over 5 people. Sometimes these are 5 people that are tied to the tracks, sometimes they're just unsuspecting workmen, but in any case, the train is going to run over 5 people, and the moral dilemma poses a question: imagine that you could avert this disaster. Usually there are multiple ways in which this can be done, and the most classic way is: imagine you can avert this disaster by diverting the train to a secondary track, which would be great, but then unfortunately there's one other person on that track. And so then the moral dilemma emerges: should you sacrifice one person to save the 5? The core idea behind these dilemmas is just that they contrast a desire to minimize harm, to strive towards the greater good, with a sort of refusal, a sort of disavowal, of actively harming others. Sometimes that's framed in terms of norms or rules and principles versus consequences. Sometimes people say this is about deontology versus utilitarianism, but I prefer keeping it as low-level as possible and just saying it's about: are you OK with actively harming others to pursue a greater good? I would say that is what trolley-type dilemmas are all about. Why trolley-type? I guess because trolleys are the quintessential example. There are many variations within the trolley dilemmas, but there are also many other dilemmas that have this same basic shape, the same basic structure, that do not involve trolley trains whatsoever, hence why it's a class of moral dilemmas, yeah.
Ricardo Lopes: So, I mean, please correct me if I'm wrong, but I think that one of the goals with studying, or doing studies where we expose people to trolley-type moral dilemmas and see how they respond to them, is to try to understand whether people prefer a deontological or a consequentialist approach to them. So what do people tend to prefer? Do we know that?
Dries Bostyn: Um, it really depends on what specific dilemma you give them. So there are some dilemmas, like the classic trolley dilemma where you just have to divert the trolley train to a secondary track, where something like 80% of people will prefer the so-called utilitarian response, the response that corresponds with sacrificing one person for the greater good. There's some debate on whether we should really label these specific responses as utilitarian or deontological. You're certainly correct that the classic, traditional way of looking at these dilemmas and these responses is to see them as proxies of these underlying philosophical ideas, and it's about contrasting deontology versus utilitarianism. But it's very easy to switch people's opinion on these dilemmas. Again, much of the theorizing in this field started with the observation that if you give people the classic trolley dilemma, they prefer the utilitarian response. But if you change it around a little bit, if you ask people about a variation of the trolley dilemma, the footbridge dilemma, where it's not about diverting the trolley train to another track, but where you have to stop the trolley by pushing a large stranger in front of the train and using their considerable mass, then people tend to prefer the deontological response. So it really depends on the type of dilemma that you give them. That having said, there is a consistency across dilemmas, though. There is a certain set of personality differences that helps determine what response people prefer in general. And so you could say that there are some people that are consistently more likely to prefer the so-called utilitarian choice, and there are people that are consistently more likely to prefer the so-called deontological choice, yeah.
Ricardo Lopes: Uh, but in one of your studies, you found out that when confronted with a trolley problem, people conform to deontological but not consequentialist majorities. Could you explain that?
Dries Bostyn: Yeah, so what we did in that study was, um, we confronted people with a series of these dilemmas. Not all trolley dilemmas; some of them had the same basic structure but different contents. And then we told people like, hey, the majority of people prefer this option, or the majority of people prefer that option. And what we found there is that indeed, people were more likely to shift their responses in the direction of the deontological response of not harming, of not committing the sacrificial harm, of not harming the single person to save the 5, versus the other way around. The way that we interpret that effect is that, I mean, moral judgment is to some extent about judging behavior, about judging actions, but it's also about judging people. And when we make a moral judgment, obviously we will usually try and pursue the course of action that we think is best, but we do take into account what other people might think about that as well. The moral judgments that we make, they serve a communicative purpose as well, and there's been some research, and I'm sure we might talk about this later as well, that has suggested that in general, people tend to prefer deontologists over utilitarians. The way that we interpret our results is that people strategically shift their responses towards the deontological side because it's the more socially safe side to pursue. It's a little bit more risky; people might distrust you a little bit more if you make a utilitarian judgment, and that is why we found a larger conformity effect in the deontological direction versus the utilitarian direction, yeah.
Ricardo Lopes: I mean, let me ask you this: with these kinds of studies, are you interested in, or is it possible to tell, whether people would be in a sort of innate way predisposed toward deontology or toward consequentialism? I mean, is that something that you're interested in exploring, and is that something that these studies can tell us or not?
Dries Bostyn: Do you mean, uh, like, personality-wise? Like, are there people with more of a utilitarian personality, people with...
Ricardo Lopes: I mean, if, and of course these are probably not the best terms to use, but if in terms of our human nature we tend to be more predisposed toward preferring deontology or consequentialism.
Dries Bostyn: Um, I think there certainly have been people that have been trying to show it one way or the other. I myself would be a little bit hesitant to say that the type of studies that we're doing would be able to answer that kind of question, mainly because of the amount of variety that I see in how people respond to these dilemmas, and how quickly people shift from one option to the other. I find it very hard to make a kind of general statement like "people overall prefer deontology" or "people overall prefer utilitarianism." I think a lot of the dilemmas that we have been using in the field are also a very specific set of dilemmas, and we haven't explored a sufficiently broad set of dilemmas just yet to be able to really provide a good answer to that question.
Ricardo Lopes: Yeah, no, that's fair enough. I just wanted to know if this was something that these kinds of studies could answer or not. So, uh, you have done experiments on people where they had to decide whether to allow two confederates to receive a painful electroshock or shock a third confederate. I mean, this is also a trolley-type kind of problem. Which results did you get, and what motivations did people provide for their choice?
Dries Bostyn: Yeah, so this has somewhat been my most recent focus. Actually, that paper just got accepted and will be out pretty soon, so very timely question. But yeah, the main idea of that study was: we're doing all of this research on trolley dilemmas, and for very obvious reasons, we can't go hire trains to run over people; that would not be a very ethical way to conduct research. And so a lot of the research that we have been doing in moral psychology is just asking people to reflect on these moral dilemmas. There's a bit of a difference between asking someone to reflect on a dilemma and having them actually respond to a dilemma with real consequences. The two things are not fully the same, psychologically speaking, and so I have been looking for ways to create sort of trolley-type dilemmas within the lab, confront people with an actual, consequential, real dilemma, and see how they behave. That was the main idea behind that study, and the way we did that is like you described: I invited participants to a lab, and then I had 3 confederates there, 3 targets, all of which were hooked up to electroshock machines, and I would tell my participants, hey, we're going to do a moral dilemma today. Two of these people are going to get a shock unless you decide to shock the single person instead. The results that we found, I thought, were quite interesting, and there are many different angles I can look at, but as you pointed out, one of the things that we also did is we had people make these decisions and then we asked them: why did you make these decisions? And the motivations that people gave for those decisions, I thought, were quite interesting. Just to give a general summary of what we found there: when people made the so-called utilitarian choice, when they decided to shock the single person to save the two others, in almost all cases they did motivate that with a sort of cost-benefit analysis. They just said, well, if people are going to get hurt, I'm going to pursue the greater good, and it's better if only one person gets hurt versus two. What is interesting is how people motivated their choices when they didn't decide to do that. There we actually found what I feel is a great variety of motivations. There were some people, and I wasn't really expecting this ahead of time, that said: I did not want to hurt the single person because that person was sitting alone, and at least the two people have each other. Shared harm is better than having to suffer alone. Now, just to be clear, everyone was sitting in one room; it was not like the one person was separated from the two others. Everyone was sitting about one meter apart, but still, a lot of people were like, oh, the single person was already sort of punished because they were sitting alone. So that was a motivation that occurred quite often. Some participants said, well, I didn't feel like it was my responsibility to make this decision. The universe and fate had already decided that these two people were going to get shocked, and who am I to interfere and interject my own moral opinion in this sort of situation?
We did have a group of people, which I would say corresponds a little bit more with the classic way that we think about these dilemmas, that did actually say: I did not want to actively harm others. I thought it was bad. I thought everyone had an equal right not to receive a shock, and so what right do I have to sacrifice an innocent person? And then you also had people that just expressed, and it might appear a little bit silly, but I don't think it's that silly, a preference for a specific person: like, oh, that person was smiling at me, so I didn't want to shock them. Obviously, from a philosophical perspective, that might not be the best argumentation for that type of judgment. To me, it sort of corresponds with, and maybe this is a little bit farfetched, but almost a moral desert type of thinking: I felt like that person didn't deserve to be shocked. And so one of the takeaways that we have from that study is that we tend to look at the judgments that people make as just deontological versus utilitarian. You could obviously argue that what we're seeing in the deontological case, the various types of motivations that were given there, is a sort of post hoc reasoning that people are doing; they don't want to commit to the judgment, and so they grasp onto every possible explanation that they can. And we can't prove, with the studies that we've done so far, that these are the actual things that influenced people's moral judgments. But if you're willing to assume that, hey, maybe this is actually true, maybe people are being sincere, and maybe there are these different types of motivations driving these judgments, then all of a sudden there appear to be all of these layers that I feel had previously gone unexplored. And I think we can actually make a good case that that is what is actually going on, because a lot of the things that we're seeing, we are also seeing in other types of moral judgments that people make. For instance, the no-responsibility type of judgment, people that are saying, oh, I don't want to get engaged, this is not my responsibility: we make those judgments all the time. If you're walking down the street and you see a homeless person, that is a sort of moral dilemma that you're encountering then. And in many cases, people will actively decide: this is not my moral responsibility to solve. I think that some of what we're seeing in this real-life sacrificial dilemma is exactly that, and so one of the things that we certainly want to try and do in the future is disentangle that a little bit more and see if there really are different types of psychological processes underlying these types of judgments, versus just deontological versus utilitarian.
Ricardo Lopes: I mean, that no-responsibility approach that people might have, I find it very interesting because, from the perspective of moral philosophy or ethics, where instead of just trying to describe how people react or respond to these kinds of problems or moral dilemmas, we're trying to establish what is the most moral way to conduct ourselves, whether it is to divert the trolley toward the other track and kill one person or just let the other people die. I mean, when people say that they don't want to interfere or do anything because they're not responsible for what's happening, wouldn't that be a philosophically valid approach as well? Because I'm just thinking that if you find yourself in a situation where a trolley is coming down the track, it's not you causing that, and if you do something, whatever happens, then it's your fault, right? So, as a philosopher, because you're also a philosopher, what do you think about that? Do you think that's a valid ethical approach as well?
Dries Bostyn: Um, I mean, I would say it is. I mean, I gave the example of the homeless person, right? We can't engage with every possible moral dilemma situation out there. There are plenty of moral dilemmas, and horribly immoral things going on all over the world, that we're actively choosing not to engage with. And so we have to make those judgments in some way. So I think it's a valid philosophical position to take. I also think it's just a psychologically necessary thing that we need to do. But it's also something that I feel we haven't fully taken into account when it comes to studying how people approach moral dilemmas, because if we present people with a hypothetical moral dilemma, we're sort of assuming that they've already engaged with the situation, right? I mean, if you confront people with a trolley dilemma, a lot of people will say, oh, I don't want to get sued, or I don't want to murder someone, so there's some of that there. But I don't think we've fully reckoned with the possible difference between people refusing to engage with a moral dilemma versus people actively deciding: I don't want to harm someone. I think there's a difference there that we need to explore further. Yeah,
Ricardo Lopes: Mhm. No, yeah, that's very interesting. So let me ask you now about sacrificial dilemmas and some of the principles associated with them. But first of all, what is a sacrificial dilemma?
Dries Bostyn: So the sacrificial dilemma, I tend to use it as a synonym for trolley-type dilemma. Early in my career, I think I used to call things trolley-type dilemmas, and the field was still sort of figuring out what to call this broader class of dilemmas, and then we eventually settled on "sacrificial," because that is, in a way, the core problem underlying them: the problem of sacrificial harm, is it appropriate to actively harm to pursue a sort of greater good? So I would say "trolley-type" and "sacrificial," you can use those in the same way.
Ricardo Lopes: OK, OK, fine, uh, but tell us then about the action principle, the contact principle, and the intention principle, and how they play out in sacrificial or trolley-type dilemmas.
Dries Bostyn: I mean, there are a lot of variations that you can make on trolley dilemmas, and these three principles that you're calling out are big ones. They have been studied quite a lot. The core idea of the action principle, for instance, is that people tend to think it's worse to actively cause harm than to cause harm by omitting an action. It's maybe a little bit of an over-the-top example, but it's sort of: it's worse to push someone into a river, causing them to drown, than, if you see someone drowning, to not save them. So committing a harm by action is worse than committing a harm by inaction. Similarly, if you intend a harmful outcome, that is considered to be worse than if the harmful outcome is the result of a sort of by-effect, like collateral damage: you do not necessarily intend the harm, but it is a side effect. And the contact principle is just that people tend to think harm is worse when the harm that you commit is close and personal, is direct, if it requires physical contact, versus if it's a little bit more indirect. Those three principles have been studied a lot through sacrificial dilemmas, and depending on which side of the principle you're on, you can push people's preferred response around a little bit.
Ricardo Lopes: Mhm. And what are iterative sacrificial dilemmas, how do they work, and what kinds of results do you get there?
Dries Bostyn: Yeah, so the core idea behind iterative dilemmas is that when we give people moral dilemmas in a typical experiment, those moral dilemmas are always sort of isolated. They're always within their own little universe. People give one response, and then there's nothing that happens after, there's nothing that comes before; they're abstracted from time and space. And if you think about that, that's actually very different from how moral dilemma situations are in real life. In real life, there is a prior history, there is a future history, and there's never only one thing that you can do. You can react, and then you can see the outcome, and you can possibly react again and again. What we've done is essentially confront participants with a series of sacrificial dilemmas, and essentially it was always the same dilemma with the same targets, the same victims, the same people involved, and we were just interested in whether that is going to change how people respond to these dilemmas. As a classic example, or rather an easy example: that electroshock study that we did also included an iterative element. So participants entered the room, they were asked, OK, so the two people are going to get shocked, do you want to shock the single person? And they decided, and some people got shocked. And then, and they did not know this ahead of time, we asked them, we basically said, OK, we're going to do that same dilemma again. The same two people are going to get shocked, and you can decide once again to shock the third person instead. And now, even though that is in a way exactly the same dilemma situation, right, it's the same people sitting there, it's the same type of shock, it's the same "do you want to harm one person to save two others to pursue a greater good," because we've repeated the dilemma, the underlying moral conflict has shifted. All of a sudden, it's not only about whether you want to minimize harm; it's also about how you want to balance harm. Do you want to pursue a fair outcome, or do you want to pursue an outcome that leads to the greatest good? And what we see, and maybe this isn't too surprising, but in a way it is quite surprising, is that people really do shift their responses quite heavily. We did the electroshock study twice: the first time, I think 35% of people switched their response; the second time, it was a little bit lower, maybe 25% of people switched their response. And if you then ask people why they did it, all of a sudden you see this new type of motivation emerge, a motivation that wasn't present when you asked them about their motivations for the first decision, and that is this idea of: oh, I actually wanted to pursue a fair outcome. I wanted to give everyone one single shock, and so that's why I switched. And I think it's important, because it's something that we've also been neglecting in the way that we've been studying moral dilemmas.
We've sort of been thinking about moral dilemmas as though every decision is independent from every other decision, and that's not how morality works at all. Every moral decision is dependent on decisions that come before it and decisions that come after it. And if we truly want to understand moral cognition, then we have to also start to investigate that dependency across decisions. So that's kind of the idea behind these iterative dilemmas.
Ricardo Lopes: Mhm. Uh, what is the dual-process model for moral cognition and how does it work?
Dries Bostyn: Uh, so the dual-process model for moral cognition. I think we're almost approaching an anniversary for the model right now; I think it was originally formulated in 2001 by Joshua Greene and his colleagues. Essentially, it's sort of a classic dual-process model: it's intuition versus deliberation, and it links intuition and deliberation to two types of responses when it comes to sacrificial dilemmas. Essentially, it says that when you confront people with a sacrificial dilemma, if you give them some time to deliberate, if they're motivated, if they have high motivation, and if they're not too emotionally impacted, that will lead to a higher likelihood of a utilitarian decision, a decision that weighs the costs and benefits. That is sort of what the deliberative process is doing: it's weighing costs and benefits, it's looking at outcomes. Whereas it could also be the case, for instance in the footbridge dilemma, that you have to push someone in front of the train, and that is a very, well, emotionally impactful thing to have to do, and so all of these emotional alarm bells are ringing, and then people are like, oh, I don't even care about the outcomes anymore, and that is what causes a deontological decision. So it's sort of this idea that cognition will lead to utilitarian decisions, and emotion, intuition, drives this sort of evaluation of the action itself, this disavowal of harm, yeah.
Ricardo Lopes: I mean, is there a relationship between cognitive ability and consequentialist judgment? I mean, is it that people with higher cognitive ability tend to favor consequentialism, or is there no relationship at all there?
Dries Bostyn: Um, so there are studies suggesting that it is, and some studies that suggest that it's not the case. I think in general, most people in the field would say that there is an association, though it might not be a super strong association, between cognitive ability and condoning the sacrificial harm, responding in a utilitarian way in sacrificial dilemmas. We don't always tend to find it. I myself have been very unlucky, it seems; in most of the things that I've done, I don't really find this effect of either motivation or ability on the types of judgments that people make. So my personal opinion is that I do think there's something there. I think it's mostly going to be there in these types of dilemmas where there's a strong emotional response that has to be overcome. I'm a little bit more hesitant to say that if you were to look at all possible sacrificial dilemmas, you would still find that association. I think it's likely true for a subset of dilemmas; I don't think it's true for all dilemmas. For what it's worth, in the electroshock studies that we've been doing, I tend to find no effect whatsoever of short measures of ability or measures of motivation on the types of judgments that people prefer. I see no difference.
Ricardo Lopes: Mhm. And do we know if there are any other types of psychological traits that might predispose people toward preferring deontology or consequentialism slash utilitarianism?
Dries Bostyn: Yeah, um, actually, the one personality measure that I always seem to find has an effect is on the emotional side of things. It's, for instance, something like primary psychopathy, measures of antisocial behavior, empathic concern: how easily are you affected by these horrible things that you might have to do in a dilemma? The less that you care about those, the more likely you will be to condone the sacrificial harm, and I tend to find much stronger and more consistent effects of those versus the deliberation measures.
Ricardo Lopes: Right, so in that case, people who have that kind of trait would prefer consequentialism,
Dries Bostyn: yeah, yeah, um, so if you score higher on measures of antisocial thinking, you will tend to prefer the utilitarian side of things, which has also caused some debate in the field: are we actually measuring utilitarianism, which is supposed to be this good thing worth pursuing, if we're finding that psychopaths tend to favor those types of judgments? And there's been some work on that, some interesting work that suggests that we should think about utilitarianism as having multiple facets. There's this instrumental harm dimension: are you OK with using others to pursue a greater good? But also this dimension of impartial beneficence: do you want to spread well-being to everyone, basically? And the people that make that split tend to say, oh, it's the instrumental harm part that corresponds with antisocial thinking, not the impartial beneficence part. Mhm.
Ricardo Lopes: Yeah, OK. So, uh, let me ask you now, because I know this is something that moral philosophers and experimental philosophers debate sometimes: are trolley problems realistic? I mean, do they really tell us how people would behave in a real-life situation?
Dries Bostyn: Um, I think there are a lot of layers to that question. So, one of the reasons why I'm trying to study these things in the lab is because I do genuinely think that there's a difference between being actually confronted with a dilemma and thinking about a dilemma in the abstract. If the question is, are these dilemmas realistic: they're obviously not realistic. A lot of them are very unrealistic. The trolley dilemma itself is horribly unrealistic. That having said, it's not because it's unrealistic that it might not still trigger psychological processes within our minds that correspond to the processes that would be triggered in a similar real-life situation. So I would say that realism is something that we should worry about, and there's some very interesting research showing that if you confront people with these trolley dilemmas, they do tend to be engaged, they do tend to like them, they like the psychological, intellectual exercise that they are; some people find them very fun to do. And then you could wonder, well, if you would encounter such a situation in real life, it would not be very fun at all. And so it is something that we do need to take into account, and I think it's something that we have been neglecting a little bit too much as a field. We like these dilemmas. They're fun to think about, and I think that's why we've been using them so much, and they allow for a lot of types of experimentation as well, right? It's very easy to shift vignettes around a little bit and see what resulting change that causes in our moral cognition. But still, we should do more to try and ground our research a little bit more in reality. If you're actually going to compare how people think about these dilemmas with what they do in the lab: we've done some studies on this a good while ago. We did a study, sort of similar to the electroshock dilemma that I explained, where I invited people to the lab, but I did not have people hooked up to electroshocks; I had cages of mice. I had one cage of 5 mice and another cage of 1 mouse, and I said, you know, very similar setup: the 5 mice are going to get an electric shock, but you can decide to shock the single mouse instead. And then we had some participants think about that dilemma, just reading text-based vignettes, and others were invited to the lab. What we did find there was that people were actually more likely to commit the sacrificial harm, to shock the single mouse, in the real-life case versus the hypothetical case. So there does seem to be a shift that happens; if you're comparing the two, then I would say there's some evidence. There's also similar evidence when you look at VR, virtual reality, studies: people tend to prefer sacrificial harm more in a virtual reality version of a dilemma versus when they're just reading the text and thinking about it. So there does seem to be something about making the dilemma more real that shifts responses towards the sacrificial harm side, the utilitarian side, and that could be an argument to say that there's a difference here.
On the other side of things, we have also looked at whether, if you ask people to respond to a bunch of trolley dilemmas, their responses predict what they actually do in real life, and we do also find a little bit of an association there. So it's not like thinking about these dilemmas is completely different from being exposed to them in real life. The way that we think about them does predict what we do in real life as well. It's not a super strong correlation. I think across studies we have about 6% of variation explained, which is not a tremendous amount, but then again, it's psychology, right? So with 6% of variation explained, maybe we're already happy; I don't know, it depends on your perspective. So my personal opinion on the matter is that what we're doing with these hypothetical dilemmas, it does relate to how people behave in real life. That having said, if you put people in a real-life dilemma situation, all sorts of other elements start to get pulled into the situation, and you can't fully replicate that in a hypothetical vignette study. One of the problems that we've run into: I did those mouse studies, and we tried to replicate those initial results, and actually that paper is also upcoming pretty soon. We made some minor changes in the way that we were administering the dilemmas, and that already led to quite a shift in how people behaved. And it is really hard, if you have a hypothetical, text-based dilemma, to create a situation in real life that is fully comparable to that hypothetical dilemma. Just imagine: in the real-life case, for instance, we typically work with a time limit; let's say you have 20 seconds to react. And you could implement a 20-second time limit when you confront participants with a hypothetical dilemma as well, but 20 seconds in the real-life situation might not be comparable to 20 seconds in a hypothetical situation. In the real-life situation, there are all of these other sorts of impulses coming at people; it's a much more emotionally impactful situation than just thinking about it. And so then all of a sudden you have to wonder, OK, what would be a good comparison here? How many seconds do we need to use in one instance versus the other to be able to even compare these two dilemmas, which at their core are the same, but are still very different experiences, right? And so if the question is, can we use these hypothetical dilemmas to study real-life cognition, I think we can, but there are also real limits to what we can learn from these hypothetical dilemmas, and at some point we as a field will need to actually put people in the lab and confront them consistently and systematically with real-life dilemmas to see how people behave.
Ricardo Lopes: Mhm. So another aspect of sacrificial moral dilemmas you study, or have studied, is how people infer the moral character of others based on how they resolve these moral dilemmas. So what results do you get there? I mean, how do people tend to evaluate moral character depending on how others respond to these dilemmas?
Dries Bostyn: Yeah, we talked about it a little bit previously already, and indeed, every moral judgment also has a communicative purpose. If you disagree with the things that I disagree with, or if you agree with the things that I agree with, that will usually form a bond between people. And what we tend to see in these sacrificial dilemmas, or what we used to see, and I'll add a little bit of nuance at the end, is that if you confront people with a sacrificial dilemma, people will usually prefer others that make deontological judgments over those that make utilitarian judgments. That basic effect, I think Professor Jim Everett was the first one who wrote a paper about it, and it has been replicated a bunch of times. We've replicated it as well; I've found that result quite often. People tend to dislike utilitarians more than they dislike deontologists, and tend to trust utilitarians less than they trust deontologists. That has a lot of downstream consequences. There are even some studies showing, obviously in the abstract, that if you ask people who they would prefer as a mate, for instance, people will tend to favor deontologists over utilitarians. There are a few wrinkles to that. In some cases, people do actually prefer utilitarians. If you need a sort of leader, like a general, a cold, ruthless leader, then you might actually prefer someone that is a bit more utilitarian and a bit more pragmatic in the types of moral judgments that they make than a deontologist. I think some of the reasons why we tend to find these judgments are related to what we've talked about previously as well: one of the reasons why someone might prefer a utilitarian judgment is because they might just not care about the harm. If someone scores high on our antisocial measures, they prefer utilitarian judgments, and maybe some of what we're seeing is that when people see others making these utilitarian judgments, they think, oh, this might be a genuine concern for the greater good, but it might also just be a psychopath, and that is why we see this distrust emerge. But it might also be related to predictability; there's some research suggesting that. Someone that's a deontologist, someone that will always go, oh, I don't want to harm anyone else, is a bit more predictable, and might because of that be a little bit more preferable as a partner, as a cooperation partner, than, you know, a utilitarian, who is great when you're on the right side of the track, but if you're on the wrong side of the track, they might decide against you. But I will say, in some of the more recent research that we've been doing, a few years ago I started doing this type of "who do people prefer" work with iterative dilemmas, with these sequences of decisions, and there we consistently were finding that people actually liked utilitarians more than they liked deontologists, which was very strange, which went against an entire literature, and which initially confused us very much. And so what we hadn't taken into account, and what I think hasn't been taken into account in a lot of this research, is something I've already hinted at.
People tend to prefer similar others, and within the field, a lot of the dilemmas that we have been using have been dilemmas that most people will prefer a deontological response to. And so some of the preference that we're seeing does not appear to hold on every type of sacrificial dilemma; it's not that on every dilemma people prefer deontologists over utilitarians. It really depends on the dilemma. We've done one study where, in study one, we found, oh, people prefer utilitarians, and in study two, people prefer deontologists, and what seems to be going on is that a lot of the difference is just explained by similarity. People prefer the ones that make the same judgments as they do: deontologists prefer deontologists, and utilitarians prefer utilitarians. And once you start controlling for that, the difference between trust for utilitarians and trust for deontologists gets diminished a lot. In general, because there is this huge literature showing that people prefer deontologists, I would still be tempted to say there likely is something there, but the effect is much more variable and much more vulnerable than I used to think. If you had asked me this question five years ago, I would have been like, oh yeah, certainly, people hate utilitarians. Now I'm like, it really depends on the type of moral dilemma that we're dealing with. Um, yeah.
Ricardo Lopes: Yeah, no, that's very interesting. So I have one last question then. In a real-life situation, people can also choose, and we've already touched a little bit on this, but I want to ask you directly: people can also choose to just exit the social dilemma. Do people do so, and if so, why?
Dries Bostyn: It's also something that we haven't really been studying a lot, and you're right: when people are confronted with a hypothetical dilemma, again, engagement is assumed, right? And so we don't really allow people the possibility to exit the dilemma. I think it happens in real life. I think people decide to leave moral dilemma situations all the time, but it is hard to study. I guess one way of studying it would be in the sort of lab studies that we have been doing, and in those lab studies we have encountered it a few times, not a lot. I think by now I've exposed over 1800 participants to these types of real-life lab dilemmas, and across those 1800, I think maybe 10, 11, or 12, when they encountered the dilemma, were like, oh, I would rather not participate in this, I would like to exit the situation. It happened a bit more in the mouse dilemmas than in the electroshock dilemmas with people, and the main driver of that was always people that have this very high concern for animal welfare. There were a couple of people there that just really could not participate. They were like, this is potential animal abuse, I don't want to partake in this at all. So in those cases, it was a really strong moral concern about the animals. Maybe I need to add, though, that in those experiments we didn't actually harm any animals. We just pretended that we were going to harm animals, and as soon as people made a decision, we stopped the experiment, so no mice were harmed over the course of that experiment. But that was the main driver of people deciding to disengage from the situation: just a concern for animal welfare. I think, in general, people might exit social dilemma situations for multiple reasons. I think one of the reasons might be that they're just emotionally overwhelmed, and so they're just shutting down. Another reason might be that they're confronted with a bunch of options and they don't like any of them, and so, rather than pick one of the options that they really don't want to pick, they just go, I don't want to deal with this at all. Or it might also be that people were never fully engaged with the situation at all, sort of what we talked about previously with this no-responsibility idea: people are like, it's not my place to make a decision here, and that might cause people to step back from the whole situation. I think it definitely happens. I also don't think that we have sufficiently good data on it just yet, and I would definitely be interested in doing some more research in that direction, yeah.
Ricardo Lopes: Mhm. OK, so would you like to tell us what kinds of things you will be working on in the near future? Are they also related to this kind of psychological and philosophical exploration of trolley-type moral dilemmas, or something else?
Dries Bostyn: Yeah, so my main focus for the future, though I will need to get some more grant money to actually get it going, is, like I said, that I want to put people in labs and confront them with real-life moral dilemmas. I think the basic setup that we have right now with the electroshocks is a very rich setup already. There are a lot of things that you can tweak about that setup to see how it impacts people's moral judgments. Just to give one example: in the studies that we've conducted so far, the basic version is just, you have 3 people sitting there, and they are all men or all women. One manipulation that you could do is, oh, it's 2 men versus 1 woman, or 1 woman versus 2 men, and see to what extent the general characteristics of the victims drive the types of judgments that people prefer. Or you could manipulate the way in which the harm needs to be done. In one version of the experiments, we separated the participants and the targets with a one-way mirror, so participants could see the targets, the confederates, but not vice versa; in another version of the experiments, they were all located in the same room. In the one-way-mirror version, participants could commit the sacrificial harm from their own room; they just had to press a button there. In the one-room version, participants had to walk up to the victim and actually touch them on the arm, so it's a much more emotionally direct experience. There are all these sorts of parameters that you can play with. You could give participants the option to shock themselves, for instance, and see when people will decide in favor of self-harm. You could tell participants something about the people that are sitting there; maybe someone is a volunteer for a charity. To what extent are people going to weigh that? I think it's a very rich setup that we can use to explore not just how people respond to sacrificial dilemmas, but how they respond to moral dilemmas in general. We talked about the dual-process model, which has been a very influential and important model to explain how people respond to sacrificial dilemmas, but there are many different types of moral dilemmas out there, right? If, I don't know, your co-worker did something wrong and you have to decide, do I want to tell my boss, yes or no? That is also a moral dilemma, but it's not really a sacrificial dilemma, and so we don't really have a good process theory for what goes on in the mind when people are confronted with a dilemma like that. I think, by confronting people in the lab with real dilemmas, we can work towards a more general process model for how people cope with moral dilemmas. I think we can start with the dual-process model, but what some of our studies with these real dilemmas have already shown is that the dual-process model cannot explain everything that's out there. The dual-process model does not really explain why people, when you confront them with these types of iterative dilemmas, switch. It talks about not wanting to harm others.
It talks about utilitarianism. It doesn't talk about when people go, actually, I don't want to just minimize harm, I want to spread harm fairly. That's a sort of new moral concern that's emerging, and we don't know what drives people to stay with the utilitarian choice or what drives them towards the fairness choice. And so I think we have a core setup here that will allow for a lot more variation in moral judgments to explore in a real-life lab setting, and just by systematic manipulations, we will be able to create better models. Eventually, I would also like to design other real-life moral dilemmas that we can set up in the lab. I very much like this electroshock paradigm; I think there are a lot of things to be done there. But just in the way that trolley dilemmas are not realistic, going into a lab and having to make decisions about who you want to shock is arguably also not really realistic. It's not a type of choice that people encounter a lot in their day-to-day. Again, I do think that we can use it to study some of the psychological processes at work in real-life cognition, but it's still a very artificial situation, and I would like to explore a wider variety of moral dilemma situations in the lab. So that's essentially going to be my focus for the next couple of years: try and get some money to do more of these studies, and really explore the difference between what thinking about these dilemmas can teach us versus how people react if you put them in these situations. That tension, and the similarities and differences that emerge, is really going to be my main research interest.
Ricardo Lopes: And where can people find your work on the internet?
Dries Bostyn: Um, so I am on Twitter, uh, still, or X, or whatever we call it nowadays. I don't even know my username; I think it's DHBostyn. If you just Google "Dries Bostyn," I'm sure my personal website will emerge. I'm very easily findable on Google Scholar. People should not hesitate to reach out. I'm always happy to talk, always happy to have a chat, especially about these sorts of things, um, yeah.
Ricardo Lopes: OK, thank you so much for taking the time to come on the show. It's been a real pleasure to talk with you.
Dries Bostyn: No, thank you very much for having me, and for dealing with the little bit of a situation we had halfway through. No, it's been a very enjoyable conversation, yeah.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Enlights Learning and Development done differently. Check their website at enlights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters, Perergo Larsson, Jerry Muller, Frederick Sundo, Bernard Seyaz Olaf, Alex, Adam Cassel, Matthew Whittingberrd, Arnaud Wolff, Tim Hollis, Eric Elena, John Connors, Philip Forst Connolly. Then Dmitri Robert Windegerru Inai Zu Mark Nevs, Colin Holbrookfield, Governor, Michel Stormir, Samuel Andrea, Francis Forti Agnun, Svergoo, and Hal Herzognun, Machael Jonathan Labran, John Yardston, and Samuel Curric Hines, Mark Smith, John Ware, Tom Hammel, Sardusran, David Sloan Wilson, Yasilla Dezaraujo Romain Roach, Diego Londono Correa. Yannik Punteran Ruzmani, Charlotte Blis Nico Barbaro, Adam Hunt, Pavlostazevski, Alekbaka Madison, Gary G. Alman, Semov, Zal Adrian Yei Poltontin, John Barboza, Julian Price, Edward Hall, Edin Bronner, Douglas Fry, Franco Bartolati, Gabriel Pancortez or Suliliski, Scott Zachary Fish, Tim Duffy, anny Smith, and Wisman. Daniel Friedman, William Buckner, Paul Georg Jarno, Luke Lovai, Georgios Theophanous, Chris Williamson, Peter Wolozin, David Williams, Dio Costa, Anton Ericsson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, Bangalore atheists, Larry D. Lee Junior. Old Eringbon. Esterri, Michael Bailey, then Spurber, Robert Grassy, Zigoren, Jeff McMahon, Jake Zul, Barnabas Raddix, Mark Kempel, Thomas Dovner, Luke Neeson, Chris Story, Kimberly Johnson, Benjamin Gelbert, Jessica Nowicki, Linda Brendan, Nicholas Carlson, Ismael Bensleyman. George Ekoriati, Valentine Steinmann, Per Crawley, Kate Van Goler, Alexander Obert, Liam Dunaway, BR, Massoud Ali Mohammadi, Perpendicular, Jannes Hetner, Ursula Guinov, Gregory Hastings, David Pinsov, Sean Nelson, Mike Levin, and Jos Necht. A special thanks to my producers Iar Webb, Jim Frank Lucas Stink, Tom Vanneden, Bernardine Curtis Dixon, Benedict Mueller, Thomas Trumbull, Catherine and Patrick Tobin, John Carlomon Negro, Al Nick Cortiz, and Nick Golden, and to my executive producers, Matthew Lavender, Sergio Quadrian, Bogdan Kanis, and Rosie. Thank you for all.