RECORDED ON APRIL 27th 2024.
Dr. Andrew Delton is Associate Professor in the Department of Political Science, the Center for Behavioral Political Economy, and the College of Business at Stony Brook University.
Dr. Talbot Andrews is Assistant Professor of Political Science at the University of Connecticut. Her research focuses on how institutions, public policy, and the physical environment shape preferences and behavior related to climate change.
They are both authors of Climate Games: Experiments on How People Prevent Disaster.
In this episode, we focus on Climate Games. We first talk about the challenges of climate change, and the best solutions we have for it. We discuss why climate change is a “global social dilemma”, and how we can use economic games, like the disaster game, to study how people make decisions that are relevant to climate change. We talk about how people react to different kinds of technology, to what extent they are willing to help other groups of people and avoid disaster for others, and the relationship between leaders and citizens. We also talk about how we can study inequality through disaster games. Finally, we discuss whether we can be optimistic about tackling climate change.
Time Links:
Intro
The challenges of climate change, and the solutions
Climate change as a “global social dilemma”
Economic games
The disaster game
How people react to different technologies
Helping other groups of people, and avoiding disaster for others
The relationship between leaders and citizens
Studying inequality through disaster games
Are there reasons for us to be optimistic about climate change?
Follow Drs. Delton and Andrews’ work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello everybody. Welcome to a new episode of The Dissenter. I'm your host, Ricardo Lopes. And today I'm joined by Doctors Andrew Delton and Talbot Andrews. Doctor Andrew Delton is a returning guest. He is Associate Professor in the Department of Political Science, the Center for Behavioral Political Economy and the College of Business at Stony Brook University. And Dr Talbot Andrews is Assistant Professor of Political Science at the University of Connecticut. And they are both authors of Climate Games: Experiments on How People Prevent Disaster. And, uh, say something, Doctor Delton, for people to see this book.
Andrew Delton: Uh, the book just came out. Um, it's available on Amazon in both paperback and hardback, and you can also get it directly through the University of Michigan Press's website. And if you go to their website, uh, the book is actually available completely free as a download too. So if you just want an electronic copy of the entire thing, it's a one-stop shop, just download it for free. Um, and if you go to either my website or Talbot's website, there'll be links. So you don't need to remember any specifics, you know, just remember one of our names and you can find the book if you decide you're interested in following up after today.
Ricardo Lopes: Great. So, uh, I'm leaving links to that in the description down below, by the way. So, uh, to start off with, before we get into the sort of game theory slash psychology of all of this, and how tackling climate change is a sort of, let's say, global coordination game, uh, just to set the stage, let me ask you first: at what point do we find ourselves, uh, right now when it comes to climate change? I mean, how far into it are we? And what are some of the worst potential effects and consequences of it?
Talbot Andrews: Yeah. So climate change is this huge, complicated problem, and we're unfortunately already seeing the impacts of climate change. So we're seeing especially more extreme weather, we're seeing more severe hurricane seasons, we're seeing longer and more severe wildfire seasons, and as of right now, it's mostly just going to get worse. Um, but where we are in fighting climate change is that it's no longer just a technological problem. We know how to stop climate change. We have all these tools available to us. The main challenge now is a more political problem: how do we govern and implement all of these different tools that are available to us? And that's where our work comes in. And so we're specifically interested in the public side of the problem. How do we get everyday people to care about climate change, to spend money, to support these often expensive policies? And how do we design policies that will best get people involved in paying for climate change mitigation and disaster prevention?
Ricardo Lopes: And so, I mean, we're going to get more into that as we go through the questions here, but just, uh, give us perhaps a brief overview of what would be the best solutions that we have available right now, uh, the ones that we would need to implement and convince people to implement.
Talbot Andrews: Yeah, unfortunately, there is no one thing that we can do that will totally stop climate change. But thankfully, we have a lot of different tools available to us. So one thing we absolutely have to do is we have to reduce the amount of electricity we generate from burning fossil fuels. We need to be building renewable energy infrastructure like wind power and solar power, but alone, that's not enough. So on top of that, we also need to be building more infrastructure to adapt to the disasters that are already happening. We're already feeling some of these impacts of climate change. Even if we switched to completely fossil fuel free electricity today, we would still face some of those impacts because of inertia in the climate system. So we also need to be investing in climate change adaptation and infrastructure. Uh, this one's a little bit more controversial, but we probably also need to be investing in technologies to help buy us more time for this transition to renewable energy. So, investing in research on different types of geoengineering: technologies that help either temporarily cool the planet or that help take carbon dioxide out of the atmosphere. This doesn't solve the problem of us still emitting a lot of fossil fuels, but it just buys us some more time before we see the worst impacts of climate change to make that transition. So it's a huge problem and we're going to need a whole collection of different solutions, some combination of these things.
Ricardo Lopes: things amongst the technological solutions, you will uh I mean, they would also include the things like renewable energy and nuclear power, right.
Talbot Andrews: Yeah. Anything that will help us reduce the amount of carbon dioxide that we're emitting.
Ricardo Lopes: Yeah. Uh And by the way, when we're talking about solutions here, what would these solutions be for? I mean, because uh basically what I'm asking you is at this point is climate change still reversible in any way or are we just talking about mitigating climate change?
Talbot Andrews: Yeah, so I'm really optimistic about these things. Um, I maybe would go a little bit insane studying the problem if I didn't think it was somewhat reversible. But my read of the science, at least, is that it's on my side: it's not too late to do something. We are certainly going to see some negative impacts, and we already are, but we do still have time to reverse some of the worst impacts of climate change. It's just going to take a lot of coordination and a lot of investment to make that happen effectively.
Ricardo Lopes: And, uh, so getting into the game theoretical side of things now, why do you call climate change a “global social dilemma” in the book?
Andrew Delton: Yeah, good question. Um, so, uh, the first thing to understand is the idea of a social dilemma, which is a huge topic going back decades and decades in political science, sociology, psychology, biology. So one classic example of a social dilemma is basic law and order in a society. You know, it's good for each one of us that we don't have to worry too much that criminals will assault us, or that if we're brought before a court, we won't be treated fairly. Um, so, you know, basic law and order is good for everybody. But think of a fairly large society, like even just a state in the United States, not even the whole country, just a state. Um, in order to fund law and order, you know, we think of taxes paying for that, but you as an individual citizen, your taxes are a tiny, tiny drop in a pretty large bucket. So even though you benefit because crime is being deterred by the work of police and the court system, um, there's really no individual benefit for you in paying your taxes. If you withheld your taxes, you'd have that money, and really the courts would function, the police would function just as before. So that's why it's called a dilemma. It's a social dilemma because we're all better off when this thing is provided, um, but, you know, individually it might make sense if we could somehow shirk, uh, contributing to it. For the kinds of social dilemmas like law and order or things like national defense, a centralized state like the US government steps in, um, you know, makes us pay taxes, and then, you know, that solves that problem in that way. So that's a basic social dilemma. The reason we call, um, climate change and emissions a global social dilemma is that it happens beyond national borders: emissions that are emitted anywhere in the world are going to affect the entire world. Um, and it's a social dilemma in many of the same ways. You know, everyone would benefit if the climate isn't changed. But if you're an individual human being, any emissions you make are a tiny drop in a very large bucket. So just like with your taxes and law and order, why bother restricting yourself when it comes to contributing, uh, emissions that might change the atmosphere? In fact, some countries, the US is not this way, but some countries are small enough that it really doesn't matter much what they do. Like a pretty small country with a couple hundred thousand people, their choices aren't gonna matter too much compared to huge juggernauts like the US or China. Now, what makes climate change and global emissions different from law and order is that there's no centralized government. We don't live under a one world government. So there's no single leviathan that is overseeing us all who can force us to do something that might be beneficial for everybody. Um, so what that means is that if we're going to solve a global social dilemma, it involves independent actors, in this case the world's states or the world's nations, working together to solve the problem, with no one entity who can, you know, make everyone do something they might not otherwise want to do.
Ricardo Lopes: And I mean, we're probably talking about the largest scale social coordination problem that we have in our hands right now, right? Because I mean, it would have to involve the entire globe, it would have to be done on a global scale, right? And there's probably nothing bigger than that when it comes to social coordination.
Andrew Delton: Right. Yeah, a lot of what we explore in the book are all the problems that come along with the fact that you're trying to coordinate, you know, people with potentially different interests, different motivations, different sorts of knowledge, different resources, um, as is, you know, the actual case with the world's countries, um, and the people in the world. Yeah. So it's one of the more complex kinds of social dilemmas people have dealt with. Usually social dilemmas are at best national or maybe subnational, or it might be like, how do we manage a particular fishery off the, uh, the coast of Alaska? How do we manage a water supply like the Colorado River? Which is already, you know, a challenging enough thing, but this is a much bigger scale than any of those kinds of, uh, you know, classic real world examples of a social dilemma.
Ricardo Lopes: Mhm. And when we get into the economic games you present in the book, we'll probably come back to, uh, why doing this at such a scale is also a complication. But in the book, you get into some strategic challenges of climate change. You talked, for example, about uncertainty, deciding for others, self-created disaster, tensions between policymakers and the public. Could you tell us a little bit about that? I mean, what are these challenges and how did you, uh, come up with them?
Andrew Delton: Yeah. So the reason we came up with these, um, is just, you know, there are many challenges; we thought these were ones that were interesting and were tractable to study with the kind of tools we, uh, are experts in using. Uh, so we're certainly not saying these are the only challenges, uh, that are involved in the problem. So one of the ones you mentioned is uncertainty. So, um, the IPCC, the Intergovernmental Panel on Climate Change, they regularly release reports where they try to summarize the evidence on various aspects of climate change. Um, a few years ago, one of their reports said that they have high confidence that things like tropical cyclones are going to increase, but less confidence for other types of precipitation due to climate change. So there are different levels of certainty about various elements of what's gonna happen, uh, in the climate, and we don't exactly know what the targets are that we're trying to avoid or the things we're trying to hit. There's also uncertainty in things like how will a potential technology work out if you try to scale it, like carbon capture and storage? What are the upsides or downsides of that? What are the upsides of geoengineering or solar radiation management, uh, techniques that involve seeding the atmosphere with aerosols? If we're not sure how these things are going to work out or what their effects will be downstream, it's harder for countries to figure out: well, do I want to support this? Do I not want to support this? Do I care whether other people are going to support this? So, uncertainty is a big problem in figuring out what we should do; if I don't know what you're gonna do, how do I know what I should do? Um, another problem that we investigate is deciding for other people. So this is a very common kind of political problem where often decision makers are at a remove, uh, from the people who are gonna be most affected by their decisions. One example we use in our book is a tiny little island nation called Kiribati, in the Pacific. There are about 100,000 people who live on this island country. Um, and various projections lead to different numbers, but many projections suggest the ocean is gonna rise substantially in the vicinity of this island. Several years ago, the World Bank used the island as what they call a demonstration project, to try to show that outside experts investing millions of dollars can help a small country, uh, avoid disaster. One of the things they did was to help the country build what are called sea walls, essentially walls that stop rising water from getting farther inland, and it turned out that the sea walls ended up being an embarrassing disaster. Uh, some people in the area called them part of a larger graveyard of short term investments in infrastructure. And at least one of the reasons people thought they turned out so poorly is because they were spearheaded, to a large degree, by people who were remote, distant advisors, uh, who didn't have a full understanding of what local people wanted or how to deal with the local environment. Uh, so, you know, that's just an example of this larger political problem: outsiders often don't have the same knowledge or motivations as the people who are local.
Um, you know, people who are working at the World Bank, they wanna help Kiribati, but they also have the funders and the bosses above them who are even more remote and who maybe are pulling them in different directions. And this is, you know, a general problem with climate change, because often many important decisions will be made by people who are not especially, um, affected by the potential for disaster. So people in rich, you know, industrialized countries will make a lot of important choices, but it turns out it's people in, you know, poor developing countries that face larger risks of disaster. And of course many important decisions will be made by people who are alive right now, uh, but it's gonna be mostly people in the future who are affected. Um, so that's, you know, this general problem of, you know, how do we make good decisions? Can we make good decisions when those of us who are making these decisions aren't the people most affected by them?
Ricardo Lopes: Uh So let me just ask you one question about, uh I guess that this would fit into uncertainty but when it comes to the fact that it's very hard for us, at least intuitively to connect uh gas emissions with uh climate change or, or to climate change. Does uh would that fall into uncertainty? I mean, because uh for people who are not scientifically literate, it's really, really hard for them to understand how would that connection work? Would that also play out here or not?
Andrew Delton: Yeah, I think so. I think that would tie into the same theme of uncertainty. There's all sorts of uncertainty: there's uncertainty in how what we're doing now is affecting the climate, how the technologies we could roll out affect the climate, um, how the economic choices we make are going to affect living standards. Um, and so there's also a balancing act around, um, you know, making good choices that, you know, don't impoverish people, because that would kind of defeat the purpose of solving the problem. Um, so there's uncertainty, I think we used the phrase multiple times in the book, uncertainty all the way down about a lot of these things.
Ricardo Lopes: And so why did you decide to focus specifically on economic games in the book? What kinds of knowledge can we get from those types of games that would be relevant for a coordination problem such as climate change?
Andrew Delton: Yeah. So one thing we want to make very clear is we're behavioral scientists. So we're not people who are crunching the numbers on major economic policies. We're not the ones who are directly designing, uh, technology for engaging in carbon capture and storage. What we study is how people think, and we want to study how people think about the problems of climate change. There are lots of tools that we and other social sciences use to study how people think about political issues, things like survey experiments or public opinion polling or focus groups. Uh, what we want to do is use our comparative advantage. So our research team, uh, myself, Talbot and our co-author, uh, Reuben Kline, who couldn't be here today, um, what all of us do, in addition to other kinds of techniques, is use, uh, experimental economic games. The basic idea of these things is really simple. What you're trying to do is to simulate, in a laboratory, the kinds of strategic problems people and politicians face out there in the real world, but you do it in a laboratory and in a way that makes it really transparent to the researcher. So basically, it's called a game for a reason. You're literally given a series of rules and you can make choices within these rules that affect how much money you and other people earn. I'll give two quick examples. One of the classic games that maybe, uh, many of your listeners are familiar with is called the dictator game. In the dictator game, um, if I was the researcher, I might pair you and Talbot up anonymously through a computer network, randomly pick one of you, give you $10 and say: you can divide this $10 any way you want between yourself and the other anonymous player. And that's the simplest possible, uh, game that people typically study: how would you divide a fixed sum of money? And that's it. We can see what you would do. So if you give a lot, we might infer that you're generous or that you're averse to inequality, or if you keep a lot, we might infer the opposite, that you're stingy or that you're happy with inequality, things of that nature. And then, to make it just a little bit more complicated, a variation on that game, uh, called the ultimatum game, simulates bargaining. So that one's very similar, but instead of you, Ricardo, making a decision all on your own, you make a proposal, essentially an ultimatum, to Talbot. Uh, so you might say, out of this $10, Talbot, I'm gonna give you $1 and keep nine. And again, this is usually anonymous, but I'll use people's names. So Talbot then considers that proposal and she might decide: well, no, thanks. She rejects your proposal. And in that case, uh, what happens is essentially the money, you know, goes up in flames and disappears. So no one gets anything. So if you, Ricardo, don't make a proposal that Talbot's willing to accept, no one gets anything. Uh, and so that's designed to simulate kind of a high stakes, one shot bargaining situation. So the benefit of this, and we'll come to the games we tended to use, but again, the benefit is that by making these things really concrete and very transparent to the researchers, we can try to understand very precisely what you all are doing. The problem is if we try to just look at the real world. Now, that's important.
I mean, a lot of useful research does that, but the problem with the real world is we don't know all the rules, all the institutions, all the constraints, all the material incentives or other types of incentives that people face, like a real president making some decision about, say, going to war. We don't know what intelligence the president was privy to, we don't know what kinds of pressure groups were putting the most pressure on them behind the scenes, all sorts of things we might not know that led to their decision. So it's important, you know, to understand how real presidents make decisions, how real policymakers make decisions. But we like these games because you can really dig in and see exactly how people respond to problems that you as a researcher create. So you know everything about the little world that your research participants are facing.
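As a concrete illustration of the payoff rules just described, a minimal Python sketch of a single round of the dictator game and the ultimatum game might look like the following. The $10 stake and the example offers are simply the illustrative numbers used in the conversation, not parameters from the authors' actual experiments.

```python
# A minimal sketch of the two games described above. The $10 stake and the
# example offers are illustrative numbers only, not the authors' parameters.

def dictator_game(offer: float, stake: float = 10.0) -> tuple[float, float]:
    """The dictator keeps (stake - offer); the receiver gets the offer."""
    assert 0 <= offer <= stake
    return stake - offer, offer

def ultimatum_game(offer: float, responder_accepts: bool,
                   stake: float = 10.0) -> tuple[float, float]:
    """If the responder rejects the proposal, the money 'burns' and both earn zero."""
    assert 0 <= offer <= stake
    if responder_accepts:
        return stake - offer, offer
    return 0.0, 0.0

print(dictator_game(offer=2.5))                            # (7.5, 2.5)
print(ultimatum_game(offer=1.0, responder_accepts=False))  # (0.0, 0.0)
```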
Ricardo Lopes: But just before we get into the specific kinds of games you explore in the book, there's perhaps two sets of important questions here. So the first one is um I mean, what are the limitations of these games in terms of how much should we expect them to really translate into real life? Right? I mean, because there's all, there's always the issue when we study things in the lab that they might not have uh much ecological validity, let's say that they wouldn't work the same way uh outside of the lab in an ecologically valid uh setting, let's say. So, I mean, to what extent does that criticism apply here?
Andrew Delton: Yeah, so there are kind of two types of data that I would point to. So of course, that's a reasonable worry, a very reasonable worry. One type of data I would point to: an obvious concern you might have is, uh, you know, I used an example of $10. Now, almost any real world political problem that we care about has stakes much bigger than $10. So you might worry that these games, because the stake sizes are gonna be relatively small, on the order of a few dollars, $10, $20, uh, might not tell us anything. So researchers actually have tested that. Um, sometimes, you know, researchers with lots of money have put pretty big stakes on the line, and you find, um, not quantitatively identical results, but people play the game pretty much the same way, even if the stake sizes are dramatically larger. Some great examples of this come from work, uh, by anthropologists who took games like the ultimatum game and the dictator game around the world. And if you're from a relatively rich, industrialized society yourself, with reasonable grant money, and you go to a small-scale foraging or horticulturalist society, the kind of money you have available means you're able to pay what for them might represent the wages they would earn over a week or even a month for a single brief session of these games. When anthropologists do that, they tend to find some quantitative variation from society to society, but broadly speaking, like in the dictator game, um, you tend to find that across huge changes in the type of ecology, the place in the world people are living, they tend to give away 25% to 50% of the stake. Um, so they tend to give within kind of a narrow range, and again, that's with a huge amount of money, sometimes a week or a month of wages, on the line. Um, another way you could look at it: well, what do these games capture, particularly for our purposes? Laypeople, like the people who are normal experimental participants, random people who are willing to participate online or undergraduates at a university, they're not the people who are in the room when governments make big decisions, right? So you might worry that laypeople don't behave like politicians. Uh, but other political scientists have taken games similar to the ones I've been talking about, or other kinds of decision tasks that, you know, um, psychologists have studied for decades on laypeople, and have asked various samples of actual politicians to play these games, or other people who are, uh, elites in some way or another, maybe not politicians themselves, but people who might have worked at the State Department or worked in major corporations. And when you do this, you find that these people tend to play the game, again, not literally identically with laypeople, but broadly in the same way. Um, so if you, Ricardo, a layperson, drive a hard bargain, and most people might drive a hard bargain, then these elites will drive a hard bargain in a similarly constructed game. So those are kind of two kinds of evidence I would point to that suggest, um, the typical way these games are played, laypeople playing for smallish stakes, is still likely to capture, um, at least some important elements of the real world. One thing that is more difficult to create in a game, just real briefly, would be long term identities that are important to people.
I can't tell people to just suddenly be a Democrat or be a Republican when playing my game, if that's not who they are in real life, right? Uh, so those kinds of issues are difficult to experimentally create. Although of course, you can bring in Democrats and see how they play, or bring in Republicans, but you can't easily create these long term, enduring identities in a game, or like nationalities, of course. So if you're interested in that, you can study those people, but you can't exactly create that in the same way we can create a simulacrum of trying to cooperate, uh, for mutual benefits. So I think those are, you know, this is why we're not saying you should only use games. It's just one source of data that, you know, we think is our comparative advantage here, to use that to understand how people think about the climate.
Ricardo Lopes: Uh, but then, I mean, to what extent should we expect the results of these games to, uh, scale up, in the sense that they would apply to, I mean, in this case, potentially a global scale? Because that's the scale we're talking about here, because, I mean, you mentioned for example politicians and also regular people, but we need both, and ideally the entire world, in this sort of coordination game. So, uh, to what extent can we say that, for example, the results we get from, uh, making people play these games in the lab and in different kinds of controlled settings would, uh, scale up, let's say?
Talbot Andrews: Yeah. So, I think what I would say about this is the mechanisms that we observe in the lab, we believe, persist outside of the lab. But there are all sorts of other mechanisms that we're holding constant within the lab that are then operating outside of the lab. And so I think that the things we observe and the insights that we find are real and they persist. The more complicated question is how do they interact with the other things happening in the world that might shape that behavior? Um, but one example of this: in some of my own ongoing work, I'm now asking people to contribute real money, not to prevent simulated disaster, but to actual charities engaged in relief after disaster, and the insights that we find in the book are persisting outside of the lab. And so not only would we say you shouldn't only do econ games, but we as researchers are also engaged in taking those insights to other methods and are observing that those mechanisms are still working.
Ricardo Lopes: And which kinds of specific economic games do you think better capture the aspects of social coordination that are most relevant to solving climate change?
Talbot Andrews: Yeah. So we specifically rely on a threshold public goods game; we call it the disaster game. And, uh, we'll talk a little bit more about that in a minute. Um, but to use a pretty famous quote that I love very much, from George Box: all models are wrong, but some models are useful, right? So climate change is a huge, complicated problem, and we are specifically interested in the social dilemma nature of climate change: how do we get people to contribute despite there being an incentive to free ride off of everybody else's contributions? So in this case, a modified public goods game makes sense because it captures that social dilemma. But there are other dimensions of the climate problem that could be modeled with very different types of experiments that I think would also give us insights into the problem. So for example, other people have brought up that climate change is also a distributive problem: who's going to pay for climate change, and also who's going to benefit from it? Climate change mitigation isn't just costly. It also creates all sorts of benefits for all sorts of different types of people. And so there's room for very, very different types of games that could also give us insights into the problem, just because the scale of the problem is so big.
Ricardo Lopes: And so let's perhaps explore some of the specific kinds of issues that these games could apply to, and, I mean, the different kinds of factors that play a role in how we coordinate our behaviors when it comes to tackling climate change. So, uh, in the book, at a certain point, you talk about how we can use the disaster game specifically to study how people respond to different technologies to stop climate change. So, uh, I mean, could you tell us a little bit about that? Which factors play a role in this specific case, and what tend to be the kinds of reactions that people, uh, have to this kind of game?
Andrew Delton: Sure. Talbot, do you want to lead us through the basic game? And I'll take on, like, one of the earlier studies we did on technology.
Talbot Andrews: Yeah. So just to give you a foundation of what the disaster game looks like, um, the way it works is that players are in groups of 4 to 6 people, depending on the experiment. Uh, and together they face an oncoming simulated disaster that's going to destroy the money that they have at the start of the game. They have to simultaneously, uh, decide whether and how much they're going to contribute to disaster prevention. If together they contribute enough, then disaster is stopped and they get to keep the remaining money. Um, if they don't contribute enough, disaster still strikes. And either way, whatever they've contributed to disaster prevention is gone forever. So the basics of this game capture two of the most important dimensions of the climate problem. First, it's a social dilemma: everybody's better off if they contribute enough money to stop the disaster, but everybody also has an incentive to contribute nothing and hope the rest of their group will carry the burden of stopping disaster. And second, it captures this threshold nature of climate change. And I think this is really important in two different ways. First, thinking broadly about the climate problem, you've probably heard that if global mean temperature rise exceeds two °C, this is where we're certainly going to see more and worse damages. So there's not a linear relationship between how much carbon dioxide we emit and how bad climate change is going to be. There are these tipping points in the system where, if we pass them, things are going to get a lot worse, much more quickly, and potentially be irreversible. And so that threshold captures that dimension. On a more local level, this is also important: um, building half a levee to prevent a flood doesn't prevent half a flood, right? You have to actually finish building this kind of prevention infrastructure to reap those benefits. And so that's what the disaster game is meant to capture. And then throughout the book, we have all sorts of different ways that we change that basic framework to answer all of these questions. So Andy, do you wanna take it away with the first one?
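To make those rules concrete, here is a minimal Python sketch of the payoff logic as just described: contributions are sunk either way, and whatever a player keeps survives only if the group's total contributions reach the threshold. The group size, endowments, and threshold below are placeholder values for illustration, not the parameters used in the authors' experiments.

```python
# Minimal sketch of the threshold public goods ("disaster") game described above.
# Endowments and the threshold are placeholder values, not the authors' parameters.

def disaster_game(contributions, endowments, threshold):
    """Return each player's payoff for one round of the game."""
    disaster_averted = sum(contributions) >= threshold
    payoffs = []
    for endowment, contribution in zip(endowments, contributions):
        kept = endowment - contribution
        # Contributions are gone either way; if disaster strikes, the kept money is destroyed too.
        payoffs.append(kept if disaster_averted else 0.0)
    return payoffs

# A four-person group, $20 each, $40 threshold:
print(disaster_game([10, 10, 10, 10], [20] * 4, 40))  # [10, 10, 10, 10]  disaster averted
print(disaster_game([20, 10, 10, 0],  [20] * 4, 40))  # [0, 10, 10, 20]   the free rider does best
print(disaster_game([10, 10, 10, 0],  [20] * 4, 40))  # [0.0, 0.0, 0.0, 0.0]  disaster strikes
```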
Andrew Delton: Yeah. So for one of the first projects we worked on as a team, um, kind of the starting point was that organizations like the Intergovernmental Panel on Climate Change, the IPCC, argue that, uh, we're not going to decelerate temperature rises fast enough if we only use kind of straightforward mitigation. They argue, at least, that we might need to consider technologies like carbon capture and storage. So we're not gonna just emit less; we might have to, for instance, suck some carbon out of the air. And in a sense, these technologies are risky. Uh, not because they might blow up in a nuclear holocaust or something, but because they have potentially high upsides and no one's ever scaled them before. So it could be a lot of investment that just peters out and leads nowhere. So the IPCC argues we need to try them, but, you know, maybe they'll be a dead end. Um, so what we wanted to know is: will people be able to integrate the problem they face, like Talbot mentioned, there's a threshold, how high is the threshold, with how much risk they're willing to bear in order to try to meet that threshold? So again, imagine yourself in one of our games. You're in a four person team, um, and we tell you you each got some money, uh, let's say you have $20, and the way that you can prevent disaster is by contributing to this threshold; we give you a monetary amount. But, um, we give you two ways to contribute. If you want to contribute, you can just give your money directly, or you can basically put it into a slot machine, and if you win, your money is doubled, but if you lose, your money just disappears. So the idea is that this is sort of simulating the contrast between, you know, more traditional mitigation ideas, like making more windmills or, uh, finding ways to emit less with more efficient vehicles or something of that nature, versus this sort of big picture, go big or go home technology, where maybe it will pay off, but maybe the investment will just go kaput. Uh, so how will people decide how to invest their money between these two forms? So what we did then in these experiments was, again, if you were in our group: some groups have a really low threshold, where it's easy to just contribute some money directly and you can probably meet the problem. Other groups are given a higher and higher and higher threshold. And if you do the math, you would discover that it's probably in your interest as a group to have a lot of people engage in these risky investments. And so we wondered whether normal people, who are the ones who don't make the decisions in the big rooms with presidents, but who are the ones who elect presidents and prime ministers, so their opinion still matters quite a bit, would understand this problem and be, uh, willing to engage in risk to meet ever higher thresholds. So that's what we did. We tested whether people could do that. And there are actually different kinds of theories out there. Some theories are pretty pessimistic about people's abilities to reason correctly about risk and about complex decisions.
Um, but some of my background and Talbot's background, uh, we're both partially trained in evolutionary psychology, and it turns out that evolutionary biologists, um, and psychologists have discovered animals and humans are actually really good at balancing riskiness with the kinds of thresholds that they face. So we were thinking that people would actually be pretty good at this problem. And in fact, the quality of their decisions astounded even us, even though we were already kind of optimistic. People were really sensitive to the threshold and did a very good job of balancing which kind of investment to make, depending on just how difficult meeting that threshold would be. Um, so, you know, uh, basically, when averting disaster became ever more difficult, people were more likely to, you know, take a gamble on this sort of stylized choice meant to simulate real world, uh, technology like carbon capture and storage.
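Here is a minimal sketch of how that risky-investment variant could be modeled: each player splits an endowment between a safe contribution and a gamble that either doubles or vanishes. The 50% win probability, the endowments, and the threshold in the example are assumptions for illustration only, not necessarily the values used in the authors' studies.

```python
# Minimal sketch of the risky-technology variant described above. The 50% win
# probability, endowments, and threshold are illustrative assumptions only.
import random

def realized_contribution(safe, gambled, win_prob=0.5):
    """How much a player actually adds toward the threshold after the gamble resolves."""
    if random.random() < win_prob:
        return safe + 2 * gambled   # the gamble pays off: gambled money is doubled
    return safe                      # the gamble fails: that money simply disappears

def disaster_averted(plays, threshold):
    """plays is a list of (safe, gambled) pairs, one per group member."""
    return sum(realized_contribution(s, g) for s, g in plays) >= threshold

# With a $90 threshold and four $20 endowments, safe contributions alone can
# never reach it (at most $80), so the group only succeeds if enough members
# gamble and get lucky.
random.seed(0)
print(disaster_averted([(0, 20)] * 4, threshold=90))
```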
Ricardo Lopes: Uh So, but uh I mean, tell us perhaps about specific kinds of technologies that you've studied in these games and I mean, what kinds of reactions do people tend to have to them?
Andrew Delton: Yeah. So in other work, what we've looked at, so that was a study on, like, carbon capture and storage. Um, we've also done ones that are designed to more closely simulate things like, um, solar radiation management, a fancy term for doing things in the atmosphere to reflect light back into space. So one kind of thing that's actually, in the grand scheme of things, relatively cheap is seeding the atmosphere with aerosols or other types of chemicals that could reflect light back into space. Um, and that's risky in a different way. So I mentioned the carbon capture stuff; the big risk of that is it just doesn't pay off. But when it comes to messing with the atmosphere in other ways, adding more chemicals in the air that are supposed to redirect light, uh, there are various, you know, worries that that could change other aspects, like weather patterns throughout the world. So not only could it simply fail, it could actually backfire, um, and hurt people, if you try to implement it. And so in some of our studies, we tried to simulate that kind of problem, where you have the choice of a technology that could actively hurt other people. And what we did there was: you would play in these groups as before. In one version of the experiment, you would have to make your decisions just as a group. In another version, one person in your group would also have the option of, uh, choosing whether to try to, you know, basically seed the atmosphere with these aerosols. So we're kind of proceeding along two tracks. Everyone in the group can try to just directly contribute to, you know, this threshold. One person can try, like, a silver bullet that might meet the threshold immediately, but it could backfire and literally hurt everyone in the group. So that's one version of the experiment. In another version of the experiment, it's just like what I said, except the person who gets to decide whether there's geoengineering is a total outsider to the group. So they have no role in the group whatsoever; they don't suffer any personal costs or benefits whatsoever. So, you know, maybe they won't care enough to make a good decision, or maybe they'd even be too risk taking. Maybe they're just like, hell yeah, let's just do this and see what happens. But in fact, when we run these experiments, people in the outsider role do exactly what insiders would have done in their place. So insiders are pretty good about picking when to use or not use geoengineering based on things like the size of the threshold, and outsiders are just as thoughtful, just as sensitive; they do exactly what insiders would have wanted them to do. So that's, you know, another kind of technology, um, that we've tried to simulate in the lab. And it's also, um, you know, an example of this theme of making decisions on behalf of other people too.
Ricardo Lopes: Uh So that's about the technological side of things. But uh how about helping other groups of people? I mean, how do you study that through economic games? And what kinds of results uh do you get?
Andrew Delton: Yeah. So real briefly, you know, the earlier games I mentioned, the ones about, uh, carbon capture and storage, those ones we initially had people play just on their own behalf. So you have to delicately manage the size of your threshold and whether you contribute directly to it or take a gamble, play a slot machine. In other work, um, what we did was a very simple tweak. We had some people continue to do that as before, and we had other people, uh, who were told: you're gonna make these decisions, but they're only going to affect a bunch of strangers, they're not going to affect you directly. So just like the games where things could backfire with the, uh, aerosols in the air, here it's a similar kind of deal, where a bunch of outsiders have to make decisions for insiders. But the difference here is this is back to a group of people. So now it's not just you in isolation, being like the president making decisions; you're back to a group. There's still this kind of social dilemma aspect, but it's a social dilemma that impacts other people. So what do you do? Well, like at many points in our research program, we were frankly shocked at how nice and, uh, strategic people were. So in our games where the group is making these decisions about whether to take a gamble or not, they were just as happy to contribute to preventing a disaster for a bunch of strangers as to preventing their own disaster. So they were very generous with helping other people prevent disaster. That's interesting. But you might wonder: well, are they not good at doing that, though? Do they not pay as much attention when they're doing it for other people? But no, they were just as, uh, accurate at, you know, trading off their riskiness against the difficulty of the problem they faced. So whether you're making decisions for yourself or for others, um, people were just as good at managing these kinds of technological problems. So again, we were frankly kind of surprised how kind and good people were at making these decisions.
Ricardo Lopes: Ok. So I guess that's already some reason for optimism. But, uh, let's talk now about the fact that, as you mentioned earlier here, uh, most of the costs are going to be paid, very unfortunately, by more vulnerable people, people from poorer countries. So how willing are people, I mean, to what extent are they willing to go, in order to avoid disaster for other people? I mean, you've also studied that. So what do we know about it?
Andrew Delton: So, yeah, we do have some data on that. These are some studies with our colleagues Alessandro Del Ponte and Nick Seltzer, along with the colleague who wrote this book with us, Reuben Kline. Um, what we tested in those studies was: would people in these kinds of games be willing to constrain their own behavior, to, you know, kind of pass up benefits, to prevent other people from experiencing disaster? So let me make this concrete. In these games, we tried to simulate sort of the broad sweep of human history, where at one stage of world history we engaged in economic development and industrialization. We simulated that in our game by just saying, hey, you can take some money from a common pool if you want; just take some cash, totally cool. The problem is, the more cash you take at that early industrialization, economic development stage, then in a later stage of the game, when you play the disaster game and face that threshold problem, can you meet the threshold to prevent disaster from coming? Well, the more you take earlier, the higher the threshold gets. So there's sort of this balancing act between, um, you know, industrializing and having economic development versus the sort of environmental problems that you create later on. So in one version of the game, you yourself have to balance those two problems; your group has to. And there, you know, using some game theory math, we could predict how much people should take in the earlier stage of industrialization in order to sort of maximize their earnings across both of these stages. That's easy. Uh, but the problem is that's not the real world. It's different sets of people who are facing these two different kinds of choices. So we also had a condition where different sets of people are harvesting money in the economic development stage, and a separate set faces the environmental threshold problem that was created by the earlier people. So what we found was, unlike some of our other results, people didn't literally play exactly as they would have wanted to play for themselves. They did take some more money, but they were relatively constrained in how much they took. So they were pretty good about not sending, like, a ridiculously hard problem on to future generations. So that's, you know, kind of a reassuring finding. And one of the things we did, too: most of our people are students playing in our lab in New York. In one version, though, we had people playing online, uh, Americans who are either creating problems for other Americans, or Americans creating problems that were then sent to players in India. So a different country entirely, on the other side of the world. So you might think when they send them to a totally different nationality, they stop caring as much. In fact, that didn't make any difference whatsoever. People were just as circumspect in creating environmental problems whether they sent them to compatriots, um, versus, you know, an out-group member on another continent. So, um, you know, again, another set of findings that sort of astounded us in the quality of people's decisions and how reasonably generous they were.
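A minimal sketch of that two-stage structure might look like the following; the baseline threshold and the rate at which early harvesting raises it are placeholder assumptions, not the actual parameters from these studies.

```python
# Minimal sketch of the two-stage "industrialization then disaster" game described
# above. The baseline threshold and growth rate are placeholder assumptions.

def second_stage_threshold(harvested, base_threshold=20.0, rate=1.5):
    """The more money taken in the early stage, the costlier preventing disaster becomes."""
    return base_threshold + rate * sum(harvested)

def second_stage_payoffs(harvested, contributions, endowment=10.0):
    """Payoffs for the (possibly different) group that inherits the threshold."""
    threshold = second_stage_threshold(harvested)
    averted = sum(contributions) >= threshold
    return [endowment - c if averted else 0.0 for c in contributions], threshold

# A restrained first stage leaves the second-stage group a problem it can still solve:
payoffs, threshold = second_stage_payoffs(harvested=[2, 2, 2, 2], contributions=[8, 8, 8, 8])
print(threshold, payoffs)  # 32.0 [2.0, 2.0, 2.0, 2.0]
```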
Ricardo Lopes: But another very relevant question here is, as individual citizens, it would also be ideal if we changed some of our habits, and particularly people in more developed countries, because we consume more and we have certain luxuries that we don't want to give up. I mean, how willing are we to constrain ourselves a bit more to avoid, uh, emissions? I mean, how do we study that?
Andrew Delton: Uh, hm, I don't know if we directly studied that particular question. We studied other questions that involve inequality between people, but not whether inequality between people, uh, leads them to constrain themselves more or less. Yeah, Talbot, do you have any thoughts on that?
Talbot Andrews: So, we have not done this work, but there is interesting work taking place outside of the lab on how you get everyday people to actually emit less. And it's a huge, complicated problem, um, with some fantastic work that's more focused on what people are doing in their day to day lives.
Ricardo Lopes: So, uh, another topic then would be, uh, tensions between leaders and citizens, and their relationship. So, uh, when it comes to the more, let's say, economic or game theoretical aspect or approach to this question, I mean, how do you approach it? And what are some of the most important aspects to tackle here?
Talbot Andrews: Yeah. So here, what we're really interested in is the interface between publics and elites. So when does the public trust elites to deliver effective mitigation policy? And when do elites trust the public to behave accordingly and to support those effective mitigation policies? To be clear, in these experiments we hold aside climate change denial, we hold aside partisan polarization, and really focus on how different institutions might change people's trust. So take, for example, from the public side, the case of disaster prevention. Um, there's this big puzzle in the literature that finds the public tends not actually to be very supportive of disaster prevention spending, which is a huge problem. Um, disaster prevention spending is much more economically efficient than just spending on relief after a disaster happens. But there are also all sorts of things that you can't bring back that you could have prevented. So, for example, loss of life: you can save people with disaster prevention, but you can't bring someone back after a disaster has struck. And this has historically been largely interpreted as evidence that voters are just not that sophisticated, that they don't realize they would be much better off if they were spending on disaster prevention. But there's another possibility. Um, the problem with disaster prevention is it's hard to know how bad a disaster would have been without the prevention, because we just don't observe that world. With relief, I can see, like, there is a person getting rescued and there are firefighters coming and dealing with this problem. You just don't get the same thing with disaster prevention, and this is especially problematic when politicians have a lot of discretion over those prevention funds. Um, so one of my favorite examples of this is the mayor of Kingfisher, Oklahoma, which is this tiny little town in Oklahoma that's built between these two rivers and floods every few years. He got this huge grant from the state of Oklahoma to deal with this cyclical flooding that the town had. And he took the money and just, um, bought out all of his own businesses that were on the floodplain, and used his own construction company to do that. And it was one of those cases where this is disaster prevention, but it's much harder for the public to pay attention to that and to monitor it, the way that they can with relief. And so what we did to test this, to test whether voters just don't realize they should be investing in prevention, or whether they're worried about that kind of corruption and taking advantage of the spending, is we ran experiments where we introduced a leader. The way that this worked is that players don't know exactly how much it costs to prevent disaster, and their leader does know, and then their leader tells them, like, hey, it's actually pretty inexpensive, or, I'm sorry, but disaster prevention is just going to be really expensive. And we essentially manipulated whether the leader had an incentive to be corrupt: could they skim money off the top? And we find that players are incredibly sensitive to this, that they're much more trusting of leaders who don't have that incentive to behave in a corrupt kind of way, where essentially these funds are pre-committed and the leader doesn't get to skim. And in this instance, voters really support disaster prevention policy.
And so we interpret this much more optimistically: some of these roadblocks to getting voters to support climate change mitigation and to support disaster prevention aren't because they just don't realize it's important. Instead, it's an opportunity to design institutions that will increase support for this kind of spending. Um, from the leader side, we were also curious whether leaders could anticipate this kind of sophistication. And we looked at this in a slightly different domain. So we looked at this again with the case of geoengineering and solar radiation management. Um, a big concern in the policy space is that if we deploy geoengineering, if we research geoengineering, if we even talk about geoengineering, then basically people will stop supporting climate change mitigation. They'll say, like, oh, I trust this, um, like, wild technology is just going to solve the problem, so why would I bother supporting building more expensive wind turbines? Which is a problem, because we still need to build wind turbines; we need both of those things, even if we do successfully deploy geoengineering. And so this is called, uh, a moral hazard, um, where deploying geoengineering will leave everybody worse off because they won't support climate change mitigation. We find limited to no evidence that everyday people actually engage in that moral hazard. And this is echoed by a ton of survey research, a ton of other experiments, showing that people still support climate change mitigation even when geoengineering is brought up. In fact, sometimes they want mitigation even more, because they think the idea of spraying aerosols in the air to reflect sunlight is terrifying and horrible. And so they say, I will build as many wind turbines as you want me to build, if you promise not to do that. But from the leader side, we find that people anticipate others will engage in a moral hazard. And so they propose policies that are less effective, because they underestimate the sophistication of their peers. And so we get this sort of backfiring effect. And this is largely what we're interested in: this interface between the public and the elite when it comes to designing and getting support for effective mitigation policy.
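To illustrate the leader-citizen structure described above, a minimal sketch might look like the following; the specific skim rule (the leader pockets anything collected beyond the true cost) and all of the numbers are illustrative assumptions, not the authors' actual experimental design.

```python
# Minimal sketch of the leader-citizen disaster game described above. Citizens see
# only the leader's reported cost; the skim rule and numbers are illustrative
# assumptions, not the authors' actual experimental parameters.

def leader_round(true_threshold, contributions, endowments, leader_can_skim):
    """Return (citizen payoffs, leader's skim, whether disaster was averted)."""
    total = sum(contributions)
    averted = total >= true_threshold
    # When skimming is possible, the leader pockets anything collected beyond
    # what disaster prevention actually required.
    leader_skim = max(0, total - true_threshold) if (leader_can_skim and averted) else 0
    citizen_payoffs = [e - c if averted else 0.0
                       for e, c in zip(endowments, contributions)]
    return citizen_payoffs, leader_skim, averted

# A leader who overstates an inexpensive threshold (say, reporting 40 when the
# true cost is 20) can skim the surplus if the institution allows it:
print(leader_round(true_threshold=20, contributions=[10, 10, 10, 10],
                   endowments=[20] * 4, leader_can_skim=True))
# ([10, 10, 10, 10], 20, True)
```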
Ricardo Lopes: So this has a lot to do with trust, right? I mean, to what extent, particularly, can common people trust their leaders to really communicate good and valuable information, and really do what they say they're going to do, or achieve the results they say they want to achieve, right? Mhm. Uh, do you have anything to add to that, Doctor Delton, or?
Talbot Andrews: Yeah, let me just add one more thing on the question of trust. Um, we focus specifically on institutions and how institutions constraining leaders can generate trust. So I trust a leader not to behave in a corrupt way because they're operating within an institution that doesn't let them behave in a corrupt way. I think this is one of the points where taking the insights outside of the lab is going to be especially critical, because unfortunately, we know from other political science research that we tend to trust messages from political elites who we already trusted. And so Democrats, like, aren't listening to Republican senators telling them about disaster prevention; Republicans aren't listening to the New York Times when it's telling them about disaster prevention. And so this is a place where I'm excited to see more of the marrying between these sorts of incentivized games and the political complexity, especially in the United States.
Ricardo Lopes: So, um, another way by which people have used disaster games is to study inequality. I mean, we've already mentioned the issue of inequality as it applies to climate change as well, because of course, people from different backgrounds, different countries, poorer regions, more vulnerable, will suffer more from climate change than we in the more developed countries, for example. So what can we learn about inequality through disaster games?
Talbot Andrews: Yeah, so we've run the disaster game, and others have used the disaster game, to add inequality. So there can be inequality in the risks that people face: how likely is it that disaster will strike, or how much is disaster going to take from you? There can also be inequality in the resources people have to fight climate change. And so maybe some individuals are given more of an endowment, some of them are given less of an endowment. Unfortunately, we find inequality is one of the biggest impediments to getting people to coordinate around successful climate change mitigation. So when there's inequality in the risks people face or in the resources people have to fight climate change, mitigation still tends to work out pretty well. The problem is when there's inequality in both: when some people face very high risk and those same people have the fewest resources to fight climate change is where we see depressed levels of success. Um, and unfortunately, that's probably the situation that most closely mirrors what's happening in the real world. However, even in these situations where we see the worst outcomes, they're still pretty good. We almost never observe a situation, we never observe a situation, where more than a handful of people refuse to contribute anything. And so even though there's decreased cooperation with this extreme inequality, there are still very high levels of cooperation. And what tends to help is clearly defining people's responsibilities: making sure that people are contributing according to their resources or according to their risk, and coordinating around who is responsible for addressing this problem, can really alleviate those problems of inequality.
Ricardo Lopes: Great. So uh let me just ask you both one final general question. We've talked here about how we can apply disaster games, and probably also other kinds of games, to understand people's psychology that is particularly relevant to how we can tackle climate change. And we've talked about helping other groups of people, for example, the tension between leaders and their citizens, and other topics. Uh Looking at all of these studies and results that you and other people got, I mean, are you optimistic about tackling climate change or not?
Andrew Delton: Yeah, I think um I think our book is pretty optimistic, and we are pretty optimistic people about it. Um You know, to summarize very quickly what we found: we find, uh, people are seemingly delighted not only to work together to help themselves as a group, but to help complete strangers. Um That if you design things the correct way, leaders and citizens don't need to be at odds; they can trust each other, and often they want to work together effectively. Um And that people are pretty good at making many of these kinds of decisions, at least the ones we studied. So like the stuff about balancing risk and reward, people are great at that. Uh In the closing chapter of our book, um we summarize this and we kind of outline what we see as the big challenge going forward, which is that, given that people seem really cooperative, really intelligent about this stuff, we think the big issue going forward is that scientists, so not us, we're behavioral scientists, but scientists who study the climate and people who are technologists, with the help of behavioral scientists, need to convince citizens that we actually do understand these problems accurately, and that the solutions being proposed are reasonable and appropriate solutions to those problems. Once citizens buy into those things, that the problem is described correctly and the solutions are described correctly, our games suggest they're happy to do the right thing. Um One interesting thing about our games is that, you know, we directly give you the rules, we give you the stakes, and people believe us. So we have people leaving uh open ended comments at the end of our experiments, and people who, uh, you know, scream that climate change is a complete hoax, um they play just like everybody else, right? So once they buy in and they know these are the rules, these are the stakes, it doesn't matter what their beliefs about the real world were. So in the real world, then, um we think that's one of the most important elements: creating this buy-in in the public. Um So, you know, gaining the trust of the public as far as the scientific facts about the problem.
Ricardo Lopes: Uh Would you have anything to add to that, Doctor Andrews?
Talbot Andrews: Yeah, just that I agree with Andy, we have a lot of reasons to be very optimistic. Um Which is maybe not what you would assume, thinking about studying climate change and climate disaster. Um And the question is how we get people to believe in these rules of the game, that climate change is a problem and that it is caused by our emitting carbon dioxide, to get people coordinating around this issue.
Ricardo Lopes: Great. So the book is, again, Climate Games: Experiments on How People Prevent Disaster. Uh I'm leaving, of course, a link to it in the description box down below. And uh Doctors Delton and Andrews, would you like just to tell the audience where they can find you and your work on the internet?
Andrew Delton: Sure. Yeah. Uh If you uh type in andrewdelton.com, that'll take you to my web page, or you can go to uh Twitter or X at Andy Delton; that will also eventually lead you to my web page, and there's links for the book. Like I said, it's free to download if you go to the University of Michigan Press website. Um Oh, and also, just as a plug, the book is written so that you are able to pick and choose what chapters you want to read, depending on the topic. So if one of the topics we talked about is really interesting to you and you don't care about anything else, that's fine. The book is designed to be modular in that way. So just, you know, feel free to download it and just read what you're interested in.
Ricardo Lopes: and Doctor Andrews, where can people find you on being through that?
Talbot Andrews: Yeah, my website is talbot-andrews.com, so you can see a lot more of my ongoing work there. Um You can also find me on X and on Bluesky. Thankfully, there aren't that many Talbots out there, so I'm pretty easy to track down on both of those sites. Um And just to add to Andy's plug about our book, which is free to download: it has a pretty clear crash course in why we use behavioral economics at all. So even if you're teaching something related to climate change but not specifically about econ games, we have what I think is a pretty nice introduction to that method as well.
Ricardo Lopes: Great. So I'm leaving links to all of that uh in the description of the interview and thank you both so much for taking the time to come on the show. It's been a real pleasure to talk with you. Thank you, Ricardo.
Talbot Andrews: Thank you for having us.
Ricardo Lopes: Hi guys. Thank you for watching this interview. Until the end. If you liked it, please share it. Leave a like and hit the subscription button. The show is brought to you by the N Lights learning and development. Then differently check the website at N lights.com and also please consider supporting the show on Patreon or paypal. I would also like to give a huge thank you to my main patrons and paypal supporters, Perera Larson, Jerry Muller and Frederick Suno Bernard Seche O of Alex Adam, Castle Matthew Whitten Bear. No wolf, Tim Ho Erica LJ Condors Philip Forrest Connelly. Then the Met Robert Wine in NAI Z Mark Nevs calling in Holbrook Field, Governor Mikel Stormer Samuel Andre Francis for Agns Ferus and H Her me and Lain Jung Y and the K Hes Mark Smith J. Tom Hummel. S Friends, David Sloan Wilson, Ya de Ro Ro Die, Jan Punter, Romani Charlotte Bli Nico Barba, Adam Hunt Pavlo Stassi Nale medicine, Gary G Alman Sam of ZED YPJ Barboa, Julian Price Edward Hall, Eden Broner Douglas Fry Franca, Beto Lati Cortez or Solis Scott Zachary FTD and W Daniel Friedman, William Buckner, Paul Giorgio, Luke Loki, Georges, Theophano Chris Williams and Peter Wo David Williams, the Ausa Anton Erickson Charles Murray, Alex Shaw, Marie Martinez, Coralie Chevalier, Bangalore Larry Dey Junior, Old Ebon Starry Michael Bailey. Then spur by Robert Grassy Zorn Jeff mcmahon, Jake, Zul Barnabas Radis. Mark Kemple Thomas Dvor Luke Neeson, Chris to Kimberley Johnson, Benjamin Gilbert Jessica. No, Linda Brendan, Nicholas Carlson, Ismael Bensley Man, George Katis Valentine Steinman, Perras, Kate Van Goler, Alexander Abert Liam Dan Biar Masoud Ali Mohammadi Perpendicular Jer Urla. Good enough, Gregory Hastings David Pins of Sean Nelson, Mike Levin and Jos Net. A special thanks to my producers is our web, Jim Frank Luca Toni, Tom Vig and Bernard N Cortes Dixon Bendik Muller Thomas Trumble, Catherine and Patrick Tobin, John Carlman, Negro, Nick Ortiz and Nick Golden. And to my executive producers, Matthew Lavender, Si Adrian Bogdan Knits and Rosie. Thank you for all.