RECORDED ON APRIL 7th 2025.
Dr. Angela Potochnik is Professor of Philosophy and Director of the Center for Public Engagement with Science at the University of Cincinnati. Her research addresses the nature of science and its successes, the relationships between science and the public, and methods in science, especially population biology. She is the author of Idealization and the Aims of Science, and coauthor of Recipes for Science: An Introduction to Scientific Methods and Reasoning.
In this episode, we focus on Recipes for Science. We start by discussing why we should care about science, the limits of science, the demarcation problem, whether there is one single scientific method, and hypotheses and theories. We also talk about experimentation and non-experimental methods, scientific modeling, scientific reasoning, statistics and probability, correlation and causation, explanation in science, and scientific breakthroughs. Finally, we talk about how the social and historical context influences science, and we discuss whether science can ever be value-free.
Time Links:
Intro
Why should we care about science?
The limits of science
The demarcation problem
Is there one single scientific method?
Hypotheses and theories
Experimentation
Scientific models
Scientific reasoning
Correlation and causation
Explanation in science
Scientific breakthroughs
The social and historical context
Can science be value-free?
Follow Dr. Potochnik’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Angela Potochnik. She's Professor of Philosophy and Director of the Center for Public Engagement with Science at the University of Cincinnati. And today we're talking about her book Recipes for Science: An Introduction to Scientific Methods and Reasoning. So, Dr. Potochnik, welcome to the show. It's a huge pleasure to have you on.
Angela Potochnik: Thank you. Thanks for having me.
Ricardo Lopes: So let me start by asking you, why should people care about science? Because, you know, most people are not scientists, and many people do not even tend to read about science and related topics. So why should people in general care about science?
Angela Potochnik: Yeah, so maybe I have three answers to that. One is that science, and the institutions of science as they have developed, are one of our best ways to gain knowledge. Not all kinds of knowledge, right? But very important kinds of knowledge and abilities in our world. So that makes science pretty important. And then beyond that, for people who are not scientists, or interested in becoming scientists or insiders to science, there still can be an advantage to knowing something about how science succeeds. One, in that it can help you know when something that looks like good science should be trusted and when, in contrast, it might look like good science but be deceptive in one way or another, because unfortunately, there are lots of examples of that as well. And then secondly, people in some cases can approximate, and I think all of us do to some extent approximate, some methods that are used in science in our everyday lives and in going about our business. So it can be useful to know how to make use of tips and tricks from science as well.
Ricardo Lopes: Yeah, and perhaps, I mean, now a question that might even make people more open to being interested in science: in what ways does science have practical applications? Of course, since I'm familiar with it, I could perhaps think of a thousand different practical applications that science has, but what would be your answer?
Angela Potochnik: Well, so, of course, medicine, right, and sort of health best practices. This is an important place where not only is biomedical and other scientific research important, but also, as individuals, we can get some benefit from knowing where the science stands on one topic or another, so that we can make evidence-based decisions about how to manage our own health, how to manage our children's health, etc. That's one example. Where else does it matter for people to know about scientific research? Quite broadly, I think statistical tools, and just basic statistical concepts, even if we're not going to be people who use statistical tools at the level of sophistication we'd have from a stats class or something like that. Just the very concepts behind the use of statistics and probabilities, I think, can be useful.
Ricardo Lopes: Mhm. And why is it that you think public opinion so commonly lags behind scientific research? Because, you know, sometimes when we go on the street and ask people about certain scientific topics, they either know almost nothing about them, or they give answers that are already outdated. Sometimes they are decades-old answers. So what do you think about that?
Angela Potochnik: Yeah, well, so I have three different answers. The first is, it's pretty natural that there's a sort of division of expertise in our society, right? So even among scientists: ask a climate scientist pointed questions about dark matter research in astrophysics, and they may not have more expertise than somebody off the street, right? So there's a division of expertise in our society. I would expect scientists to know more about their area of expertise than the lay population and than people who are not in that area. There also, I think, can be some challenges to keeping up to date with scientific findings, or, maybe more to the point, knowing how to interpret and put into perspective scientific findings. And I think part of the work to be done there is a better job on the part of scientists and those who think about science, like philosophers of science, as well as science popularizers, to help create avenues and access points into the latest scientific knowledge, but maybe more importantly, ways to think about and put into perspective scientific knowledge, so that it all adds up for people who are not scientists. And then the third answer, I think, is that, at least now, we find ourselves at a point, and I say at least now because maybe this is not unusual, but right now we find ourselves at a point where scientific knowledge is politicized in certain contexts, and so some people have some personal and social identity bound up in not knowing more about science, or at least certain types of science and how it functions. So that's another kind of challenge, and something that I also think philosophers of science are well positioned to weigh in on.
Ricardo Lopes: Mhm. And I think that another very important thing for people to keep in mind is that science has certain limits. So, from an epistemological perspective, what would you say are the limits of science?
Angela Potochnik: Yeah, well, and that's a good chance, actually, this is the first time I'm explicitly referencing the Recipes for Science book. I do want to say on the front end, this is of course a co-authored book, so it's not just me; I worked with Cory Wright and Matteo Colombo, two other philosophers of science, on this book, and it now has two editions, and we worked together on both editions of the book. OK, that was enough for me to forget the question. Limits of science, I remember, sorry. So the line that we take in that book is something like: science can help generate empirical knowledge about our world. And this is a way of cashing out the idea that science proceeds on the basis of evidence, primarily or at least including empirical evidence, that is, information from our senses, which is then recorded in the form of data and shared among different people, so that we can all work together to compare and contrast our different sense data that bear on one thing or another, and formulate arguments about how the world is on the basis of that data. So I think that gives us some insight into what we might think of as the limits of science: science is best positioned to answer questions that empirical data can bear on in one way or another, even if a lot of the time it's pretty indirect, right? It's not just looking to see what's right in front of us.
Ricardo Lopes: And what do you think about the demarcation problem, that is, distinguishing or demarcating science from pseudoscience? And sometimes people also talk about anti-science. I mean, do you think that it is really very important to be able to properly demarcate science from pseudoscience or not?
Angela Potochnik: Yeah, so I say yes, or yes and no, and I want to start with the no. I think it's quite natural that scientific projects are continuous with other kinds of human projects. And so, you know, when is engineering science versus the application of science? I think there are going to be gray areas for questions like that, and I think that's really natural, sort of natural to the ways that we use words and define categories. So that, I think, is normal and not something we should be concerned about. The thing that starts to become problematic, as you suggest, is that because there's so much societal investment in science, and because many of us, right, have a kind of implicit trust in scientific findings, because we know something about the methods that gave rise to those findings, we're in a situation where some other projects find it valuable or lucrative to pretend they're more scientific than they are. And that's where demarcation, I think, becomes important: if projects that don't have the legitimate hallmark features of science are working after the fact to try to suggest that they do, in order to gain the trust that we should reserve for well-vetted scientific findings, then we as a society need to be invested in demonstrating the ways in which those projects fall short of science.
Ricardo Lopes: By the way, we've been talking a lot about science, but what is science exactly? I mean, do we have a proper definition of it?
Angela Potochnik: Oh gosh, I wish I could tell you off the top of my head what the definition is that we use in Recipes for Science, and I don't remember it off the top of my head. But I think I've given you the ingredients of what I think science is, even if I can't give you a pithy single phrase, in what I've said so far, which is: I think science is a set of methods, rooted ultimately, in some sense, in empirical data, empirical observations about the world, that's used to generate knowledge about our world and abilities to act in our world effectively. Something like that.
Ricardo Lopes: OK. So, when it comes to scientific methods, or the scientific method, I mean, usually we tend to talk about the scientific method, but is there really one single scientific method out there?
Angela Potochnik: This is a point philosophers of science love to make. No, there's not a single scientific method out there. And so this is one of the reasons that we call our book Recipes for Science. I say one because there were other considerations in choosing that title versus some other possibilities that we considered. But we want to emphasize the plurality there, right? As well as, with the metaphor of recipes, a kind of open-endedness, right? So even if you're looking at a bread recipe, you might have margin notes about what worked for you last time, the kind of flour that you like using, etc. And so there's a kind of open-endedness and customizability to scientific methods that I think it's important for those of us outside of science to recognize. It helps us better understand what's going on and what to anticipate.
Ricardo Lopes: Mhm. So, now this is a question at the intersection, I think, of philosophy of science and ethics. Is science simply descriptive, or can it also be prescriptive?
Angela Potochnik: Oh, I think, yeah, I think science quite easily can be prescriptive. And I also think it's important, and in Recipes for Science my co-authors and I take a very broad perspective on what science is. I consider social science, for example, to be properly scientific, just as scientific as physics and other natural sciences. And social science is one place, and I don't think it's the only place you get normative claims from science by any stretch, but it's an easy place to look for examples. If we are using empirical methods to account for features of human societies, then you're pretty naturally in a place where taking on board certain aims for one question or another about human practices, and gathering data about that, puts you in a good position to make recommendations based on what aims society might want to achieve that you as a scientist have taken on board for your work.
Ricardo Lopes: Mhm. In science, when we talk about hypotheses and theories, what does each of them mean, and how do we distinguish one from the other?
Angela Potochnik: Yeah, so we follow, I think, a pretty common way of thinking about these terms in Recipes for Science. Hypothesis is a term we use a lot throughout the book, and the idea here is that these are conjectures about the world, whereas theories, first and foremost, we can think of scientific theories as being corroborated, as having a substantial amount of evidence to support them. A scientific theory that is communicated as such, named as a theory, shows up in a textbook, etc.: this doesn't happen until there's a lot of evidence to shift the scientific community towards taking that theory seriously as something that might be true about our world. So that's a big difference between hypotheses and theories. Hypotheses can be kind of open-ended conjectures, right? A scientist might be out on a limb on a particular hypothesis that they're investigating, whereas a theory is well corroborated by the time it's named a theory. Something that lurks in the background of what I've already said, and I don't know exactly how to say this well, but we also say something like this in the book, is that theories tend to be bigger and grander, networked series of claims about how the world is, whereas hypotheses tend to be small, specific claims that can be operationalized for investigation. So there's also a way in which theories put together successful hypotheses and articulate a framework for understanding the world on that basis.
Ricardo Lopes: Mhm. And what are the different ways we can set up experiments in science? I mean, if I want to do a scientific experiment, how can I go about it?
Angela Potochnik: Well, there too, right? The answer is: lots of different ways. And this isn't, I want to say on the front end, an area of philosophical expertise for me in particular. One of the nice things about having co-authors for this book is that we did try to give a really even-handed, expansive treatment of the different features of scientific methods and patterns of reasoning. So I don't work in philosophy of experiment in particular. But in general, experimentation in science is open-ended, as I suggested methods are in general in science. How we talk about experiments in this book starts with a kind of core description of what we think experiments are designed to try to approximate, and I think this follows conventional wisdom: ultimately, influence on the independent variable; control of all extraneous variables, so that there aren't confounding variables; and attention to, or measurement of, the dependent variable, to see how the dependent variable responds to an intervention on the independent variable. And then all of the variety that you see in experiments comes from different ways of trying to accomplish that goal. So one really basic difference is: are you in a lab, looking at a physical system where you can directly control most of the variables that aren't your independent variable or your dependent variable? Or are you trying to conduct an experiment out in the field somewhere, where you need to indirectly control variables with, for example, the use of statistical techniques? And then, as that suggests, the differences grow from there, right?
So you might be in the field, you might be in the lab; depending on the types of phenomena that you're investigating, different types of techniques will be in order, or you might be able to be in a lab, or that might not even be possible. I guess another difference is that hypotheses might be more directly amenable to experimental investigation, or scientists might have to do some work before they even know what they would anticipate seeing, and what kind of experimental system would suit some hypothesis that they have. So that's pretty abstract, but it starts to gesture at some of the variety. The emphasis there is, I think, that there is variety and that there's not one best way to do it. So one of the things that makes me grouchy is when scientists suggest that a certain way of doing science, or a certain field of science, is somehow better or more trustworthy than another, and I think that's not really fair. Mhm.
Ricardo Lopes: Is all of science experimental in terms of its methodology, or are there also non-experimental methods in science?
Angela Potochnik: Yeah, so that's something we really want to emphasize in the book as well. So we have a chapter on experiment, and then we follow it directly with a chapter on non-experimental studies. There's a range of different ways of conducting studies in science when phenomena, or, I should say, when phenomena or the particular hypothesis under investigation, are not well set up for direct experimental interventions. So yeah, there is a range of non-experimental studies that are techniques in science. And in that chapter, some of the easiest ways to see that are studies that in one way or another try to approximate some of the control that experiments give us, but then there are different kinds of techniques entirely, such as meta-analysis, right? Using existing data to try to draw conclusions from the different kinds of data received across different experiments. And then there's also another approach beyond non-experimental studies, which is, for us, the focus of yet another chapter following directly on our non-experimental chapter, which is scientific modeling, right? So using computers, mathematical modeling techniques, etc., to stand in for some of the work that's done in experiments.
Ricardo Lopes: So you mentioned scientific modeling there. What is that? I mean, how do we model in science, and what is a model?
Angela Potochnik: Yeah, so scientific models can take lots of different forms. It seems like the basic characteristic is that scientists develop a system, and maybe I should use scare quotes around "develop a system," because that's where some of the differences come in; I'll say more about what I mean by developing a system in a second. And they study that system in order to learn something about the world that they're interested to know about, right? So, actually, I'll start with the starting point that we have for discussing models in the book, which is physical scale models. In the book, we focus on the San Francisco Bay model that was developed several decades ago by the Army Corps of Engineers. This is an example that the philosopher of science Michael Weisberg introduced to the field of philosophy of science, and it is well recognized, I think, from his work. The Army Corps of Engineers literally built a scale model of the San Francisco Bay in order to study how interventions on the real bay that policymakers were considering making would affect the bay, what the results would be, right? And the nice thing about that is that they could study this. They worked really hard to set up a really large scale model. It's more than an acre big, if memory serves, but much smaller than the full San Francisco Bay in the world, and much quicker acting. And they were able to make interventions on the model to see what would happen, because it's relevantly similar to the real bay. Whereas, you know, if you had just gone in there and started building dams in the San Francisco Bay, then you would have found out how that affected the system really kind of too late, right, after some of the damage had been done.
OK, so that's an example of a scale model, of how you might be able to, and why you might want to, look at a sort of toy model over here to try to see what would happen to the system in the real world that you're interested in. But the same basic technique, I think, is used when scientists, as increasingly happens, develop computer models, right? Defining variables and dynamics in a computerized system, right, a computer game effectively, of one kind or another, to represent, or kind of reflect, how things would play out in the real world. This is done, for example, a lot with climate modeling now; climate modeling uses computer modeling techniques, as one example, and lots of fields do. And then we think also, and "we" here is philosophers of science in general, and maybe scientists as well, that the same basic practice is playing out when scientists write equations, right? To try to represent a system, choose their variables to represent features of interest, and then solve the equations, or see how the dynamics play out, in a way that represents real activities in the world. And those we think of as mathematical models. So all of those, philosophers of science are used to thinking of as basically the same technique, even though they go about modeling systems in different ways.
Ricardo Lopes: And what is the relationship between a scientific model and the reality, or the aspects of reality, that it is trying to model?
Angela Potochnik: Yeah, we emphasize in our book that, first of all, that relationship is often seen in philosophy of science as representation, right? In the way that I talked about, this model stands in, in some sense, for the real world. You can study the model and then draw conclusions, on that basis, about the system in the world that it's supposed to represent. And we emphasize in the book that that relies on a kind of relevant similarity, or set of relevant similarities, between the model, or features of the model, and the system that's being modeled. But notice that effective modeling, right, the whole idea of using a model instead of studying the system itself, relies also on relevant differences. So I emphasized, when I talked about the bay model right away, that it's important that the San Francisco Bay model is a lot smaller than the real San Francisco Bay and that it's quicker acting, right? And so scientific modelers develop models, again, of one kind or another, mathematical equations, computer models, physical models, etc., in ways such that the models are relevantly similar in certain respects to the systems they're interested in, but then also relevantly different, so that they're convenient and usable in a way that direct investigation of the system wouldn't be.
Ricardo Lopes: And how do we know that a particular model is a good, or at least a good enough, model?
Angela Potochnik: Yeah, it's a hugely challenging question. Notice that it's similar to questions that we have about experiments, too; this isn't a special question for models, right? When is an experimental system enough like the phenomenon in the world that what we saw happen in the lab is going to happen in the real world too? And the same question arises for models. How do we know that, if the San Francisco Bay model responded this way to the intervention under consideration, to building dams, etc., the real San Francisco Bay will too? It's a hugely challenging question. And at the heart of it is again the relevant similarity question, I think. And so: checking to see whether the parameters of the model that you think matter, that the scientist thinks matter, to the phenomenon under investigation, checking to see that those parameters are the same or relevantly similar, right? So maybe scaled, right, in the case of the scale physical model. And then checking to see which parameters matter. So, it's easy to just imagine that we have answers to what the relevant similarities need to be, but it could be, and it regularly is, the case that scientists discover that something they thought didn't matter is an important feature of the system. And so, I kind of got lost in the details there, so let me just characterize that again. This is a really hard question: when are the results of the model trustworthy? There's not a single answer, I think, of the form "here's how you know, and we've done that, and we're done." But the answer includes at least two ingredients. First of all, ensuring that there's the kind of relevant similarity in the parameters that matter.
But then secondly, ensuring that you've actually accurately identified the main parameters that matter for the particular phenomenon that you're interested in, in order to have assessed that effectively.
Ricardo Lopes: Right. So let me now ask you a little bit about scientific reasoning. Could you tell us about the concepts of reasoning, inference, and argument?
Angela Potochnik: Yeah, so this is a part of the book that my co-author Cory Wright is more responsible for than I am, but I'll do my best. So, there's a big shift here in the topics of our book, from the kinds of topics that you and I have been talking about so far, right, how scientific experiments work, scientific models, etc., to taking a step back and thinking about the reasoning patterns that you see in science. And we want to emphasize that, in science, it's not as if scientists set up an experiment, get the data, and then have a check mark next to their hypothesis and they're done. Instead, data are points where empirical information about the world weighs in on what actually can be quite elaborate scientific arguments. OK. So what we mean by arguments are ways of marshaling evidence, using reasoning, in order to assemble reasons in support of a scientific conclusion, right? So I suppose you can say that developing official scientific arguments is a way of moving us from empirical data weighing on hypotheses towards putting together a theoretical structure, like the theories we talked about, that is a good guess at how the world really is. So one of the things that we wanted to emphasize in the contrast between reasoning and arguments is that arguments, in the way that we use the term, have this structure of being socially available to the community of scientists, written down, right, or officially worked through, in a way that is useful for the social project that science is, right, where different scientists work across different projects and experiments to marshal evidence in favor of, or opposed to, different ideas,
en route to developing theories about our world. And in contrast, reasoning is quite broad, an important feature of science, where any individual scientist does this on a regular basis, often without even recognizing it, in how they connect empirical data to ideas, confirming or disconfirming hypotheses, etc.
Ricardo Lopes: So, in science, we have not only inductive reasoning, but also deductive and abductive reasoning, right? I mean, because there's this very common idea that all reasoning in science is inductive, in the sense that if it works, then it means it's true, or scientifically true, but we can also use deduction and abduction in science.
Angela Potochnik: Right, yeah. And this is one place where the work of philosophers of science has something to offer for how we more broadly think about argumentation or reasoning in science. And so, I guess speaking for myself, I think that for the three types of reasoning that my co-authors and I identify in that book, as you said, inductive reasoning, deductive reasoning, and abductive reasoning, there's something important about each of those forms of reasoning that helps us see something deep and interesting about science itself. So starting with deductive reasoning, as we do in the book: deductive reasoning is this super fancy, special kind of reasoning that philosophers of science think a lot about, insofar as we use logic and study logic, and mathematics, etc. And deductive reasoning is super special and fancy in that it provides a kind of certainty and guarantee that inductive and abductive forms of reasoning can't. So a valid deductive argument, where "valid" is the term of art here, deductive validity, means that if the premises, the starting points of the argument, are true, then the conclusion absolutely has to be true as well. And there are interesting ways that scientific methods can make use of that feature of deductive argumentation. And it's worth noting, in general, this isn't something that can establish the truth of hypotheses about our world beyond a shadow of a doubt, but there are ways of recruiting deductively valid arguments in order to structure the patterns of reasoning that we see in science, in certain special jobs.
Inductive reasoning, as you already suggested, is important broadly in science and is generally the kind of reasoning behind marshaling evidential support for or against a hypothesis. The important characteristic to notice about inductive reasoning is that it's the opposite of deductive reasoning, in the sense that the conclusion goes beyond the premises, the starting points of the argument. So there's a kind of conjecture, a moving beyond our evidential basis, a putting yourself out there in what you're concluding. And as you say, that's widely seen to be an important feature of science: no matter how much data we've gotten, our hypotheses, and even our well-corroborated theories, remain conjectural about the world. Then, briefly, abductive reasoning. What we want to emphasize there is that there's also an important kind of reasoning in science that doesn't just generalize inductively from the sorts of things we see in our world, but looks at the evidence we have access to in order to put together a potentially explanatory story about why those features of the world seem the way they are. That story might appeal to features of our world that we can't directly test experimentally. Some scientific theories go beyond positing things we can directly check for, and posit explanatory features of the world that we have to, to some extent or another, take on faith. So, the Higgs boson, the most recent fundamental particle announced as having been discovered.
There's evidence for the Higgs boson, but nobody can look at one under a microscope.
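To make the notion of deductive validity that Potochnik describes concrete, here is a small illustrative sketch (an editor's example, not from the book or the conversation). It brute-forces every truth assignment to check that modus tollens, the pattern behind classic hypothesis falsification ("if H then P; not P; therefore not H"), is valid, while "affirming the consequent" is not:

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    # An argument is deductively valid iff no assignment of truth values
    # makes all the premises true and the conclusion false.
    for values in product([True, False], repeat=num_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample assignment
    return True

# Modus tollens: if H then P; not P; therefore not H.
premises = [
    lambda h, p: (not h) or p,  # "if H then P"
    lambda h, p: not p,         # "not P"
]
conclusion = lambda h, p: not h

print(is_valid(premises, conclusion, 2))  # True: the form is valid

# Contrast: if H then P; P; therefore H (affirming the consequent).
bad_premises = [lambda h, p: (not h) or p, lambda h, p: p]
bad_conclusion = lambda h, p: h
print(is_valid(bad_premises, bad_conclusion, 2))  # False: invalid form
```

The guarantee is exactly the one she names: whenever the premises are true, a valid form cannot deliver a false conclusion, which is why a failed prediction can deductively refute a hypothesis even though no amount of confirmed predictions deductively proves one.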
Ricardo Lopes: Mhm. Right. And what role do statistics and probability play in science? Is all of science statistical and probabilistic, or not?
Angela Potochnik: Yeah, good question. Here I'll say that my co-author Matteo Colombo is really the expert on probability and statistics in our author group, but I'll do my best again. Probability and statistics are incredibly important in science. That said, not all science proceeds with the use of statistics. It's not as if you have to check whether a scientific article uses statistics before you know whether it's real science. The reason, or a reason, that probabilistic reasoning and the tools of statistics are so useful in science, from my perspective, is that we live in an exceedingly complex world. Lots of things are interacting all over the place, including in the systems that we're interested in, and even in well set up experimental systems, in experiments. And statistical tools have lots of different tricks, lots of different ways of helping scientists identify patterns and put to one side the noise, the different influences that can obscure patterns.
Ricardo Lopes: Mhm. So this is a question that I think relates to something people, even lay people, tend to be more exposed to, particularly in the news, when it comes to statistical methods in science. How can they be used to make estimates about a population from a sample? Because many times when people hear about statistics in the news, particularly in medicine, they tend to say, oh, but that didn't really happen to me, and I know someone who had, for example, side effects from a vaccine, or something like that. So tell us, how can we make estimates from just a sample of the entire population?
Angela Potochnik: Yes, this is an incredibly important use of statistics. As you suggest, one very widespread use of statistical methods is to look at only a subset of a group. It could be people, but it doesn't have to be: a subset of a group of one kind or another. What I'm going to say starts to sound a lot like what I was saying about modeling. First of all, scientists check to ensure that the group under investigation, let's stick with people because that's an intuitive example and it's where you started us, is relevantly similar to the full population. So suppose the full population you're interested in is all Americans, and the question is what the health impacts are of eating at least one serving of green vegetables a day. Then the health researchers want to make sure that the population under study is relevantly similar to all Americans. If they're researching a class of kindergartners, that isn't going to work. We need to make sure we have people of different ages, different lifestyles more generally, different background health conditions, et cetera, such that the group under investigation is relevantly similar to the full population. But then there's something kind of magical that happens if the group is sufficiently large.
You can't just study one person and see whether eating at least one serving of vegetables a day is good for that person. You have to have some variety among your subjects. But if you have a sufficiently large group, and it doesn't have to be all that large, I can't give you a number off the top of my head, but something like 100 people is probably good, and oftentimes medical studies are much larger than that. So with a sufficiently large group that's relevantly similar to the population under study, you're able to project what the features of the full population are on the basis of the features, or the responses, of this group in particular.
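The "something kind of magical" about a sufficiently large sample can be seen in a small simulation (an editor's sketch, not from the book; the population and "health score" are made up for illustration). A random sample of about 100 people already yields a decent estimate of a population of 100,000:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 100,000 people, each with a numeric
# "health score". The true population mean is what we want to estimate.
population = [random.gauss(70, 12) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Draw a modest random sample, roughly the size mentioned above.
sample = random.sample(population, 100)
sample_mean = statistics.mean(sample)

# A rough 95% confidence interval for the population mean, using the
# sample standard deviation and a standard normal approximation.
se = statistics.stdev(sample) / (len(sample) ** 0.5)
low, high = sample_mean - 1.96 * se, sample_mean + 1.96 * se

print(f"true mean: {true_mean:.1f}")
print(f"sample estimate: {sample_mean:.1f} (95% CI {low:.1f} to {high:.1f})")
```

Note that the magic depends on the sampling being random, which is the simulation's stand-in for the "relevantly similar" requirement: a sample drawn only from kindergartners would be a biased draw, and no sample size would fix that.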
Ricardo Lopes: We many times hear about correlation and causation, and how correlation doesn't necessarily imply causation. So, in science, what is the relationship between correlation and causation?
Angela Potochnik: Yeah, there's not a single answer to give here. I think you just characterized the most important thing to remember: identifying correlations in science is, in a variety of circumstances, an incredibly important guide to causal relationships. But as we've all been taught in school, correlation does not in itself constitute causation. There are other questions to ask when you see a simple correlation before you know there's a causal relationship. One way that correlation is really important is in statistical hypothesis testing. When you have an experimental group and a control group, let's go back to our people who eat at least one serving of vegetables a day. Our experimental group is being asked to have at least one serving of vegetables a day, and we measure our control group and see that on balance they don't achieve that, something like that. Then the correlation with the health outcomes of the experimental group, say, the reduced incidence of cancer, to choose one real effect of increasing vegetable consumption, is a guide to there being a causal relationship. That is, it's a guide to the claim that, across people with all different background health conditions, ages, et cetera, if you consume at least one serving of vegetables a day, you'll see a correlation with decreased incidence of cancer. So that's one context, one scientific context, in which correlation is an important guide to causation, but like I said, there are lots of different ways in which it can be.
But a lot of the hoops that scientists jump through in creating controlled studies, using statistical techniques, and applying different techniques for conducting non-experimental studies are all developed to try to keep correlations from fooling us, to make it so that we don't infer causation mistakenly from correlations. Because all sorts of things are correlated in our world, getting back to the complex world we live in, as I pointed out earlier.
Ricardo Lopes: Mhm. And so, how do we determine causation in science then?
Angela Potochnik: Yeah, lots of different ways. This is, I think, basically the same as asking me what experiments are in science, or whether there are multiple different scientific methods. So much of that, and I'm overstating slightly, but so much of scientific method is focused on trying to determine where there are causal relationships. And so variable control, that is, intervention in an experiment while controlling extraneous variables, is one technique. Statistical methods for discerning patterns in outcomes across the variety in a data set are another technique. Careful development of arguments on the basis of statistical reasoning from data is another important part of a technique, and the list goes on.
Ricardo Lopes: Mhm. So let me ask you now a little bit about explanation in science, because in the book you talk about different conceptions of explanation, including nomological, pattern-based, and causal conceptions of explanation. So, tell us about that.
Angela Potochnik: So this is a place where our book becomes more similar to what you see in most work in philosophy of science. Philosophers of science, including me, love to argue about what successful scientific explanations look like, that is, what features successful scientific explanations have. At least some characterizations of that debate identify certain core positions, at least ones that have been held historically: philosophers who want to emphasize a deductive relationship between an explanation and the phenomenon it explains; those who want to emphasize the importance of, as we put it in the book, demonstrating or revealing general patterns, that is, an explanation bringing together multiple different kinds of phenomena; and then, more recently and really significantly in the field, those who want to emphasize that explanations cite causes. The move that we make in Recipes for Science is to consider what each of these philosophical views of explanation gets right, and then what seem to be the downsides or shortcomings of each account. So we're not setting this up as a philosophical debate about which kind of explanation is correct, but rather suggesting that there are resources for thinking about explanatory reasoning in science from each of these philosophical views of scientific explanation.
Ricardo Lopes: So earlier we talked a little bit about the limits of science, but now I want to ask you a more specific question. What are the limitations of explanation in science?
Angela Potochnik: Can you say a little bit more about what you mean?
Ricardo Lopes: I mean, when we're trying to explain something in science, what can be the limitations of doing that through the scientific method, or scientific methods?
Angela Potochnik: Yeah, all right, thanks for saying a little bit more. I needed that to motivate my instincts on how I wanted to answer. I don't know that I have anything general to say here, and I don't know that there is something that can be said on the front end about where science needs to end, or something like that. But I do think at least one place to look, to get some instinct for where that line might be, is in this connection that I made between scientific knowledge and a basis, ultimately, in empirical information about our world. When we get distant enough from any basis in empirical evidence, that's where it starts to get unclear whether we're still in a place where we can generate properly scientific knowledge.
Ricardo Lopes: So let us talk now a little bit about scientific breakthroughs and revolutions. What counts as a scientific breakthrough? Because we hear that concept or that term a lot. What does it mean?
Angela Potochnik: Yeah, good. So maybe I just want to say that scientific breakthroughs are when some major advance has occurred, and often this means in a way that we couldn't have predicted. So the Higgs boson discovery I mentioned earlier, and we use that example in the book: this was a breakthrough because the data could have turned out otherwise, but scientists at the Large Hadron Collider got data that confirmed the existence of a new fundamental particle that had been conjectured. So there's a breakthrough in what we take to be true about the world on that kind of occasion.
Ricardo Lopes: Mhm. And does change in science occur in revolutions, or is it mostly non-revolutionary?
Angela Potochnik: Yeah, so we do talk about Thomas Kuhn's really famous work on theory change, and specifically this idea that if we look at the history of science, we don't just see cumulative progress, gaining more and more knowledge about our world; we see, as you say, scientific revolutions. We see, according to Kuhn, major changes in what scientists take to be fundamental to the world. And Kuhn really emphasizes that science can have this character, and that there's a way in which that threatens cumulative progress and this simple picture of scientific advances. So you're asking, does most scientific change look like that? I think the answer, quite clearly, and lots of philosophers of science have said this in response to the Kuhnian view, is no: lots of scientific change looks like the more standard cumulative progress. Even Kuhn probably is okay with that. There is a question of to what extent Kuhnian scientific revolutions happen across science and continue to happen, or whether he found particular instances of scientific change that have this feature, and science can have this feature, whereas often it doesn't happen that way. And so that's maybe where I would land. I think it's interesting and challenging and worthwhile to focus on these periods of theory change that include a kind of radical rethinking of what's fundamentally true about our world, but then also to appreciate that those are really exceptions, and that historically there are just lots of instances of pretty direct cumulative change and progress in science. And then of course there are interesting philosophical questions to ask about those periods of radical theory change, even if they are really unusual.
Is there any way to talk about ways in which scientific progress has occurred even with that kind of radical change? These are really interesting philosophical questions. What we do in the book is leave those open, but our focus, for the broad population of undergraduate students we were imagining as we wrote the book, is to emphasize the ways in which scientific knowledge is trustworthy, even if it's still possible that in the future we might have this kind of radical scientific change. Because I think that's an argument that can be made: that we can know that scientific knowledge as we have it today is trustworthy, even if this possibility of radical change is raised.
Ricardo Lopes: Mhm. So there's also this very common idea, among some science enthusiasts who are probably not very philosophically sophisticated, that science is an institution that is sort of impermeable to outside influences like, for example, the social and historical context. Is that really the case?
Angela Potochnik: Absolutely not. I do have strong opinions on this one. Scientists are people. Scientists bring to their work, like the rest of us, background ideas about how the world is, things that they're interested in, things that they're not interested in, and all of that provides a way for scientific projects to reflect the going concerns at a certain time in history, in a particular community, et cetera. You also get something I haven't talked about that much yet in this conversation: with science, and important to scientific methods as well, you get a robust community with norms around how exchanges happen, ways in which you need to be open in your work to external critique from other scientists, and so on. So even as scientists' values, I think, influence the kinds of things they work on and the kinds of techniques they use, the role those values play and how those values influence the science is also subject to critique by one's peers, in a way that to some extent weeds out at least some kinds of influences that those values can have.
Ricardo Lopes: Mhm. So tell us more about that bit you just mentioned. How important is it in science for us to have, let's say, a cognitively diverse population of scientists to critique each other's work?
Angela Potochnik: Yeah, very important, and this is an idea that has gained broader traction over the last several decades, not just in philosophy of science, but among scientists and in broader discourse about science as well. Having a variety of scientists with different backgrounds and different antecedent, prior beliefs about the world is incredibly useful in making sure that the kinds of questions and challenges that should be raised have been raised, because then we have lots of different perspectives coming into an investigation. And then one of the ways this can play a role in science, which I've emphasized in some of my other work: one of the starting points I gave for where values and personality traits and social identities can influence science is just in what you want to study. And that sounds so boring and obvious, right? I'm interested in biology for this reason, or I want to study cancer because some of my family has been affected by cancer, et cetera. These kinds of mundane ways in which scientists' attention is directed to one place or another, I think, actually influence, in really subtle ways, the features of the research that's carried out, and so the kinds of knowledge we have about the world. Maybe this is a simple example, back to the cancer example: if we have scientists motivated to study lots of different kinds of cancer, then we will generate, as a scientific and biomedical establishment, lots of knowledge about different types of cancer. Versus if we have scientists who are mainly focused on the most prevalent forms of cancer, then we'll miss out on a lot of that.
So even if this is a simple, straightforward way in which values can influence science, I think it ends up having implications for the type of scientific knowledge that is amassed, in a way that ends up being really important.
Ricardo Lopes: Mhm. Is exclusion and marginalization based on social categories like race, ethnicity, nationality, gender, and so on still a thing in science?
Angela Potochnik: Alas, it seems like it is, yeah. Of course, a lot of science, and a lot of funding for science, happens in some nations rather than others, so it's easier to become a scientist and participate in science if you have certain citizenships in the world rather than others. And there are still disparities in the social identities of the folks who are participating in science. There are still gaps in citation rates, and, this famously got a lot of attention in the last few years, in grant success for scientists with different identities. I'm sure it's very complex as to why that's so, but yes, this is a continuing challenge in science.
Ricardo Lopes: And how should we approach that issue? I mean, do you think that we would need more DEI programs, or something like that?
Angela Potochnik: Yeah, to some extent this is beyond my job; this is a question for someone who does science policy. The part that I'm comfortable thinking about is the ways in which social identities can influence science, or let me say more broadly, the ways in which having diverse social identities in science is valuable for the scientific enterprise: it makes science better at gaining knowledge, makes its knowledge more trustworthy, and improves it across other dimensions as well. How we get there, I think, is a thorny set of questions, and it will certainly depend on particular political contexts as well as facts on the ground about science.
Ricardo Lopes: OK, so I have one last topic that I would like to ask you about. Earlier, you mentioned how science is not really value-free and the values of scientists play a role in how they produce scientific knowledge. Can science really ever be value-free?
Angela Potochnik: Well, I think it's important to follow up with a question back to you: what do you mean by "can it really be value-free"?
Ricardo Lopes: I mean, ideally, I guess the values brought to the table by scientific practitioners would not play a role at all in their production of scientific knowledge.
Angela Potochnik: Yeah, thank you. So in that sense, I think no, because of what I said before. I think the kind of scientific knowledge that we amass will always be influenced by our values, and especially the values of the scientists investigating our world. Now, it is the case that if we have scientists with lots of different values, who challenge each other's values and look for phenomena to study that other scientists haven't identified, et cetera, that to some extent mitigates the downside, right? Then we have lots of different people looking for different kinds of scientific knowledge, or doing different kinds of scientific work to develop different kinds of knowledge. But that project will always, I think, essentially be based in what scientists as individuals, and then scientists as a collective body, are prioritizing, in a way that will find its way into what kinds of knowledge we amass.
Ricardo Lopes: Mhm. So one last question then, if science cannot be value-free, is that problematic in any way?
Angela Potochnik: I don't think so. Again, in other work that I've done, I've emphasized that to make sense of the features of science, we should see it as a tool that humans have developed to serve our purposes. And that's a way in which, ultimately, when we zoom really far out, science is fundamentally a set of methods, a community, and a set of practices for gaining knowledge about the world that are linked to, embedded in, the questions we humans, and in particular the humans who happen to be scientists, have about the world. So I just think that's inherent to what science is. It is ultimately perspectival. It happens from our distinctively human perspectives, and it supports our distinctively human aims.
Ricardo Lopes: Mhm. Great. So, the book, again, is Recipes for Science: An Introduction to Scientific Methods and Reasoning, and of course, I'm leaving a link to it in the description of the interview. And Dr. Potochnik, just before we go, apart from the book, would you like to tell people where they can find you and your work on the internet?
Angela Potochnik: Sure. So I do have a website. It's just my full name, Angela Potochnik, without a space, dot com, and there you can get an overview of my articles as well as the other books that I've written, and some online presence, stuff like this interview. And besides Recipes for Science, which I've co-authored, I have a book, Idealization and the Aims of Science, that was published in 2017, and a more recent short book, Science and the Public, which came out about a year ago. I don't know, that's a start; it's a couple of things I can say.
Ricardo Lopes: Yeah, great. I will be leaving some links to that in the description of the interview as well, and thank you so much for taking the time to come on the show. It's been a very informative conversation.
Angela Potochnik: Thank you, Ricardo. Yeah, this has been fun. I appreciate it.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Nights Learning and Development done differently, check their website at Nights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Perergo Larsson, Jerry Mullerns, Frederick Sundo, Bernard Seyche Olaf, Alex Adam Castle, Matthew Whitting Barno, Wolf, Tim Hollis, Erika Lenny, John Connors, Philip Fors Connolly. Then the Mari Robert Windegaruyasi Zup Mark Nes calling in Holbrookfield governor Michael Stormir Samuel Andrea, Francis Forti Agnseroro and Hal Herzognun Macha Joan Lays and the Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus France David Sloan Wilson, Asila dearraujoro and Roach Diego Londonorea. Yannick Punteran Rosmani Charlotte blinikol Barbara Adamhn Pavlostaevskynalebaa medicine, Gary Galman Samov Zaledrianei Poltonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fry, Franca Bartolotti Gabrielon Scorteus Slelitsky, Scott Zachary Fish Tim Duffyani Smith John Wieman. Daniel Friedman, William Buckner, Paul Georgianneau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Vozin, David Williams, the Augusta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, bungalow atheists, Larry D. Lee Junior, Old Eringbo. Sterry Michael Bailey, then Sperber, Robert Grassy Zigoren, Jeff McMahon, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Stor, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perkrolis, Kate van Goller, Alexander Aubert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursula Gudinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levine, and Jos Net. 
A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernard Curtis Dixon, Benedic Muller, Thomas Trumbull, Catherine and Patrick Tobin, Gian Carlo Montenegroal Ni Cortiz and Nick Golden, and to my executive producers Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you for all.