RECORDED ON NOVEMBER 15th 2023.
Dr. Nicholas Brown is a Researcher at Linnaeus University, Sweden. He works on developing new research methods in psychology and on applying meta-scientific perspectives on psychology as a science.
In this episode, we start by discussing conceptual and methodological issues with positive psychology and the study of well-being, and go through critiques of Lyubomirsky’s “happiness pie”, and happiness surveys. We also address methodological flaws with studies on the link between genetics and well-being, sadness and color perception, and predicting heart disease from Twitter language. We then discuss where bad science stems from, and talk about incentives, issues with the publishing system, and how bad science goes unchecked. Finally, we talk about the idea of trusting the experts, and rules of thumb to evaluate scientific evidence.
Time Links:
Intro
Conceptual and methodological issues with positive psychology and the study of well-being
Lyubomirsky’s “happiness pie”
Happiness surveys
Genetics and well-being
Sadness and color perception
Twitter language and predicting heart disease
The incentives behind bad science
Issues with the publishing system
How bad science goes unchecked
Is there much that individual people can do?
Trusting the experts
Rules of thumb to evaluate scientific evidence
Follow Dr. Brown’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everybody. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Nicholas Brown. He's a researcher at Linnaeus University in Sweden. He works on developing new research methods in psychology and on applying meta-scientific perspectives to psychology as a science. And today we're going to go through topics like conceptual and methodological issues in positive psychology and the study of well-being. Then we're going to go through a few other psychology-related studies to talk about examples of bad science, and we're also going to talk a little bit about where bad science stems from and how we can deal with it. So, Dr. Brown, welcome to the show. It's a pleasure to have you on.
Nicholas Brown: Thank you for inviting me. It's always interesting to talk about what I'm doing.
Ricardo Lopes: OK. So let's start here with positive psychology and the study of the determinants of well-being. First of all, tell us a little bit about how you got interested in this subject specifically, and then we can get into some of the issues you found there.
Nicholas Brown: Well, until 2009, I was a computer guy; I was a manager in an IT department. And then, in what I like to call a terrible administrative mix-up, I got moved to the HR department. And since my bachelor's in computer science had served me well for the previous 28 years in IT, I thought that maybe I should get some kind of formal training in something to do with people, which I'd never had; I like to think I was a good, pragmatic manager, but I had no theory. And then there was a chance encounter with Professor Richard Wiseman, who is a fairly well-known British psychologist who appears in the media. I asked him whether there was any kind of science that could help with this. Particularly when you work in a large organization, your human resources department sends you on training courses to teach you to be a less bad manager and things like that, and in my new job buying in those services was part of my responsibility, and it seemed to me that it was all a bit hand-wavy. So I asked Professor Wiseman if there was any kind of theory there. Now, he could have mentioned good old-fashioned industrial and organizational psychology, or he could have mentioned management schools, but he mentioned this new field called positive psychology and said they have a master's at the University of East London. And it's quite funny because, like a lot of people, I have, or had, a recurring nightmare that it was three weeks until my school-leaving exams but I had my present knowledge of the subject matter. So I didn't imagine in a million years that I'd be going back to school. But there I was, at the age of 50, doing this master's in something called applied positive psychology. And I had a very naive view of what psychology could teach us about anything; in 2011 I couldn't have named you three living psychologists, for example. So that was a fast introduction to psychology in general. I had absolutely no background in positive psychology, and it was a taught master's program with a little bit of a research project tacked onto the end of it. But that was how I got into it. About halfway through, it was a two-year master's, the opportunity came up to take early retirement. So I left paid employment at the end of June 2012 and decided to see what would turn up in terms of things to interest me. Since then I finished the master's, I spent a year in a psychotherapy school where I learned that I would be a pretty poor psychotherapist, and then I got a chance to be on a PhD program using some of the critical stuff that I was writing about positive psychology and then psychology in general. More recently, my work has gone into bad science, up to and including fraudulent science, in psychology and areas beyond, although the kinds of analyses I do are more suited to social-science-type situations; I also do work on that in some biomedical fields. So the whole spectrum of the science reform movement, from doing things a bit better up to doing less fraud, is roughly where I am today.
But, yeah, I started off critiquing positive psychology somewhat theoretically, although, you know, I don't have the psychological background to heavily critique every single construct in terms of the history of the literature, and also pragmatically and experimentally.
Ricardo Lopes: OK. But tell us a bit about that. What were some of the main conceptual and methodological issues you found in positive psychology and the study of well-being?
Nicholas Brown: Well, what I found coming from the outside was, and maybe this is a criticism of just how psychology works in general, but I know there are people within psychology who talk about this: what are we measuring, and how are we saying that things are better? What is well-being? How do you measure well-being? It seems to me you can get psychologists to agree among themselves occasionally that something is a measure of well-being, but whether or not that actually relates to what ordinary individuals would talk about, I don't know. One of the criticisms I have, for example, is that a lot of positive psychology research will say that an intervention made people happier, and what they will do is count an improvement in scores on the BDI-II, the Beck Depression Inventory, which is kind of the gold-standard test; if you go to a doctor and say, I think I am depressed, they may give you this. The BDI is 21 items and each of them is scored 0 to 3. And the idea is that a normal, healthy person will typically respond zero to almost all of them, because if you actually read the questions, even a level-one answer describes somebody you might say is quite often quite sad, et cetera. A typical score, out of 63 possible, in a normal healthy population is about three or four. So there are about three items out of 21 that a normal person might endorse, and at the first level. In order to qualify for a diagnosis of mild depression, you have to score 14. And if you read the BDI, and you yourself have no experience of mental illness or depression, I think most people would go, God, if I was endorsing 14 of those, I'd feel pretty bad. You have to get to 14 to be mild depression; then there's moderate and then severe, with even higher scores. Now, quite a lot of positive psychology studies will go out and recruit people, and the whole point of positive psychology is that it's the psychology of people who are not depressed and who just want to do a bit better in life, and they will measure their BDI scores and get a mean of 4.2, and then they'll do whatever positive psychology intervention, and the mean will be 3.8. But I don't know what the difference is between 4.2 and 3.8 when, to even be mildly depressed, you have to be at 14. Why are we measuring at this ridiculously low end, where nobody is in any sense depressed, and saying, well, they weren't depressed before and now they are very slightly more not depressed? I don't think that necessarily constitutes a good measure of whether people are experiencing well-being. And something I learned as I went along, particularly on my master's course, but also talking to other people, is that a lot of people who are studying positive psychology are actually quite depressed. It seems to me, and this is an opinion and a judgment, that a lot of people who are attracted to positive psychology are kind of in denial about being depressed and think that if they could just be a bit happier, whatever that would mean, they would no longer be depressed. So then the question is, is depression the opposite of happiness?
And psychologists have been arguing about that for years: are positive emotions qualitatively different from negative emotions, or are they just plus and minus versions of the same thing? Emotions researchers don't agree about that. So it seemed to me that a lot of it was ostensibly aimed at making people who were not depressed happier, but a lot of the people who say, oh, the world would be so much better if people were a bit happier, were really to some extent talking about either themselves or their depressed friends or whatever, because a lot of people get into psychology either to understand their own problems or the problems of the people immediately around them. And then the other side of it was, if it wasn't about that, the problem was it seemed very suburban and middle-class American. It seemed like an ideal vehicle for selling you books for $24.99 and improve-your-well-being courses at $499.99 for 10 episodes, and you can keep the MP3 file for a week, that kind of thing. It was very, very easy to be skeptical about it, or cynical about it, in terms of somebody making quite a lot of money out of it. And I kind of have this naive view of science, that science ought to be kind of free. I'm not against capitalism or industry or anything like that, but a basic scientific finding, if you've discovered a truth about the universe, I don't think that's really something you should be able to patent and license and make money on. And this is a problem throughout psychology: a lot of the clinical measures, I don't know if the BDI is one of them, but for a lot of them you have to pay some psychologist 20 bucks to administer the scale to a patient. And it's like, well, is it a pharmaceutical thing? Have you manufactured anything? Who paid for the research that discovered it? If the NIH funded it and you discovered it and now you're selling the result for 20 bucks, how much of that is going back to the NIH? So I just found that there were all kinds of those sorts of issues, which everybody in psychology seemed to think was completely normal. So maybe I was wrong, but I had the impression that this wasn't how I thought science was done.
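A minimal sketch of the BDI-II arithmetic described above; the score values are the illustrative figures mentioned in the conversation, not data from any particular study:

```python
# BDI-II arithmetic as described above (illustrative numbers only).
n_items, max_per_item = 21, 3
max_score = n_items * max_per_item       # 63 possible points
mild_cutoff = 14                         # minimum score for "mild depression"
healthy_mean = 3.5                       # typical mean in a healthy sample (roughly 3-4)

before, after = 4.2, 3.8                 # the kind of pre/post means quoted above
print(max_score)                         # 63
print(mild_cutoff - before)              # both means sit ~10 points below the cutoff
print(before - after)                    # a 0.4-point shift at the floor of the scale
```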
Ricardo Lopes: Mhm. And I guess we'll come back to some of these issues later on, toward the end of our interview, when we talk about bad science and what motivates it, et cetera. But I would like to ask you now about a piece of work that you yourself critiqued, coming from, I hope I'm pronouncing the name correctly, Lyubomirsky et al.: their "happiness pie". What were they really working on there, what was this happiness pie really about, and what would you say were the issues with it?
Nicholas Brown: Well, the way I read it now is that in the early days of the positive psychology movement, which was kind of launched with a bang by a bunch of psychologists saying, we're mad as hell and we're not gonna take it anymore, we want people to be happier, we're not just going to be about depression, it was a paper that positioned the idea that people have a substantial amount of control over their levels of happiness based on what they choose to do. So it took what we know about the extent to which your trait happiness is genetically determined, and when we say genetically, that's not just genes but also your interaction with the environment, and they said that's about 50%. Now, it's probably a bit higher than 50%, but that wasn't really the big issue. Then they said your life circumstances only account for about 10% of your happiness. They added the 50 and the 10 to get 60, subtracted that 60% from 100%, and said, therefore, 40% of your happiness is something you can change, you can make a difference to your happiness. And this is conceptually not very good. First of all, these are all population-level things; they don't tell us anything about individuals. But by far the biggest problem was that it is absurd to say that only 10% of the variance in happiness is explained by life circumstances. If you look at the countries that are the happiest in the world on that one-item how-happy-are-you survey that comes out every year, Denmark scores 7.9 and Syria scores 2.4. What does that mean? Is one just the other multiplied by three? Let's naively say that people in Denmark are three times happier than people in Syria. Is that because the people in Syria are not listening to the right kind of music that they really like, or they just need some kind of lifestyle makeover? Or is there maybe something else about the circumstances of living in Syria that makes people less happy? And you could look at this, for example, with Greece. I don't know the numbers off the top of my head, but basically between 2005 and 2014, the whole Greek financial crisis, the Greek scores went from about 5.9 or something to about 4.2. So Greeks became, what, 30 or 40% less happy. Now, was that because they stopped going to the gym, or did something else happen in Greece? So it turns out that circumstances have an enormous effect. And so I went and dug into what was the source for this claim, and it turned out to be some studies of Americans. First of all, everybody in the survey was American, so the circumstances they were looking at could only have been circumstances that vary among Americans. There was nothing in there about, have you got clean drinking water, do you live in a democracy, because everyone had that in common. But secondly, if you actually looked at what they called life circumstances, there were just half a dozen demographic variables. What the studies they were quoting examined was not a full investigation of people's life circumstances.
So basically, it was a hand-waving reference to research that didn't show what they said it showed. And the net result was, they said only 10% is due to circumstances, therefore the other 40% is available for you as an individual to alter. In addition, while the paper itself had one or two hedges about this, and not enough, and it certainly didn't combine the variance in an appropriate way, it nonetheless kind of acknowledged these limits. Lyubomirsky then wrote two books, particularly the first one, The How of Happiness, which had a picture of this happiness pie. The pie was a pie chart: 50% genetic, 10% circumstances, 40% you can improve your happiness. And she actually wrote things like, that 40% is yours to guide. You, the reader, you, the individual reader. Forget the fact that their study, even had it been done correctly, would have been talking about a decomposition of population-level variance. In the popular book, which is signed off on by Dr. Lyubomirsky and nobody else, it says this is yours to guide, and that is simply deceptive, because even if the paper were absolutely true, and this is another problem we have with popular science, where we publish one study and that then becomes the total truth that we can build huge narratives around, in what I call a Gladwellization process, even if that were the case, that isn't even what the paper said. And so that paper is part of, I think we're obliged to reach for words that remind you of the phrase military-industrial complex, so maybe we say the psychological-marketing complex; it's part of the airport-book world in which a top scientist from a top university shows you how you can hack your life with Real Science(TM). And then when you go and look at the real science, it's somewhere between misrepresented, wasn't very good in the first place, or outright fraudulent in a great many of the cases.
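The "pie" arithmetic described above is a simple subtraction of population-level variance shares; a minimal sketch using the round numbers quoted in the conversation:

```python
# The "happiness pie" as presented: shares of population-level variance,
# not properties of any individual (round numbers quoted above).
genetic_share = 0.50           # heritability estimate for trait happiness
circumstances_share = 0.10     # claimed share for life circumstances
activities_share = 1.0 - (genetic_share + circumstances_share)
print(activities_share)        # 0.40 -> the "40% is yours to guide" claim
# The critique above: simply subtracting these shares is not a valid way to
# combine variance components, and the 10% figure came from a handful of US
# demographic variables, not a full survey of life circumstances.
```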
Ricardo Lopes: And something that you referenced there was these sort of global happiness surveys that we get from time to time. How much useful information do you think we can really get from them in terms of actually trying to understand whether people in a particular country, and then comparing across different countries, are really happier or sadder?
Nicholas Brown: It's difficult, because, so, we're talking about a thing called the World Happiness Survey; there are others, but that's the big one. Basically it's done once a year, and they ask people a one-item question on a ladder where zero is the worst life possible and 10 is the absolutely ideal life you could lead: where are you? So it's actually about life satisfaction, but life satisfaction versus happiness, for the purposes of this conversation, I think we can treat as the same. And what happens is, well, there are several narratives that get repeated every year. One of them is that the Scandinavian countries are the happiest in the world, and they usually are: usually Norway, Denmark, Sweden and Finland (Finland is not a Scandinavian country, please don't write in, listeners) will come out as four of the top five, and everyone will say, oh, these countries. Well, you could argue they're kind of one country, they're all fairly homogeneous culturally. Again, please don't write in. If tomorrow Norway, Sweden, Denmark and Finland got together and said, we are going to form the Nordic Union, maybe including Iceland and the Faroe Islands, they would be one country, and all of a sudden three other countries would move into the top 10, because one of the problems with the top 10 countries is that countries are a pretty arbitrary way of dividing up the world. You might find that, if they only interview 50 people in the Marshall Islands and they all say nine, everyone goes, oh my God, the Marshall Islands are the happiest country, let's all rush to the Marshall Islands and find out what they're doing. You see this with education: a few years ago Sweden was top of most of the education league tables and everyone said, oh, we should copy the Swedish model, and the New York Times and all those papers would have a piece saying, here's what daily life looks like in a Swedish school, and compare that with the US. And then one year Finland came top; Finland got like 0.01 ahead of Sweden. Oh my God, what are the Finns doing? And it turns out Finland and Sweden have very different education systems, but now we all have to copy Finland, and various journalists got flights to Lapland around Christmas with their kids and interviewed a few schoolteachers. The Finnish and Swedish models were completely different, but they both got the same treatment. So there is this element that any time you see a top-10 list of countries, it's immediately fairly meaningless, because there's nothing wrong with being the fifth-best country out of the 193 UN member countries; you're probably doing pretty well. So one of the narratives is that Finland is fantastic, therefore we should all embrace social democracy, capitalism is bad. The US and the UK are generally about 13th or 14th, but we decided to look at the top 10, and Australia's usually in there as well, and Australia is as capitalist a country as the US or the UK. So it basically enables people to tell stories. Then you can say, oh, last year the US was 13th, this year it's 16th, that must be due to Trump, or that must be due to Biden, or whatever.
Nah, it's just sampling error. These countries are all going 7.9, 7.8, 7.84; this doesn't mean that something radical happened. Now, yes, if your country goes from 6 to 4, like what happened to Greece, you might go, hm, there's a problem. But we didn't need the World Happiness Survey to tell us that there was a problem in Greece, so it didn't add anything. We would have been amazed if Greeks hadn't got less happy. If Greece had had the same numbers in 2014 as in 2005, we would have gone, they're pretty resilient people, their country's been completely buffeted, unemployment went up to whatever it did, the ATMs weren't working. So that's the first problem: any small change is likely to be noise, and any big change you're going to know about already. The other problem is it doesn't take into account the fact that, culturally, how you answer a question on a scale of 0 to 10 is going to be quite different. If I can be a stereotypical Brit for a moment, at least the view people have of the Brits is that they're not likely to say 10. And again, this may be stereotyping, but I am a Brit, so it's not racist; this could be a thing: oh yeah, they're quite moany. Whereas one might imagine the American: yeah, USA number one, I'm a 10. And you can invent all kinds of stories around that about why people might say 10, and what their culture has to say about being interviewed in the street by somebody you don't know, et cetera. You can imagine that in some non-democratic countries people might be less happy than they say they are, but want to know, OK, who's asking? Oh, you're not from the government? Well, that's what someone from the government would say. I'm a 10, I'm very happy with our comrade leader. So, you know, all of those things. Now, you can kind of compare year on year, I guess, but as I say, the small changes are going to be noise and the big changes you're probably going to predict anyway. So essentially it provides fodder for filling column inches in newspapers. That may not have been the intention, and maybe there is some serious social science that can go on behind it, but by and large it's like university rankings. Even big countries, I don't know whether Portugal does it, but big countries have a thing like, what are the best universities, and what happens is they rank you on about 25 considerations. So even if you are the 84th-best university overall, on some measure that we don't actually know, but, you know, Paul Meehl, the great psychologist, talked about someone called Omniscient Jones who knew the actual value of the parameter, so let's say your university is actually the 84th best out of 100 in the country, it's probably going to be in the top 10 on one of those measures. And so you can put out: University of Wherever, we're in the top 10 on student experience (outdoor sports), or something like that. And if you aren't in the top 10, you will have improved 25 places on one of these measures, and you go, the results are out, we improved 25 places on student experience (indoor sports), congratulations, everybody. And, you know, all have won and all must have prizes.
But guess what? People are still applying to Oxford and Cambridge. Yeah.
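A rough sketch of the sampling-error point made above: resampling about a thousand respondents from one fixed, hypothetical population produces year-on-year wobbles in the national mean with no underlying change (the distribution here is made up):

```python
# Why small year-on-year moves in a country's 0-10 mean are mostly noise:
# resample ~1000 respondents from the same hypothetical population each "year".
import numpy as np

rng = np.random.default_rng(3)
population = np.clip(rng.normal(7.5, 2.0, 1_000_000), 0, 10)  # made-up country

yearly_means = [rng.choice(population, size=1000).mean() for _ in range(5)]
print([round(m, 2) for m in yearly_means])  # shifts of ~0.1 with no real change
```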
Ricardo Lopes: You know, this is also funny because it came to mind that, loosely related to these happiness surveys, you can also market and sell those books where people find some sort of culturally contained, let's say, concept surrounding lifestyle and happiness. Like, for example, in Japan, ikigai, and in Denmark, hygge, or however you pronounce it, and then, oh my God, you just have to do this and you will be happy forever.
Nicholas Brown: And so you spend two years buying all the ikigai stuff, and that doesn't work, so then you go and buy all the hygge shit, and then you buy whatever the Swedish stuff is, and you've got this house full of crap. Yeah, I mean, the point is that a lot of people have outsourced their thinking about their happiness to experts, and while that's a sensible thing to do on things like cancer and blood pressure, I would argue personally, and I never thought about this until I got into psychology, but why do we even have psychiatrists for many conditions that are really just about being human? Obviously, if you have some kind of neurological disorder, and I'm sure things like schizophrenia have some kind of biological basis, but we started off with that and then we kind of ended up medicalizing everything. Which is ironic, because positive psychology likes to think that it's not about the medicalization of everything. Bit of a plug, though it won't sell many copies: I edited a book on what we call critical positive psychology, and there was an excellent chapter in there saying, well, positive psychology claims to reject the medical model, but look what they're doing. In fact, part of the medical model is: I am an expert, you're not, and if you give me some money, I will treat you. And again, I don't have a problem with that from the person who fixes my cardiac arrhythmia. But I think a lot of people know that they don't know very much about happiness; nobody knows very much about it, even the philosophers of happiness over the years acknowledged they didn't really know much about it. But we imagine that there must be experts who, if we can just give them some money, will fix that problem in the same way that they fixed our ingrown toenail. And so there is this huge market for that. And it's interesting: when positive psychology was launched, it came with an attack on the people at the softer end of psychology, the humanistic psychologists, and it blamed them for the fact that all the self-help books are about, you know, The Secret and your inner child and assorted bullshit. But there was an element of jealousy there, in that those books sell and make a lot of money, and most of the leading positive psychologists have one or two New York Times bestselling books based on their research and the research of their fellow positive psychologists. And the research isn't always all that great. At that point, how much of a difference is there? Particularly because, and I think you see this in wider psychology too, there's a certain kind of psychologist you meet when you go to a psychology faculty. There are the ones who are absolutely interested in measuring things, cognitive performance, the determinants of depression. And then there are others who've basically got a theory that they want the world to listen to, and in order for the world to listen to them, OK, they've got to go and dig up some data, but they're not going into the lab every morning in full Richard Feynman mode trying to disprove their theory.
These people tend to be quite charismatic, they tend to do good presentations, and they're clearly on some kind of mission. Now, sometimes that gets attacked as some sort of lefty socialist mission, or occasionally even some right-wing mission on intelligence and things like that. But in general it's not really a political thing; it's much more a sort of bee in their bonnet that they're right, they know that this is the solution, and often they've come to that out of a very sincere desire to confront either their own issues or issues in their family. People who research suicide, for example: a lot of people who research suicide have a sibling or parent who killed themselves, and it's incredibly difficult to critique suicide research. A lot of suicide research is terrible, but it's very hard to go to suicide researchers and say stop doing this without the response being, well, why do you not care about suicide? Really, really hard. So, yeah, it seems to me there are a lot of psychologists who are more about the communication of the ideas than they are about the scientific side of it, treating it as if it were a biomedical science where you collect data and do the analysis and draw the graphs and say, look, this is really happening. And if people are watching this who work in psychology departments, they may be nodding along going, yeah, I have colleagues for whom the research is kind of secondary, not in any sort of fraudulent way, but they're kind of special people, not in a science-nerd kind of a way. They may be very spiritual individuals, et cetera, and that's great, but it's quite difficult: to be a really, really good psychologist you have to be extremely good at maths and extremely good with people, and that's a rare combination of skills.
Ricardo Lopes: So it seems that there's also this sort of element of potentially getting some social and moral backlash if you try to critique some of this research. You mentioned suicide research there, for example. Some of this research, particularly when it comes to very sensitive social topics, is sort of shielded from criticism.
Nicholas Brown: Well, also because the majority of people who research, let's say, a politically sensitive topic typically start out from a political point of view on it. So the majority of people who research police racism start off, I think, from the point of view that there is a lot of police racism. The majority of people who look for race differences in IQ probably start off from the idea that certain groups are less intelligent than others. And most people, yeah, you're not going to go into that topic unless you're somewhat motivated to do it. So if it is a political topic, you're probably going to have an a priori political position. Now, maybe there are people who go along and have epiphanies along the way, but when you come with an entrenched a priori political position, it's very difficult to accept contrary empirical evidence. Michael Shermer had a book called Why People Believe Weird Things, and one of the chapters is why smart people believe weird things. And he said, well, the thing is, smart people are very good at justifying the weird things they already believe; but they've still got to get the weird beliefs from somewhere. If you look at smart people who are very religious, and who are religious in the religion of their parents, we all know that the chance that the religion your parents happened to inculcate in you is the right one is fairly small; certainly anyone with a logical and scientific brain should realize that. And yet you see Orthodox Jewish particle physicists who are discussing the finest points of detail in how the world works, but, I saw one such person being interviewed, she wasn't mixing pans for milk and meat, or something like that, and ditto for any other religion; I'm certainly not singling out Jewishness here. But if you're going to say, as a scientist I am objective, oh, and by the way, I have this corner reserved for religion, which maybe they try to defend objectively, but I think most people who are religious will say, OK, that's outside that box, then you're already, to some extent, justifying why you're not looking at that. I'm not certain I'm making quite good sense here, but people justify themselves into that position, and it's hard to get out of it, and it will take a lot more contradictory evidence before you start believing the opposite of your initial hypothesis, particularly if it was the entire basis of you going into that field in the first place. Very few people make that transition. I'm thinking of Hilda Bastian, a science writer. She started off, I think she may have been anti-vaccine, she was anti-something, it may have been anti some kind of medical procedure, and she went and read the evidence and she realized that the evidence was actually true.
And now she writes about science deniers, well, first of all about bad scientists, which is great, but also from the point of view of someone who started off believing a lot of bad science. And she's an absolute beacon from that point of view, because she actually crossed the line, and I'm sure she would tell you about it. Interview her, do interview her; Hilda Bastian would be a fantastic interview. She would tell you, yeah, I used to believe this, and here's why. Most of us haven't been through that: most scientists have been lifelong scientists, we haven't had to undergo that conversion, we can't tell you what it was like the day we realized that chemtrails couldn't be real. So people who've been through that journey are few and far between, and even more so when you've taken a public position and when you've got a book deal. You might have written two or three or four popular bestselling books on why X is the way to lose weight, become happy, all of that, and even if, just in those moments before you fall asleep, you have a little bit of a doubt, you're probably not going to phone your publisher and say, pulp the lot, I just realized it's garbage.
Ricardo Lopes: Yeah. So, still on the topic of well-being, I would also like to hear a little bit about your critique of another study, in this case by Fredrickson et al., work on the relationship between genomics and well-being. Because at a certain point there, you also mentioned that there might be some sort of genetic basis to this, to some extent. But in the case of that particular study, what did you find were the main conceptual and statistical issues?
Nicholas Brown: Yeah, this was quite the study. Fredrickson presented it at a conference with a picture of a couple of the charts, one of them going one way and one of them going the other way for the two conditions, and the title of the slide was "An objective basis for moral philosophy?" There were two main problems with this study, and I presume you'll put the reference in the notes. The first was that it was about the sixth or seventh study that the last author had produced using a way of analyzing data which is completely invalid. He basically made up his own statistical method, and this method is going to lead to large numbers of false positives. I critiqued another paper where someone had done something similar, and they replied, but we replicated it. It's like, yes, because your method reliably produces large numbers of false positives. So that was the first problem: the method was nonsense. An even bigger problem, however, was in how they coded up the data. They collected data on people using this rather strange American way of recording race: US scientists use the census definition of race from about 1910, and in the 1910 US census there was Black, White, Native American, Asian (which at the time largely meant Japanese immigrants), Hawaiian, and Pacific Islander. Now, clearly those categories are not in any way representative of the human population, but OK, there was Hawaii and there was Guam, so they kind of had to include them. There isn't, for example, a code for Hispanic, and Hispanic is orthogonal to race; you can be Black Hispanic or white Hispanic. And nobody dares touch these codes now, because obviously to try and reconstruct them would be a political nightmare. Anyway, they collected these data, and, for whatever reason, perhaps because they didn't get a significant result, they decided to recode them as white and not white. So instead of white being one and all the others being 2, 3, 4, 5, 6, they just had white as one and everything else as zero. And they must have done this by hand, because they left a four in. So they had all these ones, all these zeros, and a four, and they then used that, without noticing, as a binary variable in a regression. If they had had all ones and zeros, it wouldn't have mattered whether they'd been ones and zeros or fours and twos. But as soon as you have three values for what should be a binary variable in your regression, they actually start behaving like real numbers. So we discovered this and we found that, again, let's pretend the method isn't full of statistical artifacts, let's pretend the method is correct: even with that method, when you corrected this incorrectly coded person, the main effect of the study reversed. Or did it reverse, or did it halve, or both? Anyway, it was drastically altered simply by changing that one data point. So we wrote this up in our rebuttal, and we said, if you download this file from this gene database and you run their code, this is what will happen, and look, here's that four. The authors took that data file, corrected the four, and re-uploaded it. They didn't tell anyone, they didn't put a note on it, they didn't tell us, and they didn't tell the journal. So now anyone who downloads it will go, well, what's this Brown et al. critique, what are they talking about? The results look fine.
So yeah, it was a combination of the method being completely wrong and errors in the data. The whole thing was just nonsense, complete nonsense. But it took many, many hours and days of dredging through; the data file is like 35 megabytes or something, it's a complete genomic analysis plus all the variables that go along with it. So the method was wrong, and they hadn't even implemented the method correctly. We wrote to the editors and said, we really think you ought to retract this paper. We can argue about whether the method is full of artifacts; here are several statisticians who will tell you it is, but OK, that's a scientific argument. But the data had this error in it, and when you correct it, the effect goes away. And the authors tried to sneak a new version of the data past you. The editor refused to consider it. So when he got fired a couple of years later over some MeToo stuff, I was quite pleased. But yeah, that was an interesting example, and an entirely typical example, of how science doesn't get corrected, because in the same way that if you've got a theory and you're committed to it, it's embarrassing to walk it back, it's very embarrassing for a journal editor to admit that they published a garbage paper with data errors in it that completely trashed the results. So they just pretend it didn't happen and hope no one will notice.
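A toy illustration of the coding error described above, using simulated data rather than the study's actual data or model: a single stray value in a supposedly binary 0/1 covariate behaves like a real number in a regression and can noticeably shift the estimate.

```python
# Simulated illustration (not the actual study): one stray "4" in a 0/1 dummy
# variable used in a regression shifts the estimated coefficient.
import numpy as np

rng = np.random.default_rng(42)
n = 100
dummy = rng.integers(0, 2, n).astype(float)   # intended 0/1 coding
y = 0.3 * dummy + rng.normal(0, 1, n)         # made-up outcome

def ols_slope(x, y):
    """Slope from an ordinary least-squares fit of y on x, with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(round(ols_slope(dummy, y), 3))          # estimate with clean 0/1 coding

miscoded = dummy.copy()
miscoded[0] = 4.0                             # the hand-coded "4" left in the file
print(round(ols_slope(miscoded, y), 3))       # same data, shifted estimate
```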
Ricardo Lopes: Mhm. But when it comes to the statistical analyses that were applied in this particular study, since we're talking about genetics here, do you find that perhaps some of the issues would also extend to other genetics research?
Nicholas Brown: You'd have to talk to a genetics researcher. I do know that about 15 years ago there were problems because people were attempting to identify which gene was responsible for this or that phenomenon by basically testing 35,000 genes and seeing which of them were associated with an outcome, without making the necessary statistical corrections, and so they were generating huge numbers of false positives. But you'd have to talk to somebody who knows a lot more than me about genetic research. I had to do quite a lot of learning in quite a hurry for that paper, and I've forgotten most of it now.
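A minimal sketch of the multiple-comparisons problem mentioned above: test tens of thousands of "genes" that truly have no effect, and thousands still come out "significant" at p < 0.05 unless you correct for the number of tests. The data are simulated, not from any genetic study.

```python
# Pure-noise gene study: no real effects, yet ~5% of tests come out "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_group = 35_000, 20
group_a = rng.normal(size=(n_genes, n_per_group))   # no real group difference
group_b = rng.normal(size=(n_genes, n_per_group))

_, p = stats.ttest_ind(group_a, group_b, axis=1)    # one t-test per "gene"
print(int((p < 0.05).sum()))    # roughly 0.05 * 35,000 ~ 1,750 false positives
print(0.05 / n_genes)           # a Bonferroni-corrected threshold instead
```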
Ricardo Lopes: OK. So let's move on from happiness and well-being here, because I would also like to get into another paper, this one from 2015 by Thorstenson, Pazda, and Elliot. This is very interesting and funny, in a way. They offered evidence from two experiments that perception of colors, in this case on the blue-yellow axis, was impaired if the participants had watched a sad movie clip.
Nicholas Brown: If you're feeling blue, you can't see blue, or something.
Ricardo Lopes: Yeah. And this was compared to participants who watched clips designed to induce a happy or neutral mood. So, what were the main issues, methodologically speaking, here?
Nicholas Brown: Well, I presume this was done with undergraduates. I've always been surprised that undergraduates who have been watching four hours of TV per day since they were four years old, so they've probably watched 20,000 hours of television, will, psychologists apparently believe, all of a sudden be in some kind of happy or sad mood if you show them a mildly sad or mildly funny clip for about six minutes. But apparently we have to accept that. The bigger problem in this study is that the results were fabricated, so there's not really a lot of point in going any further. The editor of the journal at the time is a terrifically nice chap and he didn't want to get into too much of a confrontation with the authors, so a suitable fig leaf was arranged in terms of the reason for the retraction, but it was abundantly obvious, if you looked at the data set, that these numbers had been fabricated. And obviously you can get any result you like if you just make the numbers up. So yeah, that was nice and simple. That was probably the first case I worked on where I actually had data in front of me and I went, yeah, this is obviously fraudulent.
Ricardo Lopes: Yeah. And so, from the same year actually, and we still talk about this sort of thing nowadays with Twitter and other kinds of social media, there was also this study on how, supposedly through Twitter language, the authors could reliably predict heart disease. The name here is a bit complicated: Eichstaedt et al. So what are the criticisms here? And by the way, since I'm mentioning the fact that we still talk a lot about how much reliable information we can get from patterns of language and other sorts of activity on social media about this or that kind of issue, do you think these criticisms would apply to other research using Twitter or other social media?
Nicholas Brown: No, I mean, everything's far too noisy. They just ran enough models so that eventually they managed to model some noise down to a p-value of 0.049, and if you get under 0.05, someone will publish it. There were all sorts of things wrong with that paper. At one point they were using some reference data, but it turned out that was from Facebook, not from Twitter, and the Facebook and Twitter populations are different. They showed a map, you know, this area looks like that area, and it didn't really; there were hundreds of differences, but it was kind of visually impressive. And, as I was saying earlier, a couple of the people on that paper have this idea that you can, with machine learning or with big linear models, discern trends in extremely noisy data, and the answer is you can't. They didn't realize that their data had been censored somewhat by Twitter; they thought they were getting pure data, but Twitter had taken out a lot of stuff. The claim of the study was not that angry people get heart disease. The claim of the study was that US counties where there was more anger expressed on Twitter had higher rates of heart disease, which would typically be among non-Twitter users. The data were from about 2011 and 2012, when there weren't that many people on Twitter, and the kind of people who get atherosclerotic heart disease tend not to be on Twitter anyway. But you had this idea of an angry county full of, I don't know, rednecks, whatever. And it turned out Twitter was censoring the data, so there was all kinds of anger-related material that wasn't given to them, because Twitter bowdlerized the data; they took some of the rude words out. Then I showed that if, instead of looking for anger, you looked for upbeat things, that also predicted, I can't remember whether it predicted heart disease or cancer. But basically, if you've got enough variables, if anger is predicting that particular form of heart disease, well, how many other forms of disease did they look at, and which ones did they choose to publish? So it was just flashy nonsense; it didn't prove anything at all. And the reviewers of that paper could not possibly have understood it. I have a degree in computer science and it took me two and a half days to download all the stuff that I needed to get the model to run. None of the reviewers ran any of those models to replicate them, so they would have just gone, oh, there's this big complex computer model, it must be right. Well, the model could be wrong, and I have no way of knowing that. This is a more general problem for science: a lot of the time, even if you trust that the people collected the data, you're also trusting that their 12,000 lines of code don't have any bugs in them. Your viewers who know a bit about software are maybe having a bit of a smile at that. And so, you know, we ran this model and look, these numbers came out, can we have a publication now, please?
And, as I say, I demonstrated, for example, that if you cut the United States in half, the effect was only present in the south of the United States and not the north, or vice versa. Well, there's no theoretical reason why angry people in a county north of Missouri would cause more heart disease in that county but not in the south. And then I found two counties in Indiana that were right next to each other, and one of them was 10 times more angry but had half the heart disease. So there's nothing useful there. The biggest problem with that article was that it was claiming that somehow health authorities were going to be able to use this. But what's the best predictor of this year's mortality from heart disease? The answer is last year's mortality from heart disease, especially since the determination of mortality from heart disease is what's written on the death certificate, and it turns out there are enormous differences in medical diagnosing practice based on local cultural factors, sometimes including whether one particular doctor had a big influence. You know, if you're in an area with a big city hospital: oh, for the last 50 years, ever since Professor So-and-so, we try not to say that it's atherosclerotic heart disease because we look for these other, more subtle factors, or vice versa. So it's all of these extremely noisy measures being thrown into a regression, and eventually something's going to pop out one side or the other. But it's not useful. It makes a press-friendly story, though. The press likes to say, oh wow, pretty soon we'll be able to see whether you're likely to get heart disease based on what's being tweeted in your neighborhood. Oh wow, gee whiz. Sorry, but predicting heart disease is going to continue to be difficult, and the best basis we have for it is what happened last year and how much pizza you eat. And a lot of this is about cute, so cute stuff gets published: the thing about, oh, feeling blue makes you see blue, or rather not, or something. All of these articles get coverage in the Huffington Post, and even in the Daily Mail, and even in the Times sometimes, because we've got to generate content. And so psychologists, or scientists, and university press offices, although my colleague Chris Chambers had a look at that and determined that the scientists aren't always completely innocent, it's not just, oh, the press office made me do it, the press offices are producing these stories and feeding them to their friendly journalists, and then the scientists can go, hey, the University of Wherever is in the news again with our world-leading research. So it's all part of that kind of mutually reinforcing world of entertainment, water-cooler discussion, clicks. It just feeds that, but it's all nonsense, almost all nonsense, so we can't tell which bits aren't.
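A toy version of the "run enough models until something reaches p < .05" problem described above: correlate one noisy predictor with many unrelated outcomes and report only the best-looking result. The data are simulated and have nothing to do with the actual Twitter or mortality datasets.

```python
# Specification search on pure noise: with enough outcomes, the smallest
# p-value will often dip below 0.05 even though nothing is really there.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_counties, n_outcomes = 1200, 40
anger_score = rng.normal(size=n_counties)             # made-up predictor
outcomes = rng.normal(size=(n_outcomes, n_counties))  # made-up disease rates

p_values = []
for i in range(n_outcomes):
    r, p = stats.pearsonr(anger_score, outcomes[i])   # one correlation per outcome
    p_values.append(p)

print(round(min(p_values), 4))          # the "finding" a selective report would show
print(sum(p < 0.05 for p in p_values))  # how many hits noise alone produces
```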
Ricardo Lopes: OK. So let's get a little bit more into what might be behind the bad science here. I mean, we could have gone through some more studies that you yourself debunked or found flaws with, but let's get into some more general questions and issues. When it comes to having a better understanding of where bad science stems from and the factors behind it, do you think it is better to try to approach it at an individual level? I mean, is it mostly about individuals with certain personality or psychological traits having a tendency to produce fraudulent data and studies, or is it more a systemic issue? And if it's a systemic issue, then we might get into some of the main factors here.
Nicholas Brown: I mean, I think mostly it's an interaction of the two, but the system itself, the scientific production system, exists in an economic context. And the economists, most of whom aren't making their data up, can tell you: if you describe to an economist from Mars the system in which scientific research takes place, they will tell you exactly what's going to happen, which is that people are going to respond to incentives. We have this idea of the scientist as the kind of 19th-century gentleman scientist, independently wealthy and dabbling in things, you know, plant species, or the odd still, seeing if you can get chlorine gas out of it or whatever, very much the noble cause of advancing human knowledge. But first and foremost, scientists are employees. They are working in an economic system, they need to go home at the end of the day and put food on the table. They worry about their career, they worry about their visas if they're from another country. They also have normal human drives and desires, such as prestige, bettering themselves, making a lot of money in some cases, and the way science is set up makes it relatively easy, if you are prepared to be a bad actor, to do that. Now, I don't know whether there are more sociopaths or more generally bad people in science than in any other domain, like banking. I read someone the other day on Twitter, and I had some sympathy with them: they were trying to fill in their expenses from their university and they had to jump through hoops to get $30 reimbursed. And they said, it always seems to be the university administrators themselves who've been creaming tens of thousands of dollars a year off the top when the scandals come to light; it isn't the researcher who goes to two conferences a year over-claiming $40 on a taxi ride that's going to be what breaks the budget. But people of all kinds and all classes and all backgrounds are exposed to temptations to cheat, whether professionally, financially, et cetera, and we shouldn't expect scientists to be any different. I give this analogy: when it comes to the end of the month, or the fortnight in the US, you get a payslip from your university saying, here's your gross pay, your net pay after tax, and we've sent this money to your bank account. What they don't do is say, OK, your net pay after tax is 3,500, just go along to room 47 and take 35 €100 bills off the top of the pile. They don't do that. Why not? They don't trust you. And yet we allow scientists, essentially without any external supervision, certainly without any external supervision of the kind that could say, stop, you can't do that, to mark their own homework and decide themselves what numbers they're going to present. And one number will get you a book deal and the other number won't. That's quite a temptation. You have to be more than averagely honest to turn down the prospect of a quarter-million-dollar book deal based on publishing or not publishing a result.
So yeah, at the top — I mean, not many scientists do this, but at the top — the rewards are very high and therefore the temptations are there. But even at the middling level, we see this particularly in China, where you've got to have this number of publications to continue, though it happens in the West too; it just seems to be more widespread there. And so people will pay somebody to write a paper for them, and then they can say, yes, I published this paper, and no one's ever going to come along and give them a spontaneous viva, no one's going to say, so tell me about this — so this variable moderates this? — and they're going to sit there; they didn't write it, they didn't understand it. So yeah, particularly in China it's got to the point where large numbers of people are just having papers written for them. But all over the world you've got all of these PhD candidates and postdocs desperate to keep their job, keep their visa, et cetera, who will do what it takes. And particularly if the boss of their lab — who is pretty much God, there's no other appeal — tells you to come up with data that show the right results, well, you're going to give them that data or you have to leave. And a lot of people do leave; they walk away from science because they're too disgusted by what they were being asked to do. But most of them don't blow the whistle on the way out, because the bar for doing so is very high: the evidentiary bar for proving that somebody committed fraud is very high. And now, as we've seen in a recent case, we've even got somebody credibly accused of fraud then claiming that it's malicious libel to point out that none of these numbers add up. So you're very unlikely to get caught committing scientific misconduct — and historically, you've certainly been very unlikely to get caught. And so if you're in a situation where you have a choice between two options, A and B, and one of them will either make you rich or at least ensure your continuing employment and visa, and the other won't, and no one will ever find out that you cheated to get to that result — it doesn't surprise me that a lot of people do. That doesn't surprise me at all. It would be really weird if we discovered that no scientist ever cheated; we would want to bottle whatever it is they've got, because we could use that in every other aspect of human endeavor — politicians, et cetera. But it's just not the case. Science is set up — and the popular view of scientists is — that these are rather aloof individuals but ones we can trust. Trust me, it's the science; it's not me saying it, it's the science that's saying it; I am merely the kind of priest through whom the science is channeled to you, so you have to believe it. And yeah, all sorts of things can go wrong there, as we know from almost any other aspect of our society. If people came along and said, trust me on this, the reaction would be: yeah, right.
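To make the "none of these numbers add up" point concrete: one kind of arithmetic consistency check that forensic meta-scientists run — not named in the conversation, but in the spirit of Brown's GRIM test — asks whether a reported mean is even arithmetically possible given the sample size, when the underlying responses are whole numbers. A minimal Python sketch with hypothetical numbers:

```python
def grim_consistent(reported_mean, n, decimals=2):
    """Return True if some whole-number total of n integer responses,
    divided by n and rounded to `decimals`, equals the reported mean.
    A simplified sketch of a GRIM-style consistency check."""
    target = round(reported_mean, decimals)
    approx_total = reported_mean * n
    # Only the integer totals closest to n * mean can possibly round to the mean.
    for total in range(int(approx_total) - 1, int(approx_total) + 2):
        if total >= 0 and round(total / n, decimals) == target:
            return True
    return False

# Hypothetical example: a mean of 5.19 reported for 28 integer-valued answers
# is impossible (145/28 = 5.18, 146/28 = 5.21), so the number "doesn't add up".
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True
```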
Ricardo Lopes: Yeah. And we also have to get a little bit more into that idea of trusting the experts, because, knowing what we know about how careful we should sometimes be about trusting some published science, perhaps we have to talk a little bit about how we should really communicate that idea to the general public. But before that, let me just ask you a little more about the publishing system in science. Do you think there might be some issues with it that also play a role in how and why, apparently, there are so many bad studies and papers published?
Nicholas Brown: Yes. I mean, the system is clearly nuts. In the early 21st century, you would never organize the system that way. What we have is the result of 300 years — but certainly of the last 50 or 60 years — of repeated optimization of a process that suits the publishers and suits the individual scientists; it just doesn't suit the interests of science. And there is a sort of growing realization that the people who are successful are successful because they published a lot, and therefore they're not going to call the publishing system into question — it did OK by them. The people who are criticizing it are the people who haven't yet published a lot, and of course the people at the top would say, well, you're just jealous, you just haven't published a lot; once you understand the system you'll see that it's good for you. And of course the people who are selected to be at the top are the people who published a lot, so it's self-reinforcing. It actually means you can't tell one way or the other: both sides are biased, or have the potential to be biased. I personally back the people who say that the system is wrong, but we should acknowledge that the people who are saying the system is wrong are, by and large, not the people who've been through the system; and that doesn't prove that the people who say it's good because they've got to the top are wrong — but we should consider the possibility that they're there because they were successful at publishing. So you would need a relatively neutral external party to say, is it a question of survivorship bias, or is there something deeply wrong with the system? I think it's fairly easy to make out the case that there's something fairly wrong with the system. But it's what we all grew up with, and there's nobody who remembers the time before it was about journals and impact factors and the other stuff that you shouldn't be doing. So the administrative side of academia has piggybacked onto completely contingent features of the publishing system, such as the H-index and citation counts — which somebody came up with one day; someone wrote a paper saying, how about we count someone's H-index, and all of a sudden the H-index is all that counts as a measure of your academic productivity. But we didn't have a commission, we didn't have a 15-year scientific commission to say what's the best way to evaluate research; we were in a hurry, we needed stuff doing now, this was handy, so we did that, and now everyone uses it. And the same with the journal impact factor. So journals now spend a great deal of their time trying to manipulate their impact factor. For example, what's very common — some publishers do it more than others, but it's common — is that an article will be published online first, and then it won't appear in print, if there is a print edition; it won't get proper page numbers and a volume number for 18 months. And the reason for that is that the impact factor is measured for the five years after official publication, so they want to hit the ground running. And of course it's very difficult for a paper to get cited in the first few months.
Um, no, I just — Zoom has just seen me put my hand up, yeah.
Ricardo Lopes: That's AI for you, I guess.
Nicholas Brown: But yeah — what was I saying? Yes: obviously, when your paper appears, it's not going to be cited the next month, because anything that might cite it has to be written and has to go through the pipeline. And so a lot of journals now are not putting an official publication date and page numbers on an article for 18 months, so that the moment the clock starts, they hit the ground running and they get impact-factor juice out of that publication from the start. And there are various other ways in which the publishers game the impact factor, and we all know it's being gamed, but nobody does anything about it, partly because there isn't really anyone to do anything about it. Science is not like the Catholic Church, with a guy at the top who can say, as of tomorrow, this happens. Science is much more like Sunni Islam: very, very distributed, no great hierarchy of the priesthood — I apologize to any Sunni Muslims watching this, but the impression I have is that Sunni Islam in particular is extremely decentralized. And so there isn't even anybody who could decide to do this. When you see people on Twitter saying, well, we should do this, it's like, well, you can't, because the only people who could decide to do that are the publishers, and they will decide what's in the interests of their shareholders, by and large — those that are run for profit. And the tragic thing is that even publishers that aren't run for profit tend to behave as if they are. The American Psychological Association, for example, is really one of the worst offenders in many publishing practices, even though in theory it only works for the benefit of its members.
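As a rough illustration of the delayed-publication trick described above, here is a minimal sketch with hypothetical citation counts. It ignores the real impact-factor formula (which averages citations across a journal's articles) and only shows how a counting window that opens 18 months after a paper goes online captures more citations than one that opens at the online date:

```python
# Hypothetical monthly citation counts for one paper after it appears online:
# slow for the first year, then a steady trickle for the following four years.
monthly_citations = [0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5] + [5] * 48

def citations_in_window(offset_months, window_months=24):
    """Citations falling in a counting window that opens `offset_months`
    after online publication (offset 0 = official date equals online date)."""
    return sum(monthly_citations[offset_months:offset_months + window_months])

print(citations_in_window(0))    # window opens at online publication -> 85
print(citations_in_window(18))   # "official" date delayed 18 months  -> 120
```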
Ricardo Lopes: But related to that — and I want to ask this because, particularly for people who are not associated with academia or don't have academia-related occupations, it can feel very hard to understand how this happens — how is it that so much bad science sometimes goes unchecked for so long?
Nicholas Brown: Because, well, first of all, so much gets published. We publish two, three, four orders of magnitude more science than we can consume as scientists. If you work in the psychology department at a large university and you come in in the morning and your colleague says, oh, my paper got accepted, you'll say congratulations — but you won't read it; you haven't got time. So there's this absolute fire hose of publications coming at us and almost nobody will ever read any of them. We're just publishing way too much. So nobody who's likely to have a particularly critical opinion is likely to read your paper. Now, if you're in a very small area then maybe they will, and if it's a small but important area — but then you're probably going to read it through the lens of your professional jealousies of the author anyway. But by and large, it's because we publish too much; we just publish way, way too much. And so, OK, there's this paper, it says that this moderates the impact of this on this. OK, if I'm working in that area I might find it. But quite often you won't cite that paper because you remember reading it; you'll cite it because you're looking for some support for your argument in your literature review, and you'll Google it and read the abstract. One of the things that certainly happens a lot with my papers is that when I read papers that cite what I've written, in a third or more of cases they actually think it says the opposite of what I said, because they haven't read or understood it. They've just gone, oh, we should cite something to say X, and, yeah, I think that Nick Brown guy had a paper on that, and they'll go to Google Scholar, look at the titles, click on something, read the abstract and go, yeah, good enough — Brown and whatever, 2015. And they're either citing the wrong paper, or they haven't read it, or they've missed the word "not" out or something. And so it's just being produced — in French we say à la louche — by the bucketload. When you produce stuff by the bucketload, that's what happens; you can't produce masterpieces on a production line. So really it starts with the fact that we're producing way too much science. But what happened was, in 1945 we'd won World War Two — or the Americans had — we won World War Two with science; science is good; therefore more science is better. And even today it's quite hard for even the most pro-MAGA people to suggest cutting the science budget. But if next year you get President Trump and he decides to cut the science budget by 90%, I'm going to be hard pushed to tell you why he shouldn't, because I think we have 10 or 20 times too many people doing science to be able to do anything useful. We've got enormous numbers of people doing science busywork — I don't know if you know the term; it's what you do as a kindergarten teacher to keep the kids quiet: do some cutting out, do some painting, fill up some time. We've got people doing enormous amounts of science busywork. They're not going to discover anything; they're not using methods that are going to discover anything.
We're sort of claiming that we're training them to be scientists, but they're not making any worthwhile discoveries and they're not going to make any worthwhile discoveries. And again, it's not a nice thing to say to people. I do sometimes have this discussion with my fellow scientists and, to my surprise, they don't push back as much as I expect when I say I think we need to get rid of 90% of scientists. Most of them seem to think they'll be in the remaining 10% — maybe I have very talented friends. But we are doing way, way too much science, and as a result there's lots of garbage. Now, the problem is that if we got rid of 90%, we probably wouldn't get rid of the right 90%, because when you do that kind of exercise, people survive on the basis of favoritism and friendships. But yeah, I think we do have to give some thought to doing less science, heretical though that is.
Ricardo Lopes: And, realistically speaking, what would you say could be some solutions here, particularly when it comes to publishing, to try as much as possible to avoid bad science getting published?
Nicholas Brown: Well, to start with: absolutely every paper should present the full raw data and every computational step that was done to get to the results. People go, oh, but what about human subjects protection — and in 99% of cases in psychology, that's just an excuse. Yes, if you're doing research among political dissidents in some country and their lives are in danger, or if you're dealing with highly vulnerable people, then maybe, then we can talk about it, then we can have specialized journals. But the default ought to be that you show your raw data, as a minimum, to the reviewers. There may be data that you can't publish, but the idea that you have data so sensitive that only you are allowed to see it, and not the peers who are reviewing it — well, what qualifications did you go through to make you able to see those sensitive data? Why are you allowed to know the names of these political dissidents, but not your peers, who promised to keep it quiet? The medical profession manages this. When I go for an X-ray and my doctor wants a second opinion, he sends it through. He doesn't go, oh no, I'm not going to send it through because that other doctor might put your genitals on the internet or something — there are punishments for that. So if I might be your peer reviewer and you won't let me see the data, but you're hiring, and if I join your lab tomorrow, hey, I could just open the file. It's not like this is nuclear secrets; this is simply a question of covering your arse a bit. Yes, some stuff is sensitive, but you can get away with saying, oh, it's sensitive, and everyone has to go, oh, it's sensitive, OK, right — and nobody says, no, it isn't, that's just bullshit — because we don't like going to people who are telling bullshit stories and saying, sorry, that's bullshit. In the same way, when people say, oh, sorry, I don't have the data, I accidentally wiped the hard drive with the only copy of the data on it — well, that isn't true. Nobody accidentally wipes a hard drive, and nobody who is capable of wiping a hard drive is not also fully aware that they need to make backups. So what do you do when somebody makes a claim that is completely unreasonable, and everyone's laughing at them behind their back, but nobody actually stands up and goes, mate, seriously, no, that's bullshit? We go, all right, yes, we have to respect professors here. No. Sorry. The guy is being massively disingenuous; he probably has something to hide; let's look further. But we don't do that, because at that level it's still a gentlemen's club. Science is very much a gentlemen's club, because academia works that way. How do you get a university to investigate a senior professor when they're godfather to each other's kids or something? It's very difficult. There's a reason why, for example, law enforcement people tend not to have many friends outside law enforcement and vice versa: if you have a good friend who is a senior detective, he knows there's a chance one day he may have to bust you. The police know how many people there are out there committing offenses of various kinds, and it's an enormous number. I saw a figure — I wish I had a source for this — about 25 years ago, and it was 10 million people in the UK.
So, population 60 million, and 10 million people have a conviction for something that isn't a motoring offense. If you assume most of those are men, that's about one man in three. So one man in three has been convicted in a court of law — probably of something minor, probably just drunk and disorderly, or urinating in a public place, or whatever it is — nonetheless, one man in three in the UK, and I don't see any obvious reason why it should be vastly less elsewhere, has some kind of record. And so when you're in law enforcement, you know there's a chance: what am I going to do one day if I have suspicions that Ricardo is committing fraud? And so law enforcement tends to be done separately from the rest of society, and those people tend to exist apart from it. It's one reason why people don't always trust police officers — we don't really know them, we don't get to know them. So come back to academia: all of a sudden Professor X is accused of something, and there's a committee consisting of Professors A, B and C to investigate them. It's very awkward for everybody. So there is no real method by which that can be investigated, because there are no police, there are no prosecutors, and the bad people know that. If somebody says, blimey, how did you get that amazing result?, you're not going to say, I made the data up; you're going to come up with some moderately bullshit excuse. And your colleague, who you've had coffee with every morning for the last five years, is not going to go, Nick, that's just bullshit, what's wrong with you? Because we don't do that with our colleagues. We just don't. I'm sure you've come across situations in journalism where people have just made up stories. How do you deal with that when you suspect it? Oh, wow, great interview with the defense minister — wasn't he in Australia at the time? Should I ask about that? I'll just keep quiet, I'll wait for more evidence. And we, as a society — a society of polite, middle-class, educated individuals — are not very good at calling out situations where we go, hold on, is this all just lies? It's not something we're used to dealing with. And so, as a result, the bad-faith people can get away with a lot, because they exploit the fact that nobody's ever going to ask them. In many cases, these are people who have been doing this since they were children; they quickly learned as a child that if you develop the right kind of obfuscation strategies to deflect attention, it will work. I'm sure you know people like this in your own life who've gone through their lives not quite what they seem, but pretty good at hiding it — you get them in every walk of life, and you get them in science.
Ricardo Lopes: But let me ask you a different kind of question, then. Do you think that, as individual readers of the scientific literature — as people who are interested in science — we can really do much to avoid, at least sometimes, giving credence to some bad science? We went through some examples here today, and in at least one of them you mentioned how hard it was to go through the data and the statistical analysis and all of that. Shouldn't we expect that most people, or even all of us, at least now and then, will give credence to some bad science unintentionally?
Nicholas Brown: Yeah. No, you can't — to a first approximation, you can't avoid it. And if that results in staring into the abyss, then welcome to the abyss. You have to trust something at some point, otherwise you end up being a total conspiracy crank. And so, yeah, it gets pretty dark in places, because it's like, well, why should I trust this vaccine, why should I believe that climate change is real? I trust vaccines and I believe climate change is real, but science is not doing itself any favors by tolerating this kind of behavior. One of the things I was involved in, from about two years ago, was somebody who was faking climate-related research. She was working on the effect of increased dissolved carbon dioxide — ocean acidification, which is a well-known phenomenon — and she was making out that the effect it was having on fish was absolutely catastrophic, and she was making all of her data up. Now, that doesn't mean the effect might not be catastrophic, but it hands ammunition on a plate to people who say climate change is a scam, it's all being done by researchers for grant money — because in this case it was. That's kind of embarrassing. I don't think she was typical, and she wasn't doing research into the actual climate science; it was the knock-on effects. But it's certainly true that in most fields of animal biology, for example, it's a lot easier to get your research funded if you can tick the climate change box. So we're not doing ourselves any favors by insisting that everything has to be tied to that. And this person came from a lab which produces a whole load of dubious research. I don't know whether people go in there as honest citizens and come out believing that they're making discoveries by fabricating the data; maybe they get on such a mission, on what is an extremely important topic, that they think it's OK to fake results. Certainly the people who blew the whistle on this individual are mainstream biologists concerned about the impact of climate change, but they also say it's not doing anyone any favors to exaggerate the impact, or to fake data about it. So, in terms of trust in science — another example: I worked with some people on various fake COVID cures, and those are relatively easy to debunk, because most of the trials claimed never took place and the data were fake. But one of the people I was helping with that has discovered that Pfizer still hasn't released one of the chunks of trial data that they should have done. You know, I had my Pfizer jab yesterday, I have reasonable faith — I had COVID and I didn't get as ill as I probably could have done if I hadn't had the jab — but they still haven't released those data. Come on. So there is bad behavior at all levels. I suppose after a while it just becomes a question of the overwhelming consensus, and particularly in reproducible sciences, you look at the consensus.
So if someone produces a bad paper on superconductors, for example — there's always something about superconductors — you'll know within three months if it's real, because somebody will go and try to make the same compound, and it'll either work or it won't, and if not it will get dropped very quickly. In psychology, because the concepts are all so malleable, you can always plead that they weren't really testing what they thought they were testing, and eventually the line of research will collapse — but by then you'll have retired, you'll have got the best-selling book, 15 years will have gone by and nobody will care. So it's much easier to get away with that kind of thing in psychology. And then in basic biology research, the problem is that there are so many diseases and so many genes that, with a lot of these fake papers written to order, you just pick a gene, pick a disease, and someone will write a story saying the two are linked, with experiments that never took place. It's very formulaic, and no one will ever test it, because the chances are that nobody else is looking at that gene for that disease. Every so often somebody might come along and say, oh, I was interested, and I got misled by this fake paper. But these fake papers are mostly just bulking out the journals and bulking up the university's count of how much prestigious research it's doing. There isn't even a great danger of them individually causing too much of a problem, because nobody's even going to read them. And you get this rather strange situation of what I call write-only papers: they're only ever written and published, nobody ever reads them, but somebody somewhere has an economic interest in producing them. I don't know if you've ever Googled a phrase and found a page on the internet of complete gibberish that's been copied and pasted from 25 different places and maybe translated into Albanian and back to English, and you go, what even is this page doing? There aren't any adverts on there, but somebody somewhere has identified that there's a very small economic benefit to doing that. A lot of what's called scientific publication is like that: very, very marginal economic benefits for somebody, with apparently no impact on anyone else. Who is the victim? If I publish a scientific paper about research that never took place, on a gene-disease link that no one else is interested in investigating, who's the victim? It's difficult. How would you ever prosecute that? On what basis would you try to discipline somebody? And if you only discover it three years later and they've moved on to another university, the university where they were at the time says, well, they've left; the university where they are now says, hey, it wasn't done here. A lot of fraudulent people — if you move every three years, you'll essentially never get formally sanctioned for any form of malpractice, because your current university will never investigate you for stuff done at another place, and your previous place says, hey, they moved on. Imagine if you could commit actual financial fraud by moving on to a different bank every three years.
Ricardo Lopes: Well, but related to that, when it comes to the idea that we communicate to people — and I still think it's a good idea, of course — that they should trust the experts: we really do need to trust the experts, because realistically speaking we cannot evaluate data from all sorts of places by ourselves; that's impossible. But since there's still some bad science, some fraudulent science, sometimes I'm afraid that if we just say "trust the experts" without adding any warnings or nuance, then when people learn that a piece of research got published and five years later was discovered to be pure fraud, they might get cynical and think, OK, if it works like that, then if it confirms my pre-existing beliefs, I will trust the experts; if it doesn't, to hell with it. So how should we handle that?
Nicholas Brown: Well, there's not much you can do about what people out there want to believe. I think the issue is when it starts affecting public policy. You see the kind of QAnon crazies finding themselves actually getting elected to Congress, et cetera, and there are one or two rabidly anti-vax members of parliament in the UK. But by and large, I think you have to hope for the wisdom of the majority. I think also the majority of scientific fraud is in areas that people don't really care about. It's not taking place in the most basic stuff that can be replicated easily and which turns into technology, whether that's medical technology or computer technology. That said, I think we are approaching a point where, for example, a lot of drug development is now on drugs that have moderate effects at best, and whether or not those effects are real — how will we discover if they're not? Nobody's going to doubt the effect of penicillin, for example. But if you have a new drug for — I mean, take Alzheimer's, which is in the news at the moment with two or three stories of various forms of either historical or current fraudulent research — how do you know that an Alzheimer's drug is working? And how do you know it's working when the claim is not that Alzheimer's will go away, that your grandmother will be playing chess at national level in three years' time, but that it makes a significant difference to the severity of symptoms? How are you going to test that? These things are all based on samples and averages, and there are always going to be people for whom it doesn't work. We've got used to having medicines with relatively dramatic effects: we bring in a measles vaccine and all of a sudden the number of measles cases drops from up here down to here. It's quite easy to hide poor science if your claim is fairly modest. Now, normally, when claims are fairly modest, they don't then become that important. Obviously you get things in psychology — we're going to reduce your Beck Depression Inventory score from 4.2 to 3.8 — but people aren't rushing out to change public policy on that basis; maybe one or two people go on an expensive training course they didn't need. But if you come along and say, we're going to reduce Alzheimer's symptoms, we're going to delay the onset of dependency by 18 months on average, and people reading about this new drug are thinking, well, my mum or my dad is in early-stage Alzheimer's, if we could have 18 more months with them before the decline — these things are fairly inevitable, but before the decline — how much would you pay for that? Or, in a European context, how much would you demand that your health system pays for that, rather than, say, the shingles vaccine? I have to go for a shingles vaccine tomorrow and it's expensive. And how would we know if it was or wasn't having an effect? It's really quite difficult, because we do that on the basis of observational reports and statistics, and that kind of research is quite easy to falsify.
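A back-of-the-envelope way to see why modest claimed effects are so hard to verify — this calculation is not from the conversation, just the standard two-arm sample-size approximation: the smaller the standardized effect, the more participants a trial needs before "samples and averages" can distinguish it from nothing.

```python
def n_per_arm(effect_size, z_alpha=1.96, z_power=0.84):
    """Approximate participants per arm needed to detect a standardized
    mean difference `effect_size` with ~80% power at p < .05 (two-sided),
    using the usual normal-approximation formula."""
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

for d in (0.8, 0.5, 0.2):
    print(f"effect size {d}: ~{n_per_arm(d):.0f} per arm")
# Roughly 25 per arm for a dramatic effect (d = 0.8), but around 400 per arm
# for a modest one (d = 0.2) -- and that is before drop-outs and noise.
```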
So historically it's not been too much of a problem, because the gee-whiz claims that have turned out to be false — the Cyril Burt things, the IQ psychology stuff — have tended to be fairly fluffy, with one or two exceptions, things like the nudging and the signing-the-form-at-the-top-to-say-you'll-be-honest studies; it hasn't really had a huge impact. But the medical stuff I'm starting to worry about now, because the claims are no longer: you take one of these pills, it costs $3, and the disease goes away — something we would know. They're talking now about malaria vaccines that are 50 or 60% effective and can be made for, I don't know, $20 or $30; if we deploy tens of millions of those, we will see in the figures that there is that much less malaria, and we will know that they work. But some of these other drugs, with modest claimed effects — how are we going to know if they're working or not? It's very difficult, because the whole initial claim is difficult to evaluate: a percentage drop of this, measured with what statistical techniques? And even if the research is being done honestly, people don't always agree about the right way to analyze these things. So yeah, historically it was, I think, simpler, because if you claim you made a superconductor, nobody needs to worry: in three months' time we'll know whether it works or not. But some of the stuff in the medical area — and we know that plenty of drugs are marketed that don't work, and are sometimes worse than useless — was that because of statistical errors in the phase three trials, or was there faking going on? Yeah, it's difficult.
Ricardo Lopes: sometimes you even get unregulated stuff like supplements, for example.
Nicholas Brown: Yes, but at least no one, hopefully, is taking some of those claims seriously. But I was talking this morning with someone on Twitter about this drug that apparently makes you lose weight quite effectively — I forget its name; I think Wegovy is one of the brand names. And what's happening now is people are buying a fake version of it over the internet, because everyone read about it. Let's assume the research is genuine and the drug is great. The problem is the drug's quite hard to get: you have to get it prescribed, and your doctor won't necessarily prescribe it for something minor. And so people are going out onto the internet and buying it — or at least they think they're buying it; they're probably just getting flour and water or something. To some extent that's a problem: you've got a societal problem caused by the research on this drug that arguably isn't the fault of the people who made it, but the press got hold of it — you've got to get this — and whatever the ten-letter name of the drug is, people type it into Google and off they go. That's another problem for society — a sort of criminal offshoot of scientific research — that we also have to look at. So I think there's an even wider problem of how you communicate truth to people, whether it's scientific truth or political truth. This is the kind of thing we ought to be able to ask psychologists about; it's just that psychologists aren't very good at it, as we've seen. If you look at some of the advice from psychologists during COVID, well, it was based on this kind of crappy nudging theory, most of which I think is, at the very least, bad science, and quite a lot of which I suspect is faked. But, oh yes, the nudging — there was a book you could buy at the airport called Nudge.
Ricardo Lopes: You remember that one?
Nicholas Brown: Well, for example, that book: the first edition was full of cute stuff by Brian Wansink, the food research guy who then got fired by Cornell after we showed that he'd been, amongst other things, making up his data. And then the second edition of that book came out, and they just dropped all those references. They didn't put in a foreword saying, oops, the first edition of this book was full of bullshit — that all turned out to be crap, but trust us, the rest is really solid. They didn't say anything at all; they just produced a new edition and he was airbrushed out. These guys make quite a lot of money selling those books, and it was a bit embarrassing. So it's a genuine problem, but the problem is there isn't anyone who can do anything about it. There isn't a pope — there isn't a pope to say, guys, the entire scientific community, stop doing this or there will be consequences. So I think all we can do is have people like yourself holding science to account, asking those awkward questions and talking to people who disagree with me as well; we have to have that level of scrutiny. Nothing would have happened in the Wansink case, for example, if BuzzFeed hadn't got onto it, because it became too embarrassing for Cornell not to do anything about it.
Ricardo Lopes: And actually the Nudge book reminded me of Dan Ariely, who, if I'm not mistaken, just recently published a new book. I haven't even read it, but after all the things that came out since 2021 about the fraudulent data and all of that, I would imagine he's probably still promoting some of his fake ideas as if —
Nicholas Brown: As if they were real, yes — but it's his business model. He can't back away from that. He's committed to that view of the world in which very small, subtle interventions that only he knows about make a difference. And nobody else reproduces these things. There's this whole thing of mindset theory: the claim is that students with a growth mindset achieve better. And it's strange, because the claim is that any school teacher can go into their class and apply this, yet no psychologists have ever been able to replicate it. So apparently it can be replicated freely by the person who invented it and by any school teacher, but not by other psychologists. That doesn't make a whole lot of sense to me, and the naive reader would draw conclusions about that which scientists are too polite to draw. But the people reading the books aren't getting that. They say, oh, there's this interesting research, my colleagues and I at the university tested this with 300 undergraduates, we found that when we did this, they did that — come buy our book. But did it happen? Is that what really happened? Can we really draw that conclusion? Probably not. But I've got a book to sell. And the nudge-type stuff — Freakonomics is another one. A lot of what they write about is studies that either never happened or certainly didn't happen as described. But again, that's a whole industry now — there's that book Think Like a Freak, and it's like, oh, the thing to do is to be counterintuitive, everything is counterintuitive. And the problem is, when you get into everything being counterintuitive, well, then you're into chemtrails and 5G and all that, because, hey, that's counterintuitive too — and why not? Who says that can't be possible? It's very dangerous, this idea that absolutely everything is counterintuitive, because actually most things are true on their face.
Ricardo Lopes: So, just to wrap up our conversation, let me ask you one last question. When it comes to lay people specifically, if they are exposed to a paper, or to a piece of news on mainstream media about some new breakthrough research, do you think there are, or could be, good enough rules of thumb for non-experts to follow? Perhaps there are things they should be wary of, things they should pay specific attention to, where if they see them they should go: OK, I should be careful about this, this is a sort of preposterous claim, let me take a step back and perhaps wait a while to see if some reviews and critiques come out, to see how it turns out and what experts actually say about it?
Nicholas Brown: I'll give a plug here for my friend Stuart Ritchie's book Science Fictions, in which he gives you these steps to go through. But I'm not very optimistic that many people will want to do it, because people are absorbing this on their commute into work, in the 20-minute free newspaper: a new study shows blah, blah, blah; next page, somebody got mugged in your neighborhood last night; next page, the football results. It's just part of the entertainment churn. And then sometimes I'll meet people on a walk or something and they'll say, oh, I read that X causes Y, and it's like, hmm. But sometimes people will say, I don't believe any of it, because they're always telling you that this causes cancer and then they're telling you that this cures cancer — why won't the scientists make up their minds? And I used to say, this isn't the scientists, this is the journalists. But it turns out that a lot of it is the scientists, and I wish the scientists would stop doing this. The scientists will then say, oh, but we didn't say that; we had this study which said that this is associated with this in rats. But they allowed their study in rats to be turned into a news item. And they might even say, well, the university press office obliged us to. It's like, no, you're not obliged; you can choose not to do it. You went along with it because it was the easy thing to do. Then they might say, yeah, but my university ranks people on their public impact — and they may well be right. So, on the other hand, it is quite difficult to critique people for their behavior as scientists. It's easy to critique them when they got a million-dollar book deal off the back of it — understandable why they would do it, but easy to critique. It's less easy to critique them when they have a precarious contract of employment and sincerely believe that going along with a little bit of excess from the press department is the price of still having a job paying them 2,000 or 4,000 a month next year. I think we have to be a little careful about dismissing the incentives, because we see the results of the high-end stuff and the $100,000 consulting gigs, but we also have to acknowledge that there are lots and lots of people who really want to keep doing this line of work and who are prepared to do what it takes to stay in it. It's one of the reasons I think we need to have fewer scientists: once people are in that system, the system owes them an honest opportunity to keep going within it. I think if you have any kind of profession that isn't casual employment, and you ask people to make a commitment to being trained for it — engineer, or lawyer, or accountant — it's incumbent on that profession, that domain of human activity, to offer a reasonable opportunity for people who've made that commitment to have a decent length of career in the field where they chose to give up their twenties getting trained. And at the moment we have a situation where we train all these scientists but there simply isn't enough stable employment for them.
And as a result, we make them jump through hoops and perform little tricks to stay in, which first of all means we select for the people who are best at doing the little tricks. But also, the whole premise was, well, we'll push all these PhDs through the system and we'll cream off the 15 or 20% best ones. You shouldn't be doing that. You should be selecting people much more rigorously for their potential to have a good, long career as scientists. If you go along to law school and you're just not up to it — I don't know what the qualifications are — I'm sure there are people who go to law school and after two months the people around them realize, look, sorry, but you're never going to make it in the law, because of the way your mind works or whatever it is, and hopefully they fail them, perhaps; I don't know. But it seems to me that in science this is completely out of whack: we're bringing in so many people, and then they want to continue in what they're doing. That is, I think, a legitimate social contract with people. Now, who is running that system? I don't know. But in the worlds of — I don't know about journalism, but in the worlds of engineering, law and things like that, when we bring you into the system, the deal is that if you pass the exams with a reasonable degree of competency, there will very likely be a job for you somewhere. And that simply isn't the case in science. So, as I say, everything I've told you in the last two hours you could have heard from a dozen other people, apart from perhaps my position that we need to train vastly fewer scientists than we currently do — not because the world might not be a better place with all those scientists, but because we simply don't have room for them. We don't have the capacity to absorb either the people or anything they might output, and we just have to accept that, in the same way that we don't put 20,000 kids a year through professional football academies. Whatever your favorite football team is, it has an academy, and they know that 90% of the 14-year-olds won't make it and 70% of the 16-year-olds won't make it. They could take in three times as many kids: let's say at the moment they take in the ones who score nine out of ten; they could take in all the ones who currently score eight out of ten, and occasionally one or two of those would blossom and go on to become professionals. But there's the trail of kids they would leave behind, and there's the cost of doing it. And there's also, I think, a certain moral obligation within that field to say, look, the chances that this person will make it as a professional are so low that we're going to take the risk of missing out on the next Lionel Messi, because we just have to. At the moment, it's as if your local football team took in 20 or 30 times more people than they need into the academy. The 14-year-olds go, hey, I'm at the FC Whatever academy, that's fantastic, and you go, yeah — you do know it's very unlikely? And I think if that was happening we would consider there was an ethical problem. Maybe there already is; maybe they're already training too many. I don't know.
But we would be talking about that, and I don't think we talk about it in science — I'm not certain why — or outside science. Maybe that's your next investigative mission: why do we train so many scientists? Because we have so many professors, because they were trained as scientists. We've got this huge machine and it has to be kept going, and in our society we associate any machine getting smaller with failure, right? If half the psychology departments in our universities closed, there's not going to be an article saying, actually, this is a good thing; there are going to be interviews with the people who lost their jobs. And my assertion is that we shouldn't have hired a lot of those people in the first place. This doesn't make me too popular — actually, I'm surprised it doesn't make me more unpopular, but it doesn't. No one's yet invited me on their podcast to give a good, full 45-minute exposition of exactly how I think that should work.
Ricardo Lopes: Well, it's food for thought, anyway. I guess let's end on that note. And Doctor Brown, just before we go, would you like to tell people where they can find you and your work on the internet?
Nicholas Brown: So, I'm still on Twitter, and you can put the link in, because if I try to spell it out loud it'll go wrong — you can put my Twitter handle on there. I suppose at some point I will move over to Bluesky like everybody else, and one or other of those will be the best place to contact me. And if not, Ricardo, you can put, for example, the address of my blog in the blurb, and people will be able to find my contact details through that.
Ricardo Lopes: OK, so I'm leaving links to that in the description box of the interview. And Doctor Brown, thank you again for taking the time to come on the show. It's been fun to talk to you.
Nicholas Brown: Well, I enjoyed it, and I hope there's something here you can use without your lawyers getting too upset.
Ricardo Lopes: Hi guys. Thank you for watching this interview until the end. If you liked it, please share it. Leave a like and hit the subscription button. The show is brought to you by N Lights learning and development. Then differently check the website at N lights.com and also please consider supporting the show on Patreon or paypal. I would also like to give a huge thank you to my main patrons and paypal supporters, Perera Larson, Jerry Muller and Frederick Suno Bernard Seche O of Alex Adam, Castle Matthew Whitting B no Wolf, Tim Ho Erica LJ Connors, Philip Forrest Connelly. Then the Met Robert Wine in NAI Z Mark Nevs called in Holbrook Field Governor Mikel Stormer Samuel Andre Francis for Agns Ferger, Ken Herz J and La Jung Y and the K Hes Mark Smith J. Tom Hummel Sran David Wilson, the dear Roman Roach Diego, Jan Punter, Romani Charlotte, Bli Nicole Barba Adam Hunt, Pavlo Stass, Nale Me, Gary G Alman, Samo, Zal Ari and YPJ Barboza Julian. Price Edward Hall, Eden Broner Douglas Fry Franka Gilon Cortez Solis Scott Zachary, Ftw Daniel Friedman, William Buckner, Paul Giorgino, Luke Loki, Georgio Theophano, Chris Williams and Peter Wo David Williams Di A Costa Anton Erickson Charles Murray, Alex Chao Marie Martinez, Coralie Chevalier, Bangalore Fist Dey Junior, Old Einon Starry Michael Bailey then Spur by Robert Grassy Zorn, Jeff mcmahon, Jake Zul Barnabas Radick Mark Temple, Thomas Dvor Luke Neeson Chris to Kimberley Johnson, Benjamin Gilbert Jessica. No, Linda Brendan Nicholas Carlson, Ismael Bensley Man George Katis Valentine Steinman Perros, Kate Van Goler, Alexander Abert Liam Dan Biar Masoud. Ali Mohammadi Perpendicular Jer Urla. Good enough, Gregory Hastings David Pins of Sean Nelson, Mike Levin and Jos Net. A special thanks to my producers is our web Jim Frank Luca stuffin, Tom. We and Bernard N Corti Dixon, Bendik Muller Thomas Trumble, Catherine and Patrick Tobin, John Carlman, Negro, Nick Ortiz and Nick Golden. And to my executive producers Matthew Lavender, Si Adrian Bogdan Knits and Rosie. Thank you for all.