RECORDED ON DECEMBER 5th 2025.
Dr. Alberto Acerbi is an Associate Professor of Sociology at the University of Trento. He is a cognitive/evolutionary anthropologist with a particular interest in computational science. He is the author of “Tecnopanico. Media digitali, tra ragionevoli cautele e paure ingiustificate” (“Technopanic: Digital Media, Between Reasonable Caution and Unjustified Fears”).
In this episode, we focus on Tecnopanico. We first talk about moral panics surrounding new technology. We discuss misinformation, and whether people easily fall for it. We talk about conspiracy theories, whether people really are in online echo chambers, whether algorithms know us better than ourselves, and whether people can fall into “rabbit holes” on social media. We discuss the supposed link between social media use and mental health outcomes, the problem with monocausal explanations, and whether there is such a thing as “social media addiction”. Finally, we discuss whether bans on smartphones and social media work, and the negative effects of alarmist narratives.
Time Links:
Intro
Technopanics, and moral panics
Do people fall easily for misinformation? How gullible are we?
Conspiracy theories
Are people online really in echo chambers?
Do algorithms know us better than ourselves?
Can people fall into “rabbit holes” on social media?
Social media use and mental health
The problem with monocausal explanations
Is there really “social media addiction”?
Do bans on smartphones and social media work?
The negative effects of alarmist narratives
Follow Dr. Acerbi’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm here with a return guest, Dr. Alberto Acerbi. He's an associate professor of sociology at the University of Trento in Italy, and he's the author of a new book, Tecnopanico. In English, it would be something like Technopanic: Digital Media, Between Reasonable Caution and Unjustified Fears, and we're going to talk about the book today. So, Dr. Acerbi, welcome back to the show. It's always a pleasure to have you on.
Alberto Acerbi: Thank you. Big pleasure for me, Ricardo.
Ricardo Lopes: So let me start with this question: what do you mean by technopanic? I'm using the word in English here, of course. I mean, how would you characterize a technopanic?
Alberto Acerbi: Yes, technopanic is a word that has been used a bit in the literature recently to talk about excessive fear, or alarmist narratives, about technologies. In the case of the book, it is specifically about communication technologies, and in particular about contemporary communication technologies: digital media, social media, smartphones. We know that there has been a lot of attention on the effects of social media and smartphones on society and on us, and for some people, including myself, there has been a kind of overly negative and alarmist narrative about the effects of digital technologies, while the picture that comes out of research is, at least, more nuanced. So the term technopanic is exactly about this: this overly alarmist and negative narrative about digital technologies. Then, of course, there is a history behind this term. For example, and I talk about this in the first chapter of the book, in sociology there is the idea of moral panics, which are indeed society reacting, for a series of reasons that I explore and that we can talk about in a bit more detail if you want. But again, the idea is a negative, and not necessarily empirically grounded, reaction to new technologies.
Ricardo Lopes: So, in our previous conversation, we talked about cultural evolution. I mean, does this book, and the way you approach this sort of panic, connect in any way to your work in cultural evolution?
Alberto Acerbi: Yes, I would say so. It connects, I think, in two ways. One is, I would say, a more personal way, in the sense that I started to work on the effects of digital media using this cultural evolution perspective around ten years ago, or even more by now. Just after I started to work on this, so let's say in 2016, there was the first Trump election, then there was Brexit, then you had the rise of populist parties in Europe and everywhere, and at that time there also started to be a big backlash against social media. The ideas of fake news and misinformation basically started to become very popular in 2016 with the Trump election; there was Cambridge Analytica. So while I was working on this topic for my research interests related to cultural evolution, I could see all this interest emerging, and, as I was saying before, my impression from the beginning was that it was not really reflecting what we know about the effects of social media. So I started to be interested not only in my, let's say, basic research on digital media and cultural evolution, but also in trying to understand the reasons behind this narrative: why it was there, and what its effects were. So there is this link that is, you know, more personal. But I think there is also a more theoretical, and possibly more interesting, link, because if we have an evolutionary perspective, a cultural evolutionary perspective, we also have a view about how propaganda works, about how social influence works, and I talk about that especially in the second chapter of the book. This view has of course been influenced by other people you have interviewed, like Hugo Mercier and others working on epistemic vigilance, and the basic idea is that we are not overly gullible; we tend to be pretty stubborn, and propaganda doesn't work in the automatic way that is sometimes implicitly assumed when people talk about the dangers of social media and so on. This is a more theoretical reason why I think a background in evolutionary social science is important when we look at these phenomena.
Ricardo Lopes: Yes, we're going to talk a little bit later about how influenceable people really are, I mean, how gullible they really are. But before that, let's just talk a little bit more about moral panics. So, which aspects of our psychology would you say explain the development of moral panics?
Alberto Acerbi: Yes, there are several. I think the clearest one is a general negativity bias. This is one of the few results in psychology that are quite robust, so we can say now, OK, this exists. Of course, there is a lot of variation and there are contextual factors, but we know that, in general, again with individual differences and everything, there is a tendency to find negative information more attention-catching, more memorable, more cognitively attractive than positive information. This is really nothing new: we know that negative news is more successful than positive news, and that is, I think, part of the background here. So it's kind of easy to say that the smartphone is destroying a generation, or that social media are destroying democracy. This is something that travels very well, especially on social media, paradoxically. So this is one.

There is a second aspect that I also think is important, which is some sort of belief that the past was better. This is also pretty obvious, but it can be tested with experiments, which is interesting; I mention some of the experiments that try to test this effect. A feature of these technopanics that we also find today with social media and smartphones is the idea that the past was better. Take the idea of a post-truth era, which was very common in relation to social media over the last ten years or so: it presupposes explicitly that there was an era of truth. If we are in a post-truth era, there should have been something before in which things were better, and this, again, is the idea of a kind of mythical past, which is pretty difficult to defend if we think about it. Jonathan Haidt, who is one of the most vocal figures in pointing out the possible dangers and risks of smartphone and social media usage, especially for teenagers and young people, also often talks about the importance of free play, like having kids go out in the countryside in the past, which, in a way, is kind of reasonable. The interesting point here is that a feature of these technopanics is a reference to an era in the past in which things were better.

So these are the psychological background of this idea, but then you also have a series of features of these moral panics, or technopanics. As I was saying before, in the book I use the concept of moral panics, which is a sociological concept. It comes from a sociologist called Stanley Cohen, a few decades ago. He was studying the reaction of the media, and of the general population, to some youth disturbances that were happening in the UK in the early '60s, and he was, let's say, lucky enough to coin the term moral panic, but he was also the first to really focus on the fact that these disturbances were not really major, they didn't create major problems, and still the reaction of the media and of the population was clearly bigger than the problem, and he started to try to understand the reasons behind this. For example, the way moral panics easily find a scapegoat for some more complex societal problem, or blow a maybe local problem up into a problem for the whole world. So maybe social media usage is problematic for some people, but we say social media is destroying a generation: you make the problem bigger. So there are a series of features that characterize these panics. And as for the psychological reasons, as I was saying, you have this negativity and this nostalgia for the past, and they kind of go together to create a very attractive story.
Ricardo Lopes: Yeah, I mean, it's very interesting, because there are two aspects here that are not new at all. First of all, people claiming that the younger generation or generations have some sort of problem, that they are decadent or degenerate, or that because of the new practices they have or the new technologies they use their moral character is weaker. And also moral panics surrounding new technologies: I mean, I've heard about moral panics regarding writing, romance novels, the radio, TV, video games. So this seems to be just part of a very old historical trend among people, right?
Alberto Acerbi: Yes, absolutely. So, again, I go through some of these moral panics related to communication technologies, because this is the topic, and as you say, we have them from writing itself to the radio, TV, video games, so there is a kind of recurrent pattern that we can see. There are, again, some good reasons for this, good in the sense that we can understand why it happens. I talked about the psychological reasons, but it's also a fact that these new communication technologies have a strong impact on our societies, so it's normal, in a way, to be worried about that. I'll give you just one example: the printing press. It seems to be a technology where we would say, OK, books are not that dangerous, but of course they are; books are very dangerous, and when the printing press appeared, it was reasonable for people to be worried. And it's nice, because I did a bit of research, it's not my main topic, about the reactions to the printing press, and when you read these things, they seem like they could have been written ten years ago: people worried about the spread of misinformation, about who is checking whether what is written in these books is true or not. And it's quite interesting, because it's true that the printing press was a disruptive technology that could potentially have negative effects. What I think we need to learn from history in this case is that the introduction of new technologies, any kind of technology, is in general, as I say in the book, a process of adaptation, of co-adaptation. People who feel very strongly about a new technology tend to think that the technology arrives and changes everything, that society is kind of passive and is molded by the new technology, but this is not usually what happens. For example, in the case of the printing press, there were innovations that followed it to manage the fact that there were so many new books: people invented the index at the end of the book, which is a way, when you have a lot of information, to organize that information. So this is an adaptation to the increase in information. And, again, this is a bit niche, but there is some interesting work on how annotating books changed before and after the invention of the printing press. Before, with manuscripts, books were annotated by readers or monks or whoever, but afterwards, when you start to have a big circulation of books, the way you annotate books changed, because it became a way to summarize and synthesize information. So it's always a process, and I think that even now something like this is happening and will happen. So let's say the lesson from past technologies is not so much to say that it already happened, so everything will be fine; I mean, it could be different now, but I think the important lesson is that we need to remember how things change, and that it is not just a deterministic introduction of new technologies, but a process of adaptation between society and technology.
Ricardo Lopes: So I would like to get into the topic of the specific panic surrounding online misinformation, but just before that, let me ask you more broadly. People have this very common idea, or at least people who deal with or talk about online misinformation have this idea, that people are extremely gullible, and that whatever kind of information they are exposed to, whatever they read, watch, or listen to, they will fall for it. But I mean, how influenceable or how gullible are people, really?
Alberto Acerbi: Yes, that's the million-dollar question, as they say. The short answer is: I think less than what we tend to think. Let me try to give a bit longer answer. First, one interesting thing related to this, which is kind of curious, is that we tend to think that other people are gullible, not so much ourselves. We actually did a study about this with Sacha Altay. We wanted to understand what kind of beliefs were linked to the idea that misinformation is very dangerous and very widespread, so we had our participants answer questions about misinformation and also a series of other questions: whether they were worried about the state of the world, whether they thought they themselves were not really good at detecting misinformation, whether they thought others were not really good at it, a series of questions. And what we found was that the most consistent correlation, because it is a correlational study, the belief most consistently correlated with thinking that misinformation is a big problem, was exactly the idea that other people are gullible. In psychology this is sometimes called the third-person effect: other people are really socially influenceable, but we are not. That's another pretty consistent pattern, like the negativity bias I was mentioning; it seems to be pretty robust across different studies, and I think it says something about the fact that our intuitive ideas about social influence are not super good: we tend to think that others are influenceable and we are not.

Then, going more to the science on this, as I was saying before, I come from cultural evolution. In classic cultural evolution there is the idea of what are called social learning strategies, or transmission biases. The idea is that we do not copy at random, but we have some general heuristics, like copy the majority, copy from prestigious people, copy certain content and not other content. This is a kind of first level: we are not just randomly influenced. But I think one can go a bit further than that. In my previous book, Cultural Evolution in the Digital Age, I was actually trying to understand whether these heuristics, these strategies, could be deleterious in online dynamics, because, you know, if we copy prestigious people, we could just follow some influencer or some politician. But I argued that this didn't seem to be the case, and in these past years I have tried to go further in this direction. For example, we found that, in general, not only do people not use these strategies much, but also, in the cultural evolution literature, people do not copy others even when they should. We did a sort of review of experiments in which people could use social learning or individual learning, and what we found is that, at least in these experiments run by cultural evolution researchers, which test the optimal usage of social learning, participants were not using social information as much as they should have, so people in experiments tended to be stubborn rather than too gullible.

And then, when you start to look at various literatures, you find that this is a bit of a pattern. There are studies showing that advertisements do not work that much: if you put together the effects of advertisements, some work very well, and usually people remember the effective ones, and some do not, so the average effect tends toward zero. There are studies in sociology showing that people tend not to change their ideas much during their life course; they talk about the stable dispositions model, the idea that people do not change that much. So all of this provides a different picture of social influence. And then, in my own research, I arrived at this idea; as I was saying before, I was influenced by the work of Hugo Mercier and the people in Paris on epistemic vigilance, which is, again, an evolutionary approach to social influence, but a bit more, let's say, strict than standard cultural evolution. Now, I won't go into the details of this approach; I'm sure you have already talked with them, so...
Ricardo Lopes: Yeah, I've had Hugo Mercier and Dan Sperber on the show.
Alberto Acerbi: Yeah, exactly. And to me, at the moment, this seems to be the theoretical framework that, at least in my view, best describes and explains what we see in reality. Which, of course, doesn't mean that we do not change our minds, but it means that the way we change our minds is not by seeing a few fake news stories or because of political propaganda; we need to take many more factors into account. So, let's say, going back to the technopanic topic with this background: OK, we are not that gullible, changing our minds is not that easy, and this gives you some perspective to look at all these aspects, from microtargeting to misinformation to the effects on mental health, and say: OK, maybe they are not as direct and as clear as they are sometimes presented in the narrative.
Ricardo Lopes: So, but let me ask you then, what do we know about the availability of online disinformation? Because, again, people who write on this topic, not everyone, of course, but many popular writers, tend to present it in the mode of a moral panic, in the sense that online disinformation is everywhere, it influences a lot of people, and it spreads very easily. I mean, what do we know about that? Is it really the case that there is lots of disinformation on the internet and that it spreads very easily? For example, even just during the last pandemic, the COVID-19 pandemic, at a certain point some people were panicking because apparently they thought that many people had suddenly become anti-vaxxers. I mean, is that really the case or not?
Alberto Acerbi: Yeah, that's an excellent question. No, it's not. And again, it's strange, because it was kind of surprising for me when I started to study these things, as I was saying before, around 2016, when everybody was talking about how social media are full of misinformation, how the internet is full of misinformation: the few studies that started to appear were saying, OK, that's not the case. The point is that, obviously, if you go online you can find all the misinformation you want. I can right now Google for Bill Gates and whatever he was supposedly doing with the vaccines, yeah...
Ricardo Lopes: The chips, the vaccine chips.
Alberto Acerbi: Yeah. I would find thousands or millions of pages and various versions. So, obviously, it's there. The point is that if we want to assess the quantity of misinformation, we need to assess it in the context of the total quantity of information. Misinformation would be the numerator of a fraction in which the denominator is all the information we can find online. When we look at it in this way, it's always a few percentage points. Of course, it depends; there are many ways to measure it, it's an open problem, and still, I think the results now tell a story that everybody would agree with: when we look at the quantity of misinformation, it is always very, very limited, also because the majority of people tend to look at mainstream sources, which usually do not have a big incentive to share misinformation. It's also generally the case that people who are online a lot will consume and share more misinformation, but they consume and share more of everything, so, again, you have to put this in relation to the total activity, the total quantity of information. So this is about the quantity.

Then, we don't have any strong support for the idea that misinformation spreads more effectively than true information. In fact, it's intuitive to think that true information is more likely to spread than false information, and in this case the intuition is probably correct. I discuss a famous paper that proposed this idea, but I don't think we have strong support for it. The main point, though, is this: one could say, OK, even if there is not much misinformation, it could still have a strong effect on people. It could, but the point is that nobody really knows, and the people who claim this are just assuming it is the case. It's obviously very difficult to measure the effects. What I'm saying here is just that we don't know, and then, if we have a perspective like the one I have, an evolutionary perspective, the safe assumption is that there is probably not much effect. What is worrying is that people still tend to conflate consumption of misinformation with change of beliefs or ideas. People are exposed to misinformation, so you say: you are anti-vaccination because you saw some misinformation. To me, the causal chain usually doesn't go in this direction. You are opposed to the vaccine for complicated reasons that can be related to your economic and social background, to your beliefs, and to your trust in institutions and in politics, so you're also skeptical about vaccines. Then you see our friend Bill Gates and the chip, and you engage with or share this misinformation, not because you changed your beliefs, and not to make others change their beliefs, because it would not work, but just as a signal: look at this, I'm against the government. I think that would be a more plausible explanation of why some people do this.

And we can talk more about the deeper ethical problems with the fight against misinformation, but, staying on the empirical side, I think there has been a lot of not really well-founded fear about misinformation, about its quantity, about its effects, about it changing people's minds, which is really not very robust from the empirical point of view.
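To make the numerator-versus-denominator point above concrete, here is a minimal sketch in Python; the figures are entirely hypothetical and only illustrate how prevalence is computed as a share of total consumption rather than as a raw count.

```python
# Minimal sketch of the "numerator vs. denominator" point: the prevalence of
# misinformation only makes sense as a share of everything people consume.
# All numbers below are hypothetical, for illustration only.

total_items_viewed = 1_000_000          # assumed total news items a population viewed
misinformation_items_viewed = 15_000    # assumed items coming from unreliable sources

prevalence = misinformation_items_viewed / total_items_viewed
print(f"Misinformation as a share of total news consumption: {prevalence:.1%}")
# Counting only the numerator ("15,000 fake posts!") can sound alarming, while the
# proportion over everything consumed stays at a few percentage points (here 1.5%).
```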
Ricardo Lopes: But do you think there are any aspects of disinformation we should worry about, and do you think that we should do something to fight it, and if so, how?
Alberto Acerbi: OK, that's a complicated question. Of course, some aspects of misinformation are not... let's use again the example of vaccine misinformation. We don't want it. The point is that, as I would put it, what we don't want is people being against vaccines. We agree that we don't want that; let's keep aside all the debates about vaccines and just say, OK, vaccination is good, we agree on that, and we want people to do it. And we have to fight for this, but I don't think we can fight for it just by fighting the misinformation about vaccines. We have to make people not receptive to this misinformation, and then the misinformation will disappear. So I think what we have to fight is more the demand for misinformation than the production, because we can do all the fighting against misinformation we want, but if the demand is there, somehow the demand will be satisfied somewhere on social media or on the internet. We used to say that misinformation is more a symptom than a cause of something. Yes, we want to fight something that appears as misinformation, but the way to fight it is not to fight the misinformation itself; it is to fight the reasons why people would be sensitive to, or willing to accept, this misinformation. And I think this is really a key point, because, leaving the vaccine case aside, after the first Trump election people focused on saying: OK, Trump was elected because of fake news, as they were called at the time, so we have to fight misinformation. And we're still there, so maybe we had to fight something else. So yeah, I think we need to be careful. This is an easy move: you have something that you don't like, and you identify an easy cause, and this is another big feature of technopanics. We don't like Trump, let's say, so we say: OK, it's easy to identify the problem in the misinformation, it may be easy to fight it, but if we are not identifying the real problem, the social, cultural, and economic causes, we are a bit wasting our time.
Ricardo Lopes: So, a topic that is somewhat related to misinformation but is not exactly the same, at least psychologically. Let's talk a little bit about conspiracy theories. How do conspiracy theories come about? I mean, what aspects of our psychology explain them, and are conspiracy theories necessarily irrational?
Alberto Acerbi: Yes, that's another interesting point. There has also been, in this case, I think, a sort of panic about the relationship between social media and conspiracy theories, this idea that the world is full of conspiracies now. This also, I think, has the same features as what we already said about misinformation: not strong empirical support, or at least dubious empirical support, and an unclear causal link between believing in a conspiracy theory and changing behaviors and ideas. Conspiracy theories do exist; they have existed for centuries or millennia, as you were saying, and they are, up to a point, relatively common because they do have some psychological functions. For example, they have epistemic functions: the world is a very complicated place, and they are usually a simple way to understand events that are complicated. Let's use the COVID example again: it's very difficult to know exactly what happened, but if you say that somebody produced it on purpose and spread it around, that is an explanation; it gives you epistemic value. They also have social value. So, besides knowing and understanding, they give us an explanation that reduces the complexity of events to something you can understand, so you can say: OK, that's why this happened. So there are a lot of reasons, and it's not surprising that they are around.

The question is whether there is any special relationship with social media, whether there has been an increase in beliefs in conspiracy theories in the past years because of social media. And again, this is not clear. Again, the post-truth era idea is a bit problematic: conspiracy theories were there before. It's even more difficult here than with measuring misinformation, like calculating the proportion of beliefs in conspiracy theories sixty years ago; it's very complicated, but some people have tried, Joseph Uscinski and his group, and when you look at this, again, it doesn't seem that there is a clear pattern of increase. Of course, conspiracy theories spread on social media, because everything spreads on social media, so you should not be surprised about that. In the book I discuss the fact, and this is pretty speculative, but I find it interesting, that one aspect of conspiracy theories is that people who believe them can somehow participate in the construction of the theory. A good example is the QAnon story. I don't know if you remember it; it's a bit quieter now, but there was this Q who was putting drops online that had to be interpreted by the followers, and I think this is an interesting case, because here there is a link with social media: you could do this only on social media. Fifty years ago it would have been a problem to have people collaborate in this creation of the conspiracy, because how would you do it? You could do it in your cafe or something, but it would not have been very effective. So, in a way, I think social media make this collaborative construction of conspiracy theories easier. So maybe they can change some features of conspiracy theories, but whether there has been an increase because of social media is not clear. They are there, and, if I can add, the logic to me is the same as with misinformation: we see them now, and, you know, we can look for a weird conspiracy theory online and we will find it, while fifty years ago we had to go and check weird publications or talk to people. I call it availability: everything is there, everything can be measured and counted, so it appears that, oh, the world is very weird, but maybe it was as weird or weirder before; we just didn't know, because we didn't have access to this information.
Ricardo Lopes: So, let me ask you about another very common claim that people make: that online, and particularly on social media, we live in echo chambers and there are filter bubbles. Basically, what people are saying is that the sort of information diet we get online is very biased and tailored to our own beliefs, our own political beliefs and other kinds of beliefs, and so we don't really get exposed to contrary views, we don't really get exposed to information that contradicts, or that would tell us something different from, what we already believe. I mean, is that really the case?
Alberto Acerbi: Yeah, this has been another big fear related to social media. Again, social media started to become widespread, and you have Trump and you have populist parties and you have Brexit, and so, as in the case of misinformation and conspiracy theories, it was natural to link the two things: social media and increasing numbers of people believing things that we didn't like. In this case, you know, I'm an academic, a liberal professor of sociology, so I'm like, oh wow. But even in this case, and I would say that by now the fashion of echo chambers on the internet has kind of died down, the basic idea is plausible: we know that the algorithm will, on average, give us things similar to what we already like, and we can choose to follow someone and not follow someone else. And if you look at the transmission of information in a social network, you will obviously find clusters that tend to communicate more within themselves than with the outside. So this was another idea that was very popular and, I think, again, not very well grounded. Indeed, the more studies came out, the more they showed that the echo-chamber nature of social media is weaker than what people would think, and that when you compare the situation on social media with the situation outside social media, in, say, offline life, echo chambers and polarization are also present: traditional media are as much, if not more, polarized than social media, daily interactions are also very polarized, and the groups of friends we have are not usually very diverse from a political point of view. In Tecnopanico, in the book, I also talk about the example of geographic polarization: people live in different neighborhoods without even thinking about social media; there are neighborhoods for rich people, for leftist people, and so on. So the picture is much, much more complicated, and to me this now really seems not to be the case. I think a nice example is what happened recently with the migration to Bluesky, and I like it: people had to create their own echo chambers because social media were mixing people too much, so you say, OK, I want my own space. So it's the opposite of the echo-chamber idea: it's probably social media that were giving people too much diversity, and so people had to say, I don't want this diversity, I prefer my own group, which, again, is reasonable from many points of view, but it's really the opposite of what the echo-chambers idea would predict. And another aspect here that I think is important, and I think it is a problem in general for studies of social media, is that they are generally US-centric. In the US there has clearly been an increase in polarization over the last decades, and it actually predates social media, and then of course you link these two things.

The point is that it's much less clear that the same effect has occurred in other countries, where of course you still had the diffusion of social media, but you didn't have the same increase in polarization. So again, this is the same problem I was describing before with fake news: it's easy to say, OK, there is a problem, we don't like the increasing polarization, let's identify an easy target, echo chambers on social media, but sometimes this is not the right way to do it.
Ricardo Lopes: So another very common thing we've heard even in recent years, from very notable people like Tristan Harris and others, is that algorithms know us better than we know ourselves, and that we can, for example, predict someone's personality from what they do on social media, and that if you go on Netflix or Amazon or YouTube, they always know you better than yourself and suggest content for you to watch before you even know you want to watch it. Something along those lines. Is that really true?
Alberto Acerbi: Well, I mean, it is a bit true, of course. There is a lot of data that can be collected on our activities, and of course it tells something about us and about our inclinations, and our behavior can be, up to a point, predicted from our activities. So, on this, I'm saying, OK, yes, we have to be careful about what we are doing online, about how our data are used, about who can have access to our data. I think this is true. The point for me is: what are the consequences of this? If we go from this to the claim that some magic algorithm can change our political ideas or our behaviors, then there is a problem. So, something can be predicted, up to a point, and I think this is OK. It's good if we have some personalization, for example: if Amazon suggests some new books for me to buy, I mean, given the size of Amazon's catalogue, or the amount of things that are online, an algorithm should do something, otherwise we would be left with a random search in this huge search space, which is just an impossible problem. I have to say, and here I'm talking as a normal guy, my feeling is that they are not that effective; I think the personalization algorithms should be more effective. In the book I give this example: when we moved to Toronto, I bought a mattress for the bed from IKEA, and then for four weeks I had advertisements for mattresses. It's not that I'm opening a hotel or something; I had bought my mattress, and even a not-super-smart algorithm should get this. Or you buy, you know, some Chinese food once, and then it keeps showing you Chinese food. I don't know, it seems pretty unimpressive; I never felt that scared about the power of the algorithm. So, it's true, as I was saying before, that there is some predictive power; there have been studies, which I think are quite reasonable, showing that you can predict, up to a point, personality traits based on social media activity, which is reasonable and quite interesting. But, as I was saying, what can you do with that? Even in this case there is the alarmist narrative: you know, Cambridge Analytica with psychographics, I think that was the term, taking the data and making people change their vote, the claim that Brexit was due, at least in part, to the activity of Cambridge Analytica. Again, I repeat the same story: when we look at the studies that try to measure, for example, the persuasive effect of microtargeting... how do you say it in English, micro-targeting or microtargeting?
Ricardo Lopes: I usually say microtargeting. I mean, maybe there are people who say it differently. I'm not sure.
Alberto Acerbi: I don't know, I live in Italy now, I'm forgetting my English. Let's say microtargeting. Studies that look at the effect of political microtargeting usually show the same pattern: they don't find strong effects. The effects seem very small and very context-dependent. Of course, there are studies showing that if I have some information about your personality and I show you an advertisement tailored to that personality, maybe you click on it a bit more than on one tailored to the opposite. But going from there to an effect of political microtargeting is really... again, I'm not saying in this case that we know it doesn't work; I'm saying that we don't know whether it works. And this is very different from the narrative we have about the power of algorithms changing our ideas. And it's the same logic: I think that when you focus on this, you are not looking at the more real problems. Take the Cambridge Analytica scandal: if you think that it changed the result of Brexit through the power of psychographic advertising, then... OK, maybe let's look instead at why Brexit happened, let's look at how Cambridge Analytica used the data, what was the problem, what was OK and what was not. If we just say they are destroying the world, we are not even criticizing these big corporations well. So again, my more nuanced view is not to say that everything is good and they are doing good; it's that I think it's not effective to have this catastrophic and overly negative narrative.
Ricardo Lopes: So another very common claim, and I spend some time on YouTube also because of my work: over the past few years I've heard many people talking about rabbit holes, and how people can fall into rabbit holes on social media, and YouTube particularly, and there are even videos of people on YouTube talking about how they fell into an alt-right rabbit hole, for example. I mean, is that really something supported by the evidence, that something like that can happen on social media?
Alberto Acerbi: Again, that's the thing. This is a relatively plausible story: you have people spending a lot of time on YouTube, YouTube suggests other videos, maybe videos that are more emotionally... strong,
Ricardo Lopes: compelling, yeah,
Alberto Acerbi: sorry,
Ricardo Lopes: um, emotionally compelling,
Alberto Acerbi: Yeah, exactly, they may be recommended more by the algorithms, and there is an increase in, again, the alt-right or whatever among young people. So that's a very good story. It's plausible, and it makes us feel good, because you say: OK, these are good kids, society is good, but YouTube is pushing them in this direction. Again, in this case, usually when you read about these rabbit holes, it is generally single stories of some people telling their own story, and I don't doubt that these stories are real and honest. The point is that when we try to study this with experiments, or with data, it is also very unclear whether there is an effect. In the book I talk about an experiment done with what they call counterfactual bots. What is difficult in this case is to distinguish the effect of personal usage, of personal choices, from the effect of the algorithms: of course, people who go toward the alt-right will watch more alt-right videos, but is it because they want to, or because the algorithm pushes them there? That's another very difficult question. In this experiment, they had these counterfactual bots trained on the viewing histories of real individuals, and then, at some point, the bots would just blindly follow, because they are bots, the algorithmic recommendations, while the real users kept doing their own thing. So, in this case, you can compare the pure algorithmic recommendations with the user plus the algorithm. If the rabbit-hole theory is correct, the pure algorithmic recommendations should push you more toward the extreme, and in this study they found the opposite: the algorithmic recommendations tended to stay where they were, or even move toward more moderate videos, while users went in their own directions, in some cases toward more extreme views. And this makes sense, because the algorithms usually give you, OK, emotionally compelling content, but they also generally give you content that is already popular, and content that is already popular is mainstream video. And, again, of course it sometimes happens that you have, you know, Andrew Tate or whoever getting very big, but I think it's interesting that when it happens, everybody talks about it; so it's not what usually happens on YouTube or on social media, it's that sometimes this happens and we all get worried, but when we look at the general pattern, this is not the case. And even in this case, to understand why some young people go to the alt-right or wherever they go, the rabbit-hole hypothesis is to me a bit of a shortcut and a scapegoat, a way to say: if the algorithm were different, everybody would be happy, which is a strange way to put things, for me.
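The counterfactual-bot comparison described above can be illustrated with a toy simulation. This is not the actual study's code or data: the catalogue, the popularity-driven recommender, and the preference-driven user below are all assumptions, made only to show the logic of comparing an algorithm-only trajectory with a user-driven one.

```python
import random

# Toy illustration of the "counterfactual bot" logic: compare a trajectory driven
# only by algorithmic recommendations with one driven by the user's own choices,
# and see which one drifts toward extreme content.

random.seed(42)

# Hypothetical catalogue: each video has an "extremity" score in [0, 1].
# Assumption: extreme content is less popular than mainstream content.
videos = [{"extremity": random.random()} for _ in range(10_000)]
for v in videos:
    v["popularity"] = 1.0 - 0.8 * v["extremity"]

def algorithm_pick(history):
    """Assumed recommender: favours already-popular (mainstream) content."""
    candidates = random.sample(videos, 20)
    return max(candidates, key=lambda v: v["popularity"])

def user_pick(history):
    """Assumed user: prefers content slightly more extreme than what they last watched."""
    last = history[-1]["extremity"] if history else 0.3
    candidates = random.sample(videos, 20)
    return min(candidates, key=lambda v: abs(v["extremity"] - (last + 0.05)))

def run(picker, steps=200):
    history = []
    for _ in range(steps):
        history.append(picker(history))
    # Mean extremity of the last 50 videos watched
    return sum(v["extremity"] for v in history[-50:]) / 50

print("algorithm-only trajectory, final mean extremity:", round(run(algorithm_pick), 2))
print("user-choice trajectory,   final mean extremity:", round(run(user_pick), 2))
```

Under these assumptions the algorithm-only trajectory stays near mainstream content while the user-driven one drifts upward, which is the kind of contrast the counterfactual-bot design is meant to reveal.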
Ricardo Lopes: So, let me ask you now about the correlation, or even the causation, that some people have been claiming between social media use and mental health outcomes. I mean, just last year we had a new book come out by Jonathan Haidt, The Anxious Generation. It really didn't convince me at all, and I read some critiques of it, and it seems that many of the claims he makes are not really well supported by the evidence, and that some poor studies were cited there. But let's go step by step here. First of all, how can social media use and its supposed effect on mental health be properly measured?
Alberto Acerbi: That's a difficult question. How can we properly measure it? We don't know. What we know is that a lot of people, including me and some of my colleagues and others, are trying to do this, and I think, if we are honest, we should say that it is very difficult and that, so far, even in this case, there is no clear result that goes in any direction. On this I am quite open to seeing what happens. If we meet for another interview in five years, I'm happy to say, OK, I want to see another five years of research, and I may change my mind. On other topics I think I would bet all my money; on this one, I'm like, let's see what happens. But we can say what is happening now, and the situation hasn't changed since I finished the book a year or so ago. The first problem is the existence and strength of the correlation between social media or smartphone usage and mental health. Some studies found a correlation, some didn't. There are several problems. One is that these correlations are usually very, very small, which by itself is OK: human behavior and societies are complicated, so small effects are perfectly fine. The problem is that these small effects are also linked to the fact that, as you were saying, it is very difficult to measure both social media usage, or smartphone usage, or digital technology usage, and mental health. You can ask people how many social media accounts they have, how many hours they spend on social media, how many hours they spend on the smartphone, whether they have a smartphone or not; there are many options. Mental health is the same: you can ask them, you can look at medical prescriptions. And the problem is that when you put these things together, you can always find something, so we need to be very, very careful. To me it seems plausible that, yes, there is some small negative correlation, but even if there is, there is a big discrepancy between this very small effect, if it exists, and the "destroying a generation" claim. And this is just the first aspect, the one about the correlation. The second aspect is about causality. It's equally plausible that people who, for whatever reason, have worse mental health use smartphones and social media more, rather than the opposite, that the usage of social media or smartphones causes worse mental health. And even in this case, again, I'm not saying that I'm sure it's not the case; I'm saying that people who say they are sure are cheating a bit, I think, because it's really difficult to do experiments that properly test this question.

You have, for example, these deactivation experiments: you ask a person not to use a smartphone or a social media platform for a certain period of time, and then you measure the difference. These are indeed kind of standard experiments, you have a control group, you can do your stats, everything, but, of course, this conversation is so common in society that if someone asks me not to use my phone for a week, I already have very strong expectations of how I would feel and of what I should say, so it's very difficult to know whether these data are reliable. And then it's just one week: what does it mean to have it or not have it? One week, or one month, or one day, can just be a little holiday from social media. It's complicated. Also, in a situation in which everybody uses social media and smartphones, it's very difficult to measure; and now I'm saying something against my own point: maybe even if a person doesn't use them much, if everybody around them uses them, they will also experience a negative effect, so you cannot measure it. So what I'm saying is that, to me, the best guess we can have is that we don't know right now. It's good, up to a point, to be careful about this, but we don't want to go to the extreme and present this as something we know and that we need to react strongly to, because, and I'm repeating myself, this could also have a negative effect by itself. We don't know what would happen if we take it away. I'm almost curious: in Australia there will be a ban on social media for under-16s starting, I think, on the 10th of December. Maybe we will know more. I'm happy to see what happens.
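As a rough illustration of the deactivation-experiment design described above (one group asked to stop using a platform, a control group that keeps using it, and a well-being measure compared afterwards), here is a sketch with simulated data; the scores, group sizes, and the tiny assumed true effect are invented purely to show why small effects are hard to pin down.

```python
# Toy sketch of a "deactivation experiment": compare a well-being score between a
# control group (kept using the platform) and a deactivated group. Data are simulated.
import random
import statistics

random.seed(0)

# Hypothetical well-being scores (0-10) after the study period.
control     = [random.gauss(6.0, 1.5) for _ in range(500)]  # kept using the platform
deactivated = [random.gauss(6.1, 1.5) for _ in range(500)]  # assumed tiny true effect

diff = statistics.mean(deactivated) - statistics.mean(control)
pooled_se = (statistics.pvariance(control) / len(control)
             + statistics.pvariance(deactivated) / len(deactivated)) ** 0.5

print(f"estimated effect of deactivation: {diff:.2f} points (±{1.96 * pooled_se:.2f})")
# With effects this small, the estimate is easily swamped by measurement problems
# (self-reported usage, expectations about how one "should" feel without the phone).
```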
Ricardo Lopes: Yeah, I mean, but there are two other points here that I think could be a little bit problematic. One of them has to do with the very claim that teen mental health is getting worse over time, because I've heard and read people questioning that claim; they look at the data and say there is not really good enough evidence to be sure that teen mental health has been getting worse over the past 15 years or so, because people usually start in 2010, with the introduction of smartphones and the widespread adoption of social media and so on. And then, even if it is getting worse, I think, and please tell me if you agree with this point or not, that trying to build this unifactorial narrative based just on social media use is not helpful, because there are many other factors that could be playing a role here, like, for example, an uncertain future for young people having to do with climate change, the job market, the economy, and so on. I mean, there are many other aspects that could be playing a role here, if mental health is getting worse, right?
Alberto Acerbi: Yes, but it's much easier to say that it's the smartphone. You know, it's more difficult to solve the job market and climate change, whereas with the smartphone you can just say, OK, don't use the smartphone until you are 16, and you feel good about that. No, excellent point, I completely agree. So, first, whether it is really true that there is this decline in mental health: again, one aspect is that usually, as you were saying, people take the last 20 years of data and observe something like this. Even in the US, if you take, say, the last 40 or 50 years, you have some ups and downs, and if you look at all the data, there is nothing special about these last 20 years. People were, for example, much worse off in the early '90s, if I remember correctly; I don't remember the exact data, but if you look, it's nothing special now. So this is also not clear, and again, as I was saying before, this is all about US data. It's less clear whether this happens in other countries: you see it somewhat in the UK, for example, less so in Italy or in Southern European countries. So this is not obvious. And the second aspect you mentioned, which I completely agree with, is this monocausal explanation. Of course, there are so many other possible reasons: all the ones you mention, and the fact that there has been, in some countries, and these are the countries in which you observe this effect most clearly, a change in the social fabric of societies, which I think is more problematic and interesting than just the smartphone. And the way in which mental health is categorized has changed a lot: now it's more acceptable to say that you have some problem than it was maybe 20 years ago, which is good, but it can create these effects in the data. So this part is also not foolproof, and we need to be careful when we make big claims. But again, these big claims tend to be very attractive, for the reasons I was giving before: they are negative, they appeal to the idea that when I was young we didn't use smartphones, they offer easy solutions to problems that are complex and scary. So yeah, they travel fast in culture.
Ricardo Lopes: Is there good enough evidence to support the existence of something like smartphone or social media addiction? Because I've heard people talking about that, and something that particularly irritates me is that people always link it to dopamine. There are always influencers on social media and the internet talking about dopamine addiction, or saying that every time you use your smartphone or go on social media you get a dopamine hit. Is there really any scientific basis to that?
Alberto Acerbi: No, no, not really. I completely agree with you. I work in cultural evolution, so I don't want to go into too much detail, but dopamine is a neurotransmitter, so you cannot be addicted to a neurotransmitter. It's just something whose level changes whenever you do something that is pleasurable or not. So that's the first point: when someone says, OK, you're addicted to dopamine or whatever, you can just say, OK, let's not read the rest of the article. Then, of course, the fact that we use smartphones and social media means that they give us some form of reward, otherwise we would not use them. So that is probably true; there will be some increase in dopamine when we do something on social media. The point is that, to talk about addiction or about problematic behavior, this increase in dopamine should be, for example, similar to the increase you get from activities for which we usually use the term addiction, drugs or similar things, and when you look at it that way, it's a completely different thing. In general, I think it's problematic to move the terminology of addiction over to behavioral addiction. People do it, but the proper domain of the term addiction is things like drugs or alcohol, where there are withdrawal and physical effects; there you can use "addiction" and it's clear. When we talk about behavioral addiction, my opinion is that it makes sense only if the behavior becomes problematic for basically everyone who goes past a certain threshold. Let me explain. Leaving aside any physical effects, we can say that heroin is addictive because if you start to use heroin and go on for a while, for, I don't know, I'm using a random number, 90% of people this will create problems. So it's generally problematic. For some behavioral addictions this could make sense. I'm thinking of something like betting in a very consistent way: if a person bets money every day for a long period of time, this will, at least for a big percentage of people, be a problematic behavior. So there I would accept that, OK, we can use the language of addiction; it adds something to our knowledge of the behavior. Coming to social media, I think this is not the case. The majority of people have an OK usage of social media. This doesn't mean there is no problematic use: some people may really need medical or psychological help, that's for sure. But it's not the case that the majority of people who start to use social media end up in a problematic situation. So in this case I would not talk about addiction. That's why, when people use the term addiction, like sex addiction or shopping addiction, I think these are kind of
labels for saying something in a catchy way that maybe describes the behavior of some people, but the problem is not social media addiction; the problem is that the people who are "addicted" to social media or shopping or whatever have some other problem. While in the case of heroin, and maybe betting, these things are problematic by themselves, in this case they are not. So I would not use the term addiction, and I think this also pollutes the conversation about the effects of social media. So, yeah.
Ricardo Lopes: What do you make of the proposals that we've seen across the world for banning smartphones from schools and prohibiting social media use for children under 16? Do you think there's any scientific evidence to support policies and measures like that?
Alberto Acerbi: As I was saying before, the scientific evidence is really not there yet. And I would say, OK, if these things happen, even if I'm not particularly in support of them, maybe we will get some scientific evidence; I hope to talk about this again in five years and answer the question, and I'm really happy, in that case, to change my mind. With the evidence we have now about the effects, I think under-16 bans can be more risky than beneficial. Of course, I'm not claiming that we need to give smartphones to primary school children and leave them abandoned to do whatever. I just don't think banning is the way. I think the way should be a society that accepts in full what the effects of smartphone and social media usage can be, positive and negative, and builds structures in society, in schools, in families, that make usage capture the positive part and not the negative part. A ban can actually do damage: for some teenagers, social media usage is really important, think of marginalized communities, for example, and a ban could be more problematic for them than it is beneficial for everyone else. So I'm not particularly in favor. As I'm saying, we don't know very much, so I would prefer that the usage of smartphones be incremental with age and not start too early, but this should be done in a different way: in a society that gives you occasions to do something else without prohibiting. Middle school kids should not use the smartphone too much, not because there is a law against it, but because they have other things to do, and because adults should set the example of using it in a good and moderate way. Again, it's easy to say, yes, it's the kids that are the problem. It's difficult. So I'm mostly describing what is a kind of common-sense position, because we don't have much science behind this. And people who pretend there is a science, well, they are probably in good faith from their point of view, but you cannot say that we have strong support for these initiatives.
Ricardo Lopes: Yes, and I've heard some arguments made by people who are against these kinds of bans that I think are really important to consider. Not just some of the positive aspects of social media like the ones you referred to, the fact that people can connect more easily with other people, and that it's perhaps easier to get in touch with people from different cultural backgrounds and have those sorts of experiences. But also the fact, and this is an argument some people make against the bans, that it is important even for children to acquire and develop social media literacy, online literacy. Because, like it or not, social media are now part of everyone's lives, and it's important for children to learn how to use them properly, even for their future in terms of professional development and so on.
Alberto Acerbi: Yeah, no, I definitely agree. It's at least unclear what happens when they're banned until 16 and then, at 16, they suddenly have full access without ever having had the experience to build some knowledge; it could even be worse. So I completely agree. Another aspect is more practical: would this even work? In how many days will the kids find a way to fake their age? It seems very, well, OK, let's not go into that, it's a practical thing, but we'll see. But yeah, I agree completely. It probably makes more sense to build incremental skills in a positive and controlled environment than to say, OK, this is bad and you can only do it at 16.
Ricardo Lopes: Yeah, I was laughing because I was an adolescent in the 2000s, when the internet became more widespread, and I can guarantee you that as young people we were able to find ways around certain things.
Alberto Acerbi: Exactly, we all have this experience, so yes, yes.
Ricardo Lopes: OK, so I have two more questions then. What is your take, then, on communication technologies? Are they mostly good, mostly bad? How should we deal with them?
Alberto Acerbi: Well, that's a hard question. We cannot really say whether they are mostly good or mostly bad. The classic answer is that it depends on how we use them, but there is also, I think, a naive version of that, which says, OK, they are neutral and it just depends what we do with them. I don't think they are neutral. Each communication technology gives certain affordances, and even different social media have different algorithms, so you use them for different things: you use LinkedIn to promote your work and Facebook to show the last dinner you had. Television was a one-to-many medium with visuals, radio was one-to-many without visuals, social media are one-to-one or many-to-many. So it's not that they are neutral; each communication medium has different features. And I don't even think I can say whether they are good or bad. The point is, as I was probably saying at the beginning, that each new communication technology has different affordances, different features, and then there is a co-adaptation process in which society tends to use it in a way that, up to a point, serves the society. In general, new communication technologies give more possibilities for communicating, which is at least potentially positive. Then, of course, it depends on how we use those affordances, on how society adapts to them. It may now seem a strange thing to say that the internet was a positive development, but to me it clearly was. We take for granted the fact that we are doing something, right now, that until 20 or 30 years ago was impossible. Now it's normal, and the possibilities keep increasing; we take for granted a lot of innovations, a lot of possibilities, that to me are positive. So technologies are not positive or negative, and not even neutral: as I was saying, they have affordances and open new spaces, and it's up to society to use them in better ways. And my point is that, whatever people may say, I think the net balance of what has happened with digital communication technologies is positive. It may seem a crazy thing to say; the opposite was probably the dominant idea 20 years ago or so. I recently found a book from about 15 years ago with a title along the lines of "the internet will not solve all the problems", which now seems obvious, of course it doesn't solve all the problems, but mainly because we have really changed our perspective on the effects. There are many things we take for granted that are very important.
Ricardo Lopes: Mhm. And so, my last question. We've already partly explored it, but I want to get a broader, or more detailed, answer from you. What do you think are the effects of alarmist narratives surrounding social media and communication technologies more generally? Earlier, when we talked about the supposed correlation between social media use and mental health outcomes, we mentioned that these sorts of monocausal explanations are not good scientifically, but also that they leave aside other possible factors that could be considered and tackled. So what other effects do you think these alarmist narratives can have?
Alberto Acerbi: Yeah, that's an excellent question to conclude with, and it's also a bit like a chapter of my book, because usually people tell me, well, you are defending Elon Musk or Zuckerberg, which, unfortunately, as you can see, I'm here in my kitchen, I'm not in some private
Ricardo Lopes: mansion,
Alberto Acerbi: Yeah, no, there's no mansion. It's a little kitchen. Trento is very nice, but no. OK, so the point is that I think this negative, alarmist narrative can have negative effects by itself. One could say, OK, well, these claims are not strongly supported, but still, by pushing them we try to do better. The point is that I don't think that is the case. You already gave the example of mental health, but let's go back, for example, to the misinformation case. One aspect that we already discussed a bit before is that if you focus on this cause, or presumed cause, of various events, you are not really looking at other, possibly more important, structural causes: social problems, economic problems, cultural problems. You just say, OK, people vote for populist parties because of misinformation. That's one problem with this narrative. There are also other, more specific problems in the case of misinformation. One is that I think this narrative has contributed to a general loss of trust in media and institutions. Even here I would not say it is the only cause, I would not go for a monocausal explanation, but it is one of the causes. If people keep repeating that social media is just a lot of misinformation, and there are now some experiments that actually try to test this, if you tell participants that misinformation is very widespread, they will also tend to trust reliable news less. This is a problem in light of what we said before: the problem with social influence is not so much that we are too gullible, but that we do not change our mind when we should. So if this negative narrative about misinformation makes everybody more skeptical, this is probably not the right direction; we would like people to be open to reliable information. People use news less and less, and some social media, at least, try to avoid promoting political news for fear of being involved in some misinformation controversy, and again the result is that people are less informed. So in general, besides the fact that these narratives are not strongly supported empirically, they have drawbacks of their own. And again, even if we want, as I do, to be critical of some aspects of social media, if we have this overall non-robust narrative, we are not really looking at the specific things that we could change. I see we have two minutes, so let me talk about something I didn't cover in the book: artificial intelligence. I said, OK, let's just not touch this, it's complicated. But I was finishing the book at the time when there was this big narrative about the existential AI risk for humanity, that there will be AGI and then humanity will go extinct, and so on. So there you have a very big, negative, alarmist narrative, and what I found interesting, in a way, is that it was supported
a lot by the very same people, the big names working in AI, and I was wondering why, since they would seem to have no interest in supporting this narrative. But then I thought about it and realized, OK, but they do, because if we are focused on things that maybe will never happen, or maybe will happen in 50 years, we are not looking at how they are making their money, or where the data they're using comes from. So again, I think these big narratives are, paradoxically, going in a good direction for the companies or the social media platforms. So this is also a risk. I think it's much more productive to focus on realistic things and change what we can actually change. Mhm.
Ricardo Lopes: So let's end on that note, Doctor Acerbi. The book, again, is Tecnopanico. I will be leaving a link to it in the description below. And apart from the book, where can people find you and your work on the internet?
Alberto Acerbi: Yes, I write a Substack in Italian, because I was trying to focus a bit on the Italian audience. It's called "chinqueliaima", which means five links every week, in which I try to collect links on the kinds of topics we have been talking about, and also some cultural evolution things. So that can be interesting. The idea here is also related to the fact that, when you have access to all this information, curation becomes important: having someone who says, OK, just read these five things relevant to the work that you do, trying to offer something curated. That is getting more and more important the more access you have. And then I am on the usual social media.
Ricardo Lopes: Well, as I told you the day before we recorded the interview, I'm going to put your book on my list of my favorite nonfiction books of 2025, and maybe I can influence a few people to learn Italian, because I will also
Alberto Acerbi: have another,
Ricardo Lopes: I will also have another Italian book on the list. This year, two Italian books. So maybe I might influence some people to learn another language.
Alberto Acerbi: that, that would be good. I mean, I, I, I, I translated it. Uh, I have a first draft of an English translation. Uh, HOPEFULLY there will be an English version maybe, maybe next year, but, uh, let's see. People can start to learn Italian and then, and then, yeah, I, I,
Ricardo Lopes: I think that's a good thing.
Alberto Acerbi: OK,
Ricardo Lopes: OK, so Doctor Acerbi, thank you very much for coming on the show again. It's been fascinating to talk with you.
Alberto Acerbi: Thank you very much. Thank you to you. Bye, everybody.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Enlights Learning and Development done differently. Check their website at enlights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters, Perergo Larsson, Jerry Muller, Frederick Sundo, Bernard Seyaz Olaf, Alex, Adam Cassel, Matthew Whittingberrd, Arnaud Wolff, Tim Hollis, Eric Elena, John Connors, Philip Forst Connolly. Then Dmitri Robert Windegerru Inai Zu Mark Nevs, Colin Holbrookfield, Governor, Michel Stormir, Samuel Andrea, Francis Forti Agnun, Svergoo, and Hal Herzognun, Machael Jonathan Labran, John Yardston, and Samuel Curric Hines, Mark Smith, John Ware, Tom Hammel, Sardusran, David Sloan Wilson, Yasilla Dezaraujo Romain Roach, Diego Londono Correa. Yannik Punteran Ruzmani, Charlotte Blis Nicole Barbaro, Adam Hunt, Pavlostazevski, Alekbaka Madison, Gary G. Alman, Semov, Zal Adrian Yei Poltonin, John Barboza, Julian Price, Edward Hall, Edin Bronner, Douglas Fry, Franco Bartolati, Gabriel Pancortezus Suliliski, Scott Zachary Fish, Tim Duffy, Sony Smith, and Wisman. Daniel Friedman, William Buckner, Paul Georg Jarno, Luke Lovai, Georgios Theophanous, Chris Williamson, Peter Wolozin, David Williams, Dio Costa, Anton Ericsson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, Bangalore atheists, Larry D. Lee Jr. Old Eringbon. Esterri, Michael Bailey, then Spurber, Robert Grassy, Zigoren, Jeff McMahon, Jake Zul, Barnabas Raddix, Mark Kempel, Thomas Dovner, Luke Neeson, Chris Story, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brendan, Nicholas Carlson, Ismael Bensleyman. George Ekoriati, Valentine Steinmann, Per Crawley, Kate Van Goler, Alexander Obert, Liam Dunaway, BR, Massoud Ali Mohammadi, Perpendicular, Jannes Hetner, Ursula Guinov, Gregory Hastings, David Pinsov, Sean Nelson, Mike Levin, and Jos Necht. A special thanks to my producers Iar Webb, Jim Frank Lucas Stink, Tom Vanneden, Bernardine Curtis Dixon, Benedict Mueller, Thomas Trumbull, Catherine and Patrick Tobin, John Carlo Montenegro, Al Nick Cortiz, and Nick Golden, and to my executive producers, Matthew Lavender, Sergio Quadrian, Bogdan Kanis, and Rosie. Thank you for all.