RECORDED ON JULY 20th 2023.
Dr. Stephanie Hare is a researcher, broadcaster and author focused on technology, politics and history. Selected for the BBC Expert Women programme and the Foreign Policy Interrupted fellowship, she contributes frequently to radio and television and has published in the Financial Times, The Washington Post, the Guardian/Observer, the Harvard Business Review, and WIRED. Previously she worked at Accenture, Palantir, and Oxford Analytica and held the Alistair Horne Visiting Fellowship at St Antony’s College, Oxford. She earned a PhD and MSc from the London School of Economics and Political Science (LSE) and a BA from the University of Illinois at Urbana-Champaign, including a year at the Université de la Sorbonne (Paris IV). She is the author of Technology Is Not Neutral: A Short Guide to Technology Ethics.
In this episode, we focus on Technology Is Not Neutral. We start by talking about technology ethics, and we discuss arguments for and against technology being neutral. We discuss what a tool is, and whether scientific discoveries are value-free. We talk about design bias, and the example of policing. We discuss the problem with sci-fi, existential risks, and distracting from real threats. We talk about social media, clickbait, misinformation, online privacy, data collection, and regulation. Finally, we discuss digital health tools used during the COVID-19 pandemic, and whether they were worth it.
Time Links:
Intro
Technology ethics
Is technology neutral?
What is a tool?
Are scientific discoveries value-free?
Design bias, and the example of policing
The problem with sci-fi, existential risks, and distracting from real problems
Social media, clickbait, and misinformation
Online privacy, data collection, and regulation
Digital health tools for the COVID-19 pandemic
Follow Dr. Hare’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everybody. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes. And today I'm joined by Dr. Stephanie Hare. She is a researcher, broadcaster and author focused on technology, politics and history. And today we're going to talk about her book Technology Is Not Neutral: A Short Guide to Technology Ethics. So, Dr. Hare, welcome to the show. It's a pleasure to have you on.
Stephanie Hare: Thank you so much for having me, Ricardo.
Ricardo Lopes: Great. So, um, technology ethics. I guess that people who look at this would think, OK, it's just another philosophy thing — why should we care about that? It's just about some abstract questions. But I mean, what is it really about? And what kinds of more practical questions, let's say, does it deal with?
Stephanie Hare: I love your idea of "one more philosophy," as if we're being bombarded by philosophies all the time, and there's just philosophers walking around selling their wares in the street — which I guess is maybe true, we would just call them marketing people now. Uh, no. So I think, look, I think technology and the way that humans have a relationship with technology has existed probably since we, you know, developed opposable thumbs and started making tools. Um, you know, so we would have picked up bones or stones or twigs and started fashioning them, creating fire. And probably from that moment you started having human beings disagreeing about, you know, when is it acceptable to make a fire? What are the rules around fire? Is fire for cooking and keeping us warm and keeping predators away, or is it OK to, like, torch our enemy's village? Or use it, you know, launch it on an arrow and shoot it at people. So I just feel like in some ways it's like nothing new under the sun, as Shakespeare said; it's always been there. So I don't feel like this is a new idea that people are hawking. I think it's one of the oldest things, because it's so fundamental to us. Human beings have been making technology, you know, for a really long time. And we've also been philosophizing for a really long time, because if we think about philosophy, kind of the way it's taught today, it's very abstract. We think about it in universities, you know, philosophy departments writing about topics that most of us probably are never going to read a book about, or an article, or really care about, which is a shame, because we're actually doing philosophy all the time. We're doing it every day. So if you're interested in power and in politics, then welcome to philosophy: you're doing political philosophy. If you're interested in, like, good and bad, right and wrong, where do we draw the line on certain things? Welcome to philosophy: you're doing ethics. If you have really strong views on aesthetics, which could be anything from, you know, design in your house to, like, the sustainability or labor relations that go into the materials that you buy, any product or service that you buy, or even things like user design and user experience — welcome to philosophy, you are doing aesthetics, right? And so on and so forth. So we're doing it all the time, but I think, certainly in the Anglo-Saxon world, we're not really taught it in the way that you often are on the continent, much more explicitly. So I think that was, like, the first bridge I wanted to build with people that I work with, both in my client work but also in my writing: to kind of get everybody confident with the fact that actually you've already been doing this your whole life. So now that we give you the terms, you have, like, a name for what you're doing, and I can take you through examples of how you've been doing it — that's a bit empowering and confidence-building. So now you can start doing it deliberately, because that is a shift. I think there's something really cool about how philosophy exists as a tool set. It's very practical. It's very pragmatic. So if you're facing, oh, I don't know, a problem or an opportunity, you're trying to evaluate whether to do something or not to do something, or just what you think about something — like, what's your point of view on it? Sometimes, particularly today, life is very complicated.
It can be nice to have a tool set that helps you to think through things so that you can do it with a bit of rigor, and then you can talk about it. It gives you, like, a shared menu, if you will, a shared script for talking about it with other people. And it's really nice if you're a problem solver by profession, which doesn't necessarily mean you have to be a technologist or an engineer. You might be a problem solver as a lawyer, as a regulator, or even as, like, a CEO, and you're having to figure out, you know, do we go this way or that way? What problem am I trying to solve here? It can be really nice because it's like a due diligence exercise: you can go through it, and when you're done, you're like, OK, I've done this, I've done this, I've done this, I've done this — and that's really helpful. So I think that's the thing that I find really exciting about technology ethics: it's really practical. Like, when I go in and teach this to people, afterwards they're like, the way our team functions is different now, or the way that we tackle opportunity and risk assessment is different now — and by different, I'm hoping they mean better. Right. So it doesn't have to necessarily be — right now AI ethics is very buzzy and has been for a couple of years. But, like, I don't limit my analysis to AI. I go as broad as you can in terms of all technology, because AI is just an exciting technology, and it is definitely the technology of the day in terms of the media and marketing cycle. But there's so much more we can do with philosophy in terms of the human-tech relationship. So I think it's really fun. I wouldn't have spent my time on it if I found it boring or not useful. On the contrary, once I started really working with this, I was like, this is powerful and it's helpful, and that is great, because if you are trying to tackle particularly complicated problems, you need every tool that you have.
Ricardo Lopes: And I mean, the main question you tackle in the book is: is technology neutral? And of course, just by looking at the title, your position is obvious. But even before we get into your position and go through some examples that really are illustrative of the fact that technology might not be neutral, give us perhaps some examples of arguments from the other side — people who argue that technology is indeed neutral — that you find perhaps the most compelling.
Stephanie Hare: Yeah. So I actually used to think technology was neutral. It was in the writing of the book that I had to change my mind, and I'll give a few examples of why I used to think this. First of all, I came from a business perspective where — and I should declare my nationality interest — I'm American by birth, and then I've had my career in the United Kingdom. So this is very much an Anglo-Saxon, free-market capitalism, hardcore perspective, because I appreciate that's not the case everywhere, for your global audience of this interview. That perspective is very anti-regulation. Regulation is always being portrayed as, like, hindering innovation. It's going to stop innovation, and innovation is really key, particularly in the US, you know, to American superpower competitiveness. So anything that would hinder that instantly becomes political. It doesn't matter if you're a Republican or a Democrat, everybody's like, oh, nobody wants
Ricardo Lopes: to, everything that hinders that is the bloody communists.
Stephanie Hare: Exactly. Like, take your pick. Are we scared of, like, the former Soviet Union? In the eighties it was Japan; now it's China, right? So, like, nobody wants to hinder innovation with pesky regulation. So you have that — that right there shows you, like, it's not neutral. At the end of the innovation cycle, if you will, you build something and you decide if you're going to regulate it or not; that is inherently not neutral. But the way it was being portrayed, to argue against regulation, you would be like, whoa, whoa, whoa — um, you know, the classic example in the United States is "guns don't kill people, people kill people." And that argument has been, if you will, weaponized very successfully in the United States to stop us from doing any real, meaningful gun control. Not just because we have the Second Amendment, which gives Americans the right to bear arms, but when that law was crafted, we didn't have some of the weapons that we have today that you can buy at your local Walmart, often without an ID or background check — it's actually quite staggering. And so the founding fathers were not imagining taking guns and walking into a school and shooting up an entire school, which unfortunately is a very common occurrence now — tragically, all too common. And so I started looking at it, being like, well, on the one hand, it is true: guns do not, on their own, kill someone, because they can't, like, self-pull the trigger; they can't self-shoot, if you will. Although people, I'm sure, are now working on building guns that do exactly that — AI-powered weaponry — but even then there's a human that's ultimately coding that and controlling it. So there's always a human that takes the gun and makes the decision to pull a trigger. But I also was like, yeah, but the muskets that were there in the 18th century are really different from, like, an AR-15 assault rifle. You can kill a lot more people with the latter than with the former. So, like, a gun is not just a gun. Even within the gun analogy, the amount of harm I can do with a musket versus, like, a pistol versus the kind of weaponry that, frankly, you should only ever be seeing held by the military in, like, a theater of war, and which is unfortunately on the street or in people's homes — that is, like, inherently not neutral. There were design choices that went into the making of the gun, the way that the bullets are done. There are, like, cop-killer bullets that can pierce body armor, right? So the people who designed that knew that, and the people who are choosing to sell it know it. So they're not neutral either, because you could decide to be like, we're not selling guns here — and there indeed was a sporting store in the US, I think it was Dick's Sporting Goods, which decided not to. And then it got this massive blowback and it became a national issue, precisely because they were exercising their right not to sell. Right. But that's not neutral either, because now you're, like, removing my right, damn it, to have guns. So this isn't — I don't want to make a sort of pro- or anti-gun statement. I have my own view on that, obviously, of course, as a citizen; I'm sure it's probably obvious from here. But what I mean is, even with something that's like such a little easy example, like guns — easy in comparison to AI, because it's just a physical, you know, tool — even there you start to realize, like, it is not neutral to decide to design bullets that can pierce body armor, right?
There's, like, no other reason that you would do that. There's no other purpose — particularly no good, benign purpose — that you could use that for. So when you start thinking about it in that way, you can actually start looking at pretty much — I'm just looking around my house right now — like, almost any object, and you're like, shit. Somebody has designed every aspect of it, from the raw materials that were pulled out of the ground, to, like, the environmental impact of that, to how much the workers all got paid all along the supply chain, all of it. Whether or not it's designed for the majority of the population, which is right-handed, or, like, do we take left-handed people into account? What about if you're color blind? What if you have, like, dyslexia? Like, every single design consideration is not neutral. And that's just, again, for, like, analog-world, physical objects. Now you take that to the next level and you start getting into code: it's the same principle, but it's also, like, turbocharged, because AI — one of the things that's fascinating about the world we're living in now is that, because we have more data than ever before and more computer processing power than ever before, we can do things faster and at a greater scale than at any time in human history. Whether or not that will hold, by the way, is very contentious. We don't know if we've reached the limits of computer processing power, for instance; we definitely haven't reached the limits of data. So that starts getting really messy. You also look at, like, who is doing the designing, be it for code or a physical object, and those groups are not representative of the whole of our populations, which is why you get things like diversity and equality and inclusion. We call it DEI here. I'm not sure what it is in, um, Portuguese, but you probably have a similar concept
Ricardo Lopes: for it. Yeah, the initials are pretty much the same, the same.
Stephanie Hare: OK, cool. So DEI can in many ways just seem like another version of being politically correct, right, or in the United States, "woke," and it seems like a bad thing. I don't look at it as a bad thing in the political sense. I look at it from a design perspective of being like: if I'm selling something — you know, bring out the capitalism — if I'm selling a service or a tool or a product, I want it to work for the vast majority, ideally for everybody. So I have a challenge there, because I need it to scale, but I also need to be able to personalize and customize depending on your individual needs, or the needs of your company, or the needs of the country that you're based in, right? Because you might have different laws there. How do I focus on that? And that kind of stuff really fascinates me, because I don't understand why it gets lampooned so much in certain branches of the press, and indeed investment communities, as being politicized, because I'm like, truly, that's just good business. But, you know, that's just my view; I would assume, as a good-hearted capitalist, that you would want to design products that, you know, the majority of people can buy, because that's, you know, catching more money. So it's very strange. So I guess that's what I mean when I say, like, technology ethics is so much bigger than just "is something good or bad." Like, that's a really binary way of looking at it. It's very complex and rich. It's like a tapestry: you start pulling on one thread, and then you pull on another and another, and, you know, you come out with this whole complex analysis that you can use, if you wish, to make things better.
Ricardo Lopes: Yeah, I mean, at a certain point there you mentioned regulation, and regulation across different countries. And perhaps later in the interview, when we talk about specific kinds of technologies, we can come back to this, because recently, with the release of Threads, it's interesting that here in the European Union it's not available yet, and it has to do with data privacy, online privacy, data collection, data ownership and all of that. So we can come back to this later. But, I mean, at a certain point you mentioned or alluded to one of the points you make in the book, because you talk about or explore the idea of what a tool is and the difference, for example, between tools that are found and tools that are created. So tell us a little bit about that, because I think it's very helpful in terms of trying to reframe the way we think about these questions.
Stephanie Hare: Yeah. So for me, this was — I wrote a lot of the book during the pandemic, during lockdown. And I would, you know, go for my sort of hour-long walk, sometimes longer than an hour — ooh, the British police will find out now that I was walking for an hour and a half. I would do these really long laps around my local park, and, you know, it was such a weird time, right? It was such a strange time for all of us, and I started to really go into some fairly abstract directions of thinking. And so one of the things was, like, let's think of the most neutral tool possible and then the most, like, non-neutral tool. So, my examples — because, you know, I wanted to, like, draw the map, if you will, or draw a matrix — you know, what's the most neutral and what's the most completely value-laden thing? The hardcore one for me was the atomic bomb: there's only one reason that you would build an atomic bomb and only one reason you're really going to use it, which is to harm, because there's no way of getting around that harm. You know, the radiation damage to any human that comes within the blast radius, and also just, like, other parts of nature for sure, is just non-negotiable; it's going to happen — it's physics. So there's that. But also, the only two times that human beings have used it were in the theater of war, right? It was designed in a war to be used in a war, and we have lived under the shadow of the threat of nuclear war ever since. And I grew up in the Cold War in the US. So, for me as a child, this was, like, the scariest thing that could ever happen to human beings — a nuke, like a rogue-nuke situation, or nukes getting out of control and just decimating entire countries.
Ricardo Lopes: And, very unfortunately, recently we've been living more or less through that kind of fear, and we're still living through
Stephanie Hare: that. Yeah, I mean, that's the thing: everybody freaks out about AI, and I'm like, there's all these nukes, you know — thousands and thousands of them, and lots of countries have them that maybe shouldn't, and it's too late now. Like, we can't put that genie back in the bottle. So that was over here, my most extreme example of a technology that, like, you just wouldn't use for something good. Maybe you would if, like, aliens were coming, or an asteroid was attacking Earth and the only way we could stop it would be if we, like, launched a nuke out into space to, you know, blow up the asteroid. This is how I talk with little kids about tech ethics, and they get it. They're like, yeah, you nuke the asteroid. Maybe not the aliens, though — we would want to talk with them first, perhaps. So. Yeah.
Ricardo Lopes: Yeah, perhaps. Let's not just assume immediately that they would come here to harm us and we would be preemptively killing another life form. I mean, come on, let's just give them a chance.
Stephanie Hare: Exactly. That's my view as well, in case they're listening. But then the other extreme — I was like, well, what's the other extreme? And I got really into looking at nature when I was on my walks. And of course, this led to animals who use tools, because humans are not the only species that makes tools, which is so cool; you can really go down a rabbit hole with this. And I did, I went down that rabbit hole so that you don't have to, and looked at all of the different ways that animals fashion tools. And of course, this starts to matter if you're interested in the history of what differentiates human beings from our other ape and chimpanzee brethren. One of the big ideas was that human beings make tools — that's, like, a defining characteristic of what it means to be human, which is kind of trippy if you think about it. Well, so do crows. So, like, why is that what makes us human? But it just is. And so that was one of the things when Jane Goodall was doing her amazing research about chimpanzees: she wrote to her then supervisor to describe what she was observing, and he wrote back saying either we have to redefine what a tool is or we have to redefine what man is. And I was like, oh, there's something in this. And so it was like, well, what are those animals doing, and what were we doing? So they were doing something along the lines of what I call a found tool. They would find an object in nature — a stone, a branch — and they would fashion it in some way to make a weapon, usually used to hunt for food rather than harm others, trapping food, et cetera, but it showed, like, advanced cognitive abilities. So I was like, OK, they're taking something that exists already, and they're fashioning it in some way, and they're coming up with really interesting uses for it. And there's a number of species that do it, of which we are one. So let's call that, like, the most neutral. Even fire potentially could be viewed as neutral, in the sense that it exists in nature. Like, you can just look over at Canada right now, where you're getting these spontaneous wildfires because of the heat, or when lightning strikes the earth it can spark fires. So that exists in nature. We didn't, like, invent fire; nature invented fire. We just figured out how to harness it and then deliberately create conditions in which fire appears. So I took that thinking and went back to the atomic bomb thinking and was like, OK, what do we mean by the atomic bomb? Because there's a whole process; you don't just start with an atomic bomb — they're incredibly difficult to build, thank God, given the harm that they can do. So I started reading — honestly, it's like bringing back all these weird pandemic memories — but I spent, like, months and months reading about the journey to make that bomb, and the scientists who started out working on it were not doing it to make a weapon, which I think is important, because what you want to do is identify where, from, like, the first discovery to the first detonations over Hiroshima and Nagasaki — where in that line do you cross a line and it stops being neutral? So I was like, OK, the scientists who are just looking to understand what happens when you bombard uranium, that's just, like, pure science, and the information of that, the fact of that, exists in nature whether or not we ever discovered it as humans; it would exist, because, again, it's physics. So when does it start to become a weapon?
And I identified in the book that it becomes a weapon in the mind of a brilliant Hungarian scientist, Leo Szilard, as he was walking down the street here in London. He had attended a lecture, I think in 1938, and he suddenly had this idea of how to do it. And because he was friends with everyone — he's an amazing man; the book Genius in the Shadows, a book about his life, is really worth reading to discover, like, the networking of science and how an idea goes from idea to weapon, or hopefully from idea to vaccine, or idea to, you know, a wonderful cure that will save us all from climate change. So he goes and talks to Einstein about it, and because both of them were Central European and Jewish, which I think is also very important, they recognized the threat of Hitler far earlier than their American counterparts did. And they were like, ah, you could do this with the scientific information that we now have about how to split the atom — you could weaponize it, and we think that Hitler will, so we'd better have a plan in place. And because Einstein was able, due to his incredible status as a Nobel Prize winner, to get an audience with the then US President Franklin Delano Roosevelt, they explained this to him, laid it out, and then FDR greenlit the deal, if you will, to create what became the Manhattan Project, which ended up involving over a thousand scientists and, you know, a massive budget, and became a military project, and scientists from all over the world worked on it. So you're like, was it FDR that did it, because he gave, like, the political nous, if you will, and the power and approval and money? Was it Einstein, who, like, formed the bridge? Because Leo Szilard could have had the idea — we all have great ideas, but, like, that doesn't necessarily lead to them being executed. So there's, like, a chain: it was Szilard, Einstein, FDR. FDR then tasked Vannevar Bush, who was, like, head of all of America's totally cool World War Two science stuff — another fascinating person to read about — and Robert Oppenheimer, now the subject of a major movie, who, you know, literally ran the Manhattan Project. All of it was absolutely fascinating for me; like, I had to build out the map of who was working on it. You know, you could ultimately be like, no, all of these things could have happened, and it actually comes down to the pilots who flew the planes and dropped the two bombs, because they were the last kind of line of defense who could have, I don't know, made an ethical protest and done what many people wanted by that point, which was a demonstration of the bomb, but not on a human population. You could have dropped it out in the ocean to let people see what it was capable of doing, and hopefully then that would have made Japan surrender. That was, like, an actual idea that was being put forward. But by that point the new president, Harry Truman, was like, no, we have to do this, because he was given statistics saying if we don't drop this bomb and the fighting continues for another 18 months, which were the predictions, however many people will die — and it was in the millions. So he was weighing up his ethical calculus, right? And it's easy for all of us to judge these things now, knowing what we know now. But back then they didn't know, and they had to make a call, and they'd already been fighting the war for a really long time. And, you know, we can argue about that, and people do argue about that until the cows come home. So for me, I didn't want to argue about it.
I wanted to sort of forensically map it, because I can see it happening again. I can see it happening in all sorts of different things. And, like, I don't mean in the sense of it leading to death. I mean I can see it happening again in that we are in this incredible moment of flourishing, scientifically and technologically. This is an amazing time. Like, people will write about this, I think, centuries from now — that this was a real turning-point moment in human science and technology evolution. And so what I'm hoping to do — many people are working on this, I'm just, like, one tiny little person — but what I'm hoping to do as a historian in training is, like, write the history of the future. Which is like, OK, so if we've got AI now, what can we learn from the atomic bomb exercise that we just went through, you and I, here today, to understand who's working, for instance, on AI? Although it can also be genetic engineering, it could be any technology you put in — the whole point is that it should work for anything. Who's working on it? What are the ethical decisions that are in play? Who has the power to actually get things done, or, more importantly, to stop something, right? All of that stuff — and also, who doesn't have the power, who's just on the receiving end of this? Because, like, you know, the atomic crisis, if you will, that resulted from 1945 I think helped a lot of people to understand that this is something that affects all of humanity, but not all of humanity was involved in the design and deployment considerations. And even with nuclear arms control, even today, certain countries have a greater voice in that than others. And that matters, because you hear people talking about AI governance and they're like, we should have an International Atomic Energy Agency, like exists today in Vienna to look at nuclear technology — we should have something like that for AI. And it's like, well, OK, let's just check that interesting idea. Let's examine it: is the IAEA actually a good example, a template, a model for us to use to govern AI? And, like, we need to interview the people who are working on it and also their critics and come up with that view; I've seen some people really dismiss it. You also saw a bunch of people saying we should have a CERN for AI research — CERN being the particle physics laboratory that's based on the French-Swiss border, with physicists, again, from around the world. And it's public science, if you will, public-interest science. So you would want to talk with CERN and be like, OK, we have an example, we've run the CERN experiment now for, you know, 50, 60 years. Do you think that's actually relevant and appropriate? If it isn't, could we do better with an AI research lab? What would that look like, to make it fair, equitable, representative, more democratic? How do we factor in things like climate and biodiversity, which we didn't necessarily in the early atomic discussions but which actually really matter? That sort of thing.
So I think there's a lot we can learn from the history of this: found technology — something that's just out there — versus something that you very deliberately set out to make, like an atomic bomb. And most of us, when we're making technology or using it or investing in it or buying it, are never going to have to face the extremes of an atomic weapon, but we'll probably be somewhere between "we just found something and we, like, fashioned it" and "we're deliberately creating something" — probably more middle of the road between those two. So that's a really long example, but I think it's quite important, because you can't really rush through talking about nuclear technology. And it matters, because now we're having this big conversation globally about how to regulate AI, and the thing that people keep coming back to again and again and again is the nuclear war threat. They also cite the pandemic threat, given that we've all just lived through that. I think we all had a very up-front, close and personal stress test of pandemic governance and health governance and bio-laboratory governance, right? Like, was this a virus that was manufactured in a lab? Was it something that came from nature? Like, this is something that's still being discussed; scientists are still evolving their views on that. And even if it is something that was found in nature, it still raises the issue of these labs, right? And, like, the ethics of bioengineering — that all comes from World War Two; all of that thinking, medical ethics, all of it really got an update from World War Two. So if you want to think about the technologies of the future, knowing your Second World War science and technology studies — history from the Second World War onwards — I think is essential, right? You know, you can't be talking about AI today in the abstract; you have to know we've already seen scientists taking ethical stances and doctors taking ethical stances. Some people have said that AI should be regulated more like drugs are by the food and drug agency in the United States: you can't just roll out a drug on the American population, it has to go through extremely rigorous testing, right? And God forbid — when we're talking about global pharmaceutical manufacturing or global research, or research on the human body or embryos and stem cell tissue, right, there are entire governance structures set up for this that come, weirdly, from a World War Two starting framework, because that's when we saw not just the potentially positive but also controversial narrative around atomic weapons, but also a lot of the Nazi medical experiments and bio-research experiments, which were absolutely horrific, right? So all of that thinking is there. And if you don't know that thinking, you're doing the classic thing that tech people do, which is like, oh, what if we invented this thing? And you're like, you've invented something that literally already exists and that people have been working on for decades — like, do your literature review before you start talking, right? So classic, so classic.
Ricardo Lopes: I mean, let me just ask you one question about all of what you've just said. So, you've talked mostly about nuclear weapons, and you also have the discovery of nuclear fission, for example. And I was wondering if you think that perhaps certain scientific discoveries themselves might not be neutral. Because, I mean, we could just say — and I'm not pointing fingers at anyone that studies or studied nuclear fission; I mean, as far as I'm concerned, it's perfectly fine to understand how atoms work and all of that, and the same for how genes work, for example. But at the same time, isn't it the case that, first of all, the people who are making the scientific discoveries themselves are people, and they have their own motivations, and their work is usually paid for by governments or private institutions? And that's also not random, or usually not arbitrary. I mean, if they pay for something, they have particular goals in mind: they want certain developments instead of others, they want certain things to work, they want certain knowledge instead of some other knowledge. So could you think or say that certain scientific discoveries are themselves not neutral, even though they are just, at first sight, producing useful knowledge?
Stephanie Hare: Well, I think it's probably important to differentiate between the discovery and what leads to it. So, back in the day when I thought that technology was neutral, before I wrote the book and changed my mind, I was coming from the sort of technology, digital perspective. And I was like, this is math, this is mathematics — and you hear that a lot. Like, Garry Kasparov, the chess grandmaster who wrote a book on AI called Deep Thinking, was on Twitter a few years ago, and I cited him in the book, saying ethical AI is like ethical electricity — like, electricity is just electricity. He was kind of like, what is this argument? And I understand what he means, and you see it a lot. I follow Twitter obsessively, and I'm constantly watching people argue that code is neutral, math is neutral, and all these people are trying to talk about values, but math has no values — it's, like, the universal language. And I understand that: just like a phenomenon that exists in chemistry or a phenomenon that exists in physics — again, splitting the atom, like, doesn't care what my politics are or who writes my paycheck, right? It just exists; it's knowledge. So I think that part is neutral — something that just exists is neutral in the natural, physical world. That doesn't mean that the motivations for seeking that knowledge out are irrelevant, because I think they are relevant. Absolutely — the funding model. I mean, we're seeing this right now with AI: most people can't afford to build large language models; there's this whole thing of, like, elite capture, and whether or not we're going to have, like, a new monopoly of these things, which is why we're getting a lot of pushback and some people are wanting to release them as open source now. What data goes into the training sets, all of this stuff, the intellectual property thereof — can these things be audited or not? And should they be, particularly if they're involved in public procurement of any kind, or, like, public-service use of any kind? So I think we're sophisticated enough as a species that we can have this conversation and go: if you are doing something, if you're seeking out knowledge and doing research on behalf of shareholders for a company, or a private company, or the government — governments are completely value-laden. So of course it matters if you're doing it for the US government versus the Chinese government versus, let's pick, Switzerland, which claims to be neutral — an interesting country to tackle on that claim. Of course these things matter. So I feel like we can separate out the neutrality of the fact — like, oh wow, if you cut off my finger, this is what happens; a finger being removed from its hand produces X number of physical responses, and we can observe them and describe them forensically and repeat that experiment and hopefully stop cutting people's fingers off, because we've answered those questions. But the why of doing that, the who of doing that, the for-what-purpose are we going to use that knowledge — is it, you know, maybe we're chopping off fingers for a really good reason, maybe we're doing it terribly just to be awful? Like, all of those questions are relevant and matter, and we can use them to do science potentially in a more ethical way. And, like, I know that's a really problematic statement for a lot of people. And it should be — I'm delighted that it's a problematic statement, because it means that our thinking is, like, switched on and we're all going to go:
what does it mean to do more ethical science? What does it mean to do ethical technology or ethical AI? And we see that with AI, right? I cannot think of another technology where so many people have been in such a hurry to claim that it's trustworthy, responsible, reliable, ethical. You know, we don't usually feel the need to do that. And it's because — I think people understand, we've matured enough as a species to understand — that AI is inherently not neutral, that we have to rush out to brand it as trustworthy or responsible, because people are terrified. And there's a reason for that: there are some really scary uses you could put this technology to. So you need to reassure people up front that, you know, we're just using it for, I don't know, whatever it is people want to be using it for — it seems to me, again, largely for advertising and marketing, but there's a lot more; you know, drug discovery would be a great one. But even then, we could discover some really trippy stuff with that, and it's gonna be, you know, what do we do with it? What do we do with that knowledge? You can't unknow something once you know it. Mhm.
Ricardo Lopes: Yeah. But, I mean, explain a little bit better why, in certain instances, technology — AI systems, for example — can be biased. I mean, they can have, for example, sex or racial biases, or biases of any other kind. Because there are people, particularly when it comes, for example, to how certain police departments in the US use these systems to fight against crime, who just rely on the data those systems generate, and they say, oh, you're just feeding them data and they are processing the data, and then they help the police — since they have limited resources — when it comes to deciding which particular areas they should police more or less. So it's not biased at all; it's just a computer processing data. But, I mean, that's not really the case.
Stephanie Hare: Oh, no. I mean, it's totally biased. It is completely biased, and, like, that's the tricky thing. So you're right to highlight things like facial recognition technology, because that's going to be regulated under the EU AI Act, and this whole question of, can you just have real-time identity surveillance, right? So people walking around like you see in the movies: you and I would be walking down the street, and a little square would appear above our heads, and it would be like, Stephanie Hare and, you know, Ricardo Lopes are walking down the street going into a coffee shop, and they would have, like, our height and our weight and our eye color, but potentially more — you know, how we voted in our last election, whether or not one of us has credit cards, and, you know, pick your fantasy information profile about yourself, or your nightmare: all about your politics, you know, our sexuality, the last pornography that was looked at yesterday, right? Because all that stuff is being tracked, all of it's being tracked. So yes, that exists, it's there. The EU is looking to take a position not to do that. We use it here in the United Kingdom, where I'm talking to you from; the London Metropolitan Police loves facial recognition technology, for reasons that are really bewildering, because it doesn't seem to improve their performance at all, but they insist on using it, despite Parliament asking them repeatedly not to. But that's what happens when you don't pass laws making something, you know, banned, or controlling it: you just go, please don't do it, and the police just ignore it. In the US, we've taken a really different approach, which is that certain cities or towns or states have put into law rules about using it or not using it, and who can use it — is it the cops, or do you also need to regulate private-sector use? Because one thing you could do is ban the police from using it but allow the private sector to carry on — because, remember, we must not upset the private sector in the US. But the problem is the police can then just work around that and buy the data, or sometimes just get it for free — but, you know, why give it away for free when someone can make money off of it? You can buy the data from a third-party data broker legally. So they're still getting the data, and we see this with Amazon Ring, which is a great example of this: people putting all those doorbells up, needing to identify everybody, have no idea that that footage can actually just be taken by the cops with nothing stopping it. So, like, well done for participating in a surveillance society in your neighborhood. It's just, you know, it's just happening. So then the question sounds like, well, so what? Nothing to hide, nothing to fear, right? Like, I'm not doing anything wrong, I don't care if I'm being observed. I hear this all the time, and it's like, OK, cool. But the problem is that this technology tends to perform best on men with lighter skin and worst on women with darker skin. So we've had a number of wrongful arrests in the United States in which the police have used facial recognition technology to make arrests, like, as part of their arrest process, and they have misidentified people who were innocent. So these people get subjected to really traumatic arrests, often in front of their family; they might be held in custody for, you know, 36, 92 hours, something horrific. And anybody who thinks, oh, that's not so bad, has obviously never been in a US police custody situation.
It's not something you would do voluntarily. No disrespect to law enforcement; it's just not nice to be treated as a suspected criminal, particularly if you're innocent. And then there's also the problem that there's no legal requirement in the United States — when you've been arrested and brought to trial, you don't have to be told, there's no requirement protecting you, that facial recognition was used in your arrest, which is really problematic for a technology that doesn't work very well. So all of that is really bad, but then people will go, OK, cool, so what we have to do to fix all of those, you know, acknowledged risks — it's really bad — let's just pump it full of more data. Refine, refine, refine, and get to a tool that works. It's never going to work 100% of the time, but, like, with a high degree of accuracy, 99% accuracy. So you're like, all right, if you solve the accuracy problem, have you solved the problem of facial recognition? I personally have argued that you have not. Why? Because now you've got your cameras, which apparently are pretty accurate, everywhere — because, and we've seen this, the US is a great case in point, it isn't just the cops who want it. People are like, well, I don't feel very confident — we've got high gun crime in the United States, or in schools, or fill in the blank, whatever your personal security concerns are — I'm going to get a camera and put it everywhere. So now imagine a United States in which private individuals have cameras in their homes, every business has cameras, cities get in on the act because, like, you know, we'll do this to fight crime — and you now have it where people are being surveilled nonstop. You would actually have to almost flip it and say, where are they not being surveilled, right? And where is that data not being kept, recorded, traded? You know, who owns the cameras? So here in the UK, we have a problem that a lot of the CCTV cameras that we have all over this country are, very awkwardly, sourced from China. And so a big campaign has gone on to rip them out, because this is now finally being deemed to be a security risk. And then the question becomes, OK, so is it the fact that the camera is made by a Chinese company — is that the security risk — or is the act of having these cameras up at all a security risk? Like, would you feel more comfortable if the company making it was a British company, right? And I think a lot of people, to be completely frank, would be fine with that. For them, the issue is that it's a known hostile power; I mean, China has been identified as one of the great national security risks to the UK. So you get into these situations where you can just tell nobody thought this through — because, and you always have to do this, and again, this is like my historian training, you have to always imagine whatever system you're building in your peaceful, liberal, democratic, happy country being weaponized if somebody gets elected who is awful. Now, the classic example would be, like, what if Hitler had all of this tech, right? But, you know, let's not give that man any more time and attention than he's already had. You don't have to go back to World War Two to look at examples of countries that have become increasingly — how can we say this — you know, cracking down on women's reproductive rights; that's happening in the United States right now, right? Like, we don't have to go to World War Two.
We can literally just open the newspaper today and see that women who need to get abortions, or to go to an abortion clinic or a reproductive clinic simply to get health advice, are now being criminalized in several US states. As are the people that help them: their friends, their mothers, whatever. And that was something that had always been a threat, but I think most people, if you had said this before Roe versus Wade was challenged successfully — if you had told them this could be a risk — they would have looked at you like you were some crazy feminist. Well, here you are, crazy feminist: it's happened. And so you've got your phone data, all of your internet data, right? So Facebook has had problems, Meta has had problems, because they've been handing over data to cops to arrest women who need to go and get an abortion. And if you had told somebody that when Facebook came on the scene in the mid-2000s, they would not have believed you. I'm sure that Mark Zuckerberg and his design crew never thought that that would be a risk, because no one imagined the US political context changing in that way. You can imagine it with the LGBTQIA+ community, which, again, is constantly under threat, not just in the US but, like, everywhere, in terms of their right to get married, their right to adopt, just their right to not be, like, harassed and persecuted on the street. You are now going to have all of this surveillance equipment that potentially can be put in the hands of a democratically elected government that is bad, right? And bad in the sense of, like, would prosecute them, persecute them. And that's, you know, that's my view: that's bad. I know there are people that would disagree with that, and I don't care.
Ricardo Lopes: I bet, I bet that DeSantis in Florida would love that. Yeah.
Stephanie Hare: No, they absolutely would love it. They would love it. So, like, you just have to design with that in mind, and that's where, like, sci-fi training and, like, a very good, healthy background in literature and cinema can be really helpful, because it helps you to imagine scenarios that will seem crazy at the time and be like, yeah, but what if? How many steps away are we? And again, like, that's why one of the chapters in my book is "Where do you draw the line?" I'm like, how many steps do you have before your liberal democracy turns into, like, fascism, or simply authoritarianism, or, like, a Christian theocracy or whatever — it doesn't have to be Christian, it could be any religion. But, like, that matters, and those rights that we have in so many liberal democracies are so precious, and they have been so fought for and hard-won, and they're also so easily taken away. And that's where you must know your history, because if you don't know history, someone will say, yeah, but, you know, you're just worrying about nothing, and you need to be able to go, no, this has already happened, or it happened in this way, and actually here, in this case, it would only take two or three steps for us to enter that world. So how do I design for that? I mean, a really good strategist and a really good technology designer or technology ethicist can't simply rely on a technological background. You must have the social sciences and, you know, liberal arts and humanities training as well. I think it's one of the toughest jobs to do, actually. And that's no disrespect to engineering colleagues or more sort of classically trained STEM colleagues; their work is also extremely challenging and difficult. But there's something especially hard about the human component, right? People don't necessarily use things the way they were intended, or they modify them, because, again, it's just in our nature. So you have to always design with the worst in mind. And that means you have to have incredible knowledge and a really good imagination, and, like, a humility to think about your own blind spots, right? Which is why you don't want monocultural teams. You have to have people with really different experiences, and those different views have to be really respected, because those people will help you see risk that you would just miss. You won't hear it, you won't see it. You need somebody else who's like, I know what this is like, because I am of this community, or I'm from this country and I saw this, or my country lived through this. That's a tough one.
Ricardo Lopes: But you mentioned sci-fi there. And I mean, I understand all of your arguments, but I have to say that, at least to some extent — and unless I'm misunderstanding what you're saying there about sci-fi — I might have some issue with people looking into or relying on, I don't know, something that is portrayed in a sci-fi movie or a sci-fi book when it comes to worrying about the potential dangers of technology, AI, or something like that. Because, I mean, for example, if you think that the things you should be worried about are, I don't know, Terminator-like scenarios or Westworld-like scenarios, where technology suddenly becomes sentient and develops its own motives and goals and all of that and suddenly wants to wipe out humanity — or perhaps, the next day, we have to be worried about mistreating AI systems because they might be sentient and all of that — I mean, I just worry that if people take those kinds of scenarios too seriously, that might help distract them from the actual, more realistic issues that might derive from technology, you know? I mean, do you understand what I'm worried about here? I
Stephanie Hare: completely understand it. And that's been a concern that a lot of people have raised in the past couple of months, ever since people started writing those letters saying we should have a pause in AI research and warning of the different risks that it presents on an existential, societal level. And that group of people who are worried about existential risks of AI have been accused of worrying about something that may never happen, because it's not really AI that they're worried about; they're worried about artificial general intelligence, which is a different beast in the sense that — again, it's hypothetical, it has not happened and it may never happen — but in theory, it's the moment when smart machines become smarter than human beings. I love how I said that and the dog just — that's hilarious, the robot dog. Smart machines — when these machines become more intelligent than us as a species, and, like, what would they do to us, is the big fear. And there is a concern that that's distracting from what I guess you could call now-risks, risks you have now, or
Ricardo Lopes: even near future
Stephanie Hare: Quite. You know, it's either it's happening now — and here is, you know, the risk of discrimination in, like, how AI is used in healthcare, how AI is used in the medical or, sorry, the legal profession — or risks of misinformation and disinformation in terms of elections and journalism and, like, public confidence in the integrity of how the citizen-state relationship works. That's
Ricardo Lopes: all happening or selling your online data, for example,
Stephanie Hare: selling your online data, gathering your data. I mean, like, literally a month doesn't go by when Meta isn't being fined for some sort of data violation, and they just, you know, price it in, and everybody carries on. It's incredible to me, but they do. So that's all happening now, and then you're right to identify, like, near-term risks, which is like, we're not quite there yet as a risk.
Ricardo Lopes: Perhaps a good example of that would be certain genetic engineering technologies. I mean, we're not really there yet, but it's probably in the near future. So
Stephanie Hare: yeah, you could come up with, I'm sure, a number of different categories to kind of, like, you know, make columns, if you will. So: definitely need to worry about this now, because it's already happening; likely to happen soon, but we're not there yet, and here are the conditions that we would have to fulfill to get there; versus may never happen, but if it did — call them low-probability, high-impact events — e.g., AGI takes over, and what will they do to us, turn us into slaves or something? So, you know, is that distracting, is your question? I think — I don't know. First of all, I don't think it's, like, helpful to tell people not to think about something, because they're going to think about it. It's like, you know, that experiment of "don't think of the white polar bear" — as I'm telling you to do this, your brain is like, all I can think of is a white polar bear. So I think, first of all — I'm not a cognitive psychologist, but my guess would be — the easiest thing to do is actually to name it. Like, do what we just did: be like, there are these different types of risks. The one that the media really loves to go for is the one that's actually the least likely. Why do they like to go for it? Because it's great storytelling. We've literally seen this movie before. So I talk with clients a lot, and I've got a presentation that I've been using the past couple of months that has really, like, driven people wild, in a good way, that is looking at cinema, and we talk about the different ways that AI appears in cinema over time. And it's so effective, because you can't have a conversation about AI without naming Terminator — you just can't. So I make it the first slide. I'm like, let's name it, let's dismantle it and deconstruct it like a bomb — you know, take it apart and neutralize it, if you will — and then talk about why it's so resonant. Everybody has a really good laugh and kind of, like, looks around, like, you know, I'm not the only one who thinks about Terminator. No, we all think about Terminator. Why is that? And then you move on: you would maybe talk about The Matrix, you might talk about Blade Runner, you can talk about Westworld, you could talk about Black Mirror, right? So you go through all the different things that are in popular culture, and you also look at the imagery, right? So it's not just the story, it's the picture. So, like, the Terminator: what's quite terrifying about the Terminator is he's both human — although he's like a superman, it's Arnold Schwarzenegger — but he's also, when he, like, rips off all of his fleshy bits, the designer in the movie very cleverly made it look like a human technological skeleton. So it's like a metal head; it's literally a metal skull, and it has teeth, like it has actual human teeth, and you're like, why does it need human teeth? It's a robot; it does not need them. So what is that about? And the reason is because it's terrifying, right? It's really scary. If you saw that when you were a kid, you were, like, unable to not ever see it again. So you have to think about why that is. Or, like, the image of, um, a human hand kind of going like that to another human hand — and it's riffing off of Michelangelo's Sistine Chapel, right, of God touching Adam and giving the spark of life.
And you'll see that image a lot when we talk about AI, because it's a really useful visual shorthand for us: God, in the Judeo-Christian sense, with that picture, created humans. And so then they'll riff on it, and you'll see another image of a human hand touching a robot finger, right? In this case humans have become God, and the robot is our creation. And that taps into one of the best science fiction stories of all time, in film or in books, which is of course Mary Shelley's Frankenstein: this idea that you can create something and then it goes rogue. It's a fascinating piece of literature that I think captured something in humans, because we create things all the time, and there's this sort of primal fear of what happens when our creations come back and kill us, which probably, in a really demented way, is actually about parents and their children. But I'm talking subconscious stuff, right? Stuff that works across all societies: that fear of your creation getting out of control, because that's a narrative that we know, we recognize it across time. So I think it's a wasted amount of energy to tell people, don't think about these things, it's distracting. I think it's actually more effective to tackle it right up front. Name it, have a chat together, build everybody's critical thinking, and then look at the images on the news. Whenever you talk about AI... I do a lot of work with the media, and when I go on television it will be a split screen: me talking about AI with a bunch of books in the background, hopefully saying intelligent things, and then the images that they put on the other side of the screen to illustrate AI are like on drugs, they're like hallucinatory acid trips. It'll be like a light sort of floating, you know, like when people say they almost die and have a near-death experience and they're in a white tunnel going towards the light; they'll show something like that. Then they'll show a video game graphic of a human head from the neck up that's completely translucent, you can see right through it, and inside is a human brain glowing orange, rotating around, floating on a magic carpet, like a video game version of a magic carpet. Why? I'm assuming it's because they're like, oh, if we're talking about artificial intelligence, we have to show what we think of as intelligence, and you think of a brain. So we'll show the viewers a brain, but a sci-fi version of a brain, not an actual brain, because an actual brain is a bit gross, right? It's the biological wet thing. So they show you an abstraction of a brain instead, and then they might show the finger image, right? They riff on weird stuff. Or you'll see lots of zeros and ones raining down, like in The Matrix. And there's a reason for that; it's not an accident. These things are highly iconic images from the films and shows that we've all seen, so they become a visual shorthand. So I just think you can't ignore this stuff. I think you actually have to go right at it, straight at it, discuss it, and then, if you want, park it and go, let's actually look at the stuff that is perhaps less visually sexy, right?
Less visually interesting to show, which is going to be predictive policing algorithms, discrimination in facial recognition technology, banks using AI to decide if you can get a mortgage or not, right? That is less sci-fi sexy, it's less scary, but it's actually most people's experience of AI. So that's fascinating. Most people don't realize how AI is already being used on them; I think they would be appalled if they did. So that's, I think, the media's challenge, and the challenge of writers and analysts and people trying to educate the public: you're going to need a visual iconography, if you will, a visual language, and you're going to need a narrative language for storytelling that makes those real risks, the risks that are here right now, as compelling as the ones that are more hypothetical but that we've all been seeing in shows and movies for decades now. Right? So again, so many things with technology are actually about culture; they're about the human relationship with technology. So getting people involved and caring about something like AI, when, you know, we're in a cost of living crisis here in the UK, and good luck going out onto the streets of Brazil, I guess, or Portugal, and
Ricardo Lopes: Getting people to care, I guess, when across the entire world we're living through a cost of living crisis.
Stephanie Hare: Yeah. Right. So if you're like, you know, do you want to have this chat? People are like, no, because I'm too busy dealing with massive inflation, or very high interest rates, or unemployment, or climate change. You know, half of Europe, as we were just discussing, is under a massive heat dome and sweltering away. So getting them to think about AI is tricky. You have to help them, you have to make that really easy for them to do. And that's why I really enjoy this kind of work, because I've noticed the effect it has on people when you take them through some of the concepts that we've just been discussing: they're like, oh my God, I see this everywhere now. I see those images everywhere. I see the media loves to go to some stories and not to others. Why is that? You have to look at their incentive structure. What do they get measured on? Clicks, viewing figures, et cetera. So the scarier they can make the story, the better. That's why any time Elon Musk does anything, the media loves it; he's there in some sort of symbiotic relationship, right? He says something outrageous, they cover it and go, isn't it outrageous, and then he's like, I hate the press. You know, the world's weirdest dysfunctional relationship ever. But people who are doing really important work on AI are not getting that kind of media coverage, right? And they maybe don't talk the way that Elon Musk talks; they won't give examples or say things that are really outrageous to get attention. If you're a really measured, considered scientist or researcher, you're not going to say something that's very clickbait.
Ricardo Lopes: Yeah. And by the way, since you mentioned that, let's talk a little bit more about social media, because it's not only the misinformation issue but also these discussions surrounding the supposed neutrality of technology. I mean, if you just look at social media, it's plainly obvious that there's no neutrality there at all, because even what you get mostly exposed to is just what they decide to boost to get more views, clicks, likes, whatever, and usually it's the most outrageous stuff out there, right?
Stephanie Hare: And lies. I mean, there's fascinating research about how lies spread much faster than facts on the internet, and now we're in this world where so much of it can be faked, not just imagery and video footage but sound, audio. So you're going to have this thing where people are going to find it very, very difficult to know what is real and not real. I don't know, I sometimes think that we may be approaching, if not the end of social media, then definitely a new phase. Do people really want to go online and see their friends talking about holidays and babies and lunch, which was a lot of how social media was from the mid-two-thousands onward? Now it seems to be kind of invaded by content creators, you know, influencers and the like. And then if you start feeling like, oh my God, if I go on, it's actually manipulating me, that's not good either. And there's all the mental health research showing that lots of people who spend time on social media end up feeling really depressed or anxious, and it can feel really toxic for many, many people just to go on. It can be awful. I just wonder, I'm thinking aloud here, but I do wonder if there's just going to be a bunch of people for whom this starts to feel really irrelevant. It's kind of like, I don't know, the way that television... I just came back from the US and I'm always stunned by American television now, the amount of advertising that you are just bombarded with. If you're not used to it, if you are in a country where you don't have a lot of advertising on your TV, you just kind of look at your American family and friends and you're like, you're just being brainwashed with all of this all the time. And that's how social media feels. My Twitter feed, since Elon Musk took over, it's just full of...
Ricardo Lopes: No, but since you talk about that, about American television, let me just tell you that I'm a wrestling fan, and so I watch wrestling mostly from the US, of course, because it's the biggest place for wrestling in the world. And I really notice that if you watched wrestling, like, 20 years ago, you would have the matches and then a few breaks with advertisements and all of that. But now you get split screens where there's a tiny window with the match and, next to it, the advertisement. And I'm like, oh man, I want to watch the match, I don't care about pizzas and burgers and whatever. Let me change the channel.
Stephanie Hare: It's like when I go to the cinema: I'm just there to see the movie and, ideally, the previews, and I feel like I pay quite a lot to go to the cinema and then you have to sit through, you know, 20 minutes of adverts. But then you work it out, and so you just show up later, right? There's always ways of hacking it; there's probably some way of hacking your wrestling channel to only see the wrestling without the ads. But I guess I'm just saying that I feel like these things evolve, and social media for a while clearly had some sort of utility, it helped people feel connected, clearly. But then it started to have, what would you call it if it's not utility? I was going to say negative utility. But, you know, it has a cost to it, and I guess you're constantly weighing up in life: is the pain worth the pleasure, right? Is it worth it? And I just think for a lot of people... I mean, I've heard constantly, is this the end of Twitter? And I'm very mixed, because I use Twitter for my work quite a lot. But part of me was like, maybe the best thing that could happen to me is if Twitter, like, failed and died, because other than LinkedIn I'm not on any other social media channel. I managed to break all of those addictions and habits from my youth. Twitter is the last holdout, and in a way it would do me a huge favor if it just wasn't even an option, which is a really weird feeling to have. On the one hand, I miss the old days of Twitter because I had a lot of fun, I made amazing contacts and friends on it, and it was really helpful to me for my work. But I do feel that it's just not a very nice place. It's like... I was in Frankfurt Airport a couple of weeks ago, and they have an old-fashioned smoking lounge in the airport, and it's like a glass cage for smokers. And if you're a non-smoker, which I am, you're walking by and you see all these people in what looks basically like a cancer room, all smoking, and you just look at them and you're like, my God, it's expensive, it's dangerous, it's stinky. You're going to come out of there absolutely reeking.
Ricardo Lopes: It's
Stephanie Hare: But then the compassion kicks in, because it's one of the hardest addictions to kick. But rarely do you see it that way. And I kind of wonder if we're getting to the point where some people can see themselves in the social media equivalent of the smoking lounge in Frankfurt Airport and be like, why are you doing this to yourself? If these things make you feel so bad, why do you go on them? If you feel the news is unreliable, or you just get shouted at by crazy people you've never met, why are you signing up for an emotional beating? There are other ways to get your news that are probably more reliable and that don't involve that. So all of us, I think, are going to have, and probably are already having, a sort of reckoning with a new relationship to these platforms. And I think the real challenge for the social media companies is if you just got a mass movement of people who were like, you know what, this isn't worth it anymore. Yeah, that's hard.
Ricardo Lopes: But by the way, related to that, about collecting data online and selling people's data: there are people out there who are perhaps still not sold on the idea that they should care about this, because they're just like, oh, come on, I go on the internet and I go on Facebook and just see some of the news and share a few silly things, stuff related to my vacation or my kids, and perhaps I watch some vanilla porn and go on Amazon and buy some stuff. I mean, what's the big deal with them collecting that data and then selling it to other people? I'm not doing anything bad, and I'm not overwhelmed on the internet with ads that I don't really care about, because they're also catering to the things that I tend to like. So what's the big deal there?
Stephanie Hare: So there's so many ways we could tackle that. I guess one that I would start out with is: can they imagine, can they literally conceive of, a world in which they aren't having all of their data gathered? Because that's an option, right? That's a design choice that we've all made. We could have a world in which you can't track me from site to site to site, or in which third-party data brokers are just not allowed to exist, much less traffic in our data for profit. And it's true, I struggle with this, because I think most people have just priced in that they're going to have their data involved in a hack or leak at some point. And what's the worst that happens to them? Maybe a little bit of casual identity theft, perhaps a bit of fraud, which hopefully their banks will cover, right? So if you look at where the incentives are to care about this, I don't know how much of it is on the individual. Banks care about it because they want to stop fraud. Governments care about it because they also need to stop identity fraud for all sorts of security reasons. But do individuals care? And I'm with you: I think the example that you just gave, of the person who's going through the journey of sharing their data, is kind of a personal preference at this point. Some people really care about it, other people don't. I think you could make it more of a consumer protection and competition issue: I don't even really have a choice, so I might care about it, but the regulators aren't doing a good enough job, because there's no way for me to have options to create an online experience for myself, given that I have to be online, that doesn't involve this. But there's also the whole question of how we've allowed everything to work "for free", when it isn't really for free, because your data is the product that's being trafficked and traded for a profit. If you remove that, do I have to pay to compensate for it? So if I want to have ad-free listening on my favorite podcast, that option exists, but I have to join the podcast and pay a subscription, because, fair enough, the podcast creators have bills to pay as well, and they're like, listen, we either get paid from ads or subscriptions, you decide. And that's what was interesting with Twitter: I think Elon Musk tried to go down that path by getting people to pay £8 a month to be verified, because he was saying he wanted to do it that way, which was slightly flawed because Twitter's revenue model was largely advertising-based, as it is for most of the platforms, and the subscription model doesn't really seem to work with any of those. And it's also really difficult to get people to pay for something they've had for free for a really long time. Howls of protest. People don't even want to pay for good journalism. You'll constantly see people bitching online going, this article is behind a paywall, not understanding, I guess, the cost of good journalism, that reporters have to be paid, editors have to be paid. They just want it for free. So people always want stuff for free and they won't pay for it. So I think in that case, for the majority of people like that, they won't care. And to be honest, I'm not even sure that they should care, which is a controversial statement to make, because they have almost no power to get it fixed.
I think the people who do have power and who do care, who have an incentive to care, are the big companies looking to bust fraud, and they lobby like mad. So they could actually lobby for a safer internet if they wanted to. Instead, what they're doing is just upping the amount of anti-fraud protections that you have to use when you're doing online banking. For instance, you'll buy something online and then you'll get a code to your phone from your bank checking that you actually wanted to make this purchase, right? That's a new mechanism. Or getting people to use multi-factor authentication to check that it's really them doing something. So we haven't come up with a way around this. And I think that's because, again, the ultimate extreme example of risk is that you build this massive surveillance capitalism model, or the Chinese social credit system model of surveillance, in the liberal democratic world. We haven't yet had, unfortunately, a pretty grim test run, which would be a bad government getting elected into power who then weaponizes all of the information that's gathered on all of us. And because humans seem to need to see risks before they can decide, oh God, we'd better fix that and make sure that doesn't happen again, it's very difficult for them to think ahead and prevent something from happening. I don't see that going away anytime soon. It would be really nice if we did, because honestly the cost to society of fraud and identity theft is massive, and we have a new generation, coming on really since the early two-thousands, for whom so much of their life, more than for any other humans in history, is online. So the risk is not proportionate: if you're a baby boomer, the majority of your life is not online; if you're Generation Z, almost all of it is. Or not all of it, that's unfair, that's an exaggeration. Let's just call it...
Ricardo Lopes: Well, I mean, I can tell you that I have a little brother who is 10 years younger than me, and he's basically lived with the internet his entire life. I mean, I started using the internet when I was 9 or 10 years old, but he has lived with the internet his whole life.
Stephanie Hare: And parents have been posting pictures of them for years, you know, so they have an internet presence that they weren't even aware of. I mean, that can create all sorts of existential angst and emotional problems about privacy and sharing and who gets to share your image or your story or your information. That's probably another reason that we haven't really evolved yet: I think as that generation comes of age, and they are coming of age now and will only continue to do so, they may have different views on this. Whereas I think for the baby boomers, and possibly Generation X even, who are largely in positions of power, it hasn't affected them as much. It'd be interesting to talk with your little brother, or indeed anybody of that younger generation, to see how they feel. I think they have really different notions of privacy and culture around sharing and consent. So on the one hand they have less privacy, but I think they're also much more alive to this question of consent and what that means, and why the consent model doesn't even always work. I still come down to thinking it's a very old-school problem: regulators have really failed on competition, because if you don't want to be tracked by Facebook across the internet, it just doesn't matter; the regulatory fines, well, they can pay the fines. That model has to be updated to deal with giants who can afford to pay fines and are actually really happy to do that, as long as you don't interfere with their operating model. What we actually have to do is make it so that they can't do that legally in the first place. You know, if Mark Zuckerberg literally faced criminal charges and could do jail time for that, I'm sure he would change his model, but since he just has to write a check...
Ricardo Lopes: yeah, I mean, because those fines, they don't hurt their profits at all. Let's be real
Stephanie Hare: Completely. In fact, I still remember with horror: the FTC had issued what was then the biggest fine against Facebook for privacy violations, and Facebook's share price went up. It didn't hurt them, it went up. Why did it go up? It went up because people had been worried that the FTC might actually take action on their data-gathering practices, but they didn't. So they were like, super: if you're telling me that all you're going to do is fine me, then I can build that into my financial modeling, call it a tax, pay it, and carry on, which is what they do. And so that's going to be the question. Like, we were talking about Threads, with Meta coming up with this new social media platform to rival Twitter and the like, and I was fascinated by how many people signed up and were talking about it. It was like, have you forgotten who is behind all of this? These are the same people who were complaining when Frances Haugen blew the whistle on the mental health harms of Facebook and Instagram on children, and the same people who freaked out when Facebook was involved with Cambridge Analytica in rigging the US election in 2016, all thinking that Mark Zuckerberg is awful. But as soon as he put this out, because he's not Elon Musk, and it became the thing, they had a total memory wipe and they all signed up, not thinking, I wonder why this isn't allowed to be released in the European Union yet, right? We were just talking about the fact that you can't get it there, because the amount of data that Threads harvests is huge, and we know that these people are not going to be using it for good, they're using it for profit. And yet people signed up. So again, as a technology ethicist, I was fascinated, because you can give people all the information in the world and they will still make bad choices. It's not enough to give them information, and it's not enough to remind them again and again and again. That need to have a social media life seems to be really strong in a lot of people. We saw it first with Mastodon, then we saw Bluesky, now it's Threads. It never seems to get the critical mass; that may have changed by the time this podcast goes out, but they'll have a spike in early adoption and then it kind of falls off. And that's what I mean about looking to the long term: are people actually going to want this? What need does it serve, and is the price too high to pay? Yeah, very fascinating. But I don't know.
Ricardo Lopes: But by the way, since earlier you mentioned that you wrote your book during the COVID-19 pandemic, I would also like to ask you what your view is on some of the digital health tools that were used during the pandemic. Because, of course, we were dealing with a major health issue, and perhaps even if there was some data collection, it would still be ethical to do that; the pros would outweigh the cons because of the threat we were dealing with. But what do you think about some of the data that were potentially collected by, I guess, governments mostly during that period?
Stephanie Hare: It's interesting, isn't it: we haven't really had all of the different countries that built that kind of health surveillance tech, if you will, come out as a group, I don't mean individually but as a group or through the auspices of the WHO, the World Health Organization, to say: if we have another pandemic, God forbid, in a few years' time, is this a tool that we want to have in our toolbox? E.g., was it effective or not? And not only was it effective, was it worth it, which is a different question. The UK has had some studies that came out arguing that it reduced transmission, but I still question some of that, because we can't really run the counterfactual experiments. Like, what would it have been like if we hadn't used it? There are still a lot of problems with some of that. And I also just think, I interviewed loads of doctors while I was writing that chapter, and none of them were asking for this. This was not the tool, this was not the mitigation that they felt they needed. The most effective mitigations were the lockdowns, unfortunately, and then obviously the big one was vaccines, but doctors weren't calling for this. It really felt to me like a solution in search of a problem, in the sense that this massive crisis happened, technologists wanted to do something, investors saw a potential to make money from it, and voilà, let's go for it. And you saw in the case of Singapore, they said at first, you know, we're making this mandatory, but don't worry, we're going to collect data that we would never use for a criminal policing aspect, we're only using it for public health. And of course they did use it for the criminal policing aspect. So, not a surprise; saw that one coming a mile away. And absolutely, I think the UK would have done it. I mean, unfortunately, we're going through our COVID inquiry now and it's been pretty grim, the behavior of people in power, certainly within our current government: a lot of the ministers were not following their own laws, the police were really clamping down, and certain people were more heavily policed than others, with a racial, ethnic, class component to that, right? So you do that, and then you put technology solutions on top of it, and I just think you're going to create some really serious problems. Parliament hadn't really put pandemic legislation in place to deal with it. And part of the reason I put my book out in February of 2022 was that I was telling my poor publishers, we have to wait, because they had started passing legislation. If I remember now, the four nations of the United Kingdom didn't have a legal framework to allow these apps, which had already been in existence for like a year at that point, maybe even more, until starting in September of 2021, and England was the last one, in December of 2021. So I was typing against the clock, because I wanted to capture that before we went to press and put it out, to show how the UK had completely changed its position. We used to be a country that was super proud of not having identity cards, and all of a sudden, in this case, we did. And we've also put them in for voting now, by the way. Like, massive cultural shift here, not without protest.
So they did that, and then within like two months of England passing that law, they basically were like, we're not using the app anymore. Why? Because it wasn't effective; it was not what was needed. So we did all of this: they blew a huge amount of political capital and wasted loads of time on something that, in the end, they themselves decided wasn't going to be a requirement. And what was really awful in that is they kept lying about it. They kept saying, hey, we're not building it, when they were secretly building it; then they were like, well, we're building it but it's not going to be mandatory, and blah, blah, blah. So each time they kind of destroyed public trust, which is essential in something like a national or international crisis like a pandemic. So my takeaway, for what it's worth, is: I'm not convinced it was worth it scientifically and medically; it wasn't what healthcare professionals were calling for; the risks are super high in terms of potential for abuse; and they blew so much political and social trust. That matters in a democracy, because, I guarantee you, if we go into another pandemic, a lot of people are going to remember all the lying that government did. Trust matters when you're handing over data, trust matters if you're doing surveillance, and trust matters in terms of health, because you're talking to people about their bodies. Ultimately, you play with that at your peril. So that for me was the real learning point: how do I create trust in the scientific method, in healthcare? You're asking people to make massive sacrifices, so you have to be even more trustworthy than normal, because you're making a bigger ask, a bigger demand. And for my two cents, for what it's worth, in the UK we blew that. We absolutely blew it. So it wouldn't be great to have to run that experiment again; if there's, you know, pandemic version 2.0, I wouldn't have a lot of confidence, and I suspect the US would be the same. I mean, that was another case: my family and friends over in America, there was no question of requiring people to use that kind of surveillance technology for health, for better or for worse, and you can look at the outcomes on that. But that's what I'm talking about: the technology is just one part of it. There's the cultural bit about adaptation. Why were certain European countries happy to do it and others weren't? Certain countries could afford to do it or had the infrastructure to do it. Certain populations have an iPhone or an Android and others just don't, so it's not even an option for them, right? All of those things come into play. And if you're looking at utilitarianism, at the greatest amount of happiness for the greatest number of people, you might want to do what I was trying to do: go back to the original problem set and ask, what is it that the healthcare profession says it needs? It needs PPE. They thought they needed ventilators, though that was later dismissed because, to be honest, by the point you're on a ventilator it's pretty bad. They needed people to stay home, and they really needed a race for a vaccine. That is where you want to put your limited capital and limited resources. That's where you want to go.
But people were super excited about this idea that this app on our phones was going to allow us to open up society again.
Ricardo Lopes: Yeah. I have to tell you, here in Portugal the government also put out an app, in 2021 if I'm not mistaken, and back then they were moralizing people a lot: oh, you should install the app, blah, blah, blah, it's a health measure, whatever. And I was like, well, first of all, I never installed the app, because it was not mandatory, so I myself never installed it at all. And first of all, certain people do not even have smartphones to begin with; are you also going to moralize those people? No one should be forced to have a smartphone, and if they have one of those older cell phones, who cares? And then, I was taking the pandemic very seriously: I was masking all the time, I was staying at home as much as possible, I was distancing as much as possible, I got tested two or three times during the pandemic and it always came out negative, fortunately. I was doing everything apart from the app. And I was like, OK, if I'm doing all of this, let's say that I cross paths with someone who also has the app installed and I get an alert on my phone saying that I was less than 3 m or whatever away from a person who got diagnosed with COVID. What am I going to do with that? Because, let's get real, even if I get COVID but I'm asymptomatic, I'm not going to get tested. So how valuable is that information for me? And if I am COVID-negative, no one out there who goes by me will get any alert saying that they crossed paths with someone who has COVID. And I got vaccinated as soon as possible. So what can I gain, or what can anyone who crosses paths with me gain, from this? And I never...
Stephanie Hare: Though, because, like, the sort of wannabe scientist in me is like, I'm glad that we tried it, because I understand that in those pre-vaccine days people were like, we have to try whatever we can. That's understandable. I'm glad we tried it in a number of different countries with different political and cultural flavors, so that we could see what works and what might not work, because we don't know where the next pandemic will come from, we don't know when it will happen, and we don't know where we will be with our technology. That's kind of why I was really motivated to write that chapter: I wanted to capture the UK case study to the best of my ability now, not just the tech part but the social and cultural part. Because if this happens again... You know, when it happened for us in 2020, I was going back to books on the Spanish flu of 1918. I was like, I need to skill up fast, what do I need to know about this? And I'm sure loads of people felt the same way just as ordinary citizens; you don't have to be a researcher to be interested in learning what had happened in previous pandemics. But I wanted to create something so that future researchers could go back and be like, well, what happened in the UK? And I'm sure loads of people have written stuff on that. For what it's worth, I will share this: the Ada Lovelace Institute here in the United Kingdom is a wonderful research institution, and they've just come out with their assessment of pandemic technology. I haven't had a chance to read it yet because I just came back from holiday, but it looks pretty interesting and they do great work. So that might be something for your viewers to check out if they want to. I feel like the Turing Institute might have as well, and definitely Oxford University has published a lot of stuff on it, but I'm intrigued by the Ada Lovelace view because they often do an international comparison. So there is cutting-edge research that I am unfortunately behind on at the moment, but we know it's there. So if anyone wants to take a look at that, if you don't want to hear what I have to say, by all means go check out Ada Lovelace; they've got some good stuff.
Ricardo Lopes: And just to be clear, I'm not completely dismissing these apps. I'm not saying 100% that they didn't help at all, or that they weren't good health measures. I'm just saying that, first of all, no one should be forced to have a smartphone. That's the first point. Second of all, you shouldn't moralize people for not having a smartphone or not installing...
Stephanie Hare: And you really shouldn't moralize them when you're the British ministers who were breaking all of their own
Ricardo Lopes: rules. Yeah, that's one
Stephanie Hare: thing to moralize if you at least walk the walk. But if you're doing hypocrisy, like, get
Ricardo Lopes: out. And then the third point, and that's why I mentioned that I really followed all the guidelines and all of that to prevent COVID transmission, is that you have to convince me that it's worth it, because I'm not dumb and people are not dumb. Don't treat us as dumb, like, oh, you should do this. Yeah? Why?
Stephanie Hare: As with anything, if you're asking people to make a change, you have to show them why it is worth it. Listen, I'm just aware of the fact that it's approaching 11; I'm going to have to let you go, if that's OK, because I unfortunately have to talk to one of my clients.
Ricardo Lopes: Oh, yeah. No, no, sorry. I,
Stephanie Hare: I, I love this chat but I was like, oh, no, the time has flown. No,
Ricardo Lopes: no, I'm sorry, I wasn't aware that you had that time limit. So, yeah. So just quickly before we go, would you like to tell people where they can find you and your work on the internet?
Stephanie Hare: Well, I mean, I'm a Google search away. Probably the easiest is my website, which is harebrain.co, where you can find all my television and radio and online writing, and my print writing. My book is available at all good bookstores; you can just order it through your local independent bookstore or, you know, Amazon, it's up to you, your call, not mine, how you want to do it. It's an audiobook too, if you prefer to listen to it rather than read it, but then you'll miss the amazing data viz which is in the book. So it just depends on what you want. And I'm on Twitter and LinkedIn in case anybody has questions or wants to follow up or share something that they think I've missed or that I need to look at. I'm really, really lucky: for some reason I have an incredible group of people from the public who just send me stuff randomly, tips and things to look at, or articles that are actually really interesting, or examples from their country that I would never have heard about otherwise. So I love to hear from people. If somebody wants to get in touch, they would be very welcome, as long as it's polite and kind. No romance stuff, please; the marriage proposals get a bit weird in the end. But I would love to talk technology and ethics with any of your listeners or viewers, and I'm so grateful to you for the chance to have this chat.
Ricardo Lopes: No. Thank you so much for coming on the show. Thank you for your time and hopefully somewhere in the future we can have another conversation. I really love this one. So
Stephanie Hare: I would love that too. I will drop you a note if I am ever in Portugal again, which I hope I will be. Be well, and let me know when it comes out. I'd love to see it. Yeah.
Ricardo Lopes: Sure. Hi guys, thank you for watching this interview until the end. If you liked it, please do not forget to like it, share, comment and subscribe. And if you like, more generally, what I'm doing, please consider supporting the show on Patreon or PayPal. You have all of the links in the description of this interview. The show is brought to you by En Lights, learning and development done differently; check their website at alights.com. I would also like to give a huge thank you to my main patrons and PayPal supporters: per Larson Jerry Mueller and Frederick Sunda Bernards, all of election and Weser, Adam Castle Matthew Whitting Whitting bear. No wolf, Tim Hollis, Eric, Alania, John Connors, Philip Forrest, Connelly, Robert Winde Ruin, Nai Zoup, Mark Nevs Colin Hall, Simon, Columbus, Phil Cavanagh, Mikel, Stormer, Samuel Andrea Francis for the Agd Alexander Dan Bauer, Fergal, Ken Hall, Herzog Michel, Jonathan Libra Jars and the Correa Eric Heine Marc Smith, Jan We Amal Franz David Sloan Wilson Yasa, Des Roma Roach Jan Punter Romani Charlotte. Bliss, Nicole Barbar and Pao Ay Nele Guy Madison, Gary G Haman, Samo Zal Arien Y Nick Golden Paul Talent in John Bar was Julian Price Edward Hall, Eden Brown, Douglas Fry Franca Beto Lotti Gabriel Pan Cortez, Lalit Scott Zachary Fish, Tim Duffy, Sonny Smith, John Wiesman, Martin Aland, Daniel Friedman, William Buckner, Paul George Arnold Luke Lo A Georges off Chris Williamson, Peter Lawson, David Williams Di Costa Anton Erickson Charles Murray, Alex Shaw and Murray Martinez Le Chevalier bangalore atheists, Larry Daley Junior Holt Eric B. Starry Michael Bailey. Then Sperber, Robert Grassy Rough the RP MD I Goran Jeff mcmahon, Jake Zul Barnabas Radix, Mark Campbell, Richard Bowen Thomas the Dubner Luke Ni Andre Story, Manuel Oliveira, Kimberly Johnson and Benjamin Gilbert. A special thanks to my producers is our web gem Frank Luca Stan, Tom Weam Bernard Eni Ortiz Dixon Benedict Mueller, Vege Gli Thomas Trumble Catherine and Patrick Tobin John Carlo Montenegro, Robert Lewis and Al Nick Ortiz. And to my executive producers, Matthew Lavender, Serge Adrian and Bogdan Kut. Thank you for all.