RECORDED ON NOVEMBER 27th 2024.
Dr. Marc Steen works as a senior research scientist at TNO, a leading research and technology organization in The Netherlands. He worked at Philips and KPN before joining TNO. He is an expert in Human-Centred Design, Value-Sensitive Design, Responsible Innovation, and Applied Ethics of Technology and Innovation. His mission is to promote the design and application of technologies in ways that help to create a just society in which people can live well together. He is the author of Ethics for People Who Work in Tech.
In this episode, we focus on Ethics for People Who Work in Tech. We talk about the ethics of technology, and a three-step approach to ethical reflection, inquiry, and deliberation. We discuss whether technology is neutral, what value is, the trolley problem, the importance of privacy for users, and responsibility. We talk about four different ethical approaches: consequentialism, deontology, relational ethics, and virtue ethics. We discuss where people can start when developing a new kind of technology, and we talk about three different methods: Human-Centred Design, Value Sensitive Design, and Responsible Innovation.
Time Links:
Intro
The ethics of technology
A 3-step approach to ethical reflection, inquiry, and deliberation
Is technology neutral?
What is value?
The trolley problem
The importance of privacy
Responsibility
Ethical approaches: consequentialism, deontology, relational ethics, and virtue ethics
Where can people start?
Human-Centred Design, Value Sensitive Design, and Responsible Innovation
Follow Dr. Steen’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of the Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Marc Steen. He works as a senior research scientist at TNO, a leading research and technology organization in the Netherlands. And today we're talking about his book, Ethics for People Who Work in Tech. So, Dr. Steen, welcome to the show. It's a pleasure to have you on.
Marc Steen: Thanks for the invitation. It's a pleasure to be here.
Ricardo Lopes: So tell us first, how do you approach ethics in your book? I mean, I guess the even more basic question would be, what is ethics exactly?
Marc Steen: Yeah, so I've worked for 25 years in research and development in ICT and what we would currently call AI, data-driven innovation. And in that context, clients, partners and co-workers ask me whether I can help them with ethics, and that gives me a very practical perspective on ethics. So for me, it means: those people involved in the development and deployment of technologies, digital technologies, how do they take into account various ethical questions during the process? So it's very much a process approach to ethics, and you can contrast it, if you will, with a, well, a bit of a caricature, but with a checkbox approach: did we do the privacy correctly? Did we do the bias correctly? All those topics appear also in this process approach to ethics, but much more in the form of conversations that people can have about these topics, deliberation, reflection, and then changing the products that they're working on based on the findings from those dialogues and reflections. So it's ethics as a process.
Ricardo Lopes: Mhm. And in the book you mention or make reference to a humanistic approach to the ethics of technology. What is that?
Marc Steen: I think humanistic, you can say, means putting people center stage, which is also in the tradition of human-centred design. So you're looking at the people, the ultimate beneficiaries of these systems, and whether they can flourish better. You can also look more at the level of society, whether these systems can help to create more equity, rule of law, fairness. And maybe another meaning of humanistic is that it means involving the humanities much more than is often done in technology development and deployment, so philosophy and culture and all those things.
Ricardo Lopes: Mhm. So, since we're going to talk a lot about ethics in technology today, what is the distinction between ethics in people and ethics in machines?
Marc Steen: Yeah, I would say that for me, ethics only happens within people and between people. I don't think a computer can have ethics. I know that people, also co-workers of mine, make efforts to put a bit of ethical reasoning into robots. So the autonomous system will then have a rule like: don't do this, because it is dangerous for people, so don't do it. So you can say, well, that is programming a bit of basic ethics into the robot. In that sense, the machine can behave more or less ethically, but the ethics happened in the programmers who wrote the program, and then the computer, the robot, executes it. So yeah, I would say that ethics happens within people and in between people, how we engage with each other, how we treat each other, of course, but not so much in robots.
Ricardo Lopes: Mhm. So, since you've been working a lot on the design and application of technology, are there similarities between normative ethics and that, that is, the design and application of technology?
Marc Steen: Yeah, that's an interesting question. In my experience, they have lots in common. So normative ethics, understood as that branch of ethics that looks at the world and then says, hey, we can do this better, more towards fairness, more towards equity, bears resemblance, in my experience, to the work that engineers or designers can do. They also look at the world and say, hey, there's something not quite right here, something can be improved, and then they go about it. So in that sense, they share a similar worldview: looking at the world, finding some elements of it problematic, and then making efforts to improve those situations. And of course, the engineer will use technology and a normative ethicist will use ethics, but they have lots of things in common in that sense. I included that in the introduction of the book just to get engineers or developers or technologists more broadly on board, like: the work of ethics is not weird, you have something in common already. You look at the world, you want to improve things, which is good. So, yeah, it's making ethics more attractive, more accessible to them.
Ricardo Lopes: But do you think that moral philosophers and ethicists would be open to approaching the ethics of technology in this more design-and-application way?
Marc Steen: Oh yeah. I mean, where I live in the Netherlands, we have four universities of technology, and they all have great departments of applied philosophy, applied ethics, ethics of technology. So, yeah, that combination is almost normal, you can say, in the Netherlands.
Ricardo Lopes: So tell us then about what in the book you present as a three-step approach to facilitate ethical reflection, inquiry, and deliberation. So what are the three steps here?
Marc Steen: Yeah, thanks. So that ties back to what we said in the introduction, ethics as a process. To make it a bit more systematic, I'm introducing this three-step process, and I'm also emphasizing that this is an iterative process. So it's not like you go 1, 2, 3, and then you're good. You can do 1 and 2, you can do 1 again, then 2 again, so you can switch between them, but it's also useful to distinguish these phases. Step one, phase one, is identifying those aspects, those topics, those issues that are or may become problematic in your project. So say a project team is working on a fraud detection algorithm. Then they can have a bit of a reflection, a brainstorm even: what could go wrong? Well, bias, discrimination, the problem of transparency, the problem of false positives, where you accuse somebody and then it turns out they did not commit fraud. So there's a discrepancy between the algorithm and reality. So it's putting those issues on the table, to then go to step 2, which is to have conversations about these issues. At first, these conversations can be between the people in the project team, but I'm also advocating bringing in the clients, bringing in partners, maybe bringing in NGOs or people who know about human rights or know about the technology, and then having more depth to the conversation. So what exactly is the problem, and what are ways to address and solve these problems? And then thirdly, and that is, I think, where the normative ethics and the engineering mindset go together best, the essential step 3 is: do something with those findings. So you have the issues on the table, you had good conversations about them within the team and outside the team, and then do something with it. And typically you also need, of course, the commitment of project management or of the customer or the commissioner of the project: OK, I see this problem, I've heard the solution, let's implement this. And then it's always good practice to make this into an iterative process again: after 3 months we'll meet again, and then we'll evaluate how the improvements have worked out or have not worked out, and whether we need to adjust more. So: identify issues, have conversations about them, and act. Those are really the 3 steps.
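[Editorial aside: to make the false-positive worry concrete, here is a minimal sketch in Python with purely hypothetical numbers, none of which come from the episode or the book. The point it illustrates: when actual fraud is rare, even a fairly accurate detector can wrongly flag far more honest people than actual fraudsters.]

```python
# Hypothetical numbers, for illustration only; not from the episode or the book.
population = 100_000        # people screened by the fraud detection algorithm
fraud_rate = 0.01           # assume 1% of cases actually involve fraud
true_positive_rate = 0.95   # assume the algorithm flags 95% of real fraud cases
false_positive_rate = 0.05  # assume it also wrongly flags 5% of honest cases

fraud_cases = population * fraud_rate          # 1,000 real fraud cases
honest_cases = population - fraud_cases        # 99,000 honest people

true_positives = fraud_cases * true_positive_rate      # 950 correctly flagged
false_positives = honest_cases * false_positive_rate   # 4,950 wrongly accused

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged people who actually committed fraud: {precision:.1%}")  # ~16%
```

Under these assumed numbers, roughly five out of six people flagged by the system would be innocent, which is exactly the discrepancy between algorithm and reality that the first step of the process is meant to surface.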
Ricardo Lopes: Mhm. And what do you reply to a question such as: is technology neutral? Because this is a very highly debated topic. What is your answer?
Marc Steen: Yeah, I think I quote somebody, I forgot who said it, but: technology is neither good nor bad, nor is it neutral. So you can't say this technology is good, you can't say it's bad, and you also can't say it's neutral. What is it then? There are various ways to approach this. One could say, well, it always depends upon what people do with the technology. That's of course the first way of understanding this. So this technology, which is not good, not bad, nor neutral, can be applied to good purposes, to good ends, or the other way around, to evil purposes. And to complicate matters further, technologies often have a history. They have a path dependency. Say you have a gasoline car: can you just change it into an electric car? Well yes, you can, but you can only do that at the moment when your car is old enough that you can afford to buy a new one, and then you must also take into account how it fits into a situation where there are more gas stations than electric charging stations. Well, that's not the case anymore, because there are many electric charging stations nowadays, so this example is maybe not a good one anymore. But the point is that you also can't just change something, because there's a path dependency: the choices that you've made in the past restrict, not totally restrict, but they modify the choices that you have. So if a technology has already been used in certain applications, if it is already embedded in certain practices, it tends to do one thing more than it can do something else. At that moment, you can't say, well, you can use it for anything, because you can only really, practically, use it for that purpose for which it has been embedded in all those practices and structures already. So yeah, like you're saying, the debate can go on, but it's not neutral. It's more complex than that.
Ricardo Lopes: Mhm. And how do you look at the relationship between people and technology?
Marc Steen: Hmm, that's a difficult question. What do you mean exactly?
Ricardo Lopes: I mean, I guess that we could put it in two different ways. One of them is: since technology is created by people, and the kinds of technologies that you're interested in, in terms of the ethics behind them, it's people who need to program them in certain ways to do this or do that, or act in the ways that we want them to. I guess that would be one way of putting it. How do you look at the relationship between people and technology in that sense?
Marc Steen: Yeah, I think I'll approach it this way, taking into account the last one or two years of ChatGPT and all the AI hype, and the talk about artificial general intelligence or artificial superintelligence, all these things. This may be a context in which I can say something that can be useful. It occurs to me, and I'm not finished thinking about this, but I often get the sense that people confuse whether technology is a tool for people to use, and then, back to your previous question, not neutral but path dependent, et cetera, or whether this technology, the AI or the ASI or the AGI, can be or can become an agent, maybe not a moral agent, but an agent in the sense that it acts more or less autonomously. And this really makes me think, and like I said, I'm not decided yet, but I don't like the idea of a machine being autonomous in the sense of being totally autonomous. There will always be a dependency, like you're saying: there's maintenance, there's programming, it needs to change its batteries and people need to do that. But there will be moments or periods of time where it acts autonomously, and at those moments the machine, the robot, can behave almost as if it is a fellow worker, a co-worker, maybe a helper, an assistant. These are, of course, the words that people often use. And at that moment, it's not really a tool anymore, not a tool like a shovel that you use to dig a hole. It's much more than that. So, the relationship between people and technology: I hope that people will remain in charge, in control of the technology. And then there's an entirely different question that we may or may not get to, which is who owns the technology, who controls it. Is it big tech companies in the US? Is it big tech companies in China? Is it the European Union, which, besides regulating, can it make technology that works for people? I think that would be my hope for this relationship, that technology is there for people and not the other way around.
Ricardo Lopes: Mhm. And I guess that also, at least to some extent, raises the question: if technology behaves in ways that we find to be bad, who is to be held responsible, right?
Marc Steen: Yeah, and now you're maybe using, or moving towards, the vocabulary of law: accountable, responsible, liable. That's a whole different field, legal personality for those robots, yes or no, et cetera. I'm not an expert on that, so we should park that topic.
Ricardo Lopes: OK. So, in the book you also ask, at a certain point, what value is. So what is value actually, and what is technology specifically for? What do we expect to get from technology?
Marc Steen: Yeah, that's a nice question. So, while writing the book, it occurred to me that people use value in various ways. Some people will associate value with value creation and associate that with market share and profits, the financial or economic side of it. There's something to be said for that, but you can also understand value in the sense of: what value does it have for society, or for the public good? And that's a whole different vocabulary, where it's more about justice, democracy even, whether it can support or undermine democracy. So I just think it is important, when people use words like value, let's create value, is it valuable or not, that they make explicit what we are talking about. Are we talking about the financial, economic side of it? Are we talking about the societal side, the public good, or the well-being side of value? In that sense, both are options. I just think it needs to be clear what people are discussing.
Ricardo Lopes: So it's not so much that one meaning of value from the ones you mentioned there would be better than the others in designing technology, but what's more important is for people to agree on what they want.
Marc Steen: Yeah, that's maybe a sort of diplomatic answer, and indeed I gave that answer. But if we associate it with what I said earlier about the humanistic approach, and about technology serving people rather than the other way around, I think I can say that I would prioritize, I would just find it more interesting, to understand value in that second way: what does it add to society, to democracy, to fairness, to people's daily lives, their well-being? And I know that companies must make a degree of profit, there must be a level of profitability for them to continue to maintain their services and products. But it does not need to be, and this is maybe a political orientation of mine, to the extent where the big tech companies are just immensely, incomprehensibly large. I mean, how many zeros are there in those market cap valuations? Billions, trillions; these are hard to comprehend. So yeah, value for me means more the other way.
Ricardo Lopes: Mhm. So, in the book, you get into 3 topics that you say regularly feature in discussions about ethics and technology, or the ethics of technology. Let's go through them all. The first one is the trolley problem. Would you like to introduce what the trolley problem is, and then explain why it's important in these discussions?
Marc Steen: Yeah, so it is from 1967. Philippa Foot introduced the trolley problem. It was not called the trolley problem then; it later became known as the trolley problem, but it goes like this. You're standing near a railroad, a trolley or a train, a trolley it was, of course, and it is running. If it continues to run, it will run over 4 people. However, there's also a switch in the track that you can control, and if you apply that switch, the trolley goes to an alternative track, and there it will kill one person. So then the question is, what would you do? Well, utilitarianism or consequentialism would say: one death is less evil than 4 deaths, so you must apply the switch so that it only kills one. And then they make it more complex. They say, well, it's the same situation, but there's not a switch; there's a bridge over the track, and on it there's a person with a heavy body weight. So if you throw him down from the bridge onto the track, he, the person, will also stop the train. But that, of course, feels much more like murdering, because you're pushing, whereas in the switching of the track, the first scenario, it is the train doing the killing and you're not really killing. So all of that is psychology, it's philosophy. People know this example, and very often when people talk about ethics and machines, machine ethics and programming the ethics into the machine, what we had at the beginning of our conversation, people will say: well, the trolley problem. And I just got a little bit tired of that problem, because it assumes that these questions can be solved mathematically, by saying 1 is less than 4, so that option is better. And it is not taking into account so much, well, it does take into account the psychology of it, but there's also human rights, there's deontology, there's relational ethics, there's virtue ethics. So all of these other types of ethics, which we will go into later on, are out of the picture, and it's just simplifying all of it. And even more: I know it's only a thought experiment, but if people treat it not as a thought experiment but as a realistic situation that you can actually solve through computation and calculation, just do the numbers, then they're missing the bigger picture. I have a background in industrial design engineering, so I'm a practical person. I can also think very easily in solutions, alternatives, creativity, all of that. So I would say, well, can you also do another analysis, where you ask why the brakes malfunctioned and how the maintenance of the train is organized? And is it possible to do something else? Can you just yell at the people on the track, like, hey, watch out? There are like 1,000 other things that you can ask on the problem-definition side, or that you can explore on the solution-finding side, around that much too simplistic thought experiment. And I just like to draw in all of that complexity and say: the trolley problem is what it is. It's a thought experiment, but you must not use it as a realistic situation, as if the computer can solve it, because reality is more difficult, and you must treat reality as difficult and as complex as it is.
And later on I published something, which maybe I can put in the show notes, where I combine this more explicitly with systems thinking, the need to understand a problem and a situation in its complexity and then address it.
Ricardo Lopes: Mhm. But then, do you think that the trolley problem, with all of those caveats and perhaps adding some of those other considerations that you mentioned, is still a useful thought experiment in the ethics of technology, or not?
Marc Steen: Yes, it can be used. So we'll later go into the 4 ethical perspectives that I use, which people in the ethics of technology often use. And one of them is this consequentialism, where you look at the positive and the negative consequences, you compare them, you evaluate them, you choose. For that perspective, it makes sense to understand the basics of the trolley problem: sometimes you must choose between something that's bad and something that's less bad. And then if you take into account the agency that you have, and if you take into account the moral implications of that agency, like switching is different from pushing the, it's called the fat man, but I think that's an ugly title, the larger person, from the bridge. If you take that into account, that's what the thought experiment can help you with. Mhm. But not more than that.
Ricardo Lopes: OK, so the second topic that you mention in the book as regularly featured in these kinds of discussions has to do with privacy. So, in the realm of technology, what is privacy, what does it entail, and what are perhaps some of the biggest questions we have to tackle in this domain?
Marc Steen: Yeah, so, another topic, like you're saying, that often pops up in conversations about ethics and technology, ethics and AI, is privacy. That has been a topic since at least Cambridge Analytica, where they, in an unfair way, a barely legal way, an illegal way, scraped all those data from all those Facebook friends of friends and then used it to weaponize the advertisements on Facebook, and then skewed the presidential elections in the US at that time and the Brexit vote in the UK around the same time. All of that has drawn enormous attention to privacy. Privacy then means: who owns the personal data, and who can or cannot just scrape it and use it for other purposes? So it's an interesting topic, but sometimes, if you combine it with the context of a company that develops or deploys technology, they will have a legal officer and they will say privacy, and then nothing else, as if ethics is only privacy. Or they'll say privacy is compliance with the GDPR, the General Data Protection Regulation of the EU, as if, if you do the compliance part of privacy in terms of the GDPR, then you're all good, whereas privacy can be much more. And even more so, there are more topics than only privacy, as I mentioned before; that's really a key point for me. There's fairness, justice, non-discrimination and bias, all of that. There's transparency, accountability, responsibility, all of that. Privacy is only one thing, and this narrow legal interpretation of privacy is yet only one other thing. And if people want to know more about this, Carissa Véliz wrote a great book on it, Privacy Is Power. Well, it's already in the title: she sets up this argument that whoever owns the data has power. And then a big company can even transform its economic power into political power. We've seen that recently in the activities of Elon Musk in the presidential elections in the US, where economic power sort of equals financial power, sort of equals political power. And well, in Elon Musk's case that is not necessarily directly tied to this privacy thing, but there are many other big tech companies who do this privacy thing, all of those businesses who thrive on ad revenues, of course, like Google and Facebook, or Meta I should say.
Ricardo Lopes: Right. So the third topic has to do with responsibility. How does responsibility apply to the ethics of technology?
Marc Steen: That's really the third term that often pops up, even in vocabulary like responsible innovation. I call my area of work responsible innovation, and now people say: well, can we do it responsibly? Can we do responsible AI? So this word just pops up, and I went looking for what it means. And then I found in the literature on the ethics of technology that it is often understood as having two components: a knowledge component and an action or control component. What it means is this. Imagine a developer in a team working on this algorithm for fraud detection. He or she can be responsible, but can only be responsible to the level, or the extent, that they know about the situation: the knowledge component. So if they know about the current situation, how bad is the problem they're trying to solve, and if they know about possible future situations, how good is the algorithm, can it really improve the situation, then they have knowledge about it. If they don't have the knowledge, they cannot really be held responsible, or even more so, accountable for it. But if they know, then the responsibility sort of grows with it. So the more knowledge you have, the more responsible you are. The same goes for the control, or action, component: the amount or type of control that this person has over the situation. If they can influence the project a lot, maybe because they're a senior developer, maybe they're the project manager, that means that their responsibility also grows proportionally. Whereas some people are only a smaller player in the project team; they may be junior, maybe the budget is not too big. So then, and this may be harder to claim, but it still sounds reasonable, they can be held less responsible for it. But now that I'm saying it, it sounds like you can buy your get-out-of-jail card very easily just by claiming: but I'm only a small player. That's not it; it's really meant the other way around. It is an invitation to grab more control of the situation than you normally would, which can require courage, like raising your hand, asking a question, because that goes hand in hand with being responsible. And then in the book I have a drawing, a metaphor I took from another author, you can look it up in the book, of course, it's not my metaphor: rock climbers, when they go up climbing, they go from knowledge to control, to knowledge, to control. So it's an invitation for people to improve their responsibility, first by raising their knowledge about the situation, secondly by trying to get more agency, raising their hands, applying courage, and then that will give them more knowledge, and that will give them more control. So it's an invitation to be more responsible. In that sense, it's certainly not, as I just said a couple of minutes ago, an invitation to claim the other way around: I know a little, I'm just a small player, I can't be held responsible. It's really the other way around. Mhm.
Ricardo Lopes: So, let's get then into the 4 different ethical perspectives that you present in the book. You talk about consequentialism, deontology, relational ethics, and virtue ethics. Tell us about each of them. Let's start with consequentialism.
Marc Steen: Yes, thanks. So, the first two are relatively easy; I found that the 3rd and 4th ones are more innovative relative to some people's expectations. The context in which I use this is a workshop format I have developed, rapid ethical deliberation. I call it rapid because it doesn't take ages; that's correct, you can do it in 2 hours. Of course, after 2 hours you don't know everything; you are invited to do it again and again, it's an iterative process again. And I found that this first perspective, consequentialism, is relatively easy for people to get into. It's the mindset of imagining the product that you're working on being out in the world, having effects, having consequences, and then relatively simply making a list of the good things that can happen and the bad things that can happen if you do option A, and likewise for option B, which can also be not doing option A. But let's say you have A and B, and for both you make the pluses and the minuses. And then out of it will often come a direction, like: B is better because it has fewer or smaller disadvantages, or it has more or larger advantages. So that appeals very much to anybody with a technology background, because it's very comparable to an engineering mindset where you have two options, you just look at how good they are, well, that one is better, so we do it. So that first one, consequentialism, is relatively straightforward. It has its drawbacks: you can't compare apples and pears. The plus here is for one group, but the minus is for another group. Say you have your self-driving car; it's a good thing that you don't have to drive, it saves you time and energy. The bad thing is that maybe it will be a threat to pedestrians or people on bikes. So it puts the pluses with the car owners and the minuses with the non-car owners, and consequentialism does not have a really easy way to solve that. So that's where you go to the second perspective, deontology or duty ethics. It puts center stage the duties that people have and the rights that people have. Typically, schematically, the duties will lie with the people involved in the development and deployment of technology, the people who make the algorithm, the people who deploy the algorithm or the electric car. And the rights are often on the side of the people on the receiving end, the people who experience the consequences, and they will have fundamental rights, human rights. That ethical perspective gives you a much more principled outlook, like: no, no, no, I don't care so much about the pluses and minuses; there are just these fundamental rights, and you can't violate them no matter how many pluses, in a financial sense, there are for that actor. Because in the vocabulary of duty ethics you're not looking at pluses and minuses; you're looking at something else, at people's rights, and they will trump other things. So that perspective appeals very much to people with a legal background, because of course they know about compliance, which is the obligation side, and they know about human rights, which is the rights side. So those two perspectives are relatively easy for people, and then I go to the third one, which is relational ethics. In the book I explain that I use it as an umbrella term for ethics of care, or care ethics, or feminist ethics.
What it all comes down to, when I use it in the workshop, is that I use two questions. The first question is often easy for people to get into: suppose this algorithm is implemented in the world and it is being used in practice, so again this fraud detection algorithm. How will it change the way that people engage with each other, can look each other in the eyes, or cannot look each other in the eyes? Will the tax inspector who uses the algorithm view the person as a number, or as a person of flesh and blood, so to say? So this looks at the way that technology can be very machine-like, or the other way around, if you take relational ethics more seriously, where technology stays a little more in the background and there's room enough for person-to-person engagement and professional discretion, like: I know that the algorithm says this, it has put a flag on your name, but I'm looking at you and I'm using my human senses. This again ties back to what we said earlier: then the technology is only a tool for me, and if I'm the tax inspector using the fraud detection algorithm, I pay more attention to my human sensitivity and my capability to use discretion. So that's one question in relational ethics. The second one is more abstract, but people also get it easily: it goes up a couple of levels and looks at power. So if the system is there, what happens to power? Do the bureaucrats become more powerful, or do the citizens become more powerful? And in that sense, that's why I mentioned it, it has connections to feminist critique, so it's feminist ethics in that sense. And lastly, relational ethics looks at care and justice. Justice is the thing that's very prominent in duty ethics, where rights and human rights are very much associated with justice, but there's also care, and relational ethics will say that care cannot do without justice, just like justice cannot do without care. So it's drawing attention to those dimensions. Lastly, the fourth one is virtue ethics, and I am treating this with some depth because I consider it really the heart of the book: what are these ethical perspectives, and how can you make them practical? I'm very much inspired and informed by Shannon Vallor's work, her book of 2016, I think it was, Technology and the Virtues. She makes a great argument, drawing from various virtue ethics traditions. The way that she looks at it, and the way I also apply it, is: you can look at a certain technology, say this mobile phone with a social media app on it, and ask, does it help or does it hinder people in cultivating certain virtues? Self-control, understood as one of those classical cardinal virtues, means that I command my own actions, I can stop when I want to, I can start when I want to. Well, of course that virtue of self-control is very easily undermined by social media apps, because they have like 500 psychologists on the payroll making the app as addictive as possible, because of the ad revenue; they grab and hold and monetize your attention.
And then you can use virtue ethics creatively and think: OK, can you think of a social media app that does not erode your self-control, but that can help you to cultivate self-control? Yeah, that can be the social media app where, if you start it up, it asks how long you want to spend. I say, well, 5 minutes is good, I just want to be updated on LinkedIn, I just want to be updated on my friends somewhere else. And then after 5 minutes it goes beep: this is a reminder to cultivate your self-control, it's time to think of doing something else, more useful, more creative, or whatever, and thanks. So it can be done, if you have another business model or revenue model. This is virtue ethics in the sense that Shannon Vallor often applies it, and many others as well. And I'm adding one sort of layer to it, looking at the virtues that the developers and the people involved in the development and deployment of technology will need. So if you, as a development team, want your algorithm not to discriminate, if you want it to be fair, without bias, then very easily you can imagine that those people will have to cultivate within themselves the virtue of fairness or of justice, being sensitive to things going in a weird direction, combined often with courage, raising your hand, asking questions, combined with wisdom, which is also a cardinal virtue, seeing things more clearly, using discretion. So those are the four ethical perspectives that are often used, and I make them very practical in the book.
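[Editorial aside: the kind of app imagined here is easy to picture in code. Below is a minimal, hypothetical Python sketch of such a session timer; the class name, the default budget, and the reminder wording are illustrative assumptions, not anything from the book or an existing app.]

```python
import time

class SelfControlTimer:
    """Sketch of a session timer that nudges the user to stop, rather than to keep scrolling."""

    def __init__(self, minutes: float = 5.0):
        # The user, not the app, decides how long the session should last.
        self.budget_seconds = minutes * 60
        self.started_at = None

    def start(self) -> None:
        self.started_at = time.monotonic()

    def time_is_up(self) -> bool:
        return time.monotonic() - self.started_at >= self.budget_seconds

    def reminder(self) -> str:
        # The nudge supports the user's own intention instead of monetizing their attention.
        return "Beep: your chosen time is up. A reminder to cultivate your self-control."

# Hypothetical usage inside an app's main loop (3-second budget, just to demonstrate).
timer = SelfControlTimer(minutes=0.05)
timer.start()
while not timer.time_is_up():
    time.sleep(1)  # here the app would render the feed instead of sleeping
print(timer.reminder())
```

The design choice the sketch tries to capture is the one Steen describes: the time budget is set by the user at the start of the session, and the app's only intervention is to hand control back, which presupposes a revenue model that does not depend on maximizing time on screen.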
Ricardo Lopes: OK, so let me ask you a few follow-up questions about these four ethical perspectives. How should we approach them when it comes to designing technology? Are they all viable? Should all four of them always be considered? Do you think that one of them is better than the others? How, basically, should people who work in ethics think about and apply these four different perspectives?
Marc Steen: Yeah, it's an excellent question, thanks. My invitation would be to use all four of them, and to use them not really in parallel but serially, like 20 minutes this, 20 minutes that, so you can do it within 1 or 2 hours, 2 hours more likely. Why am I inviting people to look at all four perspectives? Just because they complement each other so beautifully. It would be silly not to look at the pluses and minuses. But it would also be silly not to look at human rights. And it would also be silly not to look at power and at how the technology changes relationships. So all of them are viable. Having said that, in my experience it works best if you also isolate them, because it's like language: you're Portuguese-speaking, I'm Dutch-speaking, we're now talking English, so we're finding ways to communicate, and if in the middle of a sentence I changed the language, it would be harder for you to follow. So in the midst of a discussion of pluses and minuses, it's good to remain in that mindset, the pluses-and-minuses mindset of consequentialism. And then people can say something about human rights, and I'll say: that's a good topic, we'll park it for later, for the next part of the agenda, and then we do fully all of the obligations, the law, compliance, human rights, all of that. And then if somebody brings up pluses and minuses again: yeah, it's a good idea, but we give the full floor to this perspective for as long as we want, let's say 20 minutes, so we can really dive into it. And this is again my recommendation, to make all of these conversations more explicit. So what are we talking about? Is it pluses and minuses? Are you now saying it's good for profit? It's perfectly practical if somebody says, well, this option is better because it gives us profitability; you can do that. But if somebody says human rights, that's also a great topic, but a different topic with a different perspective. So, making things explicit, and being careful in the sense of not confusing those perspectives, because then it will be a messy dialogue. Having said that, the first two, consequentialism and duty ethics, are, as I suggested before, relatively easy for people to get into. That's also why I start workshops with those, and once they're done, we're halfway, and then I bring in the somewhat more complex ones, like relational ethics. By the way, I don't say feminist, I don't say care, I just pose the questions: how would it change interactions between people, what does it do to power? So they don't need the theoretical backgrounds. So, you were asking, are they all needed? Yes, they're all needed. Do I have a preference? Not really. However, given that people are already very often doing consequentialism and duty ethics, my favorite ones are relational ethics and virtue ethics, because they're often relatively new to people, and they add something to their vocabulary and their sensitivity. So in the next workshop they can say: yeah, but we can also imagine it having an influence on people's virtues and their flourishing. And then I think, well, that's great, you would not have been able to say that if we had not also spent some time on that ethical perspective.
Ricardo Lopes: Right. But, for example, if we're dealing with different types of technologies that someone is designing, would different ethical perspectives apply better to different kinds of technologies? Or, if we're dealing with a specific ethical problem that we're trying to solve in designing a specific kind of technology, is it that one of them is better than the others, or should we always try to apply all four of them?
Marc Steen: Yeah, I think it's a relevant question. People have asked me also, do we need all 4? Yeah, we need all 4, but in practice it often happens that one or two of them get most of the attention, often because of the particular application or system or product or service they're working on; it just raises more questions in that perspective. So, for an app that facilitates human-to-human communication, you will spend a bit more time on the relational ethics side of it. For an app or a system that people use all the time, that really ties into their habits, well, habits is also the vocabulary of virtue ethics; developing virtuous habits is like the whole point of virtue ethics. So for those systems that become part of a behavior, of a habit, virtue ethics is a good place to spend some extra time: how does it change people's attitudes, behaviors, habits, virtues? So yeah, just do all four of them, and it's also fine if two of them get some more attention, because, like you're saying, projects are different and will require different emphases.
Ricardo Lopes: Mhm. So, to get a little more practical now that we're reaching the end of our conversation: if someone is thinking about developing a new kind of technology where ethical considerations might apply, how do you think they should proceed? I mean, would there be a set of principles or guidelines for them to follow?
Marc Steen: If they just want to get started, I've made a canvas, a big piece of paper with a handful of questions for each of those four perspectives written on it, and I've used it often in workshops. In the middle of the drawing of the canvas is a blank circle, and, since you're asking where to start, I always advise people to start with the thing they're working on, to put that center stage. So, OK, what is the algorithm that you're working on? Tell me, in somewhat practical detail, with some specificity: who uses it, how do they use it, what then happens, who gets the consequences of what happens? Just tell me, what does it do? And that's the starting point for the discussion. Technology-oriented people love that, I've found, because it makes it practical; the first 5 or 10 minutes of each workshop is just filled with their project. But crucially, what it helps people to do is make all the conversations after that much more practical. Because suppose we don't do that first step and we just say: well, there's the project, fraud detection something, the details don't matter, we'll just talk about ethics. Then people will stare at each other, like, yeah, something fairness, something bias, and then the conversation just drops to the ground, it falls flat. Whereas if they had spent 5 or 10 minutes at the start: OK, this is the data that goes into it, these are the databases that we're using, so these are the biases that are in there, this is how the algorithm works, these are the false positives, these are the false negatives, this is what then happens, this is how the process is organized around it. Is there slack, is there room for the inspector to use their own discretion against the algorithm? Can they push back? Is there a feedback mechanism envisioned? Is it being evaluated? So, all of it, tell me all the details. OK, now we're good, now we can start. And then all the dialogues after that are much more informed and much more detailed. So, your question, where do you start? Well, with the project and its details and specificity, and then you let loose the four perspectives.
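[Editorial aside: one way to picture such a canvas is as a simple checklist keyed by perspective. The sketch below is a hypothetical Python rendering; the wording of the questions is inferred from the conversation above, not taken from Steen's actual canvas.]

```python
# A hypothetical rendering of a rapid-ethical-deliberation canvas as a data structure.
# The exact questions on Steen's canvas are not in the episode; these are illustrative stand-ins.
canvas = {
    "the system itself (center of the canvas)": [
        "What data goes in, and from which databases?",
        "Who uses it, how, and who experiences the consequences?",
        "What are the false positives and false negatives, and what happens then?",
    ],
    "consequentialism": [
        "What good and bad consequences follow from option A? From option B?",
        "Which groups get the pluses, and which get the minuses?",
    ],
    "deontology / duty ethics": [
        "Which duties do the developers and deployers have?",
        "Which fundamental rights of affected people could be violated?",
    ],
    "relational ethics": [
        "How does the system change how people engage with and look at each other?",
        "What happens to power: the bureaucrats or the citizens?",
    ],
    "virtue ethics": [
        "Does the product help or hinder users in cultivating virtues such as self-control?",
        "Which virtues (fairness, courage, wisdom) does the team itself need to cultivate?",
    ],
}

def print_agenda(canvas: dict, minutes_per_block: int = 20) -> None:
    """Print a workshop agenda: one time-boxed block per part of the canvas."""
    for block, questions in canvas.items():
        print(f"\n{block} ({minutes_per_block} min)")
        for question in questions:
            print(f"  - {question}")

print_agenda(canvas)
```

Used this way, the data structure mirrors the workshop format described earlier: the project description fills the center first, and each perspective then gets its own time-boxed block of questions.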
Ricardo Lopes: Mhm. And then, finally, I wanted to ask you about 3 different methods that you present toward the end of your book. One is human-centred design, the second one value sensitive design, and the third one responsible innovation. So, tell us a little bit about them. What are these 3 different methods, how do they apply, and things like that?
Marc Steen: Yeah, great. That's indeed the last bit of the book. What I do there is look at three traditions, if you like, that are already there. My argument is: don't waste your time on setting up something new and separately "ethical"; just look at what is already there. Maybe people are doing human-centred design already, or value sensitive design already, or responsible innovation already. I'll go into all three of them, but the mechanism that I want to recommend is to look at the practices that are already there and then add ethics to them, because otherwise it puts much too much pressure on having to organize an entirely new and different process. No, don't do that; just take what's there. So, human-centred design is broadly the approach where the developers talk to potential future end users, maybe in focus groups, maybe usability testing, maybe user experience testing, all of that. And it's relatively easy to just add to that the four types of questions from the four ethical perspectives, in the focus group, in the usability testing. More practically, human-centred design, if you look at it, there's an ISO norm for it, is already iterative, which is good, and it is already participative, in the sense that you invite the potential future end users to collaborate, but you're also asking experts, and you would now maybe also add a human rights expert or an NGO person to the team to talk about human rights. So it's acknowledging that human-centred design is already halfway there: you're already putting people and their experiences center stage, it is already iterative, it is already participatory. So just add ethics to it and you're good. Another approach that I mention is value sensitive design. It also came from industry. It recommends identifying those values that are potentially at stake in the project, then inviting stakeholders around the table to talk about these values, and then having a decision mechanism to make choices: OK, this is the value that we want to prioritize, and then secondly, how do we do that? How do we develop the technology, or modify the technology or the airport system, in such ways that the value we want to improve is indeed improved? So that's already a great way to do ethics, and many people use it; especially, I think, in the Netherlands it's a very common approach in the four universities of technology that I mentioned, so that's Delft, Eindhoven, Twente, and Wageningen, those were Dutch names, but people maybe know one of them. They often do something very akin to value sensitive design. They also do design for values, which is acknowledging that during the design process you can work in such a way that the values are promoted rather than corroded. Now, between parentheses, I can add a bit of critique on value sensitive design, because sometimes I've seen that it is difficult for people to become practical with a value. Say we're having a discussion: well, justice is good, right? Fairness is good, right? Very few people will disagree with that. So how do you then decide what it means, very practically, in the project? I found that talking about virtues can sometimes help people to have a more practical approach to the system.
So, the fraud detection thing again, or the social media app again: the values, yeah, we agree on them relatively easily, but then what happens practically in terms of people's habits and traits and virtues, how do they change? And I'm not the only one; there are a couple of other authors saying something like virtue-centered design, or people are saying capability sensitive design. I will not go into that, but the capability approach is akin to virtue ethics. So it's a way of making the values, which risk remaining a bit abstract, more practical. And thirdly, responsible innovation. I'm not sure whether people in industry will recognize all of the vocabulary that goes together with responsible innovation; maybe responsible innovation is more of an academic term, but I think people in industry will be familiar with some of the elements. There are often 4 of them: anticipation, responsiveness, inclusion, and reflexivity, and I can say just a little bit about them. The thing is that if, in industry, people are imagining future consequences or outcomes, they're doing anticipation, and if they want to act upon it, they're doing responsiveness. So it's really a recommendation to bake a bit more of that into your projects. Just think about what could go wrong, but also the other way around, what could be enormously successful, which can cause its own problems, weirdly enough. The third one is diversity and inclusion, and I like that dimension a lot. I associate it with the participatory aspect, what I said before. So rather than having only the technology-oriented project team members discuss those topics, doing the ethics, the practical workshops I mentioned, it's just more advantageous to invite other people with other backgrounds. Notoriously, many of the products that come out of Silicon Valley look like they've been designed by white, rich, 30-ish people, and maybe they are, I think they are. Had they included in their project teams, I don't know, older people, people of color, people who are not affluent, other concerns would potentially have been on the table and would have been taken into account. So this is just a big recommendation to, if possible, if relevant, make the project team and the interactions that you have a bit more, or a lot more, diverse and inclusive of those concerns and those perspectives that otherwise remain out of sight. The fourth one of responsible innovation is reflexivity. In short, it just means: be aware of what you're doing. Don't think that other people will solve the problem that you're looking at, because no, you're the person who's looking at it, so be aware of that and just try to improve it.
Ricardo Lopes: Great. So, the book is, again, Ethics for People Who Work in Tech. I'm leaving a link to it in the description of the interview. And Dr. Steen, just before we go, apart from the book, are there any places on the internet you would like to mention where people can find you and your work?
Marc Steen: Yeah, thanks for the question. So the book has a website associated with it. It's simply called ethicsforpeoplewhoworkintech.com. And the great thing about that website is that you can find lots of resources there. So the good thing is to buy the book, but the less good thing, which is still great, is to just go there and look at shorter articles and essays; I have a sort of annotated list of online resources for each chapter. So, on the go, you can just have a listen to a podcast here or there, or watch a short YouTube video, to explain the things that are in the book. It's just to make it more accessible to people. Oh, and marcsteen.nl, Marc is M A R C, Steen, dot NL, that's the Netherlands. That's my personal site, where all my academic stuff is.
Ricardo Lopes: OK, great. So thank you so much again for coming on the show. It's been a real pleasure to talk with you.
Marc Steen: Thanks a lot for the invitation. I enjoyed a lot. Thanks, Ricardo.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Nights Learning and Development done differently, check their website at Nights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Pergo Larsson, Jerry Mullern, Fredrik Sundo, Bernard Seyches Olaf, Alexandam Castle, Matthew Whitting Berarna Wolf, Tim Hollis, Erika Lenny, John Connors, Philip Fors Connolly. Then the Matter Robert Windegaruyasi Zu Mark Neevs called Holbrookfield governor Michael Stormir, Samuel Andre, Francis Forti Agnseroro and Hal Herzognun Macha Joan Labrant John Jasent and Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus Fran David Sloan Wilson, Asila dearraujurumen ro Diego Londono Correa. Yannick Punterrumani Charlotte blinikolbar Adamhn Pavlostaevsky nale back medicine, Gary Galman Samovallidrianei Poltonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fry, Franco Bartolotti Gabrielon Corteseus Slelitsky, Scott Zachary Fish Tim Duffyani Smith John Wieman. Daniel Friedman, William Buckner, Paul Georgianeau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Vozin, David Williams, the Augusta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Corale Chevalier, bungalow atheists, Larry D. Lee Junior, old Erringbo. Sterry Michael Bailey, then Sperber, Robert Grayigoren, Jeff McMann, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Storry, Kimberly Johnson, Benjamin Gilbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perkrolis, Kate van Goller, Alexander Hubbert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursulauddinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levine, and Jos Net. A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernard Curtis Dixon, Benedict Muller, Thomas Trumbull, Catherine and Patrick Tobin, Gian Carlo Montenegroal Ni Cortiz and Nick Golden, and to my executive producers, Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you for all.