RECORDED ON FEBRUARY 7th 2025.
Dr. Vlasta Sikimić is an Assistant Professor at the Eindhoven University of Technology. Her research focuses on Philosophy of Science, Philosophy of AI, Empirical Philosophy, Logic, Science Policy, and Animal Ethics. More specifically, she works on data-driven approaches to the optimization of scientific reasoning. Previously, she worked at the Weizsäcker Center (University of Tübingen) and at the Institute for Philosophy of the Faculty of Philosophy (University of Belgrade), and was an associate member of the Laboratory for Experimental Psychology (University of Belgrade), among other positions.
In this episode, we start by talking about cognitive diversity in science: what it is and how it relates to epistemic diversity. We discuss whether political diversity is important, and how cognitive diversity can be achieved. We then delve into the ethics of AI and talk about ethical principles and guidelines for AI, high-risk AI systems, and robust and accountable AI.
Time Links:
Intro
Cognitive diversity in science
Epistemic diversity
Is political diversity important?
How to achieve cognitive diversity
The ethics of AI
Ethical principles for AI
High-risk AI systems
Robust and accountable AI
Follow Dr. Sikimić’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined for a second time by Dr. Vlasta Sikimić. She is an assistant professor at the Eindhoven University of Technology. I'm leaving a link to our first interview in the description of this one, and today we're talking mostly about cognitive diversity in science and its importance, and also a little bit about the philosophy and ethics of AI. So, Vlasta, welcome back to the show. It's always a pleasure to talk with you.
Vlasta Sikimić: Thank you so much for the invitation, and I'm very happy to be here again.
Ricardo Lopes: OK, so let's start with the topic of cognitive diversity. To start off with, what is cognitive diversity?
Vlasta Sikimić: Cognitive diversity is a term used in epistemology to capture different ways of reasoning, and it can also be approached from the perspective of psychology: whether someone has different types of biases, different ways of processing information. What is also of interest for us is epistemic diversity. And I must admit right away that I consulted some of my colleagues in preparation for this interview, and we realized that we often use these two terms interchangeably, which might not be quite right; it is something to think about. The point is that we can have social diversity, which I think is intuitive for everyone: people come from different backgrounds and have different social experiences. What we sometimes neglect is the cognitive aspect, that is, the way someone processes information and the background knowledge they have. When we talk about epistemic diversity, this can even include positions we take in philosophy of science, or more generally: the way we learn, the way we observe the world, the background assumptions from which we build further hypotheses. And that becomes really relevant not only in philosophy of science but in epistemology in general, because there is good evidence that diversity of thought helps a group, at the group level, to gather more appropriate knowledge. And I guess we will then dive into the conditions that need to be satisfied for that.
Ricardo Lopes: Yeah. But just to make this a little bit clearer, when it comes to cognitive diversity, are we talking about specific ways in which people diverge in how they process information psychologically? I was thinking, for example, of the work done by cultural psychologists, people like Richard Nisbett, when they talk about analytic versus holistic thinking and compare, for instance, Western people to East Asian people in their thinking. Are those the kinds of things we are talking about when it comes to cognitive processing or cognitive mechanisms? Or are we not talking about specific ways of processing information, but, in a more general way, about people coming from different cultural backgrounds, social backgrounds, and things like that?
Vlasta Sikimić: Thank you, that was an excellent example, and indeed it would be an example of cognitive diversity. A different social background will not necessarily make someone cognitively, or let's say epistemically, diverse, although there is a high chance that it will. But if we went to the same school, were exposed to similar content, and belong to the same ways of thinking, as in your example of the Western way of thinking, then it is less likely that we will differ much cognitively or epistemically, and again, these two terms are not identical. Epistemic diversity can also be about which type of methods in science I want to use: whether I am a qualitative psychologist, a quantitative psychologist, or whether I prefer mixed methodology, and what I am experienced in. So imagine a team of experimental psychologists doing quantitative research hiring someone from the qualitative side. This person can really benefit the team, because they can add a certain depth to its research. But we then have to ask this researcher to help with, for instance, implementing mixed methods, adding additional layers, doing qualitative analysis. If we just tell this person, OK, now you need to apply all the methods exactly the way we apply them, then this diversity will be lost, and one of the ideas here is that we want to gain knowledge.

Another context in which diversity makes a lot of epistemic sense is epistemic justice and injustice. The fact that someone is a woman, for instance, means that this person has specific experiences which only this person can explain. We know this from the literature on epistemic justice and injustice. There was a long-standing paradigm that women cannot do physically difficult work, or that they cannot do intellectually the same as men. But then you get counterexamples and experiences, and those enter the general discourse; you need testimony from someone belonging to that specific group to explain that this is actually happening and that it is not being recognized. So it is a multifaceted point. When we talk about cognitive diversity, there can also be thinking styles, potentially even some types of neurodiversity, and so on, but in the epistemic domain we can really observe these other phenomena as well: our background, our positionality. That is why it is very helpful in research papers to always specify our positionality; it helps the reader understand where we come from. So I can say: I am a woman, I come from Eastern Europe, and I can write from this perspective. I may be sensitive to gender issues, and I am sensitive to this east-west division, but of course I cannot testify for people who belong to other communities, who observe the world differently and have a different style of thinking about it. If I were exposed to that type of culture, or if I had studied or worked in such an environment, that would certainly change the situation. But also having colleagues coming from different backgrounds, willing to test different hypotheses, bringing different perspectives, is very helpful, provided you are listening and ready to learn from them, because there is no single right way to do science, or, in general, to communicate with people. These questions in epistemology are related to questions of trust in science, but also to how we learn from each other and how much we understand each other. The more we understand each other, the more open we are to different types of thinking, and the more we understand how different experiences lead to different behavior, the more our horizons expand. That can lead to a kind of wisdom of the crowds, instead of polarization, where we say: you don't think the same as we do, thus your viewpoint is not legitimate. And that would be an example of excluding someone based on their opinions.
Ricardo Lopes: So when it comes to epistemic diversity specifically, I was wondering, does that include people being diverse in terms of, for example, the different ways they approach science? Let me give perhaps two kinds of examples; I'm not sure if in philosophy nowadays you still use these terms, or if people still identify as such. Someone might have a more idealist approach and another a more empiricist approach, or someone might still be in the more positivist camp, something like that. Or, for another example, someone might come from the perspective of a substance ontology and another person from the perspective of a process ontology. I'm not sure if I should call this theoretical diversity, since it is basically people bringing different theoretical approaches and perspectives to the table, but do these kinds of things apply in the context of epistemic diversity? Do you also consider these differences in kinds of thinking, or not?
Vlasta Sikimić: Definitely, and thank you for these examples, Ricardo. This is how we think of epistemic diversity in philosophy, and it connects with what you explained before about where we come from. A lot of the philosophy being taught in Western Europe has this history of philosophy coming from Western Europe. We get this classical education that starts from the pre-Socratic period and goes through rationalism, or Kant, the Western thinkers. Many of us don't get enough exposure to different types of philosophy, and that forms some of the background assumptions we have. And that is completely legitimate; Fichte was already saying this, right? The type of person you are shapes the type of philosophy you choose. So, for me, formal approaches to philosophy resonated, but that doesn't mean this is the only valid approach or the only one we should pay attention to. On the contrary, I find it very nice to receive different perspectives and then go deeper in my research. It is also legitimate that we have people working on different subcategories of research, as long as, eventually, we can aggregate that knowledge, which happens over longer periods of time, and make some recommendations.

I think it is very enriching and very interesting to have a positive view of diversity, because nowadays we face this pressure that people have to think in one way, and it even gets connected with social belonging: the group expects you to share its values. But even having different background values can help us be cognitively diverse as a team. There is one premise here, though: we have to have some intellectual tolerance towards each other. As long as we have that intellectual tolerance and openness towards each other, diversity can be really beneficial. The other aspect I wanted to point out is that even mistakes in reasoning in science can often be fruitful. It is not always immediately clear what the rational thing to do is, and people can simply follow their passion, and maybe even have some biases. If we have spent a lot of time exploring one hypothesis and the hypothesis is not working, we have the sunk cost bias and want to keep working on it. The rational thing would be to drop it, and that is something I have written about. But from the perspective of a bigger team, it does make sense that people who follow their biases in research are there, and of course we all have biases, because sometimes that leads to a breakthrough. One simple way to explain it is to think of an epistemic space. We are trying to learn something, and we are exploring this epistemic space together. If we are all sitting at, or around, one point A of that space, then all the other parts of it get neglected. So it is good that people are exploring other parts of it, that we encourage them to do it, and that we have understanding for it. I guess nowadays, because of the politically heated debate about diversity and inclusion in US academia, we are again making the case for why it matters. And it really matters not only from the group perspective but even from the individual perspective: when someone challenges our viewpoints and shows us that we can think in a radically different way, that can also help us think better and open our horizons, or simply challenge us to answer, to sharpen our ideas, and to reach better answers and a better understanding of our own position.
Ricardo Lopes: Mhm. So let me ask you this, because nowadays, particularly in certain countries like the US, when people promote diversity in academia, they usually talk primarily about diversity in political viewpoints or political orientation. They say, for example, that we should have more right-wing people in academia because there are too many left-wing people. Is political orientation also something to be considered when it comes to cognitive diversity or not? And is political orientation really a good enough proxy for cognitive diversity?
Vlasta Sikimić: I think there are two debates currently happening. One is about measures for hiring and promoting people, or trying to empower people, from underprivileged groups. That has a long-lasting social impact, because the more people who get the opportunity to get an education, and the more people who reach these higher academic positions, the more they lead by example and make a change. And we have to keep in mind that academia was closed, and is still often closed, especially in some disciplines, to specific genders, let alone other underprivileged groups. I can also speak from my own experience and from the feedback I get from my third-year students at a technical university, from one specific group; there are different groups, and we are teaching philosophy to engineers. From this one group, several people told me that I am their second, or for others their third, female lecturer in the three years they have studied, and that is worrisome. I am not even an expert in their subjects; I am teaching them philosophy. If they had more female role models in these higher positions, the assumption is that this would also encourage other women and girls, and parents, and teachers in primary schools, because everyone has to get motivated to pursue this type of career. There was one piece of empirical research I found very interesting, pointing out that at the end of high school the difference between boys and girls is not within the natural science subjects: they perform similarly well in natural sciences; it is just that in humanities girls perform better. I believe Breda and a co-author are the authors of that research; maybe we can link it later, and if I am mistaken or mispronouncing the names, please accept my apology. But I found the idea engaging: if you are really good in humanities, someone tells you, OK, then you have to study humanities, and the option is not opened up that you are also good in natural sciences, that maybe you should explore that, that maybe that would be a nice career for you. And it won't be a nice career if you enter a classroom where you are in the minority and there are no female teachers with whom you can connect; all of that adds pressure over time. So that is one aspect of why we want some type of diversity in science; this is the social aspect. But then there is the epistemic aspect: different kinds of social dynamics, intellectual exchange, and different types of thinking will be brought in when we have people from diverse backgrounds. And again, this might be more obvious in the humanities, where I can say that my positionality, where I come from, shapes my thought, so that I can think about feminist philosophy in a certain way.
And of course men are also most welcome to enter and think about feminist philosophy; that is highly desirable. When I say men, I mean the straight man, the paradigm case, and they are most welcome, it would be super helpful, and they do, and that is great. That is the inclusive environment: everyone is invited. And that will also lift the whole field and the whole subject; it will bring more attention to it, sharpen it, and develop it. We see that very strongly in the social sciences, but even in the natural sciences: when you do biology or medicine, gender does matter. So I would definitely keep that in mind. It will bring different perspectives, and maybe also shape which types of research get prioritized, which types of questions matter more to, for instance, women. I think all of that matters, and the deeper we dive, the more we realize why it matters. So that is the epistemic component of why this type of diversity makes sense.

But you asked specifically about the other aspect, the political one: whether we want more right-wing people in science. I think that is a great question, and I like it. Well, let me first say that, contrary to the mainstream view, I would say yes, we do, or at least we should be blind to someone's political orientation. We should definitely not exclude someone because they are right wing. I even did empirical research, which we talked about last time, on the impact of socio-political attitudes on the views of scientists, and indeed, in the research we did, but also in other studies we surveyed, scientists are usually more on the left side. This is probably also the experience of all of us who work in science, or of everyone who communicates with scientists. But sometimes you also wonder how much people are actually speaking and how much they stay silent because of this microclimate of social pressure. Again, I can state my positionality: it is more on the left. However, also because of my positionality, because of everything I experienced in my childhood, growing up in a country torn by war, I have a lot of understanding for people who come from a right-wing perspective. Not all of them, of course, but some people hold those views because they felt this is what preserves their identity, ultimately their life, and I would not like them to be excluded. It is a challenge, of course, to have an open dialogue between the groups, but I am very sympathetic to people coming from the right, provided that certain standards of communication, human rights, and so on are respected. And I think this is really important. What I also think should be welcomed is that everyone feels free to express their political orientation, the way I just felt free to express mine, and not only say it in confidence to me because they know I am not a judgmental person, while not speaking out loud within larger groups because they think they would be stigmatized. I think that is really problematic.

The second question is how impactful political orientation is on someone's research. Some of the findings of this paper were rather promising: within their specific field of research, scientists are not overly influenced by their socio-political orientation. That is a promising thing, but we know historically that science can be really politicized and abused for different political agendas. That is something we have to protect science from, because science is still highly trusted by the general population, and it also makes sense that it is. But if someone speaks from the position of epistemic authority, if someone is a lecturer at a university and is spreading a political agenda, no matter from which side, that violates people's rights, and it also imposes pressure to think in certain ways, or installs certain values which it shouldn't. It can also be used to justify harmful practices; when we talk about this, everyone immediately thinks of the Second World War period and how harmful using science for a political agenda can be. But we should keep in mind that the danger can also come from the left, or it doesn't even have to be the left; it can be the neoliberal paradigm, which advocates for freedoms but potentially excludes others, not always, of course. So, in my opinion, I am always very careful about funding which has a political background. I think it has to be acknowledged if you received funding for some type of research that comes from a political background, no matter whether it is from the liberal part, the conservative part, right, left, and so on, because science funding can really direct science in different ways. There can be an implicit pressure to deliver what you promised in the grant application: if you promised that you would detect certain effects, or that you would work on, say, reconciliation, which is of course a nice idea, but then you realize that there are deeper problems and it cannot happen, you might still misreport it, partly because of idealistic ideas, but maybe also because of the expectations of the funder. And this matters: if you work on reconciliation, which I think is an exceptionally interesting and important topic, you have to understand the perspective of both sides and be as neutral as possible, and in the social sciences we know it is not really possible to be completely neutral; that is why we have to report our positionality. So I think it is a big topic, and I would be very careful about it.
And one last thing to keep in mind, sorry for taking so much time. A lot of our students may come from different political backgrounds; they do, because I interact with them. And just ignoring or denying that is not helpful. One has to understand where they come from; we are there to have a dialogue with them in higher education, to acknowledge it, to acknowledge their arguments, and to be there for them. That is why I think that for someone who wants to be a proper intellectual, the practice of trying to remain neutral regarding political orientation, at least at the workplace, is really important. Mhm.
Ricardo Lopes: So what would you say are perhaps the best ways for us to promote cognitive diversity in science specifically? Do you think that things like removing, or trying to remove as much as possible, biases, barriers, and obstacles would be enough for us to have more cognitive diversity in science, and I guess academia more generally? Or do you think we need solutions that are perhaps a bit more interventionist, like affirmative action in hiring and in helping certain kinds of students attend university? What kinds of solutions do you think are best?
Vlasta Sikimić: Mhm. I'm writing down notes. Yes. I really like one quote which says: diversity without inclusion is an empty gesture. And I think we need inclusion. Inclusion doesn't have to come only from affirmative measures; it can be something much simpler, what we talked about just now, which is to try to be open to different viewpoints and to consider them legitimate as much as possible. Take the simple example from before: we are hiring someone who does qualitative research into our highly quantitative lab. We did that deliberately because we want this person to bring something new, so we will let them do qualitative research and try to learn from it and be open to it. Otherwise, if we just make this person do the same as what we are doing, which is quite likely to happen, that doesn't bring any of the benefits of epistemic diversity. So inclusion is important, and epistemic inclusion in particular, which means that we really consider these viewpoints and really try to integrate them within our research. When it comes to affirmative action, that, again, is a big debate, because some people in the majority group can start feeling envy or can find it unjust, and that is why we have to be careful with affirmative action. As for what you asked about education, there I have a very leftist view: education should be available as much as possible to everyone, and everyone should be encouraged, and possibly supported, to pursue education if they have the motivation for it. I think it is also completely legitimate not to have this motivation and just be happy in different ways, so I wouldn't go as far as the perfectionist view that you have to motivate everyone to study. No. But for people who do want it, I feel that we as a society have to provide as many resources as possible so that it can happen. A lot of professors actually share this, and that is why professors like to help: they give interviews such as this one, they put their courses on open platforms, and whether that will bring education closer to everyone, I don't know, we can only hope. We also often give guest lectures and do mentoring when someone approaches us, and so on. And we do this for free; well, it's not for free, it's for our soul, because we are happy to do it. I think a lot of people are genuinely happy to do it; they see it as one of the very important parts of our job. I also think it is nice and important that we publish open access when we can. I know, again based on my positionality, what it is like to be in an environment in which you cannot afford to pay the fees to publish open access, and I know there are different types of open access, so not everyone has to pay, and so on. But in any case, if you are in the privileged position, both in career terms and in financial terms, of being able to afford it, making a lot of resources available is a great thing. And people are really doing it. My professor of logic from Belgrade, who unfortunately passed away, wrote a lovely book about logic for high school students. That book is freely available; it is in Serbian, but everyone can download it and read it. I think he felt it was one of his legacies, and it is a very, very nice thing to do.
Ricardo Lopes: So, on the topic of cognitive diversity, is there anything else you think is important to add or that I might have missed in my questions, or can we move to the philosophy of AI?
Vlasta Sikimić: Your questions were excellent, as always. Maybe I can just briefly connect this with intellectual virtues, because we talked a lot about diversity from the perspective of social epistemology and how it makes sense socially. We touched upon these virtues of open-mindedness, epistemic tolerance, intellectual justice, and I would also add epistemic charity: to interpret what someone is saying in the most charitable way, and to not care about language fluency but about the content. All of these are things we can also practice on the individual level in order to become more epistemically inclusive.
Ricardo Lopes: Great. So let's talk then a little bit about the philosophy and ethics of artificial intelligence. You worked on a set of ethical guidelines for the development, implementation, and use of robust and accountable artificial intelligence, adopted by the government of Serbia. So, first of all, what are these guidelines and why were they needed?
Vlasta Sikimić: Maybe to make a connection between the two topics, just to say that the use of AI can potentially decrease epistemic diversity. It can decrease it in education, and it could also decrease it in science, because, depending on what the AI is trained on, a dominant way of thinking can come out of it. This is research that my colleague, Alexandra Wolzkowitz, and I did regarding the use of AI in education and the potential epistemic danger of global injustice that could come from it: that there is a dominant paradigm, that we are teaching children something within that dominant paradigm, and that we are not assessing it critically enough. Even UNESCO is now pointing out that critical thinking in the era of AI is becoming really important. And one of the important aspects of critical thinking that we want to teach everyone is exactly to question and to check whether a given viewpoint is fair and diverse enough. Fairness comes together with diversity, and that is one of the principles that responsible AI has to follow. You asked me about the Serbian guidelines. I was very happy to contribute to them. They were written right before the huge expansion of generative AI, but there was a visionary view, and a larger strategy in Serbia, as everywhere else in the world, I assume, to develop ethically driven AI. The guidelines had two purposes. One was to provide a platform for people who are developing AI: there are even forms attached to the document which one can go through to check whether an AI solution is low risk or high risk, and what should change in order to make it ethically acceptable, so the first purpose was really to help developers build AI in an ethical way. The other was, of course, the understanding that this is a big question for the future and that we need to align the general laws we do have, about data privacy, dignity, autonomy, and so on, with the use of new technology, because it is not always clear what is happening in which case. Ethical guidelines are part of soft law, so they are just recommendations; based on them, specific legal solutions and strategies are made later. Mhm.
Ricardo Lopes: So I would like you to tell us a little bit about the main ethical principles that drive the guidelines, namely explainability and verifiability, dignity, the prohibition of causing harm, and fairness. Tell us about each of them, and why these ethical principles specifically.
Vlasta Sikimić: Yeah, there are certain common frameworks we think through when using AI: whatever is being applied to people needs to be something we have a certain control over and an understanding of, and we should be able to check and dispute what is happening within a specific technological solution. It has to respect our dignity. And the first thing one has to show is that the AI solution actually brings some benefit and that it is not causing harm. Then there is the fairness aspect, which I find very important because, as I touched on just now, fairness also has an epistemic dimension. It has a social dimension, in that someone can be discriminated against by AI, but it also has an epistemic dimension. Let's go back to the simplest example: automated translation. We use an automated translator from English, a language in which occupation nouns do not mark gender, into Serbian, a language which does, and it translates the doctor as male and the nurse as female. This is one of the common examples. It has certain impacts because it encodes stereotypes and biases which we have as a society. It mirrors these biases, but we do not want that to happen. And depending on which area we use AI for, it can have an even bigger impact. If certain algorithmic biases are there, they can hinder us from learning properly, because the information we get is biased, and then we have the additional problem of how much we trust machines. That is a big, empirical question which psychologists are trying to study, and we often trust the machine a lot; I know there are conflicting viewpoints on this, but machines can be very suggestive. Those are some of the aspects that come with the epistemic questions. But of course, AI can also have deep, profound impacts on human life, especially if you use it in certain domains. If you use it to decide who will get credit and who will not, and some discrimination enters there, that can be really bad. Or systems which make recommendations: in law, for instance, there are ideas of applying it to predict who might commit a crime again, and that can be very dangerous and have really strong effects on someone's life. Or we can have applications which are more neutral: we are just trying to optimize the distribution of gas stations or bus stops around the city, which also has some impact, but less of an impact on humans. Why it is important to have some of these basic criteria is that whenever we want to test a certain AI solution, we can think about it through them. If we are using a medical AI, we ask: is this really helpful? Can we really explain why this is being used? Sometimes we don't know what is actually going on, yet it is applied to people, and that is dangerous. Is it really fair to all the patients?
Do patients feel that their dignity is being respected? Similarly, if we are using some AI tool in education, we again think through these principles and try to see whether they are being met. Of course, there is privacy as well, and it is also mentioned in the document. What I also think is nice is that these principles can be connected with certain virtuous behaviors. They can serve as recommendations for developers on how to think about their solutions and whether they satisfy them, and for legislators on what one should say yes or no to. And if you have ethical and responsible AI, ultimately this is also good for the users, because then users can trust it more. I think it is also good for users to be aware of certain criteria and think about them: OK, is this AI really explainable? Is it honestly made? Is privacy respected? Is fairness there? Fairness translates into this intellectual or epistemic justice, or justice in general, so it has, as I said, both an epistemic and a non-epistemic component. One can really ask whether a system follows these kinds of virtues, and then: do I want to use it or not? Do I want to give my data to the system or not? And we have to keep in mind that some of these solutions are suboptimal. We will all use translation tools, or generative AI for generating text, but we have to be there to think critically: OK, there is this fairness question, it might misrepresent certain groups, and since I am aware of it, I will correct for it. One example I give to students is photos generated by DALL-E. I asked it to create an image of inclusive science, and it gives different pictures. It is really hard to get it as inclusive as possible, even with a lot of prompting, and even then you notice that there are no older women in the picture. So you might get different groups represented, but when you intersect these groups, you might still have underrepresentation. And then the idea is: you show a picture to students and ask them, what is wrong with the picture? What is missing from it? If they are aware of it, that is already helpful, because that is the best we can do; of course, AI is very helpful and brings us a lot of efficiency, so sometimes it is a balancing act of finding the optimal solution. And I think the ethical guidelines are exactly in this spirit: we wanted to facilitate progress and provide a framework both for legal solutions and for developers and users, so they can really see how to do this responsibly and what the potential consequences of their solutions might be. I hope that not many people have outright evil plans; things just might go wrong. Yeah.
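To make the translation example above concrete, here is a minimal sketch, in Python, of the kind of fairness audit it suggests. The helpers translate_en_to_sr and grammatical_gender are hypothetical stand-ins for whatever translation model and morphological check one is testing; they are not part of any real library or of the Serbian guidelines.

from collections import Counter
from typing import Callable, Dict

# Gender-neutral English inputs whose Serbian translations must commit to a grammatical gender.
OCCUPATION_SENTENCES = {
    "doctor": "The doctor said hello.",
    "nurse": "The nurse said hello.",
    "engineer": "The engineer said hello.",
    "teacher": "The teacher said hello.",
}

def audit_gender_defaults(
    translate_en_to_sr: Callable[[str], str],       # hypothetical: the translation system under test
    grammatical_gender: Callable[[str, str], str],  # hypothetical: returns "masculine" or "feminine" for the occupation noun
) -> Dict[str, str]:
    """Record which grammatical gender the translator assigns to each gender-neutral occupation."""
    defaults = {}
    for occupation, sentence in OCCUPATION_SENTENCES.items():
        translation = translate_en_to_sr(sentence)
        defaults[occupation] = grammatical_gender(translation, occupation)
    return defaults

def summarize(defaults: Dict[str, str]) -> Counter:
    """Count how often each gender is chosen; a skew that tracks stereotypes
    (doctor -> masculine, nurse -> feminine) is the bias pattern described above."""
    return Counter(defaults.values())

The point is not the specific code but the shape of the check: feed the system source sentences that are neutral, and see whether its target-language choices systematically track social stereotypes rather than genuine ambiguity.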
Ricardo Lopes: So, another question: what constitutes a high-risk artificial intelligence system?
Vlasta Sikimić: These are systems that influence human lives in a significant way, and systems that profile people; that is how we were reasoning here. But, and this is something I am very proud of, we also considered the environment, the ecological aspect, and I think this was the first such act that also considered animal welfare. I would even go further and say animal rights, but this is already great. It was my humble contribution, which came from the work of my colleague Thilo Hagendorff from Tübingen and of Peter Singer, the famous ethicist, who were at that time already pointing out the potential impact of AI on animals. I think this will become more and more standard: we have to take care of our planet and the living beings on it, so that AI does not give recommendations which can harm animals or the environment, and of course, especially humans. In the document, the human aspect is elaborated on, and some of the high-risk cases, also in other acts I was reading in preparation, are very much related to healthcare and education, because there AI can really impact someone's life, to the justice system, and to anything that could potentially hinder democracy. There I would also include different types of media, not only social media but also the standard media; if AI is used too much for generating text, that can become problematic. Those are just some illustrations; the list is not exhaustive, and what is very important, and it is written and noted in the document, is that we always have to update such lists, because the technology might develop in ways we are not fully aware of yet. That is why it is important that this is not fixed but constantly revised. And when we say that something is high risk, that does not mean it is simply prohibited. The use of AI in education, for example, shouldn't just be prohibited; that is not the point. The point is that special care and special attention have to go into how it is implemented.
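To illustrate the triage idea, here is a minimal sketch, in Python, of the kind of checklist logic such guidelines encode. The domain list and the risk_tier function are assumptions made for this example only; they are not the actual form attached to the Serbian guidelines.

from dataclasses import dataclass, field
from typing import Set

# Illustrative, deliberately non-exhaustive list of sensitive domains mentioned in the conversation;
# as noted above, any real list has to be revised as the technology develops.
HIGH_RISK_DOMAINS = {
    "healthcare",
    "education",
    "justice_system",
    "credit_and_finance",
    "media_and_elections",
    "environment_and_animal_welfare",
}

@dataclass
class AISystemProfile:
    domains_touched: Set[str] = field(default_factory=set)
    profiles_individuals: bool = False  # does the system profile or score individual people?

def risk_tier(profile: AISystemProfile) -> str:
    """Coarse triage: 'high risk' does not mean prohibited; it means extra safeguards
    (impact assessment, human oversight, periodic review) before and during deployment."""
    if profile.profiles_individuals or (profile.domains_touched & HIGH_RISK_DOMAINS):
        return "high risk"
    return "lower risk"

# Example: an AI tutor used in schools that adapts to individual students.
tutor = AISystemProfile(domains_touched={"education"}, profiles_individuals=True)
assert risk_tier(tutor) == "high risk"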
Ricardo Lopes: Mhm. So I asked you about the ethical guidelines, but now, what does it mean for AI to be robust and accountable? What does that mean exactly?
Vlasta Sikimić: It means that it gives reliable estimates, or reliable outputs, depending on what you are using it for, across different uses and parameters, and that, from the technical perspective, it has high accuracy in what it is doing. But for me it is also very important to keep the idea of the human in the loop, even though we are seeing that things are moving more and more in the direction of automation. In my opinion, the human still has to be the one who decides about values, who decides about quality, who vouches for its use and implementation. And I think this is also how legislators are thinking nowadays: there always has to be a human responsible for whatever the AI is doing, because the human can also do the monitoring and updating. I definitely think that human control is important.
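As a rough picture of the robustness requirement, the sketch below, in Python, checks whether a system's outputs stay stable under small input perturbations that should not matter. The model argument is a generic, hypothetical callable, not a specific library interface.

import random
from typing import Callable, List

def perturb(text: str) -> str:
    """Toy perturbation: append an extra space to one random word.
    A robust text system should not change its answer because of this."""
    words = text.split()
    i = random.randrange(len(words))
    words[i] = words[i] + " "
    return " ".join(words)

def robustness_rate(model: Callable[[str], str], inputs: List[str], trials: int = 20) -> float:
    """Fraction of perturbed runs whose output matches the output on the unperturbed input."""
    stable = 0
    total = 0
    for text in inputs:
        baseline = model(text)
        for _ in range(trials):
            total += 1
            if model(perturb(text)) == baseline:
                stable += 1
    return stable / total if total else 1.0

A low rate on perturbations that are semantically irrelevant is a signal that the system is not giving the reliable, repeatable outputs the guidelines ask for.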
Ricardo Lopes: So, my last question, and I think you've already partly answered it in your previous answer: what, then, are the requirements for AI to be robust and accountable?
Vlasta Sikimić: Yeah. So, of course, to follow the principles we discussed, and I would even say these principles are not necessarily exhaustive, they are just the main ones: to respect privacy, dignity, fairness, explainability, and beneficence, meaning that it causes no harm and actually does some good, and that it performs well on these parameters and is useful. And I don't want to finish on a note of being too critical towards technology. On the contrary, I think technology is great, and when we have trustworthy technology, it can help us a lot. What I mean by trustworthy is that it is designed for humans, by humans. And, this is maybe an addition which is not in the law but which I think is very important, that it really takes into account underprivileged groups and how they might be affected; that is part of fairness, of course. In order for humans to trust AI, the AI really has to satisfy our criteria and our demands, also when we are in an underprivileged group, so that we really feel it is doing a good job for us. Often we are not aware of it, so I think it is a complex system in which AI now interacts with humans. Education plays a role there, so that we really understand what the AI can do and what it cannot do, at least at the moment, and how it is developing. And I always encourage everyone: let's try to use it, let's experiment with it. Then we will also know where the boundaries are and how we feel about it, while maintaining a critical attitude and trust in our own judgment.
Ricardo Lopes: Mhm. Great. So where can people find you and your work on the internet?
Vlasta Sikimić: Yes, of course, on my web page; I have a personal webpage, la.com, and also my LinkedIn page and my university web page. My first and last name are a unique combination, so when you type my name, you cannot miss me. OK.
Ricardo Lopes: Great. So thank you so much for coming on the show again. It's always a great pleasure to talk with you.
Vlasta Sikimić: Thank you so much, and thank you for all these wonderful efforts and the educational material you're providing for everyone.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Nights Learning and Development done differently, check their website at Nights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Pergo Larsson, Jerry Mullern, Fredrik Sundo, Bernard Seyches Olaf, Alexandam Castle, Matthew Whitting Berarna Wolf, Tim Hollis, Erika Lenny, John Connors, Philip Fors Connolly. Then the Matter Robert Windegaruyasi Zu Mark Neevs Colin Holbrookfield governor Michael Stormir, Samuel Andre, Francis Forti Agnseroro and Hal Herzognun Macha Joan Labrant John Jasent and Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus Fran David Sloan Wilson, Asila dearraujurumen ro Diego Londono Correa. Yannick Punterrusmani Charlotte blinikolbar Adamhn Pavlostaevsky nale back medicine, Gary Galman Sam of Zallidriei Poltonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fre Francoortolotti Gabriel Ponorteseus Slelitsky, Scott Zacharyishim Duffyani Smith Jen Wieman. Daniel Friedman, William Buckner, Paul Georgianeau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Vozin, David Williams, Diocosta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, bungalow atheists, Larry D. Lee Junior, old Erringbo. Sterry Michael Bailey, then Sperber, Robert Grayigoren, Jeff McMann, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Storry, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perkrolis, Kate van Goller, Alexander Aubert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursulauddinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levin, and Jos Net. A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernard Curtis Dixon, Benedic Muller, Thomas Trumbull, Catherine and Patrick Tobin, Gian Carlo Montenegroal Ni Cortiz and Nick Golden, and to my executive producers Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you for all.