RECORDED ON FEBRUARY 26th 2025.
Dr. Alexander Thomas is a Senior Lecturer in Media Production and Film at the University of East London. He is the author of The Politics and Ethics of Transhumanism: Techno-Human Evolution and Advanced Capitalism.
In this episode, we focus on The Politics and Ethics of Transhumanism. We start by discussing what transhumanism is, its tenets, different kinds of transhumanism, enhancement, hierarchy and elitism, and arguments against transhumanism. We delve into advanced capitalism, longtermism, accelerationism, effective altruism, TESCREALism, the politics of transhumanism, and eugenics. We also talk about data totalitarianism, the consequences of a cybernetic approach to the human mind, artificial general intelligence (AGI), and posthumanism. We discuss whether transhumanism can be a religious movement. Finally, we talk about the risk of dehumanization, and how we can ethically approach a potential transhuman future.
Time Links:
Intro
What is transhumanism?
The tenets of transhumanism
Kinds of transhumanism
Enhancement
Hierarchy and elitism
Arguments against transhumanism
Advanced capitalism
Longtermism, accelerationism, and effective altruism
The politics of transhumanism
TESCREALism
Eugenics
Data totalitarianism
A cybernetics approach to the mind
AGI
Posthumanism
A religious movement?
Dehumanization
How to approach a transhuman future
Follow Dr. Thomas’ work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of the Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Doctor Alexander Thomas. He's a senior lecturer in media production and film at the University of East London, and today we're talking about his book, The Politics and Ethics of Transhumanism: Techno-Human Evolution and Advanced Capitalism. So Doctor Thomas, welcome to the show. It's a big pleasure to
Alexander Thomas: everyone. Thanks very much for the invite, Ricardo. It's a pleasure to be here. Thank you.
Ricardo Lopes: So let's start perhaps with a bit of background here, also for the people who are listening and watching us who are not familiar with the topic of transhumanism. So what is transhumanism? And I know that it is very associated with different kinds of technology, so what kinds of technology are associated with it?
Alexander Thomas: Um, OK, yeah, sure. So, um, transhumanism is basically the idea of self-directed human evolution. So it's the claim that we can and that we should radically enhance the human condition through the use of applied technoscience, essentially. And most transhumanists think that we're on the cusp of making this a reality reasonably soon. So one way of thinking about it is that if the history of scientific and technological progress can be seen as an attempt to use nature to better serve human needs, transhumanism can be seen as the revision of human nature to better serve our fantasies. And, you know, as you pointed out there, there are a bunch of technologies that they think will make this so, and that is the reason why, after 300,000 years or so of our existence, they think maybe we're suddenly about to become something else. And it's what they sometimes refer to as the converging NBIC suite of technologies that make this thinkable. And NBIC stands for nanotechnology, biotechnology, information technology, and cognitive science. So nanotech is technology on a very small scale, even on the scale of individual atoms and molecules. So they have a concept called atomically precise manufacturing. So that's part of the transhumanist kind of nanotechnological dream that might allow us to turn smoke into strawberries and cancer tissues into healthy tissues and create radical abundance. Then there's biotechnology, which includes things like genetic engineering. And obviously information technology, which we're all very familiar with: computers, the digital world, developments in AI, the potential of quantum computing. And obviously cognitive science is the study of the mind. But in particular, it's the fusion of these technologies which really, for transhumanists, offers profound possibilities. So an example might be brain-computer interfaces, such as those designed by Elon Musk's company, Neuralink, and that contains, you know, biotech elements, infotech elements, cognotech, all within one system. So, on the one hand, transhumanism is a kind of ideological commitment to being proactionary in using these technologies, to apply them to humanity and to evolve humanity into a kind of enhanced entity, and then continue with that process. But on the other hand, it's sometimes also used descriptively, as a term for this process, the kind of material unfolding, the co-evolution of humans and technology. So, personally, when I use the term, I tend to describe the process using terms such as technogenesis or techno-human evolution, and I use transhumanism for the ideology saying yes, this is a good idea.
Ricardo Lopes: And how old is transhumanism? Because just looking at the ideas associated with it, and particularly the kinds of technologies that go with it, one would think that perhaps it's just a very recent thing that came about over the last few decades or something like that, but maybe, particularly from a philosophical standpoint, it might be a bit older.
Alexander Thomas: Yeah, absolutely. I mean, I would say humans have probably been dreaming about kind of transhumanist possibilities for pretty much as long as human culture has existed, I'd say. You know, I'm sure throughout history humans have been very aware of the limitations of what it is to be human: we die, we get disease, our bodies are frail, and I'm sure we've always kind of wanted to imagine these things away. But obviously in the modern sense of the word, transhumanism is much more recent. Now transhumanists often point to kind of the Enlightenment as the philosophical lineage of their thinking. So, you know, we had Enlightenment philosophers such as Francis Bacon. He imagined science leading to incredible possibilities. And then, you know, a bit later in the 19th century, there was a thinker called Winwood Reade, for example, and he, you know, wrote about most of the ideas that modern transhumanists talk about now. There were also biologists in the early 20th century, like Haldane and Bernal, who provided very strong kind of proto-transhumanist visions. But modern transhumanism is usually said to begin with Julian Huxley, and he's an evolutionary biologist and a eugenicist, and the brother of Aldous Huxley, who wrote Brave New World, obviously. He's the one who coined the term transhumanism in its modern sense, and that was in the early 1950s. But there is another kind of argument which says actually it's even more recent than that. So there's a guy called Max More, and he claims to have coined the term independently, and around the 1980s and 1990s there was a kind of transhumanist movement called the Extropian movement, and that was very influential, and there was a kind of Extropian mailing list that included many of the very famous transhumanists that have emerged since then. So, you know, you could argue that the Extropian movement has been very important in terms of conceptualizing the modern meaning and movement of transhumanism. So maybe transhumanism starts in the 80s and 90s with kind of Max More and Extropianism. So those are the kinds of answers; you know, it depends how you define it, how far you go back.
Ricardo Lopes: And what are some of the most prominent figures associated with transhumanism today?
Alexander Thomas: Um, well, today, I mean, yeah, there's lots. I mean, as I say, if you think historically, you've got Francis Bacon, obviously. So I think actually Max More argued that transhumanists should forget the Christian calendar and start a new one where year zero is the year that Bacon published the Novum Organum. So he thinks he's that important in terms of transhumanism. So yeah, you could argue Bacon is very influential. There's a Russian philosopher called Nikolai Fedorov, who was the father of Russian cosmism, which is a kind of proto-transhumanist offshoot again, in the, you know, kind of 19th century. Winwood Reade, who I've mentioned: his book The Martyrdom of Man in 1872 kind of imagined all the current ideas of transhumanism, such as space colonization, new human bodies, humanity functioning as a hive mind, the invention of immortality he talked about, and the belief that humans would one day rule the universe, essentially, as a kind of godlike posthuman entity. So all of that was imagined in 1872. JBS Haldane and JD Bernal, they kind of wrote very important proto-transhumanist ideas. Julian Huxley, in 1951 it was that he potentially coined the term transhumanism in the modern sense, in his article Knowledge, Morality and Destiny. I would also say FM-2030 is very important. He influenced Max More and Natasha Vita-More, who kind of were key figures in this Extropian movement. FM-2030 was a guy called F.M. Esfandiary, and he changed his name to FM-2030 because he thought in the year 2030 he was going to be 100 and life would be completely changed and radically improved. Sadly, he died in 2000 and was cryonically frozen. I also think Ray Kurzweil is a very important and influential thinker on the transhumanist movement, a particular type of transhumanism that we might talk about later, which you could call singularitarianism. In terms of modern thinkers, I would then add Nick Bostrom, who is probably the most influential transhumanist thinker of the 21st century, I would say. And other notable people that I haven't really mentioned so far: David Pearce, who co-founded the World Transhumanist Association, which later became Humanity Plus; Anders Sandberg, who was kind of Nick Bostrom's sidekick at the Future of Humanity Institute, which was an important transhumanist kind of academic institute; and James Hughes, who's the main kind of figure on the left-leaning side of transhumanism, so-called techno-progressivism; he's quite an important guy. But the truth is, today I would say the most important transhumanist thinkers are really the billionaire elites of Silicon Valley. I'm not sure we could call them thinkers exactly, they're more like robber barons than philosophers maybe. But, you know, Peter Thiel, Elon Musk, Sam Altman, Marc Andreessen, and, you know, other tech billionaires as well, they have been extremely forthright in their transhumanist aims and ideas. They're absolutely transhumanist ideologues and they're also heavily invested, obviously, in doing
what they can to extend their wealth and power, including creating techno-authoritarian political formations, and they use transhumanism, I think, as a kind of justification to themselves, mainly, for these extremist political positions, because it kind of tells them they can be the architects of some grand utopian future. So, um, so yeah, those are both the kind of more established transhumanist intellectual figures, but also the most important ones in terms of real-world impact, and the ones that are really taking transhumanism seriously today.
Ricardo Lopes: And what are the main tenets of transhumanism or the main values associated with it?
Alexander Thomas: Well, I think one thing to point out here is there's a lot of disagreement and a lot of variety of transhumanisms, if you like; they don't necessarily see the same techno-human evolution, what that should look like, what it will look like, etc. However, there are a few things we could point to. There's a line in the Transhumanist Reader, which is edited by Max More and Natasha Vita-More, where they claim that transhumanism should be inclusive, pluralistic, and lead to the continuous questioning of knowledge. And for my book, I use that as a kind of jumping off point, because what I tried to do was argue, well, those values are not going to be realized if transhumanism is developed in the context of advanced capitalism, which we can talk about later maybe. So those are values that I don't think are realizable for transhumanism in a capitalist context. But, you know, you could also say that those aren't really foundational tenets or values. They aren't something that all transhumanists would ascribe to. Certainly these kind of new techno-authoritarian formations would be strongly opposed to all of those principles, in truth. But there are another three kind of popular ideas amongst transhumanists, which you could see as kind of values or, you know, tenets, I guess. And those would be the proactionary principle, morphological freedom, and existential risk. So the proactionary principle is a kind of counter-stance to the precautionary principle. The precautionary principle argues that we should be cautious not to cause harm when undertaking, you know, scientific practice. But the proactionary principle argues, well, no, we should count the cost of not undertaking kind of risky measures, as well as the focus on potential harm. So that's the proactionary principle. Morphological freedom emphasizes the right of each individual to have a free choice as to whether to adopt or reject kind of human enhancement possibilities. But in doing that, it kind of characterizes each human as having equal agency, almost by default, and effectively it fails, therefore, to consider questions of power, social context, and so on. And Max More, who I mentioned, was the progenitor, I think, of both of those ideas. But the concept of existential risk was really developed primarily by Nick Bostrom, and this was a kind of response to transhumanists realizing that the very technologies they wanted to develop may not create this kind of utopian future that they dreamed of, but rather could bring about the destruction of humanity. So existential risk was a way of applying reason, with a kind of, I guess, academic gravitas, if you like, to say, don't worry, we're in control of this. We shouldn't just give up on the development of dangerous technologies; we can manage the risk technocratically, and kind of employ a quantifying attitude to risk and think about things in a very measured way. So those three ideas of the proactionary principle, morphological freedom, and existential risk are very much central to transhumanism today, I would say, to most transhumanist discourse today.
Ricardo Lopes: And are there also at least some common goals across the different types of transhumanism out there?
Alexander Thomas: Um, again, I would say there's not necessarily a complete consensus, in truth, but one way transhumanists sometimes frame their goals is to talk about the three supers. So there's super longevity: transhumanists would like us to have radically expanded lifespans, potentially even to live forever, some of them, or at least to not die until we choose to ourselves. Some of them are also focused there on health span, so increasing the human health span rather than the lifespan per se. Then there's super intelligence, so that's the aim of vastly increasing the cognitive capacities of the human race, or potentially the posthuman race, which might be a different thing; we can maybe get into that. And super wellbeing, so they would like us to improve on what it feels like to be human, you know, they talk about us being better than well. Perhaps we could radiate pure joy. Or we could also have maybe more choice and freedom about what our physical bodies are like and what they enable us to do. So why shouldn't we be able to, I don't know, echolocate like a bat or be stronger than a bear or fly like a bird or run faster than a cheetah, you know, so these kinds of things as well; that would come under there as well. But, you know, I would also again point out that given the emphasis on morphological freedom, some transhumanists would even reject these three supers and would rather say, well, it's up to each person to decide what enhancement is for them. So some transhumanists would say that.
Ricardo Lopes: So you mentioned that there are different kinds of transhumanism. Of course, we're also going to get into more detail about some of them, particularly later on in our conversation when we talk about TESCREALism and things like that. But what would you say are at least the main kinds of transhumanism?
Alexander Thomas: Um, yeah, I mean, there are a lot of subcategories, increasing numbers of splinter groups, and lots of kind of transhumanist-adjacent ideas as well. So, you know, there's just loads of them, proliferating all the time. But there are certainly different political positions within transhumanism, so there's a kind of more inclusive left-leaning version of transhumanism, which is usually called techno-progressivism, which is largely associated with James Hughes, who I've mentioned. On the right, there's the Extropian movement, which I also mentioned, for example, which tends to be much more techno-libertarian. So that's more of a kind of right-wing transhumanist philosophy. There are also religious and spiritualist kind of transhumanist groups. There's a Christian Transhumanist Association, a Mormon Transhumanist Association, there are Buddhist transhumanists, and there are also, you know, designations, some of which you pointed to there, things like singularitarianism, cosmism, long-termism. I'm sure we'll speak about some of these later on. Yes. And also now there are kind of newer related philosophies, I would say, so we'll talk about those, I'm sure, later, but things like effective accelerationism, neo-reaction; these have, you know, a lot of transhumanist kind of influence in there as well, and they can be seen as kind of subcategories of transhumanism as well, potentially.
Ricardo Lopes: Mhm. So, just a minute ago when I asked you about the goals of transhumanism, you mentioned the word improvement, and I guess that lots of times the word enhancement goes attached to it. So what does enhancement mean in the context of transhumanism?
Alexander Thomas: Yeah, it's a very good question and, bizarrely, I don't think it's completely clear, in truth. So again, transhumanists don't necessarily agree on this. Some emphasize, again, morphological freedom, which means enhancement is a question of individual choice. Whereas others kind of think, well, objectively, we can point to certain things and say that's an enhancement. So, you know, it might be better to be able to move quicker or to be stronger or to be healthy for longer or to be more intelligent; they would say, well, that's just objectively better. So this kind of almost contradiction between this objective assumption and this kind of individual choice is a contradiction that most transhumanists don't even acknowledge, in truth, but I think it's definitely there. And I think, you know, maybe an even more important question is not just what does enhancement mean to transhumanists, but what does enhancement mean in the context of the social world we live in, you know, which is again what my book was exploring, this question of capitalism. So that's the question again that transhumanists don't tend to answer. They kind of neglect to think about the fact that we live in an actual context, you know; they imagine these completely imaginary futures that are divorced from the reality of our times. So absolutely, it's true that context shapes our conception of what we want, what we need, who we are, and so on. You know, much of what we'd like to enhance is about really our competitive relationships with each other, for example, our ability to compete in the marketplace. So, of course, you know, here the capitalist context means we're unlikely to all be able to afford the same level of enhancements, and the implications of that could be quite extreme. The more you can afford, the more you can enhance; the more you can enhance, the more you can afford. So it's kind of true already that, you know, social mobility has always been extremely limited for that very reason. But transhumanism, I think, could worsen the social stagnation and the class differences we already see. So I think transhumanists need to spend much more time thinking about what enhancement should mean if they are really committed to, you know, these values we mentioned earlier of inclusivity and plurality, and how we might restructure society in ways that ensure that that does become the reality, because I don't think it will in the context of capitalism. So yeah, that's a big challenge for transhumanists, I would say.
Ricardo Lopes: And transhumanists suggest different kinds of enhancement. As you mentioned there, there's, for example, physical enhancement or biological enhancement, psychological enhancement of different kinds of traits. And very interestingly, they also talk about moral enhancement. So what is that exactly?
Alexander Thomas: Yeah, um, that's again a very good question. So, interestingly, I would say there's a guy called Julian Savulescu, and he's probably the thinker that's most strongly related to this kind of claim or idea of moral enhancement. I don't think he actually identifies as a transhumanist, but to all intents and purposes, he really is one. I can't quite see how he's different. But he argues that, yeah, we need to upgrade the moral nature of the human species in order to survive. So he says we're facing what he calls a Bermuda triangle of extinction. So the three things that threaten catastrophe are radical technological power, liberal democracy, and human moral nature. So he argues, well, there's nothing we can do about radical technological development, it's coming. So, you know, it's the other two we need to kind of mess with, for him. So, you know, liberal humanism might have to give way to a surveillance society, for him, and human moral nature, well, we should maybe reprogram ourselves to kind of more desirable ends. So essentially Savulescu believes our moral natures are kind of now at odds with this modern socio-technical world that we've developed, this globalized world where we're all interconnected. So he points to our empathy and says it is limited and out of step with the global reach we now have. He points out that we're short-termist in our thinking and we tend to only cooperate in smaller groups when, you know, we're being watched, essentially, and, you know, maybe that's why we can't fix climate change, for him. He points out that we tend to be distrustful of strangers and that we're naturally xenophobic. But the truth is all of these ideas are actually highly debatable and clearly socially dependent. Many of us aren't really xenophobic, and some people are highly empathetic and prosocial, and some societies develop these tendencies much more than others. So again, capitalism, with its focus on individual self-interest, its insistence that life is a competitive struggle of all against all, that kind of produces these shortcomings that Savulescu is saying are essential to human nature. So, you know, he thinks that you can't fix it with social changes; it has to be rewriting our code, effectively, rewriting our biology. Yeah, so Savulescu claims that these enhancements, you know, actually should be imposed potentially against people's will as well, if necessary, he thinks. So again, that obviously contravenes this principle of morphological freedom, but as I say, none of these values are really held by all transhumanists; there's a, you know, a bit of play with all of them. So, yeah, what Savulescu doesn't seem to recognize, I think, is that moral perspectives differ incommensurably. We can't make them all agree. So he kind of almost makes an assumption that what is moral, and therefore what constitutes moral enhancement, can be somehow universally agreed, with reason, rationality, but in reality there is no transcendent kind of view from nowhere that can, you know, justify this authoritarian position. He kind of creates this imaginary thing he calls a god machine which could arbitrate with perfect fairness, but of course it doesn't exist. So this typifies, I think, a misguided transhumanist presumption that technological progress can actually solve moral problems. It can't.
Values can only be situated from a particular perspective, and in the context of new technologies, of course, they would likely be enacted through a filter of kind of powerful vested interests. So again, Savulescu misses this influence of power in determining moral norms. So his naive hope of kind of fixing moral nature, I think, points to this really important flaw in transhumanist thought, essentially: you know, technological progress does not guarantee moral or ethical progress. The two world wars of the 20th century tell us that, and all the ensuing conflicts since, and the ongoing climate catastrophe. You know, facts can only tell us so much; they can direct our means, but they can't in themselves effectively determine moral ends. So Savulescu's idea that morality is a kind of potential site for instrumental reason, I think, you know, something we can just fix, is symptomatic of this failing in so much transhumanist thought: the simplistic idea that instrumental technical progress can fix or solve moral problems. And so, yeah, I think it's worth, you know, thinking through that, because it's something that just recurs again and again in transhumanist thought, that error.
Ricardo Lopes: So there are two questions here that I think are very important for us to address, particularly after we talked about enhancement, because I think it can bring them to mind. Does transhumanism imply hierarchy in any way?
Alexander Thomas: Um, yeah, I mean, given that the notion of enhancement is really at the center of transhumanism, it's difficult to suggest that it doesn't kind of have an assumption that comes along with it of one thing being better than another, which of course then implies hierarchy. Again, savvy transhumanists will deny it by pointing to morphological freedom; that's always their get-out clause. But to me that's kind of totally unconvincing, because without taking the further step of recognizing that we live in a given social context, which may also dictate a hierarchical structure, transhumanists are simply kind of, you know, using that morphological freedom as a piece of rhetoric to avoid the real challenge. So, you know, could we create a form of transhumanism that was genuinely inclusive, non-hierarchical, pluralistic? I don't know. I think that's the great challenge, but it certainly demands a kind of ethical rethinking of our social structures, as well as those kind of technological developments that transhumanists envision. So I think if you focus just on the latter without the former, you're actually likely to radicalize the hierarchical structures and the potential for, you know, unjust, inhumane, and potentially catastrophic outcomes. So yeah, I do think transhumanism has a tendency towards hierarchy, absolutely implicit within it.
Ricardo Lopes: OK. And the second question then is, is transhumanism an elitist movement?
Alexander Thomas: Yeah, I mean, um, again, you know, similarly, I would say there are transhumanists who would characterize their politics and aims as definitely anti-elitist. And in fact, I think, you know, if I'm honest, most of them would espouse some kind of inclusive anti-elitist values. But the problem is how transhumanism will actually manifest itself in the real world. How do we ensure, you know, that there is no hint of elitism in the real-world process? The kind of techno-human evolution that's taking place at the moment is absolutely reminiscent of the colonialist, exploitative, highly elitist, racist, patriarchal structures of the past. So for me, I don't know, I think any kind of respectable form of transhumanism would actually begin with that problem. That should be the starting point. So, you know, rather than super longevity and the singularity and radical abundance and all of that, the real ethical challenges of rethinking society in an equitable and just way, in the face of the development of these radically potent technologies, I think should be the starting point of a kind of ethically responsible transhumanism.
Ricardo Lopes: So, ideologically speaking, what would you say are perhaps some of the most prominent arguments against transhumanism, coming from people like, for example, the bioconservatives and others who oppose transhumanism?
Alexander Thomas: Yeah, well, I mean, that's a good question, and absolutely, you know, you mentioned the bioconservatives there, and their vision has been one of the most stringent critiques of transhumanism, and also perhaps the most famous critique of transhumanism. Essentially what they say is that they kind of wish to advocate for the integrity of modern humanity. So they criticize transhumanism on the grounds that it is in some way unnatural or in some way insults the integrity or dignity of the current human race. But that really depends on a kind of essentializing of some particular facet of the human. And I think therefore it's actually quite a weak critique, because we've always developed alongside our technologies. We're always changing and becoming something new. Life is process. So I think clinging on to some imaginary essentialist quality of the human is quite a misguided way to criticize transhumanism, to be honest. So that is the most famous critique, but I don't think it's the best. I think instead much more effective critiques are coming to the fore now, and they're ones that, you know, really kind of focus on ethical questions around power, ecology, complexity, pluralism, and inclusiveness, ideas I've mentioned. And when we think in these terms, we can see how transhumanism could become highly discriminatory, it could radically increase inequality, it could lead to the development of technologies that threaten the existence especially of vulnerable and disenfranchised people. So I think those kinds of critiques that think through power and justice are much more effective when directed at transhumanism.
Ricardo Lopes: So earlier on, you mentioned a tie, or a potential tie, between transhumanism and capitalism, particularly and more specifically advanced capitalism. So in what ways is transhumanism tied to capitalism? Are there specific ideas or principles of capitalism that get manifested in transhumanism?
Alexander Thomas: Yeah, I think, um, I think definitely, I think it's worth, you know, kind of going through a few of the logics of capitalism and thinking about how transhumanism mirrors these logics, or how the implications of transhumanism could worsen the dangers that are inherent within capitalism. So, you know, I'll just list six things now and kind of explain the relationship, if you like, between transhumanism and capitalism in these ways. So the first is that, you know, capitalism is, as we all know, dependent on growth. So without growth, the system stalls and falls into crisis. And this growth fetish kind of motivates the drive to constantly bring as much of life as possible into the auspices, the tentacles of capitalism, if you like. So it is always, for that reason, consuming new frontiers. And of course the possibility of endless growth on a finite planet is dependent on a notion of perpetual progress, because otherwise it just doesn't make sense. And of course progress, growth, and conquering new frontiers, whether it's outward into space or inward into the secret codes of human beings, those things are integral to transhumanism too. So transhumanism kind of helps to justify or, you know, enable this myth of perpetual growth on which capitalism depends. Secondly, capitalism isn't just an economic system. That kind of desire for growth and progress means capitalism cannibalizes non-economic zones. So it's, for example, dependent on a history of colonial looting. And of course there are constant ongoing privatizations and enclosures of public forms of wealth. It also consumes nature and conceptualizes the resources of nature as limitless and free for the taking, so it sees nature as both a sink and a tap. And transhumanism in a kind of capitalist context repeats this relationship, but explicitly for the human body too, which is then broken up, turned into data, objectified. So bodies are needed for experimentation, data analysis. And just as bodies that are deemed surplus to the requirements of capitalism tend to be excluded or marginalized or incarcerated or often even killed, this might also become true of bodies surplus to transhumanist progress as well. Thirdly, capitalism conceptualizes humans as free, rational, autonomous individuals, where each of us is responsible for our own position in the market, and we're at liberty to choose what to buy and when. And transhumanism absolutely echoes this kind of vision with its concept of morphological freedom, which we've discussed already a lot. So the claim that each of us is at liberty to pursue our own version of enhancement. We're essentially all entrepreneurs of the self, as Foucault might put it. So this responsibility for our position in the market also means that all humans are in competition with all other humans. So that's the fourth point. Our fundamental being is defined by the logics of competition rather than, say, collaboration. So it's rivalry and contestation, not solidarity and care. So this closes the possibility for transhumanism to become inclusive and pluralistic, and it also becomes rivalrous and subject to the demands of capitalist progress, which we might talk about.
Fifth, for anything to be capitalizable, it needs a calculable value, an exchange value. So effectively everything exists in a new empirical reality with an imagined price tag. So, you know, incomparable things are incorporated into a kind of system of equivalence. Everything becomes an object that way as well. And this is also true of people, who are objectified both by the role they play in the system, but also their very being is an object for exploitation by capital. So again, our data, our genes, our desires, all are abstracted into products for profit making. And this objectification of the human is absolutely integral to transhumanism as well; it needs it for transhumanist progress to exist. Finally, I would say that capitalism is, in a way, ethically indifferent. It doesn't promote an innate conceptualization of human flourishing. It just asks, is there a buyer, is there a seller, what's the price? That's its kind of ultimate position. So it projects and generalizes instrumental reason, by which I mean the how of things, and not ethical reason, which might help us ask why, beyond simply the profit motive. So, as we've seen with our analysis of Savulescu, this is also a transhumanist trait, the belief that instrumental progress can solve the question of moral progress. So neither really asks deeper questions about human flourishing. They just both assume a kind of individualized, progress-based, growth-based model of enhancement. So, yeah, I mean, I don't know if you want me to talk a little bit more about advanced capitalism specifically, or...
Ricardo Lopes: Yes, I was going to ask you now about that. What is advanced capitalism? What does that mean exactly?
Alexander Thomas: Yeah, so, so I use the term advanced capitalism in the title of my book, and really what I'm pointing to there is the fact that capitalism has been around for some time. So it has advanced; its logics of development have developed. So it is a process itself. It's not something that's set in stone and forever the same. It's always moving and changing, essentially. And for the last 45 years or so, we've had a particularly toxic form of capitalism called neoliberalism. And, you know, that's a term that many people don't know the meaning of. Most transhumanists have no idea what it means. And it's interesting as well that at the moment this kind of version of capitalism may actually be dying, and it might even be getting replaced by something much worse, whether it's a kind of techno-feudalism or tech authoritarianism; we will see, and maybe we can speak about those terms a little bit later as well. But there are a few more logics, I would say. I'll probably go through just two, for the sake of time, where these things tend to be true of capitalism generally, but they're made much more extreme by this version of capitalism, this neoliberal version. So the first is huge increases in inequality. So, you know, maybe the most significant trend that has taken place during this most recent incarnation of global capitalism is a growth in inequality within developed Western countries and between the wealthiest 1% and the rest globally. So these dynamics kind of completely undermine the idea that a process such as the development of transhumanist technologies will be an inclusive process. We see in modern capitalism incredible concentrations of wealth and at the same time exclusions and expulsions of those who are not productive to the kind of wheels of capital. So, you know, think about what happens at the moment to the marginalized, for example, the number of migrants who are allowed to die at sea every year. And if improvements in AI lead to significant automation unemployment, you could have vast swathes of society becoming the marginalized and the excluded. And at the same time, a small group of billionaires who own these technologies may have an almost total concentration of wealth, with access to the most powerfully transformative technologies in world history, alongside this kind of redundant mass of people. So that's, you know, one potential danger. The other aspect of neoliberalism that's worth a quick mention here is the notion of financialization. So finance is pivotal in the increasingly kind of pervasive, complex, decentralized, and global character of capitalism. So if wages stagnate, the economy itself needs financialization, because if workers don't have the means to purchase consumer goods, the economy would collapse. So debt has become so pervasive, it has pretty much defined this kind of era of economic history. And transhumanist narratives are actually extremely useful for a debt-based system, because debt relies on the idea of speculation and future returns. So as the climate crisis deepens and political instability becomes more entrenched, the promise of a bright tomorrow looks more and more fanciful, but transhumanism, you know, and the related technologies that it depends on, it promises a kind of unbelievable return on investment. So the AI bubble we're seeing right now is a great example.
The AI industry is constantly saying, oh, AGI is just around the corner, superintelligence is on its way, AI will solve all problems. These claims kind of draw investment. And despite currently disappointing returns on that investment and the limited use of current AI technologies now, it is these stories of radical progress around the corner which keep that debt bubble buoyant. So transhumanist stories are absolutely key to this kind of debt-based model of neoliberal capitalism.
Ricardo Lopes: So let me ask you now about long-termism. What is it and how does it relate to transhumanism?
Alexander Thomas: Um, yeah, so long-termism is essentially a fusion between a movement called effective altruism, or EA, and transhumanism. So basically, effective altruism was trying to rationally determine what the most effective forms of doing good were, essentially. So how do we do altruism well? Again, I would say it's fairly misguided from the start, in my opinion, because it doesn't understand that ethics and values are situated and perspectival, and it rather assumes a kind of calculable, rationally determinable, almost utilitarian logic. OK, so I think it's flawed from its inception. But the first iteration of EA came up with bed nets, I think that was what they said we needed, more bed nets, because mosquitoes were the great killer. Then it focused on the meat industrial complex, I think, a huge amount of suffering of animals; it kind of identified that as the greatest source of suffering in the world. So those were the two initial projects. But then transhumanism interjected into this kind of rationalist analysis of doing good. And the key transhumanist thinker in long-termism is Nick Bostrom, who I said is the most influential transhumanist thinker of the 21st century, I think. And what Bostrom argues essentially is that we are on the verge of making posthuman digital consciousnesses that are at least equivalent to human lives. And I think he has different numbers in different articles, but one of them says that 10 to the 29, so 10 with 29 zeros after it, potential human lives are wasted every second that we are not colonizing the Virgo Supercluster with computer-generated minds of human equivalents. So the number of people on Earth right now is just 8.2 billion or so. And this number is so vastly smaller than the potential, what he called the cosmic endowment, that's the term he used. So what we can build as a human race, this kind of post-digital empire that we can build amongst the stars. So, put simply, the comparison here is that these 8.2 billion people just don't matter. Their lives are what he calls mere ripples, you know, so climate crises, genocides, wars, all of them are minor episodes, as long as some survive and are able to pass on the baton of technological expertise to create this cosmic endowment. And of course what that means is that, you know, it's those who hold that baton of technological expertise who are the people that matter. In other words, it's the Silicon Valley billionaires and their big tech companies. They're much more valuable than the rest of the 8.2 billion people, who just don't really matter in comparison. And so of course, for that reason, long-termism has proved extremely appealing, unsurprisingly, to the Silicon Valley elites, because it says to them that they're the central protagonists in the most important moment in human history, effectively. Toby Ord, who I think co-coined the term long-termism, wrote a book called The Precipice, so the idea that we're right on the edge of either falling to existential risk and disappearing, or fulfilling our cosmic endowment amongst the stars. And again, you know, with Elon Musk: there's a guy, Will MacAskill, who is another long-termist thinker. He wrote a book called What We Owe the Future, and Musk tweeted that it was a close match for his own philosophy.
And Musk also tweeted one of Bostrom's articles saying it's likely the most important paper ever written. So you can see the real influence of long-termism on people like Elon Musk. Émile P. Torres, who's written extensively on this, and very effectively as well, calls radical long-termism the most influential ideology in the world today that most people have never heard about, and that's for that reason of its influence on people like Elon Musk.
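For a rough sense of the scale Bostrom's figure implies, taking the numbers mentioned above at face value (this is only an illustrative back-of-the-envelope comparison, not a calculation from the book or the conversation): 10^29 potential lives per second of delay, divided by the roughly 8.2 x 10^9 people alive today, comes to about 1.2 x 10^19. On that accounting, each second of delay is weighted as more than ten billion billion times the present human population, which is why present-day catastrophes register as "mere ripples" in this framework.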
Ricardo Lopes: How about accelerationism? And I know that some prominent figures in the effective altruism movement have also recently embraced accelerationism. So what is the link there, and also the link between accelerationism and capitalism and transhumanism?
Alexander Thomas: OK, yeah, that's a very good question, a bit of a complex one, but essentially, in capitalism, labor is both the source of all value, so all profits are generated from the exploitation of labor, and yet labor is also constantly squeezed out or replaced by automating technologies. So that, for accelerationism, is the central contradiction in capitalism. And so accelerationism is a philosophy which essentially tries to solve this contradiction. Benjamin Noys says it's by alchemizing labor with the machine; that's what accelerationism is trying to do. And it has quite a lot of different manifestations, actually, and proponents on both the left of the political spectrum and the right. So Nick Srnicek and Alex Williams, who once wrote an accelerationist manifesto, claim that Marx and Nick Land are the two kind of paradigmatic accelerationist thinkers, so Marx obviously on the left and Nick Land on the right. So the left version of this might see technology as bringing about a kind of post-work utopia, something like fully automated luxury communism; Aaron Bastani wrote a book called that. So, the idea we'd no longer have to work, the machines would do it all for us, and we'd all live happily together in, you know, kind of abundance and luxury. So that's the kind of left-wing version. The right sees a kind of, what Nick Land calls, a meat grinder future, where it's the pure kind of spinning wheels of capital frenzy that creates a humanless future, essentially. It's pure growth and efficiency. So the human is just a kind of detritus, it's a drag on this process, and it could be made much purer without the messy human. So in a sense, it's not very transhumanist at all, because it's not at all anthropocentric. It doesn't put the human at the center of things, it puts capital at the center of things. So it's a very capital-centric idea, the right-wing version. But for that very reason, actually, this right-wing version has proved very popular with techno-utopian libertarians. And as I mentioned earlier, Extropianism, this kind of 1980s, 1990s original modern transhumanist movement, was exactly that. It was a techno-optimistic, libertarian movement. It was hugely influenced by people like Ayn Rand and Hayek, etc. So there's this huge overlap between this transhumanist techno-libertarianism and this kind of accelerationist techno-libertarianism, essentially. And now, as you say, accelerationism is inspiring new kinds of tech-inflected movements on the right, which are also very informed by transhumanism. The most notable ones are effective accelerationism, or e/acc for short, and neo-reaction, of which actually Nick Land is also one of the central thinkers, along with Curtis Yarvin, who's been, you know, noted as an important neo-reactionary thinker.
Ricardo Lopes: So it's interesting that you mentioned that in the case of accelerationism, we can find accelerationist takes on the left and the right, with figures like Marx and Nick Land. So when it comes to the political side of things, is transhumanism, not accelerationism but transhumanism, associated with any specific kind of political pole or movement? I mean, can we see it both on the left and the right or not?
Alexander Thomas: Um, yeah, so I think I mentioned that you've got techno-progressivism on the left and techno-libertarianism on the right, and they can be seen as two poles within transhumanism. There have also been transhumanist political parties, OK. I think there was one transhumanist Italian politician who was elected, actually, but I think there was some sort of scandal that made his political career slightly short-lived. Famously, also, there's a guy called Zoltan Istvan, who was another very famous American transhumanist, and he toured the US in an immortality bus, he called it, to campaign for the 2016 election, very unsuccessfully, of course. But what transhumanists, I think, have essentially realized is, you know, that the democratic route to political success is not for them. Transhumanists, however, at the same time have got more power than ever before, and, like I say, it's not through campaigning on a transhumanist agenda or being democratically elected; it's essentially by hijacking democracy. So Elon Musk and Peter Thiel have been the architects of this. Thiel is JD Vance's benefactor, and Musk, as we've seen, seems to be incredibly influential and active in the new Trump government. And indeed the neo-reactionary ideas of Curtis Yarvin, which I've mentioned, in particular, these appear as essentially the intellectual backbone for a kind of strain within the American government right now. So it is not pure transhumanism, of course. I would say it's a kind of almost cancerous offshoot, but it's taken its place right at the center of global power, which is another reason I think it's incredibly important to understand what transhumanism is. I think also another kind of interesting political polarity that might be worth mentioning comes from a thinker I mentioned earlier, FM-2030. So, in 1973 he published a book called Up-Wingers: A Futurist Manifesto, and in it he claims that the traditional politics of right and left are all down-wing politics, because they're all concerned with limitation. But, you know, transhumanism and the transcendent possibilities of the up-wing mean that we will just progress beyond ethical contestations, we'll progress beyond the right and left. Again, this false assumption that technological progress solves all ethical contestation. And building on Esfandiary's notion, FM-2030's notion, some transhumanists, Steve Fuller and Veronika Lipinska, later claim in their book The Proactionary Imperative that up-down politics are kind of the new political poles for the future, really, with up being transhumanist, proactionary, and techno-utopian, and down being posthumanist, precautionary, and environmentally minded. So there's a kind of, you know, a version of a new political polarity where transhumanism is seen as the up-wing and kind of environmentalism, etc., is seen as the down.
Ricardo Lopes: So tell us now about this term, which I think was coined by Émile P. Torres, TESCREALism, which is, I think we could say, a collection of different kinds of ideology, some of which we've already talked about. It includes transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and long-termism. I mean, tell us about the term, and why do all of these different kinds of ideologies go together here?
Alexander Thomas: Yeah, OK, good, good question. So TESCREAL is essentially an acronym that was invented, as you say, by Émile P. Torres and Timnit Gebru. And as I understand it, it came from a conversation between them where Émile was trying to describe the kind of lineage of transhumanist philosophies which had led to a current obsession with artificial general intelligence in the AI industry. And whenever he was, you know, referring to it, he would talk about, oh well, this person who was an Extropian, and it would just get very, very confusing, so they coined this concept of TESCREAL. So the list, as you say, starts with transhumanism, and here they're referring to Julian Huxley's notion and identifying, you know, his connection with eugenics, actually, which they write about a lot as well. Next emerged extropianism, which I've mentioned; that's the movement in the 80s and 90s, really the start of the modern transhumanist movement, very libertarian, a fairly juvenile philosophy in truth, with a complete obsession with Ayn Rand and Hayek, etc. After that, you get singularitarianism, that's mainly inspired by Ray Kurzweil. It's the idea that humans will merge with technologies and superintelligence will kind of bring about a kind of event horizon where we can't even imagine what's on the other side of that event horizon. Then there's cosmism, which is not Russian cosmism; some people have cited it as Russian cosmism, but that's a different philosophy. This cosmism is particularly associated with a transhumanist thinker called Ben Goertzel, and he's key to popularizing the term artificial general intelligence. Then after that you get rationalism, that's mainly related to the LessWrong wiki and the work of Eliezer Yudkowsky, who again also, by the way, you know, was part of the Extropian mailing list back in the day. So again, these people are all connected and have been for decades. And then you get effective altruism, which I've mentioned, which is this utilitarian rationalist approach to morality, and it's got a huge overlap with the rationalist community, same kind of group of people essentially. And finally, long-termism, which, as I said, is a kind of fusing of transhumanism and effective altruism, a kind of short circuit to the movement which makes transhumanism the ultimate form of moral good in the universe. Now, essentially what Émile and Timnit are trying to kind of signal or suggest with this term is, well, firstly, it's the historical links to eugenics, which they see manifest all over the current AI industry. There's also the point that there's an incredible amount of overlap and through line of the characters involved in these movements, especially from Extropianism onwards. So Bostrom, Yudkowsky, More, they're all there on the original Extropian mailing lists. Goertzel too is a long-term transhumanist. They have been members of the same organizations and going to the same conferences for years, so all these people are, you know, a small little clique that are interconnected. And whilst the ideas of transhumanism have always been maybe a little bit niche, a little bit crackpot on the edges, you know, suddenly big tech is the most hegemonic sector of the economy and its leaders are the richest people in the world.
And that whole industry, and in particular the AI part of it, are absolutely steeped in the mythology of these ideologies. So it's the water they swim in, it's the air they breathe. If you want to understand the most powerful forces on planet Earth right now, you can't do it without understanding the influence of this strain of transhumanist philosophies. It's why OpenAI's stated aim is to build artificial general intelligence, for example. So that's what TESCREAL was attempting to do. Um, Émile is kind of quite keen to point out that you can be a transhumanist without being a TESCREAList, which might seem like a contradiction in terms, but this TESCREAL notion kind of both points to a historical through line, so those names follow an order, that's the historical order in which they emerged, and also, you know, it's the form of those philosophies that has influenced the AI industry and created this obsession with artificial general intelligence in particular. So, yeah, it's that particular type of transhumanist mindset which they're drawing attention to.
Ricardo Lopes: Mhm. But since you can be a transhumanist and not a TESCREAList, and at a certain point there you mentioned the link that Émile P. Torres establishes between TESCREALism and eugenics, is there also a link between transhumanism and eugenics?
Alexander Thomas: Yes, I think there is. Well, for one thing, I mentioned earlier that Julian Huxley is often identified as the progenitor of transhumanist ideology, and Julian Huxley was a member of the Eugenics Society, for example. So that's one very obvious link between transhumanism and eugenics. Some transhumanists even just embrace the term eugenics. They say, yep, that's what we are, it's great. Others play it down. Some, again, point to morphological freedom to differentiate their form of eugenics from the state-sponsored forms of eugenics of the past, such as Nazism. Timnit and Émile have been doing some great work on the links between transhumanist ideologies and eugenics. I think what is particularly worrying at the moment is that there is a kind of fusion again of transhumanism with a bunch of other political ideologies on the right, including overtly racist ideas that advocate deeply troubling forms of eugenics and conceptions of natural human hierarchies. So I think forms of transhumanism that engage with this kind of thinking are really problematic, utterly abhorrent in truth, and I would say a clear and present danger to society. We absolutely need to contest and reject those ideas. I think the work of Quinn Slobodian is very useful for understanding how the history of neoliberalism actually underpins this trend. His new book, Hayek's Bastards, explores the relationship between these eugenic ideas of natural human hierarchy and how they emerged out of neoliberalism, so it's absolutely worth investigating. So yeah, I think transhumanism has a strong link to eugenics. Ultimately, eugenics is about how we make better people, and transhumanism is the same project, so yeah, strong links there.
Ricardo Lopes: What is it that in your book you call data totalitarianism, and what part is played by things like surveillance, the idea of the quantified self, algorithms, personal data, and so on? What kind of role do they play in data totalitarianism?
Alexander Thomas: Yeah, OK, so a big desire of transhumanists is essentially to make the world tractable to human will. The mathematician John von Neumann, for example, said: all stable processes we shall predict, all unstable processes we shall control. And that sentence has proved a very powerful inspiration to the transhumanist ideology. Maybe we can talk a bit more later on about the cybernetic influence on transhumanism, but essentially, if the world with all its complexity can be simplified or abstracted into something we can control, then transhumanist fantasies are boosted. So you could think of this as a kind of dataism; I think Yuval Noah Harari calls it a kind of new religion of dataism. Essentially it's the belief that big data and AI will allow us to predict and control everything. And of course big data is now absolutely central to the new capitalist structures as well, to the extent that theorists have suggested we're entering a new form of capitalism. For Shoshana Zuboff, it's surveillance capitalism; for Yanis Varoufakis, we've actually gone beyond capitalism and entered technofeudalism. So again, transhumanism and capitalism align here, and indeed many transhumanists advocate extensive surveillance systems, which of course would provide the data to enable all of this. The urge towards algorithmic control essentially comes from the idea that humans in this system are perceived as knowledge objects, in the tradition of radical behaviorism: the human is understood as an object, a system that we can predict and control. So what you need is information that allows us to analyze, predict, even direct human behavior, and big data is supposedly enabling us to do this. And of course you would think that that would leave transhumanists fearing for human agency and realizing their beloved morphological freedom can't survive this process. But instead they actually embrace it, because without this fantasy of big data and AI we simply won't be able to control everything, which is essentially their aim. So we have a potent reconstruction of social reality premised on a small subset of humanity having privileged information and powerful knowledge over the rest of humanity, and that's there in surveillance capitalism, in technofeudalism, and in transhumanist dataism. So the desire within transhumanism to control all unstable processes and to transcend all confining qualities requires a world that is limited in its complexity, so that the entirety is tractable to human reason and will, and for me it is that totality which I call data totalitarianism. It's an extension of surveillance capitalism, an extension of what Ulises Mejias and Nick Couldry call data colonialism, an extension of Varoufakis's technofeudalism. It's a coming together of that capitalist trajectory of data colonialism and surveillance capitalism with this transhumanist dream of controlling everything, meaning everything needs to become data that we can then just manipulate to our own desires.
So I use the concept in my book as a way to undermine that transhumanist value I mentioned earlier of the continuous questioning of knowledge, because it's based essentially on the reverence of instrumentalist forms of knowledge production, aimed at the ever-increasing potency of means geared towards even more problematic ends, in the form of control, profit, and power. So that's what I was trying to get at with this term, data totalitarianism.
Ricardo Lopes: So you mentioned briefly the influence of cybernetics on transhumanism. Let's talk a little bit more about that, particularly how the cybernetic framing of the mind influences the way that transhumanists think about the human mind, and the potential consequences of that.
Alexander Thomas: Yeah, so I think it is cybernetics that enables transhumanists to conceive of life as just information processing, and that includes minds and everything else. Everything can be abstracted from its actual material instantiation and converted into a code which stands in for it. And that potentially enables any form of being to exist in a different instantiation or substrate, and enables us to edit and control it. So, for example, we can imagine decoding our conscious mind, turning it into code, and uploading it into a virtual system. Essentially this creates a kind of material-information hierarchy. Information is more pliable than material reality, so we just have to learn to speak the code of life. DNA, for example, can be seen as the code of life: we can edit it, copy it, make it anew. Information is conceptualized as potentially separate from the material world. And as a result, intelligence becomes this magical force that allows us to process information more effectively. All of life becomes just a question of increasingly potent intelligence controlling information processes. So transhumanist discourse is absolutely full of language which expresses humans in machinic terms, or as like computers. Humans, for example, are suboptimal systems or bug-ridden code, right? These metaphors are appealing to transhumanists because if humans are just code, we can be upgraded and fixed, and the world as well, with all its limitations, can be expanded. We can have limitless resources and time; we just need to rewrite the code. So by making things quantifiable and readable, complex interrelational aspects of reality which defy reductionism are simply removed from consideration. And along with that, questions of meaning disappear too. Everything is just intelligence and information processing in an abstract sense. So even human being becomes something that can be fused with or transferred over to the digital realm. We can no longer say what being human is in the existential or philosophical or ethical sense; those questions no longer have meaning for transhumanists. The human is simply copyable, changeable, replicable, controllable, and potentially immortal. But of course the material world, in reality, does not just disappear. Life is not just code, and you can't escape ethicality and questions of meaning by pretending everything is just like a computer, and the implications of doing so are very dangerous. But that is the cybernetic influence on transhumanism: the fantasy that life is just information and we can process it however we will by increasing our intelligence. Intelligence becomes the cardinal virtue, I guess, of that kind of thinking.
Ricardo Lopes: Mhm. So let me ask you just one question about artificial intelligence. You've already alluded to it, particularly AGI or artificial general intelligence, several times during our conversation. But how does it tie to transhumanism, and what are the expectations and claims made about it by accelerationists and transhumanists?
Alexander Thomas: Yeah, I mean, I think I've touched on this a bit, like you said. AI, I think, is fundamental to transhumanism. They think of it as a form of magic. You'll hear them saying things like we're making sand think, and they talk of alchemy and stuff like that. They don't all agree on whether superintelligence or artificial general intelligence is a threat to human existence; they don't even agree whether that matters or not. Some of them would like to see humans replaced by what they call mind children, which are digital artificial intelligences, essentially. So yeah, intelligence is, as I say, the cardinal virtue of their ideology, because if life is just information processing, then intelligence is what enables us to order information more effectively. That's why AI is so important to transhumanism. But it's worth pointing out that artificial intelligence is a very narrow conception of intelligence, really, right? It's abstracted out of embodied being and situated context, put into machines, and made subject, potentially, to exponential growth. So this is the magic, the explosion, the trick, the possibility which goes beyond the imagination, so it leads to the singularity, the event horizon, and the black hole behind it which we simply cannot know. To some transhumanists it is effectively God, and they lean on AGI as the thing that will solve all problems, including all ethical problems. So yeah, AI is absolutely key to transhumanist ideas, and it underpins their fantasies, and likewise transhumanism has become key to the AI industry and the kind of moral underpinning of what they're doing, including risking, potentially causing, existential catastrophe for humanity.
Ricardo Lopes: There is also the term posthumanism. Is it the same as transhumanism? And if not, what does it mean and what would be the differences here?
Alexander Thomas: Yeah, OK, good question. So posthumanism is what comes after humanism. It's the rejection of certain Enlightenment humanist assumptions, essentially. So unlike transhumanism, which is really about the human becoming something else, posthumanism is about moving beyond humanism, not beyond the human, if you see what I mean. They do have certain things in common: posthumanists agree with transhumanists that the human condition is fundamentally changeable, and both posthumanists and transhumanists are very interested in this process of technogenesis, this dynamic co-evolution of technological and human development. But posthumanists undermine the Enlightenment notion of the human. They don't accept the idea that we're all individuals, separate from nature, with a glistening and powerful rationality that places us hierarchically above the rest of nature. Rather, they argue that that is a historically contingent construct; it's not what the human is, it's what we imagined it to be through Enlightenment humanism. Instead, they would say that what humans actually are is inextricably embedded in the natural world around them. We're defined by our relationality, we can't escape it, so we are just part of the web of nature, if you like. So posthumanism, for that reason, aims to think ethically beyond the human. It emphasizes responsibility towards the wider nature of which we're just a part. Rosi Braidotti, who's a famous posthumanist thinker, says that we are bonded by the compassionate acknowledgement of our interdependence with multiple human and non-human others. So it's very much a relational, compassionate kind of ideology. Whereas transhumanism, maybe you could characterize it as a will to power, control, domination, even colonization of the human and of space, the escape of all earthly limits, posthumanism emphasizes our embeddedness in nature, it emphasizes compassion and that we look beyond ourselves. It's a kind of ecological thinking, not just in the environmental sense, but also in terms of the interconnection of all things. It therefore embraces complexity theory, complexity science, and calls for humility, because we cannot possibly understand all that there is. Complexity science tells us that to understand anything completely, we'd have to understand absolutely everything else, and that is obviously impossible. So it embraces a kind of humility, the limitation of human reason. Whereas transhumanism tries to limit complexity and make life tractable to human will, so it's fundamentally hubristic; they're kind of opposites in that regard. Which is why, again, when we talked about political poles, there's this claim that you could see transhumanism and posthumanism as political poles set against each other, transhumanism as up, posthumanism as down, in this new political imaginary, essentially.
Ricardo Lopes: Can there also be religious undertones to transhumanism? I mean, obviously, earlier when I asked you about AGI, you mentioned that some people among the transhumanists even think about it or talk about it as a god, so that's already pointing to religious undertones there, but more generally, can transhumanism be a religious movement in any way?
Alexander Thomas: Yeah, I mean, absolutely. I think there are many dimensions to this. One thing I've already mentioned is that there are actually religious transhumanist organizations, so that's one thing; there is the Christian Transhumanist Association, etc. But I think more importantly, transhumanism can be seen as a kind of secular religion. It promises in many ways the same things that traditional religions have, but grounded in secular science. So the most transcendent versions of transhumanism, where consciousness, or maybe just intelligence, leaves the body and exists in some kind of virtual manifestation, that's just an updated version of the soul, right? I mean, that's what it is. There are two thinkers who have written quite well on this in particular, Beth Singler and Meghan O'Gieblyn, I'm not sure I'm pronouncing that right, but those two have written a lot about AI and transhumanism and their links to religion. And Meghan points out the great irony: science was supposed to replace religious fantasy during the Enlightenment, and this is what transhumanism celebrates, but transhumanism then comes along and restores these fantasies and calls it science, right? So it can't make these promises without slipping back into those religious myths from which science was supposed to liberate us. Beth Singler also draws attention to patterns of religiosity, especially in AI discourse. She points out, for example, that the singularity is godlike, a kind of infinite knowledge, like the oracle. And its coming about is sometimes seen as the rapture or the end of days; there's an apocalyptic element to it as well, it's an existentially risky thing, so we're all going to be judged by it, essentially. Mind uploading, of course, can be seen as escaping the flesh of the mortal world. Singler also points out Roko's basilisk, I don't know if you know that thought experiment, but it's an updated version of Pascal's wager, essentially: the AI threatens eternal damnation for non-believers, as well as heavenly promises of immortality for its apostles, etc. You've also got prophets in the form of people like Kurzweil and Altman and Musk, although Musk has got to be the most bizarre prophet there's ever been. But what Kurzweil promises in particular is a kind of technological and universal heaven. By the end of this century, he thinks that human consciousness will essentially be bodyless and we'll be able to travel around the universe taking on any physical manifestation in the form of nanobot swarms. So effectively he's using religious fantasy as a kind of rhetorical device, promising the spiritual and material benefits of religious salvation, but in a transhumanist discourse, which again has that irony of retreating back into the religious myth that science is supposed to free us from. So yeah, I think that's some of the things we can say about religion.
Ricardo Lopes: Do you think that transhumanism can lead to dehumanization? If so, what kind of dehumanization would we be talking about here, and what does dehumanization mean in this particular context?
Alexander Thomas: Yeah, OK, so, as I've mentioned, transhumanists tend to have a narrow conception of intelligence. They see it as the ability to solve complex goals. And when you frame it that way, it avoids questions of context, questions of the meaning and purpose of life, and it avoids deeper ethical questions about our relations with each other and the rest of nature, which can be seen to have dimensions of emotional intelligence or whatever else linked to them, and which are just excluded from consideration. So what that narrow vision enables is a very quantifying and hierarchical notion of intelligence, and therefore humans are also hierarchically graded based on their intelligence, for example. And, as I've mentioned, many transhumanists often point out that AI will soon automate most jobs away, leaving no purpose for most humans, certainly those that don't own the machines which will of course replace them. And with our lack of intelligence, we won't be able to compete. So they ask, well, what do we do with the people in this context? And Elise Bohan, who is another transhumanist, argues that things like universal basic income are not acceptable because, as she points out, almost half the human population has an IQ below 100, so, she says, they just can't be trusted to decide what to do with their own lives. So her solution for this problem is not wealth redistribution, it's not universal public services, it's not fully automated luxury communism, none of that. Instead, it's an updated version of the happiness-producing drug from Brave New World, soma. So we end up back with the Huxleys, actually. She says that low-status, low-skilled, low-IQ and unemployed people have the weakest buy-in for reality, so we need better drugs and better virtual worlds. So essentially she's arguing that in the 21st century it will be necessary for humans to spend more time in virtual reality. And actually this is an exact copy of the neo-reactionary thinker Curtis Yarvin, who states that virtualizing the masses is mildly preferable to the profit-maximizing solution of converting them all into biodiesel. So in transhumanism there are these questions of what we do with humans that no longer have much purpose for us, essentially. And as a result, you can argue that transhumanism can be seen to have a concerning proximity to necropolitics, the indifference to the death and dying of those structured outside of the technocapitalist bubble of progress. And in fact, the transhumanist thinker Steve Fuller, who I mentioned earlier, calls for the construction of what he calls a new republic of humanity, which would be exclusively for entities that should be regarded as having political rights. And for Fuller, humans can be in this, but so can animals and so can machines. All of them can gain entry and all of them can be expelled. And what he states is that it's your capacity for self-assertion against a countervailing force that marks you as worthy of rights: you don't simply capitulate or adapt, you leave your mark.
So for him, any entity that fails to leave its mark should become susceptible to what he calls necronomics, an economics of death-making, which aims to generate the most societal value from death-making. In other words, if you are not at the top of the techno-human hierarchy, in control of our future human or posthuman evolution, you have no right to an existence at all. You compete or die. And so it's a radicalization of the expulsions and concentrations that we've seen are inherent to capitalist logics. And some transhumanists acknowledge the many crises of our times, but they always tend to locate the failings in human biology instead of in structural, social injustice, frankly. So, again, Elise Bohan calls us ape-brained meat sacks; in other words, we're ill-fitted to the modern world. So these ideas don't feel like a participatory, democratic, inclusive, pluralistic project, because most of us are considered just too stupid to be trusted with the human future. Instead, it's a kind of hyper-rationalist ordering, with techno-solutionism, IQ fetishism, and, as we've mentioned, a problematic eugenics; those logics are baked into this. So I think transhumanism too often functions as a kind of disorienting discourse. It stops us thinking ethically. It celebrates the amassing of knowledge and power, but doesn't enable us to think hard about what the techno-human evolutionary implications are. It tends towards the abstracted, the utopian, and the hyperbolic, but it relies on things like superintelligence to realize ethical outcomes, or a benevolent form of eugenics, as Elise Bohan would put it. So it constantly displays this false belief, which I've spoken of earlier, that scientific progress necessarily leads to ethical progress. And it's that failing that opens transhumanist aims to charges of, I think, dehumanization: the failure to ensure an inclusive future for the humans and non-humans who share this planet.
Ricardo Lopes: Finally, then, how should we as a society deal with all of this? I mean, what can we do with all this information about transhumanism and the other related movements, and how should we approach a potential transhuman future ethically?
Alexander Thomas: Yeah, I mean, that question is huge. I think it's probably too big to deal with at the end of an hour-and-a-bit podcast; to even begin to do justice to it, we'd probably need another couple of hours. But I think, in the short term, what we absolutely need is global resistance to tech authoritarianism and all forms of fascism, and a rejection of this kind of essentialist concept of human hierarchy and of any denial of the worth of all people. In my book, I develop a kind of meta-ethical framework for the future, which I call virtual relational anthropoporea, but that's probably a story for a different day; it's a bit much to unpack right now. The book is open access, and it's free to download or read online for anyone who might be interested, so they can go and check it out there. But essentially, what it does is call for systemic alternatives to technocapitalism, aimed primarily at doing less harm, and at more cautiously and lightly directing our capacities, with a focus on leaving space for pluralistic ways of both human and non-human being. So it is not an outright rejection of technological development in all its forms. I'm not calling for an attempt to halt techno-human evolution. As I've said, I think we've always evolved alongside our technologies, and we'll obviously continue to do that. But we do need a reassertion of values. So I think philosophy, the humanities, and the arts, as well as non-Western perspectives on human meaning, need to play a much more pivotal role in our culture, which would allow us to challenge the dominance, the pervasive power, and the corrupting force of the drive of a few Silicon Valley billionaires to accumulate endless capital and essentially escape with the spoils of this technological development, whilst rejecting responsibility for the social and environmental wreckage they're leaving behind. So that's what I think we need to think about. It's a big challenge for us, and I think it should be the starting point of any serious transhumanist thought, really, but it obviously isn't.
Ricardo Lopes: Yes, let's end on that note. And for people who would like to learn more about the meta-ethical approach that you lay out at the end of the book, they can read it or search for it online; again, it's The Politics and Ethics of Transhumanism: Techno-Human Evolution and Advanced Capitalism, and I'm leaving a link to it in the description of the interview. And Doctor Thomas, apart from the book, would you like to tell people where they can find your work on the internet?
Alexander Thomas: Yeah, sure. So I'm on Bluesky, and I've also got a website, PosthumanFutures.com. And I've actually got my own podcast; I'm nowhere near as regular as you, I'm afraid, Ricardo, nowhere near as prolific, but there's a little podcast called the A to Z of the Future, which has a little series on the Anthropocene and a little series on transhumanism where I interview a lot of the transhumanists I've mentioned today, so you can check that out as well. And yeah, that's it.
Ricardo Lopes: Great. So thank you so much for taking the time to come on the show. It's been a very fun and informative conversation. Thank you.
Alexander Thomas: Thanks a lot, Ricardo. Take care. Cheers.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Nights Learning and Development done differently, check their website at Nights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Pergo Larsson, Jerry Mullerns, Frederick Sundo, Bernard Seyche Olaf, Alex Adam Castle, Matthew Whitting Barno, Wolf, Tim Hollis, Erika Lenny, John Connors, Philip Fors Connolly. Then the Mater Robert Windegaruyasi Zu Mark Nes calling in Holbrookfield governor Michael Stormir Samuel Andrea, Francis Forti Agnsergoro and Hal Herzognun Macha Jonathan Labrant John Jasent and the Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus Fran David Sloan Wilson, Asila dearraujoro and Roach Diego Londonorea. Yannick Punteran Rosmani Charlotte blinikol Barbara Adamhn Pavlostaevskynalebaa medicine, Gary Galman Samov Zaledrianei Poltonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fry, Franca Bartolotti Gabrielon Scorteus Slelitsky, Scott Zachary Fish Tim Duffyani Smith John Wieman. Daniel Friedman, William Buckner, Paul Georgianneau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Vozin, David Williams, the Acosta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, bungalow atheists, Larry D. Lee Junior, Old Heringbo. Sterry Michael Bailey, then Sperber, Robert Gray, Zigoren, Jeff McMann, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Storry, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perrolis, Kate van Goller, Alexander Aubert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursulauddinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levin, and Jos Net. A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernardin Curtis Dixon, Benedict Muller, Thomas Trumbo, Catherine and Patrick Tobin, Gian Carlo Montenegroal N Cortiz and Nick Golden, and to my executive producers Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you for all.