RECORDED ON APRIL 25th 2025.
Dr. Anna Ivanova is Assistant Professor in the School of Psychology at Georgia Tech. She is interested in studying the relationship between language and other aspects of human cognition. In her work, she uses tools from cognitive neuroscience (such as fMRI) and artificial intelligence (such as large language models).
In this episode, we talk about language from the perspective of cognitive neuroscience. We discuss how language relates to all the rest of human cognition, the brain decoding paradigm, and whether the brain represents words. We talk about large language models (LLMs), and we discuss whether they can understand language. We talk about how we can use AI to study human language, and whether there are parallels between programming languages and natural languages. Finally, we discuss mapping models in cognitive neuroscience.
Time Links:
Intro
Language from the perspective of cognitive neuroscience
Language and human cognition
The brain decoding paradigm
Does the brain represent words?
Large Language Models (LLMs)
Do LLMs understand language?
Formal competence and functional competence
Using AI to study human language
Programming language
Mapping models in cognitive neuroscience
Follow Dr. Ivanova’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of The Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Anna Ivanova. She's Assistant Professor in the School of Psychology at Georgia Tech, and today we're talking about topics like language from the perspective of cognitive neuroscience, large language models, the language of programming, and related topics. So, Dr. Ivanova, welcome to the show. It's a huge pleasure to have you on.
Anna Ivanova: Thank you. Thank you for inviting me.
Ricardo Lopes: So, let me start by asking you this. How do you approach language from the perspective of cognitive neuroscience?
Anna Ivanova: Well, so I'm interested in language as a cognitive capacity, and I'm interested in the human brain, and so to me it was kind of natural to say, hey, how do these two work together? How does the combination of cells that we have in our head, the brain, carry out language? And these days we have some really cool, useful tools that allow us to measure brain activity. For a few decades now, we've had EEG, which can measure electrical activity from outside the scalp, and then since the 90s we have functional MRI, or fMRI, which allows us to measure blood flow to different parts of the brain. Essentially, the parts of the brain that are engaged in a particular cognitive process use up oxygen and glucose; blood comes there to bring more of those nutrients, and so that's how we know which parts of the brain are responding to different kinds of tasks. And so, in principle, what we can do is put somebody in an MRI machine and give them some sentences to read or to listen to, or ask them to talk, and then see which parts of their brain are engaged when they are listening to language, reading language, or producing language.
Ricardo Lopes: And how does language relate to other aspects of human cognition?
Anna Ivanova: Yeah, so that's where the difficult part comes in. We can put the person in the scanner and give them a task, but just because we have brain responses to the task doesn't mean we've isolated a neural signature of a particular cognitive trait. If we have a person reading sentences, then of course there are all kinds of other processes that get involved, most basically vision, right? You need to see the letters in order to respond to them, and so of course if you're reading, you would have your visual cortex active and working as well, and more basic, you know, neural mechanisms that require you to pay attention and stay on task and not just wander off. So language obviously is interrelated with all kinds of other cognitive processes, and we want to be able to separate out different components of cognition and also to figure out how they all work together. The most basic thing we can do is ask whether the parts of the brain that respond to language during reading also respond during, say, listening, right? Reading engages the visual cortex, listening involves the auditory cortex. Is there something shared between them? And it turns out, yes, there is a set of regions in the brain, which we call the language network, that responds to language during both reading and listening and speaking, and we have some early evidence that even sign languages activate that network, so it's not just about spoken languages as we know them. And this network, as far as we know, responds to all the languages that a person might know, right? So it's not going to respond to a language that you don't understand, but if you speak two or three or more languages, that network will respond to all of them. So this is evidence that my PhD adviser, Ev Fedorenko, has been collecting for the past 10 years, and we've put out a review paper about the language network recently suggesting that, hey, you should treat language as separate from other aspects of perception, like vision and audition. And we also say, hey, you should treat language differently from other parts of higher-level cognition as well. Some of the theories a decade or two ago were saying, hey, language is the symbolic system that allows us to compose symbols referring to abstract entities in the world, and that all makes sense, and so people thought, hey, those regions in the brain that respond to language, maybe they're just going to respond to all kinds of abstract compositional systems. That means not just language; it also means math, and it means music, musical notation. It turns out that the language network doesn't respond to those other kinds of abstract symbolic systems. It seems that whatever computations it performs, they are specific to natural languages. We have other parts of the brain, other networks, that are involved in other kinds of cognitive processes. For example, we have the so-called multiple demand network, or the frontoparietal network, which is engaged in mathematical reasoning, basic arithmetic, logical reasoning, maybe executive planning, planning out your steps, working memory tasks.
And that multiple demand network can work together with the language network. Let's say I give you a math problem that's written in words: you would first need to extract the meaning of the words, then solve it. So these networks can work together, but they're not the same thing.
Ricardo Lopes: And is language associated with all cognitive tasks in the human brain or not?
Anna Ivanova: Well, you can imagine me giving you a math problem in words, or I can give you a math problem in just symbols, plain mathematical notation: 2 + 3 equals question mark. And if the problem is not formulated in language, then it's not going to activate the language regions. So it really seems that these language areas are specific to language processing, but if the task doesn't involve language explicitly, then you don't need language.
Ricardo Lopes: Uh, and for example, this is something I read about in your work: is the language system recruited in feature-based categorization? And, by the way, what is feature-based categorization?
Anna Ivanova: Yeah. So before I joined the lab, Ev had already established that language does not share activations with math and music. But there was another domain, which is where I do most of my work, including now, which is semantic cognition, or basic world knowledge: knowledge of concepts and properties of objects, actions, abstract ideas. So, "Is a pencil long?" would be an example. In order to answer that question, you need to retrieve something about the pencil, in this case its shape, and say whether or not it's long. Another question would be, can it commonly be found in the kitchen? There you need to know what kind of things you would typically find in the kitchen, and the pencil is not really one of them. So these are questions that you can ask about an object. I can say "pencil," like I did right now, or I can show you a picture of a pencil and then ask you all the same questions, and you'll hopefully be able to access the same concept and answer those questions as well. And so if I'm telling you the word "pencil," and then I ask, can it be found in the kitchen, then yeah, the language network will obviously respond to the question itself, as you are hearing the question, and it will respond to the word "pencil." But will it respond specifically when you're doing conceptual processing, when you're accessing the concept of a pencil and trying to answer the question? To test that, we looked at people's responses not to the word "pencil" but to the picture of a pencil. They get the question beforehand, then they see the picture of a pencil, and they just press a button for yes or no. And the question was: you see a pencil, you're thinking about whether it's found in the kitchen, you say yes or no. Do you need language to do that task? Turns out, not really. Essentially, you can measure how strongly the language network responds, and it responds pretty much at baseline levels, not at all. So from that evidence we conclude that the language network is not engaged in object categorization, and in this particular case, we're talking about semantic categorization.
Ricardo Lopes: Are there any commonalities between how written language is processed in the brain versus how spoken language is processed?
Anna Ivanova: Yeah, I mean, the language network is the one that is shared across written and spoken language modalities, right, as I mentioned earlier. So the perceptual stages are different, but the majority of linguistic processing is shared.
Ricardo Lopes: Right. And I mean, you talked about the language network. That means that language in the brain is distributed, right? I mean, it's not really localized, because traditionally speaking, people have talked about or associated language, for example, with Broca's area and Wernicke's area. But thinking about language as something that is localized in the brain is outdated, right?
Anna Ivanova: Well, language is localized in the brain in the sense that there are parts of the brain that seem to be processing language and not other things, right? So localization in the sense of specialization of function. It is also true that the language network tends to occupy specific portions of the brain; it's not just found anywhere in the cortex. It often includes areas that are in or adjacent to what people would call Broca's area, so that's the inferior frontal lobe, and Wernicke's area, which is the posterior temporal lobe. So these anatomical areas, or areas in those whereabouts, are usually part of the language network, but they're not the only ones. For example, in the temporal lobe there's usually a whole strip of activity that we see, and in the frontal lobe there are also a few activation hotspots. So language processing is localized, just not to one specific part of the brain. It is distributed but pretty stable. You will often be able, just from looking at the pattern of activation, to say, oh, that must be the language network. It has a characteristic shape, even though it's not in the exact same spot from one person to another.
Ricardo Lopes: All right. Uh, what is the brain decoding paradigm in the neuroscientific study of language?
Anna Ivanova: What I've been talking to you about so far is the language network's responses, right? So how strongly or weakly it responds to a particular task. In principle, we can go beyond that. We can ask: what kind of information do these activation patterns contain? Can we decode which words or sentences the person is listening to or reading at any given point in time? And so what you can do is essentially train a regression model between a representation of your stimulus, like a word or a sentence, and the brain response, in order to establish a corresponding mapping. You can predict brain activity from that stimulus representation; that's called an encoding model. Or you can try to predict which stimulus the person is seeing from their brain activity; that's called a decoding model. And so that's just one step beyond simply looking at how strongly a network or a region responds to a particular stimulus. Now we're trying to get into the specifics of what features it responds to and what kinds of stimulus characteristics drive that response.
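[Editor's note: to make the encoding/decoding distinction concrete, here is a minimal sketch in Python using scikit-learn, on simulated data. The stimulus features X (standing in for, say, word embeddings) and the voxel responses Y are randomly generated assumptions, not data from any real study.]

```python
# Minimal encoding/decoding sketch on simulated data (scikit-learn).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 50, 300
X = rng.standard_normal((n_stimuli, n_features))   # stimulus representations
W = rng.standard_normal((n_features, n_voxels))    # hidden "true" mapping
Y = X @ W + 0.5 * rng.standard_normal((n_stimuli, n_voxels))  # brain responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Encoding model: predict brain activity from the stimulus representation.
encoding = Ridge(alpha=1.0).fit(X_tr, Y_tr)
print("encoding R^2:", round(encoding.score(X_te, Y_te), 3))

# Decoding model: predict the stimulus representation from brain activity.
decoding = Ridge(alpha=1.0).fit(Y_tr, X_tr)
print("decoding R^2:", round(decoding.score(Y_te, X_te), 3))
```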
Ricardo Lopes: Does the brain represent words?
Anna Ivanova: The brain decoding paradigm can be used to say, look, the brain represents words, because you can read them out. So that's kind of the most basic idea of representation. But of course, representation is a very theoretically heavy concept that's been discussed a lot in philosophy: what does it mean to represent? Let's say you are looking at a map, and you say the map represents the country, which is true, but the representing really happens in your mind. You're the one who's taking the information and transforming it into something meaningful; the map itself isn't doing any cognitive processing by itself. And these basic decoding approaches, when you're trying to decode words from the brain, are kind of like a map, where the brain isn't doing anything in this particular decoding paradigm; it's the scientists who are establishing that mapping. So whether the brain itself is doing any cognitive processing, any representation, is kind of irrelevant here. Now, the question that people usually care about when it comes to representations is not that, right? It's about whether the brain internally represents the stimulus for the brain's sake, not for us scientists' sake. And there the question would be: can other parts of the brain use that representation, that information that is encoded in a particular brain area? Maybe the information is there and scientists can read it out, but if other parts of the brain can't really use it effectively, can't access it properly, then it's not really a representation from the perspective of the brain. And so a nice way of thinking about representations in neuroscience today is thinking about usability. Can other parts of the brain use that information to perform additional computation that then leads to, say, action?
Ricardo Lopes: So, in recent years, particularly in the world of artificial intelligence, people have been talking a lot about large language models, and they have become very prominent. So what are large language models?
Anna Ivanova: Large language models share a lot of similarities with the brain conceptually, and so a lot of the problems that we've talked about, the separation between language and cognition, or even the issue of representation, we started talking about them in neuroscience, and now naturally the same questions arise again in the world of artificial intelligence. And so, as a neuroscientist, I now find myself doing quite a bit of AI work, trying to figure out what large language models are, how they work, and asking some of the same questions I've been asking about the brain. Large language models are, well, language models. Language models means they are models that generate language, so they are trained on texts from the internet, usually a lot of texts, and the training that they undergo is trying to predict the next word in a sentence. So, "The children went to the..." for example, and the model has to guess the next word. And the model starts out with really no information about language; it knows nothing, and it starts out by essentially guessing randomly, right, picking out any word from its vocabulary. And then what happens is, because we have the actual text, we know exactly which word comes next. We know the right answer, and the model can receive the right answer and adjust its predictions a little bit, so that next time its prediction will be a little bit closer to the right answer. And it keeps doing that over and over and over, billions of times, and in the end its predictions become pretty good. So there is no theory about language built into the model; it learns everything from the text. The architecture of that model is called a deep neural network, which has some inspiration from neuroscience and biological neurons, a little bit far removed these days, but that's also a parallel with neuroscience there.
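[Editor's note: a minimal sketch of the next-word-prediction training step described above, written in PyTorch. The tiny model, the hypothetical token IDs, and the single update are illustrative assumptions; a real LLM uses a transformer architecture and billions of such updates over internet-scale text.]

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 50_000, 256
model = nn.Sequential(                     # stand-in for a transformer
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One training step: given token IDs for "The children went to the park",
# the target at each position is simply the next token in the text.
tokens = torch.tensor([[11, 42, 7, 93, 5, 208]])   # hypothetical token IDs
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)                     # (1, 5, vocab_size) next-word scores
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                            # nudge predictions toward the truth
optimizer.step()
optimizer.zero_grad()
```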
Ricardo Lopes: But do large language models understand language and what does it really mean to understand language?
Anna Ivanova: Yeah, that one is the million-dollar question, and it gets really philosophical really fast. So, as I said, these models are trained on next-word prediction; that's all they're trained to do. So the question becomes: in order to predict the next word, do you need to understand language, or are you just picking out different patterns? And yeah, these models are very good at pattern matching. So, for example, they learn all kinds of things about English grammar: "the children are," they know that that's a more likely phrase than "the children is," because the verb needs to be plural to match "the children." Does it mean that they understand grammar? Well, it depends on your definition of understanding, right? They don't need a deep awareness, but if you can use that information, if you can use that knowledge, you effectively understand it, right? It's usable information. So we get into deep philosophical territory here: if the system acts as if it understands, is that good enough for us or not? And I would say, for a lot of practical purposes, when we think about AI and how to use it in the world, it probably is good enough, right? We don't need to get too philosophical; if it already acts as if it understands, we can effectively assume it understands.
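[Editor's note: the grammar example above can be checked directly. This is a hedged sketch, assuming the Hugging Face `transformers` library with GPT-2 as a stand-in model, that compares the probability the model assigns to "are" versus "is" after "The children."]

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The children", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token
probs = torch.softmax(logits, dim=-1)

for word in [" are", " is"]:                 # leading space matters for GPT-2
    token_id = tokenizer.encode(word)[0]
    print(f"P({word.strip()!r} | 'The children') = {probs[token_id]:.4f}")
# A model with formal competence assigns higher probability to the plural verb.
```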
Ricardo Lopes: But there's a distinction between formal competence and functional competence, formal competence being knowledge of linguistic rules and patterns, and functional competence really understanding and using language in the real world, right? I mean, tell us more about that distinction, why it matters, and whether it applies to these debates surrounding AI, or large language models, understanding language or not.
Anna Ivanova: The formal versus functional competence distinction is a distinction that my colleagues and I proposed when large language models started coming out. We started working on that paper around the time, or even before, GPT-3 came out, and that was two or three years before ChatGPT, and then we finished the paper around the time that ChatGPT was coming out, and so all of a sudden it attracted a lot of attention. So large language models became kind of interesting for scientists first, and we were able to catch on to that and start thinking about how they work before they took over the whole world, which is what we're seeing today. And we thought that it might be useful to think about these models from the perspective of cognitive neuroscience, knowing what we know about language in the brain. We know that in the brain, language processing takes place in its own separate network, the language network, and what the language network does is different from what other systems do; the multiple demand network, for example, is the one doing reasoning and basic math, and so they can work together, but they are separate. And so it seems like, when we talk about large language models, it is useful to separate out how good they are at language from how good they are at different kinds of reasoning and factual knowledge and the ability to empathize with the person, with the user, all kinds of things that you might put under the term general intelligence, AGI. We say, look, it's useful to actually separate them out, because in humans different systems do them; in these models, you might also end up with different subcomponents being responsible for these different functions. Formal competence is knowledge of language rules, language grammar, the lexicon, so everything that, in humans, the language network does. And functional competence is essentially everything else. It's the umbrella of all the other systems that are not language-specific; as we discussed, you can solve a math problem without involving language at all in the brain. But you need those systems if you want a language agent that uses language effectively and uses it to do things in the world, to communicate effectively with the user. So a large language model, if you give it a math problem in words, needs to be able to both understand the problem and solve it, and so you might imagine that system needing those different subcomponents to handle both formal competence and functional competence.
Ricardo Lopes: So it is one thing for us to try to study and understand how AI, and particularly large language models, work and how they process and possibly understand language, but it's another thing for AI itself to be used to study human language. So in what ways can we use AI to study language?
Anna Ivanova: So AI systems are really good pattern matchers. If we're talking about the main takeaway that people should know about what AI systems are today: they are pattern matchers. They match a lot of patterns, very abstractly. If they've seen a problem similar to the problem that you're giving them, they might be able to solve that problem by analogy, by extrapolating a pattern that they've already learned before. And it turns out that this pattern-matching capability is extremely helpful for learning formal competence. These days, large language models produce outputs that are grammatical, at least in English and in languages where they had a lot of training data. If the language is not very prominent in the training data, that's when you might start seeing issues even with grammatical rules, but for something like English, you never hear complaints these days about the model making grammatical mistakes, and that's really remarkable. It's something that people don't really think about, but for decades, computer scientists had been trying to build systems that generate language that is grammatical and coherent, and they hadn't been successful up until now, so it's really a remarkable advance, a remarkable improvement. And it helps resolve some of the debates in the study of language, in linguistics, because in linguistics, for a while, there was this idea that it's actually impossible to learn language just from learning patterns, that linguistic information has to be genetically encoded in our brains in the form of something called universal grammar, and that there is no way to learn it from the environment, that it has to be built in. Well, nothing like that is built into large language models, and yet they learn grammar successfully. So they serve as an existence proof that it is possible to learn language from the input. Now, there is a problem. The problem is they learn from much more input, much more language, than a human child would ever be exposed to, right? So in principle it's possible to learn language, but in practice, so far, these models just need much more data to do that. And so now there are some efforts underway, for example, a competition known as the BabyLM Challenge, the Baby Language Model Challenge, where they try to train those models on the amount and kind of language that a child would receive over the course of learning. Now, when it comes to functional competence, that's a whole different matter. Functional competence seems to be harder. For formal competence, you need a lot of language, but not insane amounts; for functional competence, you need much more language and much bigger models, and they're still not very good. You additionally need what's called fine-tuning, which means specifically training them on problems and the right answers and giving them feedback. Maybe you need to pair the large language model with an additional module, like a calculator or a computer code shell, to make them better. So functional competence is still this big open frontier that these models aren't very good at yet, but formal competence they seem to have mastered fairly well. So now we can go in and study them, see what they've learned, and study the representations inside these models, similar to how we study the human brain.
Ricardo Lopes: Does the language of programming have any parallels with natural languages?
Anna Ivanova: The language of programming is something that often serves as a benchmark similar to, say, mathematical reasoning. A lot of the time people think about it as something along the lines of math, logic, STEM education, computer programming, and that makes sense. But then programming languages have the word "languages" in them, and they do share some of the important properties of natural languages. They have individual symbolic units, like variable names and function names, and those are composed into more complex statements. And then you have the brain trying to interpret both of them, and a lot of programming languages even use a natural language script, so the analogy extends that far. And so, in our neuroscience work, we actually asked: are programming languages similar to natural languages as far as the brain is concerned? Does the brain use the language network to process programming languages? And we found out that, not really; it actually seems to be the multiple demand network, the one that does math and logic, that is also involved in programming language processing. So the view that programming is more aligned with things like math and logic does seem to be correct, or faithful to how the brain represents that information. There are some differences: we found that both hemispheres of the brain represent programming languages, whereas math is mostly on the left, so it's not exactly the same, but broadly speaking, it is the same network that we think is engaged in processing computer code. So programming languages are not true languages from the brain's perspective.
Ricardo Lopes: So, I would like to ask you about one last topic you've written about, and I have two questions about it. You've written about mapping models in cognitive neuroscience and how people expect them to map neatly onto the linear/nonlinear divide. Could you explain this? I mean, particularly, what are mapping models in cognitive neuroscience? And what might be some of the issues with the approach I just mentioned, that is, people expecting them to map onto the linear/nonlinear divide?
Anna Ivanova: Sure. Mapping models are what we earlier discussed as encoding and decoding models, right? Encoding models try to predict brain responses to some stimulus and its features; decoding models try to predict which stimulus the person is viewing from the pattern of brain activity. Both of these together are called mapping models, because they map between the brain and the stimulus. What's been particularly powerful about this paradigm in the last few years is that now we can have a representation of the stimulus that's pretty generic, and we actually take it from a deep neural network. In the case of language, we can take it from a large language model. So what happens is we take a sentence, we record brain activity of a person reading that sentence, we then give that same sentence to a large language model and take an internal vector which reflects how the large language model encodes information about that sentence, and then we align the brain responses and the large language model responses and try to predict one from the other. And if we do that, if we establish a successful mapping model, then we can try to predict how the brain would respond to a totally new sentence. We pass in a different sentence, we get a large language model representation, and then we use the mapping model to predict how exactly the brain is going to respond to that other sentence. And if the prediction is correct, then our mapping model is good: it has established the right kind of relationship. And then you can use this tool not only to predict brain activity but to start asking all kinds of questions, like which part of a large language model best corresponds to a particular part of the brain, or which features of a sentence seem to be making the most difference, so it's a really cool tool for starting to ask scientific questions. And the work that we've done, now a few years back, but it's still relevant, is about what this mapping model should look like, because you cannot just take the large language model vector and the brain vector and immediately map them one to one; even the dimensions are not the same. So what people usually do these days is use linear regression to map between the stimulus representation and the brain. Linear regression is a simple, basic mathematical tool; it makes a lot of sense, but it carries a lot of theoretical assumptions, and so we ended up diving a little bit deeper into those theoretical assumptions to see: do we want to be using linear regression all of the time or not? Is it too simple? Is it too complex? So, just trying to think a little more deeply about how to interpret these brain responses to stimuli.
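[Editor's note: a hedged sketch of the pipeline just described, assuming `transformers` and scikit-learn, with GPT-2 standing in for "a large language model" and randomly generated vectors standing in for recorded fMRI responses. The sentences and dimensions are purely illustrative.]

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import RidgeCV

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def sentence_vector(sentence: str) -> np.ndarray:
    """Mean-pool the model's last hidden layer into one vector per sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, n_tokens, 768)
    return hidden.mean(dim=1).squeeze().numpy()

train_sentences = ["The children went to the park.",
                   "A pencil is rarely found in the kitchen.",
                   "She solved the equation quickly."]
X = np.stack([sentence_vector(s) for s in train_sentences])

# Stand-in for fMRI data: one 100-voxel response pattern per training sentence.
rng = np.random.default_rng(0)
Y = rng.standard_normal((len(train_sentences), 100))

mapping = RidgeCV().fit(X, Y)                        # the linear mapping model

# Predict the brain response to a totally new sentence.
x_new = sentence_vector("The choir sang beautifully last night.")
y_pred = mapping.predict(x_new[None, :])
print("predicted response pattern shape:", y_pred.shape)  # (1, 100)
```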
Ricardo Lopes: So I have my last question, which is one more question about the same topic. When choosing a mapping model, in your work you talk about choosing them in the context of three overarching desiderata, which include predictive accuracy, interpretability, and biological plausibility. So tell us about that.
Anna Ivanova: So, when we think about the mapping model, different people have advocated for or against linear regression for different reasons, and we thought it's useful to spell out exactly what those reasons might be. One kind of decision about what the mapping model should look like is: well, we want a model that predicts accurately, either encoding or decoding, right, either predicting the brain response or predicting the stimulus, or something else, from the brain. So let's say that your goal is to figure out what the person is reading, what the person is thinking, or maybe you want to be able to predict whether a person is going to develop Alzheimer's or has some kind of neurological disorder; you can do all kinds of things with the decoding paradigm. And so, if your goal is maximizing predictivity, building a mapping model that predicts as well as you can make it, then there is no reason to limit yourself to something like linear regression; you just want to plug in the most powerful model that you can, as long as you have enough data so that it actually will work and predict well. So if you have a lot of data, you can train a neural network instead of just doing a regression, in order to capture more subtle nonlinear patterns in the data, and in fact today people are doing that some of the time. So if what you care about the most is how predictive the model is, how well it predicts, then there's no reason to limit yourself and stay linear. But if you are making that mapping for science reasons, if you want to say, hey, this part of a large language model corresponds to this part of the brain, you want that correspondence to be simpler. So there, a linear relationship might make more sense, because if it's nonlinear, then a lot of interesting things could happen in between, in the mapping model, and for interpretability reasons you don't want that. For interpretability reasons, you want that link to be as transparent as possible and the interesting stuff to happen in the LLM and in the brain. And then the final reason why people have advocated for using linear models is that that seems to be similar to how the brain might be reading out information. So we talked about representations in the brain being useful for other parts of the brain, for other parts of the brain to read out that representation. And the argument is, hey, other parts of the brain often will read out information linearly. So if you're only talking about one step downstream, you're not going to have a lot of complicated readout; you want that link to be simple, because that approximates how the brain is actually using this information. And actually, in practice, it turns out that's not always the case. The readout can be much more complex than linear; neurons definitely do a lot of nonlinear processing. Also, if we are mapping, say, the whole brain, or even a huge chunk of it, to a large language model, then it's not the case that a single neuron or a neural population is going to read out from that whole part of the brain. So biological plausibility, I would say, is a less relevant criterion, at least for the models today, but if you want to be building biologically plausible models, that's absolutely something that you also have to keep in mind.
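[Editor's note: a small sketch of the predictivity trade-off discussed above, on simulated data: given enough data and a genuinely nonlinear stimulus-to-response relationship, a nonlinear mapping (here a small MLP) can out-predict ridge regression, at the cost of a less transparent link. The data-generating function is an arbitrary assumption.]

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                   # stimulus features
y = np.tanh(X[:, 0] * X[:, 1]) + X[:, 2] ** 2        # nonlinear "brain" signal
y += 0.1 * rng.standard_normal(500)                  # measurement noise

mappers = [("ridge (linear)", Ridge(alpha=1.0)),
           ("MLP (nonlinear)", MLPRegressor(hidden_layer_sizes=(64,),
                                            max_iter=2000, random_state=0))]
for name, mapper in mappers:
    score = cross_val_score(mapper, X, y, cv=5).mean()
    print(f"{name:>16}: mean cross-validated R^2 = {score:.2f}")
```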
Ricardo Lopes: So just before we go, where can people find your work on the internet?
Anna Ivanova: Uh, people can go to my Google Scholar profile, and my lab's website is called Language Intelligence Thought.net. I have a great team of people here at Georgia Tech, where I started about a year, a year and a half ago, and yeah, we keep doing work both in cognitive neuroscience and in the space of large language models.
Ricardo Lopes: Great. So thank you so much for taking the time to come on the show. It's been a pleasure to talk with you. Yeah, thank you so much. Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Nights Learning and Development done differently, check their website at Nights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Pergo Larsson, Jerry Mullerns, Frederick Sundo, Bernard Seyche Olaf, Alex Adam Castle, Matthew Whitting Barna Wolf, Tim Hollis, Erika Lennie, John Connors, Philip For Connolly. Then the Matter Robert Windegaruyasi Zu Mark Neevs called Holbrookfield governor Michael Stormir, Samuel Andre, Francis Forti Agnseroro and Hal Herzognun Macha Joan Labrant John Jasent and Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus Fran David Sloan Wilson, asilla dearraujuru and Roach Diego Londono Correa. Yannick Punteran Rosmani Charlotte blinikolbar Adamhn Pavlostaevsky nale back medicine, Gary Galman Sam of Zallidrianei Poltonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fry, Franco Bartolotti Gabrielon Corteseus Slelitsky, Scott Zacharyishim Duffyani Smith Jen Wieman. Daniel Friedman, William Buckner, Paul Georgianneau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Vozin, David Williams, Diocosta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, bungalow atheists, Larry D. Lee Junior, old Erringbo. Sterry Michael Bailey, then Sperber, Robert Gray, Zigoren, Jeff McMann, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Storry, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perkrolis, Kate van Goller, Alexander Hubbert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursula Gudinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levin, and Jos Net. A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernard Curtis Dixon, Benedict Muller, Thomas Trumbull, Catherine and Patrick Tobin, Gian Carlo Montenegroal Ni Cortiz and Nick Golden, and to my executive producers Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you for all.