Hello everyone, welcome to Team Com podcast number two. This is going to be a journal discussion of Ramstead et al. 2020, and it's going to be a great discussion; I'm really looking forward to it. Before we start on the paper, could the other two panelists introduce themselves? Maybe Sasha first.

Sure. Hi, my name is Sasha. I'm a neuroscientist and a free energy enthusiast.

Hello, my name is Yvonne. I'm just interested in active inference stuff. And today we get to read a tough paper.

Yes, absolutely a tough one. Does the screen look okay for sharing my slides? All right, here we go. The structure of this discussion will be as follows. First, we'll talk about some warm-up questions. Then we'll jump into the paper discussion: we'll go through the abstract and the big questions, making sure that we really understand how the authors summarize their own work. Then we'll look at the roadmap of the paper, which is just the section headers, to understand how they get from A to Z. We'll review some of the key claims and aims of the paper, so that we can evaluate later on whether the claims are justified and whether they achieve the aims they set out to achieve. Then we'll go through each of the six figures of the paper, and if anything isn't addressed along the way, we'll have extra time at the end to discuss any other questions about the paper or related topics. We also have some other figures that we could put up if we need to.

So here we go with the warm-up questions. First: what is semantics, or what does the word make you think of when you first hear it? You might want to think about logical and lexical semantics. Any thoughts there, Sasha?
Yeah, I think this gets into the confusing theme of the paper, where you have to be extremely specific with the language that you use. To me, semantics means how we use words and how we make meaning.

Okay, very nice. The way that we use words, the meaning of words, is often what people mean by semantics: syntax is how it's written, and semantics, or at least the semantics of words, is about how the words are related to each other. And then we also have logical semantics, which is purely about logical statements and may or may not have any word representation behind it.

So there's that word: representation. This is definitely one of the key words of the paper. So maybe, Yvonne, what do you think about it, or what does a representation mean to you?

To me, as we are saying, we represent in our minds different things behind different conceptions. So it's always hard to understand, not always, but usually hard to understand what people are saying when they use one concept or another.

Yep, and that vagueness about the term representation is really one of the things that the paper sets out to clarify, because a lot of times people will use the word representation assuming that one must exist. Oh, if you're thinking of elephants, there must be some neural state in your head that represents an elephant; otherwise, how could you be thinking about an elephant? And this paper is going to really drill down: what is the relationship between that conception and the neural representation?

Next question: what is intentionality, and who or what has it? Just off the cuff, because of course this is a big topic, but what does intentionality mean, or who gets to have intentionality, either of you?

I think it's when someone wants to do something. Yes, intention, and it's about the future: I will do it in the future.

Any thoughts there?
Yeah, another wonderfully vague word. We often talk, in interpreting other people's actions, about whether or not we can infer their intent, and of course that gets quite messy. But with how vague the word is, you could say that everything has intention, because, you know, even a rock knows that it will continue to stay a rock in its sort of thermodynamic landscape.

Very nice. And this is sort of that panpsychist twist: do physical objects have intention? What about criminal intent? Do humans have intention, and how would we even know? Then, how are narratives related to semantics? And I'll just throw on the last one: are narratives real, or in what sense are narratives real? Any thoughts, Yvonne, or we can move on?

It's really hard. Philosophical questions about whether narratives are real or not.

Agreed. This is really why it's the last question before we jump into the paper, because narratives are real: Harry Potter is real, it's really a narrative. But then of course you go into it a little deeper, and if everybody has their own interpretation of the narrative, well, it's real, but everyone has a different reality, and then is that real? Well, that's kind of what we want to explore. And yes, it's a different tool than people expect: the usual tools people would expect for discussing these topics are very qualitative and wordy philosophical arguments, and this is going to be a totally different approach involving the free energy principle.

So here we go. The paper that we're discussing today is a recently accepted paper from the journal Entropy, but we're reading off the version that we found on ResearchGate. The paper is called "Is the free energy principle a formal theory of semantics? From variational density dynamics to neural and phenotypic representations."
So, a lot of new words, and some people this is going to rankle, because they think a philosophical word is being used improperly; other people might not have clarity on what all the words mean. So what we're going to do is look at the big questions of the paper, which are thankfully the first two sentences of the abstract, and then we're going to run through the rest of the abstract before we go on to the rest of the paper. So, just like you're reading this at home, this is how we're going to unpack the paper from beginning to end.

There are two big questions, and again, these are the first two sentences of the abstract. Their first question, or their first big goal, is to assess whether the construct of neural representations plays an explanatory role under the variational free energy principle and its corollary process theory, active inference. So there's a lot happening here. I would kind of jump into the middle: you start with the free energy principle as a principle, meaning it's paradigmatic, it's axiomatic.
It's foundational. And then there's a corollary, meaning a subsidiary or secondary theory, and the secondary theory happens to be a process theory, which means it's not a state theory: it's not about how things are, it's about how they relate. Another example of a process theory would be evolution by natural selection. It doesn't tell you what state the system is going to be in, but it tells you that if certain characteristics are present in the system, like replicators with differential success and heritability, you're going to end up with this process playing out. So the process is active inference, and the paradigm is the free energy principle.

The question that they're going to work on is to assess whether the construct of neural representations, so neural states, has an explanatory role, whether it's going to enrich us when we're thinking about the free energy principle and active inference. And their second question, tied to the first, is: if there is an explanatory role for neural representations under active inference and the free energy principle, which we think there will be, then they're going to assess which philosophical stance on the ontological and epistemological status of representations is most appropriate. So they're going to use active inference and the free energy principle to explore how neural representations are related to the real world, and then they're going to try to translate that back to the classical vocabulary and some of the classical theories from philosophy.

Okay, so let's continue going through the abstract, and this is definitely where we're going to want to slow down and make sure that we understand what each of these terms means from a philosophical perspective. What is the baggage? What are the assumptions? What are the implications?
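Since the variational free energy principle is the formal backbone of everything that follows, here's a minimal numerical sketch of the quantity at its heart. This is our own illustration, not from the paper: for a discrete generative model p(o, s) over observations o and hidden states s, and an approximate posterior q(s), the variational free energy F = E_q[ln q(s) − ln p(o, s)] upper-bounds surprise, −ln p(o), with equality when q is the exact posterior.

```python
import numpy as np

# Toy discrete generative model: 2 hidden states, 2 observations.
p_s = np.array([0.7, 0.3])                # prior over hidden states
p_o_given_s = np.array([[0.9, 0.2],       # p(o=0 | s)
                        [0.1, 0.8]])      # p(o=1 | s)

o = 1  # the observation we received

# p(o, s) for each hidden state, and the marginal likelihood p(o)
joint = p_o_given_s[o] * p_s
p_o = joint.sum()

# Here q(s) is the exact posterior, so free energy equals surprise.
q = joint / p_o

# Variational free energy: F = E_q[ln q(s) - ln p(o, s)]
F = float(np.sum(q * (np.log(q) - np.log(joint))))
surprise = float(-np.log(p_o))
print(F, surprise)  # equal, because q is the exact posterior
```

Active inference then says, roughly, that both perception (updating q) and action (changing which o arrives) work to push F down.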
So, they're going to focus on non-realist approaches, namely deflationary and fictionalist or instrumentalist ones, and they define these in the next sentences, so we don't need to dwell there. They consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical rather than cognitive. So that's one type of account of representation they're going to pursue. The second type is the fictionalist or instrumentalist account. On this view, representations are scientifically useful fictions that serve explanatory and other aims.

So looking at this deflationary account in the middle, and then the fictionalist-instrumentalist account at the bottom: what does that make either of you think about? How do those two ideas differentiate themselves, or what would be another way you would restate either one?

I kind of struggle with really specifically understanding what the deflationary account means and how it differs, because I kind of imagine that something could be a mathematical representation as well as a useful story. But I think this is a common theme that we'll be coming back to for this whole paper: using active inference as a framework, it's not just a hypothesis, it's a lens that you apply to how you view things. And viewing things through the active inference lens, it seems like these can all coexist in a way that's not exclusionary.

Okay, very nice. Yvonne?

Let me pass for now.

Okay. I think you really said it well, Sasha, which is that from a different lens, the non-free-energy-principle lens, these two viewpoints might be very opposed to each other.
I'm just thinking of some psychology conference where first someone says, I propose a deflationary account according to which dot dot dot, and the next person says, I propose a fictionalist account dot dot dot. But free energy is this new lens, this new way of trying to approach these classical problems, and the goal would be that we could get the best parts of both of them and use them under the free energy principle. So from the deflationary account we could make mathematical representations of neural states, which would be awesome, and then using a fictionalist or instrumentalist account, maybe we could also have those mathematical states be useful.

So we kind of have this triad. We have neural states: the neurons are definitely doing something, we don't know what, but they're doing something. Then we have mathematical inference: whatever the neurons are doing, you can always mathematically represent it. And then there's utility, for the scientist and/or for the organism itself. You can choose one, two, or three of them, and the goal would be to have all of them under a common framework. That's really where they're going with this approach: instead of just butting heads, oh, this part of the triangle is better than that part of the triangle, they're going to see if free energy can integrate across these different perspectives.

So, the abstract's a little long.
Here's the second part. After reviewing the free energy principle and active inference, they argue that a model of adaptive phenotypes under the free energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states, which are the internal states of a Markovian system that exists far from equilibrium.

So, phenotype is being used here quite broadly. A lot of times people think of phenotypes as measurable components of an organism: leg length is a phenotype, hair color is a phenotype, and they are. But the instantaneous configuration of the neural system is also like a phenotype. So just as a phenotype could be reaction time, another type of phenotype could be: when I play a 200 hertz tone into your ear, your brain state in response to that tone is a phenotype. Here, phenotype just means measurable components of a biological system, whether you actually measured them or not.

Then they're going to ask whether they can assign semantic content, that is, meaningful content, either lexical semantics, which is how words are related, or logical semantics, which is how ideas and propositions are related. They're going to ask whether they can make some sort of mapping between semantics and phenotypic states within a Markovian system. And by "Markovian system far from equilibrium" they're highlighting two pieces.
To call the system Markovian means that it can be bounded within a Markov blanket, which is to say that there are states internal to the system and states external to the system, and then the boundary nodes: from the inside looking out, they're what you see, and from the outside looking in, they're what you see. That's the Markov blanket idea that we come back to in a lot of our discussions.

They're going to propose, and this is why I made this font big, a modified fictionalist account: an organism-centered fictionalism or instrumentalism. So they're going to say, let's be pragmatic; let's make a modified account where the organism, via its own generative model, is making an instrumental fiction about the world. I'm not exactly sure of all the connotations of instrumentalism here, but it's kind of like the body is an instrument, and the song that's being played is the niche, is the fitness of the organism's phenotype to the challenges of its niche.

They argue that under the free energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the aboutness or intentionality of cognitive systems: "Our position is thus coherent with, but rests on distinct assumptions from, the realist position."

And so here we get another peek at the difference, potentially, between deflationary accounts and inflationary accounts, a term which is not really used, but what does it mean to deflate or inflate in this context? Deflationary is like if somebody said, oh, it's just a party; that's a deflationary account of the party. It's nothing big. Okay.
Whereas the inflationary account of the party would be: this is going to be the most important party ever, so important that all these things happen in this really symbolic way. And so the deflationary account of neuroscience is: it's just what the brain is doing. It's just cells that are linked up to each other, and time is passing, click by click; it's just the physical system. But then you get into a bit of a bind when you want to reintroduce semantics, because if you just have this massive agent-based model or dynamical model that's just being simulated forward through time, then you can say, well, it's as if it's thinking about elephants, but you're never going to be able to go back to the microstate and say, this is "elephant" as represented in the microstate. Because it's deflationary, you've kind of deflated the meaning away.

And that's the realist position, whereas the semantic one is an inflationary account: yes, it's a brain state, but it actually means something more. So they're going to try to take the best of both worlds. They want to have a formal semantic system, like philosophy has formal semantics, but they want to ground this formal semantics in how it arises from phenotypic representation, specifically neural states.

And then this is their closing argument: they argue that the free energy principle thereby explains the aboutness or intentionality in living systems, and hence their capacity to parse their sensory streams using an ontology or set of semantic factors. And so here is where they reach their synthesis and their proposal, which is that if we can find some sort of bridge between realism, what the brain is actually doing without any extra fluff, just phenotype, and a formal semantic theory, potentially this will light the path forward for understanding how organisms parse their incoming sensory data into meaningful categories.
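That "parsing a sensory stream using an ontology or set of semantic factors" can be pictured, very roughly, as sequential Bayesian belief updating over a small set of discrete categories. The category names and likelihood numbers below are made up for illustration; this is a toy sketch, not the paper's formalism.

```python
import numpy as np

# Toy "ontology": two semantic factors the agent can parse its
# sensory stream into (names are illustrative, not from the paper).
categories = ["elephant", "not_elephant"]
prior = np.array([0.5, 0.5])

# Likelihood of each discrete sensory cue under each category.
likelihood = {
    "grey_blob":  np.array([0.8, 0.3]),
    "trumpeting": np.array([0.7, 0.1]),
}

def update(belief, cue):
    """One step of Bayesian belief updating on a sensory cue."""
    posterior = belief * likelihood[cue]
    return posterior / posterior.sum()

# Parse a short sensory stream cue by cue.
belief = prior
for cue in ["grey_blob", "trumpeting"]:
    belief = update(belief, cue)

print(dict(zip(categories, belief.round(3))))
```

The point of the toy is just that the stream is not stored raw: each cue is absorbed into a belief over a fixed set of semantic factors, which is the "parsing" the authors are after.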
So instead of just saying, well, the photons hit my retina, and within a predictive processing framework my priors are updated, and yep, there's Sasha in the window and there's Yvonne in the window, instead of that type of feed-forward processing, we actually get to think about, philosophically, what the organism is doing in parsing its sensory stream, while also retaining the realism of the sensory stream.

Okay, any thoughts on that relatively long abstract?

It's not clear now.

Okay, that's the only hope.

Yeah, I would say the same.

Good. If we're not reducing our uncertainty about this paper and their claims, aims, and goals, then we're not really in it.

So now that we have a bit of a better sense of where they want to come from and where they want to go, let's look at the roadmap. This is going to be the path that we take from the introduction of the problem and its relevance, through the foundations, to the claims that are synthetic. They're going to begin in the introduction by talking about the idea of neural representation and the contents of neural representation, as well as a little bit of a joke, the "discontents": discontented means to be unhappy, so there are some issues with neural representations, and they're going to be discontent with them by talking about the problems with their contents. A classical joke.

Then: the faces of representationalism, realism and non-realism. And so here are those two dueling perspectives on representing, and it's another joke: two faces of the same coin. So if you're going to have representation, it's kind of sitting, just like a coin, on its edge. On one face is the semantics: what does the representation mean? You're thinking of an elephant; it means elephant. And then there's the deflationary face.
Oh, it's just a neural state, and there's actually no meaning attached to it, even though it is a representation of an elephant.

Then they're going to take a turn towards anti-realism. They're going to talk about some of the deficiencies of the realist view, and there are a lot of ways to approach this; we'll look at the specifics, but one example that I always come back to is that most people don't perceive their blind spot, or they perceive that they have color vision across their whole field of vision. And so yes, the world really does have wavelengths of different colors coming from all over our visual field, but that's not really what you're perceiving with your retina; it really is being generated by your brain. So it's really an organism-centered fiction, which is why they're going to come back to that idea later. Then they're going to address how representations are discussed under the free energy principle as a paradigm.

The second section is where they turn more heartily to the free energy principle, and they talk about active inference again as a corollary process theory to the free energy principle as a paradigm. Specifically, they're going to bridge two topics here: they're going to move from information geometry to the physics of phenotypes. Those might be two very new terms, or two new ways of thinking about information and geometry. A lot of times people think of information theory as one type of science, and then geometry, you know, triangles and squares, as something different. So what is information geometry, and how is it related to not just phenotypes, but the physics of phenotypes? First they go through state spaces, non-equilibrium dynamics, and bears, oh my; again, hilarious writing. They're going to talk about Markov blankets and the dynamics of living systems.
So again, a Markov blanket is like a set of nodes you can think of around a system: the ideal, perfectly clean cut that carves nature at the joint between the internal and the external system. Going into the internal system is sense data, and leaving the internal states are action states. We'll look more at that soon.

Then they're going to return to information geometry and the physics of not just phenotypes but sentient systems, and that's going to have to do with the far-from-equilibrium nature of living systems. It's why the rock, the object, is not alive, but a person, who has far-from-equilibrium thermodynamic states, is considered to be living. Then they talk about the phenotype as a tale of two densities, again trying to loop as many puns as possible into their titles. And they close out section two with a discussion of living models, which is a mechanistic view on goal-directed probabilistic inference and decision-making under the free energy principle. Everything is under the free energy principle, because that's our working model. Here they're going to try to go from a mechanistic view, so they want to talk about the mechanisms, which are how things are actually happening, but they also want to include some things that are not traditionally included in mechanism: for example, goal-directedness, probabilistic inference, and even decision-making.

Then section three is where they bring together their deflationary and fictionalist account of neural representation. First they're going to propose their deflationary account of neural representation. In other words, brains are just doing brain stuff, nothing more, nothing less.
That's what Friston calls deflationary, in the sense of: don't inflate your hopes that there's going to be some magical secret between the neurons in the brain. The brain is just doing brain stuff, and it's tremendously complex, and it involves multiple cell types, but it's just that, nothing more and nothing less. Then they're going to talk about fictionalism and how models come into play in scientific practice.

And then in the closing section four, they finally reach their synthesis, which is a variational semantics: from generative models to deflated content. So now they're going to try to deflate meaning itself. When I say that my name is Daniel, there are so many layers to interpret that on: it's a string of audio bits; now it's a string of audio bits being transmitted through the computer, through the live stream. It's exactly the signal I'm sending, nothing more and nothing less. So how are we going to get this kind of deflationary realism about the world, which is so critical for an unbiased, or at least transparent, scientific approach? We want that unbiasedness, that objectiveness, yet because we're semantic entities, we know that we're going to have to engage with the world in terms of meaning. So how are we going to square this circle: realism, and what is just there, with meaning and all the power that meaning-based approaches can bring to the table?

Then they're going to provide the deflationary account of content under the free energy principle, and move from a computational theory proper to a formal semantics. And then they'll close out with the idea of phenotypic representations and ontologies.

Okay, big roadmap: a lot of stops, a lot of gas stations, a lot of restaurants. Any thoughts on that? Okay. Good. Sasha? All right.
Yeah, that was really helpful, to talk through the roadmap and see where we end up.

Okay. So, sorry for the wall of text, but I wanted to put one of their key claims up verbatim. They write, and this is pretty early on in the paper, at the bottom of page three in the PDF, though the page numbers may differ in a future version: "There are several well accepted constraints for the appropriateness of representational explanations." So we're talking about representations, and now we want to think about the desiderata: what do we want? What would a great theory of representation look like? And they're going to play by the rules of the people who study representation. They're going to say, this is the community of people who study representation, and they've decided that these are the rubrics; if you can ace all of these tests, then you're going to get an A-plus for your theory.

First, it should cohere broadly with actual practices used in computational cognitive science research. That means if you have some theory of representation that only applies on a different planet, or outside of an fMRI machine, you're probably not going to get too far. We want this representation model to be grounded in the real things that people are doing in computational cognitive science, and also psychiatry.

Two, we want to allow for misrepresentation, which means that the representation has to be able to get it wrong. If you say, think of an elephant, and then whatever the person thinks of, you say, right, well, that's their version of an elephant, they started thinking about lunch, but they can't be wrong because no representation counts as incorrect, that's not going to be super useful.
We wouldn't be able to actually parse the world into useful and not-useful pieces of information. And that's a lot like saying: if you can't have a model that gives you the wrong answer, it can't give you the right answer. If it just gives you a one no matter what you put into it, it's not useful; just because it might be right twice a day, like a broken clock, doesn't mean that it's actually a useful model.

Three, a representational theory should provide a principled method for attributing determinate contents to specific states or structures internal to the system. So we're looking for some sort of bridge, a way to go from a specific brain state, like, your amygdala is having a lot of blood flow and your prefrontal cortex isn't, which would be a very coarse-level representation, or you could imagine a one-million-dimensional representation, this neuron is doing that and this glia is doing that. We want to go from that specific state representation, which might look like a vector or some other map, to something that is internal and/or external to the system, like thirsty, or hungry, or thinking of an elephant.

Finally, four, we want this representational theory to be naturalistic, meaning that the account of semantic content does not itself appeal to semantic terms when defining how the representational capacity is realized by the physical system, on pain of circularity in reasoning. Now, it's pretty rich for the free energy principle to call someone else out on circularity of reasoning, and it's a complex claim to parse what is being said with number four, because in some deep sense, since our axioms are chosen without evidence, by definition our paradigms are non-explanatory.
They can't be explained. The question of what counts as a circular argument and what doesn't is not perfectly well defined, but I think what they're getting at with number four is: we want the semantic content of "elephant" to be an elephant, not just the word elephant. We don't want to enter an infinite recursion where language explains language, and then that explains another kind of language, and we just stay within this realm of abstracted representations but never actually come back, even in a sort of unexpected way, to the reality of the naturalistic situation, which is that as evolved entities, we have actually evolved to represent certain external patterns internally. If it doesn't come back to that naturalism, then it's just going to fly off into space.

Any thoughts on these four claims about what would make a good representational theory?

I found that last statement quite ironic, and circular, because in a way we're always using semantics to describe the systems that are semantic, but also maybe eventually linked to a natural state, for, you know, a real elephant, if you will. And I think this paragraph was the most helpful in trying to parse out the kinds of things that they're looking for and the arguments that they're trying to make. I think part two is quite interesting, and they do get into it later in the paper, but to me that's always the most interesting part of the free energy principle to address.

Cool. And here, just to put it up on the screen one more time.
This is from page four in the accepted manuscript version, where they return to the two-fold aim of the paper: first, to determine whether or not neural representations play an explanatory role, or any role at all, within this increasingly popular framework for the study of action and cognition, the variational free energy principle; and second, to ask whether a theory of neural representation is warranted under the principle. So this is related to those first two things that we put up on the screen, the first two sentences of the abstract, the big questions.

Okay, so let's jump into the figures, and then we'll work our way down the paper and sort of landmark at the different figures, so that we understand how they're visually as well as conceptually representing these different topics along the roadmap. At any point, if you have a question about a figure, or the text specifically, or just a related thought that comes to mind, bring it up.

So the first figure, with the caption copied below, is a Markov blanket, and this is very similar to other images that we've seen, but this is a foundational term for the free energy principle, this idea of a Markov blanket. We've talked about Markov blankets before; would either of you like to take a shot at what a Markov blanket might mean? Not the total definition, but even just one aspect of it that you think is interesting or relevant to remember right now.

Okay, let me try. One has its internal states, and sensation and action, and also, in the space around, there are external states that one doesn't know about, and doesn't know how they will impact one's own internal states. There is a border, where the external states are outside the border, and the internal states and the actions and sensations are inside it.

Perfect. So here, the purple node mu is the system of interest.
This is the thing in focus; this is like the internal state. And then the blanket goes one layer out from the internal states, in two different directions. It extends upstream towards the green s's: those are the sensory states coming in to the internal state through the hidden external causes, the orange nodes. And then leaving the Markov blanket are the a's, the action states, and action states basically result from the internal states' dynamics as well as other sensory causes. So here, that s on the left side is going directly to action.

When I was thinking about what it looks like for a sense to enter the Markov blanket but not enter the model, I was thinking of, you know, having a splint on the arm. The overall outcome action, moving your arm, is going to be related to your internal states as well as some external influence that wasn't really part of you, but is part of your integrated unit of sense and action. And so that external cause, the world having a constraint on your arm, ends up changing your action states, and therefore it makes sense to include its sensory input and its action-influencing ability within the Markov blanket, even though it's not within your epithelium.

That is actually why there's such a natural transition from this kind of Markov blanket framework to thinking about extended cognition, the exocortex, ecological cognition, distributed cognition: it's the realization that even one person's internal states, for example the brain, are in feedback with external tools like the computer. They're external from the point of view of the epithelium, but from the perspective of the Markov blanket, they're internal to the same self-regulatory cybernetic system that's grasping at these external causes and trying to influence them.

So I'm going to return to the paper so that we can scroll through and see where this
figure comes up, but I just wanted to get that first Markov blanket in. So here we are in section 1.1, with realism and non-realism. And here's where they talk about realism about neural representation; what is real, finally we get to find out. This view combines two positions. Ontologically, neural representations really exist, which means that they're physically something that's instantiated or happening in the brain. And then epistemologically, that's how we come to know about things, representations are scientifically useful as well. So realism, they're saying, has an ontological component, which is that they're real, they're actually physically instantiated, and an epistemological component, which is that scientists should find this realism useful. Non-realist positions don't have a complete overlap with that, and that's a little bit of the confusion: it's not like the realists and the non-realists are totally opposed. They do share some aspects of their models, so it's not like everything realism is, non-realism isn't. So what does that mean? Non-realists are either agnostic about the reality of neural representation, and that's kind of just like saying they don't care what's real in the brain, or they'll never know, or we couldn't measure it anyway, there are a lot of ways you could be agnostic about anything, or they explicitly reject the assumption. And so the non-realist is kind of like the agnostic; the anti-realist is like the strong atheist. They say neural representations do not exist.
So that's the strong claim, anti-realism versus realism, like theism and atheism, and then non-realists are kind of like the utilitarian, the agnostic, in the middle. And they continue talking through several of these different philosophical ideas, these schools of thought, like eliminativism, which is anti-realist, and is basically saying the construct should be eliminated, so we shouldn't even use it. And that reminds me of the idea of a genotype-phenotype map, or mapping. Some people say yes, there is a map and we can find out about it. Other people say no, there's no map, but it's as if it exists, so you can talk about it; it's useful, but it doesn't really exist. And then there are other people who say you should eliminate the concept of a genotype-phenotype map, because it simply misleads scientists. So that's eliminativism. And then instrumentalism, or fictionalism, is non-realism, and is saying that representations are useful. So that is harking back to the work of Dennett, for example, with the intentional stance, as well as a lot of other folks' research. Just speeding through this, because there are a lot of sections: they're going to talk about parallel distributed processing, they're going to talk about some of the issues of realism, and we're just going to move through this, because as for the specific critiques of realism, I feel like they're going to end up moving beyond them. So if you're curious, you can read more into their specific problems with realism. Then they're going to return to representations under the free energy principle, and that closes out section one. So I think we could then go to figure two, to look at the free energy principle. So here is where we get to figure two, and where figure two is conveying the basic outline of active inference and the free energy principle. So would either of you two like to take a stab? What do you see in this figure, either on the left side or on the right side? Sure.
I'll take a stab. Yeah, so this is kind of taking that model of the Markov blanket in figure one and trying to fit it to some of the parameters of an organism, you know, with the image of a brain to remind us what we're doing here, and to explicitly, and mathematically, say what the external and internal states are representing, and that they're linked by the sensory states and active states. Now sensory states are explicitly going into the internal states, and active states are explicitly going out into the external states, and then there's feedback between some of the components in this system. Very nice. Yvonne, any thoughts on this? As was said at the start, we can have a mathematical expression of the states, and there are such expressions in the math here. So if we have the different states, well, yes, we can describe in math how we propose to have it. Yes. It also has a little symbol noted down, how is it called, omega? Okay. And yeah, what is it? I just forget it. It's random noise. Yeah. So that is kind of like an error term in each of these four. So this is adding another level of complexity to this figure. So here we had hidden external states, that are orange; sensory states coming in, that are green; purple internal states; blue action states going out; and then influence on those latent causes in the world, in orange at the bottom. Now on the left side we have the internal states in blue, that's still the mu variable, and it contains some internal function f of mu and b, and so b is the blanket, and it's basically saying mu is going to be a function of mu and b. So the state of the internal state is a function of itself and the blanket states, and there's a noise term.
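This pattern, each type of state flowing according to a function of only the states its partition allows, plus a noise term omega, holds for all four state types, and can be sketched numerically. A minimal sketch follows; the linear flows, coefficients, and noise scale are all illustrative choices of mine, not equations from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 1000

# One scalar state per partition: external eta, sensory s, active a, internal mu.
x = np.zeros(4)  # [eta, s, a, mu]

def flows(eta, s, a, mu):
    # Illustrative linear flows respecting the Markov blanket partition:
    # internal and active states never read eta directly, and external and
    # sensory states never read mu directly; coupling passes through s and a.
    d_eta = -1.0 * eta + 0.5 * a    # f_eta(eta, b): world driven by action
    d_s   = -1.0 * s   + 0.8 * eta  # f_s(eta, b): sensation driven by the world
    d_a   = -1.0 * a   + 0.8 * mu   # f_a(mu, b): action driven by internal states
    d_mu  = -1.0 * mu  + 0.5 * s    # f_mu(mu, b): internal states driven by sensation
    return np.array([d_eta, d_s, d_a, d_mu])

traj = [x.copy()]
for _ in range(n_steps):
    omega = rng.normal(0.0, 0.05, size=4)        # the omega noise terms
    x = x + flows(*x) * dt + omega * np.sqrt(dt)  # Euler-Maruyama step
    traj.append(x.copy())

traj = np.array(traj)
print(traj.shape)  # (1001, 4)
```

The only point of the sketch is the dependency structure: no flow reads across the blanket directly, which is exactly the conditional-independence statement the figure encodes.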
Okay, then we go to the active states. Active states are a function of mu and b; they're a function of the internal state and the blanket states, that makes sense, plus another noise term. And then you can continue working your way around, and you can see that external states are a function of external states and the blanket plus a noise term, and sensory states are a function of external states and the blanket states plus a noise term. So this gives us a few nice partitionings. First, we can talk about doing inference on the variables that we care about by separating them away from a noise term. And then also, you can already see that sensory states don't rely on internal states or action states, other than the ones that are in the blanket. And so this type of partitioning is moving us towards a very clean way of talking about these different factors and how they influence each other. Let's do one more figure before we return to the paper. So here is figure three. This is going to be another representation of the free energy principle and active inference. So here at the top of the figure we have this magical G, expected free energy. And with expected free energy, now we're looking inside the brain, the inference organ that's calculating expected free energy, and we'll come back to what it is mathematically. And the brain, or the internal state of the system, guides policy selection: pi for policy, like a p. And policy selection is intermediate between the model of the world and the actual action selection. So for example, if I see the ball coming out in front of me, and my policy is that I'm going to run and try to catch the ball, that's like halfway between action, which is actually the legs moving and running to catch the ball, and the pure internal generative model, which is like, wow, if I ran as fast as I could, I could catch that ball. In between is the policy choice. And so that's what the brain is trying to converge upon good
solutions for, and that's why the free energy principle is adjacent to cybernetics, control theory, and a lot of other areas that actually are action oriented. Because in the end, it's not just that we go from model to action; it goes model to policy to realized action. And it's a nuance, but it turns out to simplify the system and the model a lot. Then we have the prior beliefs about the initial hidden states in D, so on the left side of the image, and time is going from left to right here. The left side is our priors about the initial hidden state, like, I think it's dark outside. Then the actual hidden states are these n's, and the actual hidden states are going to move through a likelihood mapping A and result in sensory observations. So initially, in D, I believe it's dark outside, and let's just imagine that this n1 actually is dark. So then there's a mapping where, if it's dark, no photons appear, and if it's light, there's a bunch of photons. And that results... I don't know about n, I don't know about A, but I do know about s, which is: I don't get any photons. So that's consistent with it being dark, because there's a mapping between it being light and photons, and it being dark and no photons. Sometimes it seems like we're explaining things multiple times and saying it the exact same way, but it turns out that by observing this structure here, we can see how these beliefs change through time. So then B are the state transition probabilities. And so then, by observing the lack of photons in s1, that is confirming my belief that it's dark outside. And so we continue iterating through time, where basically the state transitions are happening in the external world, and the external world keeps on emitting sense data to us through this likelihood mapping, and what the internal state is doing is minimizing free energy and expected free energy across the whole thing.
We're going to come to adaptive policy selection. Okay, any thoughts on this sort of third representation of Markov blankets slash free energy, Sasha? Yeah, that was really useful to walk through, because I really wasn't sure which direction to approach this figure from, and the way you described it makes it very clear. It reminds me a lot of experimental design, where you go from one set of hidden states, and then you observe something, and then you change one variable and you observe something else. But it really depends on your understanding of the relationship between, you know, photons and darkness, for example, and that helps you interpret your next observation. So I think it just highlights that we're always doing that: we're always trying to link the observed state to what's causing it. Kind of hypothesis testing, all the time. Yep, and this is a discrete model; at the bottom it says the form of this generative model is basically discrete. But let's just flesh out that experimental metaphor. So let's just say the initial hidden state is that a bacterium of interest is present or not in a sample. If it's present, there's going to be DNA corresponding to this bacterium; if it's not, there won't be DNA. That's the likelihood mapping A. And so initially we don't know, and so we do an experiment: we perturb the system by adding some chemicals, by doing a PCR reaction, and then we get the sensory data output. And if it's inconclusive, then we can update our policy until we get a sensory observation that's consistent with what is happening inside the test tube.
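The belief update being described, a prior D, a likelihood mapping A, and transition probabilities B, can be written out in a few lines. This is a minimal sketch with made-up numbers for the dark/photons example, not the paper's own model:

```python
import numpy as np

# Hidden states: 0 = dark, 1 = light. Observations: 0 = no photons, 1 = photons.
D = np.array([0.7, 0.3])            # prior belief D: probably dark (illustrative)
A = np.array([[0.9, 0.1],           # A[obs, state] = P(obs | state):
              [0.1, 0.9]])          # dark mostly emits no photons, light mostly photons
B = np.array([[0.95, 0.05],         # B[next, current] = P(next | current):
              [0.05, 0.95]])        # darkness tends to persist between time steps

def update(prior, obs):
    """Bayesian update of the state belief given one observation."""
    posterior = A[obs] * prior
    return posterior / posterior.sum()

belief = update(D, obs=0)           # s1: no photons observed
print(belief)                       # belief in "dark" rises above the prior

belief = B @ belief                 # propagate the belief one step via B
belief = update(belief, obs=0)      # s2: still no photons; belief in "dark" grows
print(belief)
```

Note how the observation "no photons" never equals the hidden state "dark"; it only moves the belief toward dark through the mapping A, which is exactly the point made with the PCR example.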
And that's why we have positive and negative controls. Because if you just run one experiment and you get a negative result, well, on one hand it might be that there are no bacteria there. But also the likelihood mapping A makes it so that, well, there could be bacteria, but it maps onto a negative result because the PCR didn't work. And so that's what comes back to the mind of the scientist. The scientist has to look, in a sense, at the sensory data, at the measurements, the empirically measured results, but actually look beyond them, to reduce uncertainty about something that's real, like whether or not the bacteria are in that sample. Because it's not just, did the test come back positive or negative; that's going to be misleading, because that treats the s as if it were the hidden state. But it's not the hidden state; it's the outcome as mapped by A from the hidden state, given your experimental setup, all within a policy situation, or an experimental research program, that's dictated by minimization of expected free energy. Any thoughts on that, Yvonne? Or we'll jump back to the paper. Cool. So in section two... well, let me just look at what the next figure is. Okay, so in a few minutes we'll jump to the paper. So this is going to be an introduction to state spaces and non-equilibrium dynamics, though I'm not quite sure where the bears come into play. This is a key sentence: living organisms maintain phenotypic integrity and resist the tendency towards thermodynamic equilibrium with their ambient surroundings. Full stop. This is what it means to be alive. This is Schrödinger's 1944 What is Life?. He says life is going to be locally thermodynamically organizational; it's going to have negentropic characteristics.
It's going to locally organize matter and energy, not violating the laws of physics, because in the end the global system is disordered, but still the living system creates local order. That's why you need to eat a few thousand calories of food to build one pound of muscle, for example. And usually the fluctuation theorems that generalize the second law of thermodynamics would tend towards dissipation, which is why things tend to break down unless they're being actively repaired and rejuvenated, like living systems are. Living systems maintain their phenotypic integrity, so they keep their leg at the same length, or they keep their brain alive, by bounding the entropy, which is also the dispersion or spread of their constituent states. And so it's like, my arm exists in an organized state right now, where the bone is bone and the muscle is muscle and the fat is fat. But you could imagine that if that were to break down, so that the carbons just went floating free, then the organism would cease to exist. So organisms exist by virtue of, and by merit of, keeping their physical phenotypic representations coherent; if it's not going to be a coherent phenotypic representation,
it's gone. To get a better handle on this, they introduce two formal notions: the state space and non-equilibrium steady states. There's of course a lot that could be discussed about state spaces, but a state space is a formalism that allows us to describe the time evolution of a system by depicting a trajectory through that space. So a physical state space could be like your latitude and longitude, and then the state space trajectory is your path through latitude and longitude. But you could also have a state space with other axes that aren't latitude and longitude, for example a neural state space, and that's bringing us back to this question of neural representations. The probability density that describes the system at non-equilibrium steady state, aka its phenotypic states, is aptly called the non-equilibrium steady-state density. So my arm appears to be at an equilibrium, in the sense that it's not growing or shrinking, so why is that called a non-equilibrium steady state? Well, it's a steady state because, in one respect, the phenotype is unchanging. But that unchangingness is actually an active compromise being reached between the forces of order and disorder, and in that sense the total system is not in equilibrium. The equilibrium system is the rock at the bottom of the hill; it's the water when it's just at the bottom of the river, with nowhere to go from there. But when you have these two counter-regulatory processes of organization and disorganization, you end up with this very dynamic location in state space that's always being re-optimized. And the universe gives us the disorder for free.
That's easy; it's easy to imagine how things fall apart. But what life succeeds at doing is maintaining order even though there is disorder. And then this idea that there's a non-equilibrium steady-state density is consistent with far-from-equilibrium thermodynamics. And this is related, again, to Schrödinger's question, which is that living systems have to maintain order somehow; they have to maintain far-from-equilibrium states. The equilibrium state is: all of your carbons are unlinked and blasted off as CO2 into the atmosphere. That's going to be the most well-mixed thermodynamic state, but we're not that, we're phenotypically coherent. So what are living systems, and what allows them to stay organized despite that dissipative force? That is where the Markov blanket comes into play. It turns out that by using the Markov blanket formalism, at first just as a scientific heuristic, but we'll see it's actually deeper than that, we can make clean distinctions between organized internal states, which are doing inference on the external states of the world, and active states, which actually move the organism into a better realm of state space, like a better temperature for you to exist within. And so we already talked about this partitioning, but now we're thinking about it in terms of the internal states being far from equilibrium. Any thoughts or questions on that? Okay, so here's where we're going to get to the information geometry, as well as the physics of sentient systems. So this is a big topic.
We're not going to have 50 hours to unpack it, but this is why we're part of the journal club series, because these are the ideas that we want to come back to again and again. The description of a system in terms of movements in its internal phase space is the system's intrinsic information geometry, closely related to measure theory and statistical thermodynamics. So regular geometry would be in the phase space of the piece of paper, you know, x and y coordinates; geometry is like, a triangle has a certain shape. And then you can imagine there's a different state space, like non-Euclidean geometry, where the triangle has different rules, but you can still talk about geometry, and maybe even things like triangles, except the angles don't need to sum to 180. Because it's about the total system's representation, about the rules of the system, and about the geometry of the state space the system exists within; all those are really well linked. So information geometry is asking how we can do geometry not on x and y on the piece of paper, not on latitude and longitude with a map, but on the informational internal states. So it's a big idea and there's a lot to it. But how does that ring with either of you two? Go ahead, Sasha. Yeah, it sounds like kind of a different way to phrase something like topology, or mapping relationships between different ideas or different people.
So thinking of, like, network analysis on these different concepts. Yep. And here's that physical metaphor: in Euclidean geometry, if you move 100 meters, then you'll move 100 meters; there's a one-to-one relationship between the physical movement and the Euclidean informational distance. However, if you're on a sphere and you keep moving in the same direction, you can end up with a displacement of zero, back where you started. And so that's kind of like a modulus operation in mathematics. As opposed to saying, well, 10 plus 10 equals 20; okay, but what if you work modulo 10? Then 10 goes to zero and 20 goes to zero. So those numbers, zero, 10, and 20, are different on the number line, in the Euclidean world, but there's another system, another state space representation, where zero, 10, and 20 are actually the same point. In that state space, their informational distance is zero. Another interesting thing about bringing it to the concept of information is that the informational divergence between two distributions is always non-negative. And that's kind of like how the distance between two things is always going to be a positive number. You could say, on this number line, you know, 10 minus 30 is negative 20; but if you have something on the 30-yard line and something on the 10-yard line of a football field, the distance between them is positive. So the number line is where we get things like negative numbers, but within very well-specified systems, information geometry also can tell us a lot more than other approaches.
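The claim that informational divergence never goes negative is easy to check numerically. A quick sketch using the KL divergence between discrete distributions (the distributions here are made up for illustration):

```python
import numpy as np

def kl_divergence(p, q):
    """KL divergence D(p || q) between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.3, 0.4, 0.3])

print(kl_divergence(p, q))  # positive
print(kl_divergence(p, p))  # zero: a distribution is at "distance" zero from itself

# Unlike subtraction on the number line, the divergence never comes out negative,
# no matter which pair of distributions we draw:
rng = np.random.default_rng(1)
for _ in range(1000):
    a = rng.dirichlet(np.ones(4))
    b = rng.dirichlet(np.ones(4))
    assert kl_divergence(a, b) >= 0.0
```

One caveat worth keeping in mind: the KL divergence is not symmetric, so it isn't a true distance, which is part of why information geometry works with the Fisher metric rather than KL directly.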
We'll have too much time to go into the equations of information geometry Where they use the fissure information metric as a sufficient statistic that corresponds to the expected thermodynamic and external states and internal states Um, but this is let's put a flag here so that we know that we can come back to talk about this when we have Some more colleagues on the conversation Then they get to phenotype as a tale of these densities And they say that the phenotype is a free energy principle is a story about two probability densities The first is this non equilibrium steady state density itself, which is the statistical structure of the phenotype So that's that realist. That's the deflationary realism And the second in the sense that this is the phenotype like your arm is a representation of the arm It's the it is itself The second is the variational density Which is parameterized by the internal states of the system And so that's where we're going to already see that we can reach into that semantic world I'm going to just um skip through this part so that we can get to the The synthesis of the paper and the last figures So here's where we get to figure three So now we're seeing that through this pretty elegant and minimalist structure When the organism minimizes its expected free energy Which is always going to be a positive number because the informational divergence between any two distributions is positive If you minimize g You're going to come to the best pi the best policy selection And that policy selection is going to use input from the emitted sensory states to reduce uncertainty about hidden causes in the world D And This is where they get to the questions about content of representation And so here is where they're going to return to those The desired features of neural representation But here is where they approach it the mathematical contents of a neural representation and where they propose a Based upon egan 2019 a computational theory proper for 
representation. And so these are the aspects of representation that they're going to pursue. First, the mathematical function that's being realized; that's the mathematical component. Second, the specific algorithms, or the process theory, by which the system actually computes that. So it's one thing to say, well, the ants are doing, you know, an NP-hard optimization problem; it's another thing to ask, what are the rules that each ant is engaged in, such that that optimization is the outcome? Third are the questions about representational structures. So that's not, what is the ant colony doing, or what does one ant do, but what is the representation across the whole colony, through time, as it solves the problem? Fourth, what are the computational processes that are defined over representations? So the representation is that distribution of ants in the colony, or the distribution of active neural states in the brain, and the computational process defined over representations is what links them together so that they accomplish the mathematical function. And then lastly is this ecological component, which is really so critical. It's the recognition that the internal dynamics of the system, one through four, the math, the algorithm, the representation, the computational process, are all introspective. And we're never going to understand function, or adaptiveness, without also understanding, ecologically, what is the context that the algorithm is occurring within. Any thoughts on that from either of you two? Cool.
I thought this was really a nice way to come from a really well-grounded definition of all these different components of representation and then, in the end, ground it in the real system, which is the ecosystem. Okay. Now they're going to add this heuristic of cognitive content, and here we can look at figure four. So in figure four, this is the deflationary account of the contents of representation. So remember, deflationary just means: it's not a big deal, there's nothing extra to it. So here is how they're going to go about deflating representations. And they're going to split it into two parts, which aids in the deflation. It's kind of like, you know, when you get a package and there are the air bubbles and it's partitioned; you need to deflate each one of those partitions. So they're going to separate the world into two kinds of components, cognitive and computational, and then they're going to deflate those two aspects. So computationally, we're going to draw off of those Egan terms that we just went through, one through five. So there we have mathematical functions, algorithms, representations, computational processes, and ecology. Those are all computational, in the sense that, whatever the ecological component is, there's no magic in the ecology, and there's no magic in the computation. And then over on the left side here, the cognitive component; there's actually just one air bubble to deflate. Cognition is just about intention. The intentional gloss, is what they call it here. The cognitive content is taken, wait for that little thing to disappear, it is taken in an anti-realist sense, as a type of explanatory gloss that only has an explanatory, instrumentalist, or, remember, fictionalist role: it's the interpretation given to the neural representation by the experimenter. Like, oh, when this set of neurons is activated, the mouse wants to drink. And so you could say, computationally, here's the mathematics.
Here's the algorithm by which it's happening, the representation, the computation, and the ecology, with the water. Those are all just what they are. And then there's the cognitive element, which we've now isolated. So that's really where we've gotten to: we can isolate the cognitive part and say, this is the intentional gloss. This is the mouse thinking about being thirsty, experiencing being thirsty, wanting water. So that's a really nice partitioning, and it's still within the traditional frameworks of computational neuroscience as well as cognitive science. Any thoughts on figure four? Cool. Then we go back to the paper. We hear a little bit more about fictionalism in the philosophy of science, and suffice to say there's a long and storied history of fictionalism in science, because science is always making models that are as if they're true about the world. But here's where they say they're used by scientists to explain intentional behavior; they're models used by scientists. Again, this is kind of a circular definition. It's like saying cognitive neuroscience is whatever cognitive neuroscientists do.
It's like, yes, it's true, and then we can also look beyond that. And they discuss this idea of empirical adequacy, being "true enough," and that's a key idea. And it's kind of like when people say, let the data speak, or, the data are clear. Well, the data were clear on epicycles, and people were making more and more epicycles and getting more and more empirical adequacy, and it was true enough for the people who were using them, but it was wrong. So empirical adequacy always has to be interpreted in a really holistic way, where even if the model precision is extremely high, even if the model precision is increasing, you still could be just locally overfitting and totally barking up the wrong tree. Okay, finally, near the end of our discussion, we reach this variational semantics, where we're going to actually bring all these things together. We have the generative models, active inference, and the free energy principle on one side; we have semantics; and then we have phenotype. And we're going to say the phenotype is just the phenotype, the brain's just doing brain stuff, the arm is just an arm, a frog is just a frog, and semantics are just that, they're just meaning. But how are we going to bridge these two worlds? How are we going to go from just neural representation to just meaning? Well, first they bring up the viewpoint that the deflationary view of representation downplays the role of the fifth, ecological component of the computational theory proper. So their claim here is that deflationary representationalism from a computational perspective, which is like mainstream science, discusses the first four points a lot, but ignores the ecological component. So I certainly resonate with this critique, and that's why, earlier, I said this is one of the strengths of the free energy principle: it reminds us, even when we're drilling down into the thirst center of the mouse, how could you ever really extract that from the ecological context, where there's water or no water,
or the body states, where it's dehydrated or not? And how could you even extract that from the evolutionary history of water being available, or being associated with other sensory cues? Here's where they connect it to the free energy principle. They're going to say the formalism underwriting the free energy principle licenses, allows us to have, a crucial observation, which is that the mathematical structures and processes are defined over a state space, remember information geometry, and implicitly over an associated belief space, or statistical manifold. So a manifold here serves as a lower-dimensional representation of a higher-dimensional pattern. If you have some sort of high-dimensional pattern but you project it into low dimensions, it might exist on a line in two dimensions even though it was a cluster in a hundred dimensions; that's sort of the idea of these manifold techniques. And another mapping, to bring it one level closer to the paper: there's a manifold that's a continuum from thirsty to not thirsty, and then there are so many neural states, I mean, we have billions of neurons and other cells; maybe individual neural states are never replicated, ever, because each one is so unique and the brain is always changing. But you can project down onto this internal manifold, which is thirsty to not thirsty, across your life, even as the number of neurons you have changes, or the neurons fundamentally change themselves. So we're going to use this idea that the environment is important, and that the internal states are mapping onto an internal statistical manifold, specifically an action-oriented one, and then we're going to bring that to semantics. Any thoughts before we move forward? Okay, cool. So, it is often noted that one does not obtain semantic content from mere systematic co-variation. So that's sort of the Helen Keller initial learning approach, which was, she famously learned by having water on her hand and then having sign language spell out water on
her hand. So that was an instantaneous co-variation between a symbol string of sign language and a sensory experience of water. And that was like the Rosetta Stone for linking an external event and sensory sequence to symbols; that's lexical semantics, delivered as sign language. But that's not how we get all of our semantic content. Under the free energy principle, there is an implicit semantics at play that is baked into the system's dynamics by evolution. So there's something about sugar that tastes good; there's something about warmth, at the right level, that feels good. That is something that evolution has endowed us with, because we're not simply randomly assembled combinations of phenotypes; we've been ecologically selected to experience a semantic quality to different types of sensory experiences. And that's not to say that they're not plastic, or that they can't be learned, or that nuance can't be added. But there is, potentially, within this trajectory of states on the internal manifold, the hope of a formal semantics that could fall out of the system's dynamics and therefore be characterized mathematically. The hope for a formal semantics that arises from phenotype representation is there because the world actually does have order, and stimuli actually do have valence in the world. Any thoughts on that? Okay. Here's where we take stock, in section 4.2. We have retained the general description of representational content from the deflationary account. So in other words, we got what we wanted out of the deflationary account, which is to say brains are just brains, ecosystems are just ecosystems, meaning is just meaning. We deflated everything else away; now we're left with just what's there. We now will use this deflationary model to specify a computational theory proper that leads to a formal semantics via the free energy principle.
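The manifold-projection idea from a moment ago, many high-dimensional neural states collapsing onto a low-dimensional continuum like thirsty-to-not-thirsty, can be sketched with nothing fancier than PCA. Everything below is synthetic and illustrative; the "neurons" and the latent "thirst" variable are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "neural" data: 200 samples of 100-dimensional activity that
# secretly varies along a single latent "thirst" axis, plus noise.
n_samples, n_neurons = 200, 100
thirst = rng.uniform(0.0, 1.0, size=n_samples)     # latent variable
loading = rng.normal(0.0, 1.0, size=n_neurons)     # how each neuron reflects thirst
activity = np.outer(thirst, loading) + rng.normal(0.0, 0.05, (n_samples, n_neurons))

# PCA via SVD: project the 100-D activity onto its first principal component.
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[0]                      # the 1-D internal manifold

# The 1-D projection tracks the latent thirst variable (up to sign),
# even though no single neuron does so cleanly.
r = np.corrcoef(projection, thirst)[0, 1]
print(abs(r))
```

The design point: the recoverable structure lives in the population trajectory, not in any one unit, which is the sense in which the manifold, not the individual neural state, carries the representation.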
So let's return here. Here's the fallacy people were making: they would define the mathematics, algorithms, representations, and computational processes of a system, the bacterium or the mouse or a simulation, but the focus was always on the system of interest. Without specifying the ecology, it was like building a castle on nothing, because the internal representations, the computational components, were not grounded in the evolutionary or ecological niche of function. Therefore the cognitive component, the intentional gloss, was left open-ended. It's like: okay, this neuron is activated when the mouse is drinking water, so maybe that means the mouse is thirsty. The missing piece was the ecological component.

Then they say figure four. I believe they mean figure five, or maybe they are referring to the previous figure four, but here is where they're going to reformat Egan's scheme, which is here on the screen, in terms of the paradigm of the free energy principle. This is the big synthesis piece, almost at the end of our discussion, where we've really reached their synthesis. They're saying: look, the common framework is computational dynamics. It's not computational components and cognitive components that you split up; it's all about computational dynamics. That's as deflationary as you can get. We separated things out and deflated them into a cognitive component, and then guess what: we realized that by having an ecological component, we don't even need intention.
Intention was: oh, well, maybe if this neuron is activated, the mouse is thirsty. Now you don't need to worry about that. You can just say the mouse is an ecological agent that has to reduce its uncertainty about how thirsty it is, and using that partitioning of the Fisher information equations that we passed over earlier, we can formalize the ecological relationship of the mouse to its surroundings. By doing so we get intentional-like claims: the mouse wants to drink water; it's trying to open up the water bottle; it's running as fast as it can to get water. These intentional-like claims come out of the ecological component, vis-a-vis dual information geometry.

And here is where they unpack each of Egan's components within the free energy principle. The mathematical function being enacted is the free energy functional, the minimization of expected free energy given the generative model of the organism. The algorithm the organism uses to carry this out is a stochastic variational gradient descent on free energy (we'll show one little thing on that after this). The representational structures are the internal states of the model, reflected by these lower-dimensional manifolds. The computational process by which these representations morph and evolve is active inference. And we're grounding it all in the piece that is most lacking from mainstream science, the ecological component, specifically formalized here as dual information geometry.

Okay, that's the big claim. Any thoughts on that? How does that strike both of you after hearing that whole lead-up?

I think this figure sums it up really nicely and draws the parallels that encompass the setup of the aims, so it's really nice. And I agree that the ecological component is lacking, and is quite important in actually understanding what the system is doing.

Cool. Yvonne, anything? Okay, so let's return to the paper.
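Here's the one little thing on variational gradient descent on free energy. This is a minimal one-dimensional sketch, not the paper's formulation: the generative model, all the numbers, and the point-belief simplification are made up for illustration, and it's the deterministic variant (the "stochastic" version would add noise to the gradient steps). The agent holds a belief mu about a hidden cause and descends the gradient of the free energy, here just precision-weighted prediction errors against the observation and the prior.

```python
# Generative model (all values hypothetical):
#   hidden cause x ~ Normal(prior_mean, sigma_prior^2)
#   observation o ~ Normal(x, sigma_obs^2)
# Free energy of a point belief mu (up to an additive constant):
#   F(mu) = (o - mu)^2 / (2*sigma_obs^2) + (mu - prior_mean)^2 / (2*sigma_prior^2)

prior_mean, sigma_prior = 0.0, 1.0
sigma_obs = 0.5
o = 1.2                      # a single noisy observation

mu = prior_mean              # start the belief at the prior
lr = 0.05                    # gradient-descent step size
for _ in range(500):
    # dF/dmu: observation prediction error plus prior prediction error,
    # each weighted by its precision (inverse variance)
    dF = (mu - o) / sigma_obs**2 + (mu - prior_mean) / sigma_prior**2
    mu -= lr * dF            # descend the free-energy gradient

# For this Gaussian toy the exact posterior mean is the precision-weighted
# average of observation and prior, so we can check the descent converged.
post = (o / sigma_obs**2 + prior_mean / sigma_prior**2) / (
    1 / sigma_obs**2 + 1 / sigma_prior**2
)
print(round(mu, 3), round(post, 3))
```

The descent settles on the belief that balances "what I saw" against "what I expected", weighted by how reliable each is, which is the sense in which minimizing free energy performs approximate Bayesian inference.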
Much hangs philosophically on what it means to represent a target domain, in terms of the relationship between mental states (mind) and the physical states that realize them (brain). So now we're talking about the mind-brain mapping. Mind is the intentional gloss: "the mind wants this." The question has always been how the mind is related to the brain. We're not coming at this from the question of how consciousness is generated in the brain, or the other areas that deal with mind-brain duality; we're specifically using representationalism as the wedge. There's a representation of wanting water, and then there's the brain state, and what we've been all about is exploring the connection between wanting water and actual neural firing patterns, plus the non-neural things happening, like hormone levels and all the other components that aren't even attempted to be described by neural network models or other frameworks.

Now, this is where they philosophically tie it all together. And again, in this Friston et al. 2020 paper there's a lot more philosophy to delve into, and the citations will be papers we explore in future discussions. Of the philosophical perspectives that relate mental and physical states, mind and brain, theirs is most consonant with functionalism, and therefore with the multiple realizability it entails. Functionalism is the view that the features characterizing mental states are not intrinsic features of a state, but rather its function, which is the input-output mapping between that state and other states of the system. So here's where they tie it to a lot of earlier ideas in cognitive science and other areas.

Then they return to some of those earlier questions, like: can you get a misrepresentation? And they talk about future research where you could objectively ask how well the sensory data conform to hypotheses about what caused them.
So that would open the door to misrepresentation. It's not a binary state of correct versus incorrect representation; rather, you'd be able to look at two different representations and ask which one is more accurate. If you have two maps of the subway, and one causes you no surprises when you're using it, while the other has outdated stops and the schedule is off, it's pretty clear that the second map is the less adequate representation. It's a worse model, and that gives us this key feature of representations, being able to misrepresent, which is important for being able to act well. But we're also not going to fall into the absolutist trap where there's a best representation or an only representation.

And that's actually the last paragraph. They basically conclude: yes, under the free energy principle there are structures internal to an organism that are the bearers of semantic content. We can specify these internal structures in terms of deflationary computational theories (figure five), and by virtue of the double information geometries in play under the free energy principle, that's the thermodynamic, or thermo-informational, partitioning of internal and external states by Markov blankets, we make this dual-information-geometry ecological partitioning. Then we can actually use the mathematical account to reach an implicit semantics, which is the set of hypotheses about underlying causal factors that the system is parsing in order to make sense of its sensory stream.

This might be seen as vindicating the structural representationalist account. In other words, if it's deflationary, are we vindicating the idea that it's just neurons? But there's a critical twist: the structures that bear content are not merely neural representations but indeed phenotypic representations, for it is the internal states of an organism, given the Markovian partition, that bear this content.
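The subway-map example of grading representations can be made concrete with a few lines. This is a made-up illustration, not the paper's formalism: each "map" is just a probability assignment over events, and the better representation is the one that accumulates less surprisal (negative log probability) over what actually happens.

```python
import math

def surprisal(model_probs, observations):
    """Total -log p of the observed events under a model."""
    return sum(-math.log(model_probs[obs]) for obs in observations)

# Events actually experienced on the ride (hypothetical data).
ride = ["arrives_on_time"] * 8 + ["arrives_late"] * 2

# Two internal "maps" of the same subway system.
up_to_date_map = {"arrives_on_time": 0.8, "arrives_late": 0.2}
outdated_map   = {"arrives_on_time": 0.3, "arrives_late": 0.7}

s_new = surprisal(up_to_date_map, ride)
s_old = surprisal(outdated_map, ride)
print(s_new < s_old)   # the up-to-date map surprises us less
```

Neither map is "the" correct representation; one is just measurably less surprising given the same stream of experience, which is all the grading a deflationary account needs.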
That phenotypic-representation point is a really amazing way to close the paper, because it's saying: yes, neurons are involved, but depending on what level the cognitive process is playing out at, that co-defines where the Markov blanket is. For example, if multiple people are on a team together and the team is engaged in collective cognition, then the team's Markov blanket is the one bearing the representational content, so a multi-party dialogue is actually being represented at the group level. There could also be smaller Markov blankets drawn around each person, each brain region, each neuron, each organelle. Everything is always fractal, hierarchical, embedded, but for utility we can draw out these partitions as they've suggested.

So that is the end of the paper, and it's almost the end of our time. I think it would be good to have any closing thoughts before we shut it down, because this was really an awesome discussion. So maybe: what's one thing you took away from the paper, or one thing you're still wondering about at the end of it?

I just understand that I need to read it again, one more time. I had your presentation and I went through the paper with it, but I still need to dig deeper.

Perfect, we can always dig deeper. Sasha?

Yeah, that was really useful, to walk through this whole paper. Incredibly dense, but I like what you said about the closing statement: free energy is the lens to look through, and by placing the Markov blanket at different places in the system, it can be scientifically useful for us to understand the system, just as it's useful to the organism that's using that representation.

Well said, well said. So that concludes our discussion. Thanks a lot for listening, everybody who's listening live or in replay.
This has been Team Com podcast number two, and we are always open to participants and to suggestions about how we could discuss these ideas better, and how we could bring them to people in a more exciting and participatory way. So get in touch with us. Thanks again for listening, and I'm going to