Hello, everyone. Welcome to the Active Inference live stream. This is Act-Inf Stream 5.2. It is October 6th, 2020. First-time listeners or not, welcome to the team. We are an experiment in online team communication and learning related to Active Inference. You can find us on Twitter at @inferenceactive. You can email us at active inference at gmail.com, on Keybase at our public team, or on YouTube. This is a recorded and archived live stream, so please provide us feedback, live participants as well as viewers, so that we can improve our work. All backgrounds and perspectives are welcome here. As far as video etiquette goes, please mute if there's noise in your background, and raise your hand so that we can use the stack to hear from everyone. As mentioned, we are here in Act-Inf Stream 5.2, and here's a little bit about how today is going to go. First, we're going to have our regular warm-up section, where we'll do some intros and check in with a few questions. Last time, in Act-Inf Stream 5.1, we did a pretty broad overview and summary of the paper: we went through the figures, the goals, the abstract. Today, in the follow-up discussion, part two, we're going to go through, first off, any questions or comments that people want to address. Then, after we've exhausted those, or if we're excited to get to the other topics, we'll go right ahead. We will return to the debate about internalism versus externalism, because that is in some ways what the broader scholarly tradition that the paper is involved in is about. We'll talk about Markov blankets. We'll talk about what it means to be internal and what it means to be generative. We'll talk about changing one's mind. We'll talk about free energy minimization. We'll talk about some of the limits of cognition and science. And then we'll hopefully return to the figures, especially figure six and mechanistic differences in different multi-scale systems.
So here we are in the introduction section. For this intro section, please introduce yourself and your location: just say a quick hello and a short introduction if you would like, and then pass it to someone else. I'll go first. I'm Daniel Friedman. I'm in Davis right now, and I will pass it to Stephen. Hello, I'm Stephen. I'm in Toronto right now, and very excited about being here today. I'm going to pass it to Alex. Hello, I'm Alex. Today I'm in Moscow, as usual. Hello everybody. And I pass it to Iván. Hi, my name is Iván. I'm from Moscow also, and I pass it to Alejandra. Hello, hello everybody. My name is Alejandra, and, also as usual, I'm here in Mexico. And I pass it to Sasha. Hi, good morning everybody. I am, as usual, in Davis, California, and I'm looking forward to this conversation. And I pass it to Maxwell. Hey everyone, I'm Maxwell Ramstead. I'm happy to be joining this conversation from sunny Montreal in Canada, and I'll pass it to Shannon. Hi everyone, I'm Shannon, and I'm currently in South Dakota. Awesome. Did that get through everyone? Yep. A nice, small, tight group today. Thanks everyone for coming in. So let's go through our warm-up questions, and we would love to hear from all of you on these. The first warm-up question is: what got you excited for today's discussion? What brought you to the follow-up discussion? And people can raise their hand. I'll start. What got me excited was that second phase of interacting with the paper, which is, beyond just a first pass at what was said in the paper and what the figures represented, to tie it to deeper questions and to deeper ideas. Yeah, what really excited me was a lot of stuff that came up last week, so that was really useful, and I got some new insights. But I saw a potential way that it was moving towards this multi-method potential to bring together different ways of thinking and practices.
So I'm interested in how the different levels can come together in different ways and privilege different types of scales in ways which are actually quite practical and useful. So that was really interesting. Cool, and we'll definitely talk about how the different scales come to bear. If anyone else wants to raise their hand, go ahead; otherwise I'll just go to the second discussion question, which is: if at all, how have you updated your beliefs on multi-scale integration since the last conversation? I think that was a question from Sasha, so maybe I'll go to her, while other people are raising their hand, while she unmutes. Yeah, that was a bit of a cheeky question, but I think we got into a lot of interesting topics last week. And really getting into the philosophy of science, that was really interesting for me, and it's a topic I hope to explore, especially in thinking about different levels of how to create the appropriate system for studying cognition and where to draw that boundary, something I'm interested in figuring out in my own work studying neuroscience and development, and just really enjoying hearing everyone's perspective as well. So my update was that philosophy of science is really critical to the practicality of experimental design. Yeah, in many of these papers and ideas, there's us experiencing the system. We're apparently only experiencing our individual cognition, but then what does it really mean to be experiencing individual cognition? Anyone else want to share a thought on that? And also I'll just put up the third question, which is: what would be something that you'd like to clarify or explore in today's conversation? So, Shannon?
I'm really curious about this idea that the mind is skull-bound while also ascribing cognition to extended systems that extend beyond the mind, especially being in a neuroscience lab; what I am interested in studying and explaining is how the brain is involved with some cognitive process, while also understanding and not minimizing the role that the body or the environment plays. Agreed. Within the Markov blanket of this conversation that we're having, there's some sort of distributed cognition at play, but who or what is experiencing that, and what are the dynamics of that type of cognition? Does anyone else have any thoughts on these questions, or should we just start going through some of these quotes and ideas? Okay, I'm just going to go into it. So here we are with multi-scale integration beyond internalism and externalism, a 2019 paper; last week in 5.1, we went through it pretty broadly. Today we're going to follow up on some key quotations and some key ideas, and go through the figures again, in this sort of spiraling back at a higher level, wanting to understand what is on the table and what are the implications of what is being discussed. The goals of the paper, just to rehearse them, were to make explicit this description of the boundaries of cognitive systems as being multi-scale. The boundaries of cognitive systems are nested and multiple, and the implication of this, to the authors, is that cognition has no fixed or essential boundaries. And where that sort of took me, as far as the slides go, was: what are the implications of taking multi-scale integration seriously, and how do the free energy principle and free energy minimization fit into all of this? So first, is there anything that people just want to raise as something to address or think about before we jump into some of these topics? If not, just feel free to write it down and we'll return to it soon. Okay, for the first topic... yes, Stephen, go ahead.
Just one thing that I think might be good to mention that came up: this idea of how we privilege certain levels, and maybe then end up just stuck at that level, like a psychological level or a behavioral level or whatever. There's an implication in this, I think, to think about the levels, which is maybe a bit of an abstract thing across all the levels, and how do we actually privilege, in some sort of intentional way, the levels that are going to be looked at as the kind of active inference engines that are most relevant. So that's something that came up for me. Yeah, I would actually like to follow up and ask you: we talk about this privileged level of analysis, and recall Denis Noble's paper. What does it mean to privilege one level of analysis? Is that something that is just a priori to be avoided, or do we want to privilege larger systems, smaller systems? How do we give each player in the system its due respect? And where does that fit in with this vocabulary of privileging one level of analysis? You know, I was wondering whether that's where that active, dynamic kind of structure that's at play might come in. I was thinking that there's this sense that you've got the overall dynamics of the system, but there are key places where there's something. I think Shannon mentioned this last week, in terms of the certain levels which are most relevant in terms of how the dynamics are yielding some systemic kind of act, which is appropriate at the particular spatial, temporal sort of speeds or distances which are relevant for the study or the experiment or the practice that is being looked at. So I suppose in some ways you have to start to come back from the other direction at some point, assuming you're moving more and more towards practice, and say, okay, what is the kind of intentionality of the research?
Cool, and we'll return to this idea about whether we're looking at the dynamics or at the causal structure of the system, and whether those are the same thing. Here we're going to talk for a little bit about the big philosophical question, which is internalism versus externalism. So I'll start with just a definition of the topic from Encyclopedia.com, and here it's about epistemology, but there are analogous dichotomies in other areas. They write: internalism in epistemology is a thesis about the nature of epistemic normativity, or the sort of normativity that is involved in the evaluation of cognition. Specifically, internalists claim that the epistemically normative status of a belief is entirely determined by factors that are relevantly internal to the believer's perspective on things. Perhaps scroll down, as Shannon said. By contrast, externalists in epistemology deny this. The externalist says that the epistemic status of a belief is not entirely determined by factors that are internal to the believer's perspective. And that's with the Spy vs. Spy image, to represent that they're sort of playing off of each other; they don't totally disagree, these two viewpoints. We can see, with this graphic from a 2010 paper by Lenay and Steiner, how they represented externalism and internalism. And I just thought this was an interesting figure because it showed to what extent these two schools of thought basically agree. They basically agree that there is an organism that is part of a coupling device, like the body, let's just say. And through actions, there's action on the environment; the environment provides sensations to the organism, and that's the action-perception loop. So that's sort of what we agree on between these two sides. Where is there disagreement? The two main things on this figure where we can see disagreement are, first off, that what the externalist sees as happening inside of the organism is, to a large extent, strategy.
Whereas the internalist sees the organism as making representations about the world. And then another difference that's reflected on this figure is that perception, the wavy line, is in the internalist's view inside of the organism; it's skull-bound, it's happening downstream of the representation. Whereas for the externalist, the perception is coming from the interaction between the organism's strategies and the environmental causality. And that was something I was sort of just thinking: doesn't perception still happen inside, even if there's an important role for perceptions outside? And so, just looking at this graphic, or just thinking about where you stand at this moment, would you say that you're an internalist and/or an externalist? Just looking at these graphics, does one of them seem more or less consonant with your stance on how ecologically embedded systems are in the world? Or does there seem to be something that's missing from one of these sides or the other? If 'and/or' is a real choice and we can pick 'and', then definitely have both. What makes you want to take 'and' in this divide? I feel like I want to answer that question at the end of our discussion. Perfect, I might as well. It's why I threw in the hand, and why the Spy vs. Spy are shaking hands: the whole topic and the discussion, in some senses, is, remember, we're moving beyond internalism and externalism. This may or may not be a false dichotomy. And then the question would be: why has this false dichotomy, if it is one, attracted so much attention, or tension, and what is going to be the framework that steps beyond internalism and externalism and potentially provides a satisfying resolution to all of the similarities and the differences that these two models present? So again, the action-perception loop: it doesn't seem like there's broad disagreement on those topics.
However, there seem to be a few key issues that are up for debate, such as whether perception happens purely internally or whether perception is related to feedback with the environment, and then whether what organisms are doing is best thought of as representational or as strategic. Stephen? I think also, with those diagrams, there is that piece of the diagram where they kind of make out that there's direct causality, that they've got a direct relationship to the environment. You know, because in some ways your actions, if we see it as really you've got access to your proprioceptive signals, and your sensations are kind of your ongoing stream of data, that link to the environment, I'd say, is a lot less... I think both of these two approaches rely on quite an idea that my action is touching the environment, as opposed to my action being a kind of set of proprioceptive signals which I am inferring something from, and my sensations being the environment; you know, this idea that I see the world, as opposed to having to piece together a whole load of non-linear data over time. So I suppose there is maybe a challenge to that part of the two ways of looking at externalism and internalism that's been done in the past. Shannon? I suppose that's true too. So you said there's a sense in which I touch the world, Stephen, and I think phenomenologically there's a sense that I touch and I experience the world; but if I'm thinking about the brain, the thing that the neuron perceives is not the world, it's the signals that have been translated into electrical and chemical impulses, and that's where, if I really want to know how the brain works, I've got to be internalist about it; but if I want to know how I experience the world, then there's a lot more externalism that you can allow in. Good points, and we'll also return later on in the conversation to what kind of experiments would be differentiating.
How will we reduce our uncertainty about the strengths and weaknesses of these two options? Maxwell? I just want to point out that our agenda in this paper was sort of to dispel the isms of internalism and externalism, right? So, I mean, the idea is, well, cognitive science from its inception until basically the late 80s was an internalist program, and then externalist approaches started appearing in the late 80s and 90s that emphasized things like embodiment, the fact that cognition is grounded in the operations of a body. And then you saw the appearance of theses in philosophy like the extended mind thesis, or the extended cognition thesis, which is a strong metaphysical claim that says, well, actually the realisers of cognitive processes spill over into the environment, so that if you're going to draw a boundary around the cognitive system, you have to include external components, environmental components, as realisers of these processes. And the kind of hard-line externalist position, basically the traditional enactivist position, is the equal partners principle, which essentially says cognition is always necessarily a loop between internal structures, like the brain, and action, and external structures, like the ones in the environment that we're interacting with. And our view on that was that any essentialist position about the boundaries of cognition, which is to say any position that says, okay, well, we can define a necessary and sufficient set of conditions such that we can identify whether this or that is a bit of cognition and whether it's internal or external to the organism's boundary, that any essentialist position of that sort is going to be wrong-headed, that there's not a kind of one-size-fits-all way to cut up systems. So basically we're making two kinds of ontological claims about the boundaries of cognition: that they're multiple and that they're nested. They're multiple in the sense that, depending
on what you want to explain, the relevant boundaries might be around a cluster of cells in the brain, or it might be the brain itself, or some region of the brain, or maybe it's the brain-body system; so it depends. And then the epistemological point, which is precisely that it depends on our explanatory interests. So, you know, it may be that if you want to explain, for example, daydreaming or some other kind of phenomenon, well, you're mostly going to be appealing to brain processes; but if you want to explain sport performance, then this loop thing is going to be the most important. And importantly, the point of the article is that the free energy principle gives us a reason to think that we shouldn't take these strong essentialist positions; there's a way around them, and a way of being more ecumenical and flexible about where we situate these boundaries. Very nice, I agree, and hopefully that's where this discussion will take people: beyond the dogma, towards the specifics of the situation. I also was thinking about dreaming versus action. Are these internalist versus externalist stances challenged by actionless experience like a dream, or are they going to couch a dream as some type of experience? And, you know what, enough with that; let's just look for a framework that will help us deal with the specifics of the situation. So, speaking of internal and external and how they're treated under the free energy principle, let's talk a little bit about Markov blankets. Three kinds of Markov blankets are shown on this slide. There's a quote in the paper that the Markov blankets are a result of the system's dynamics, and two twin questions that I wanted to raise and think about were: what if we compare two systems that have the same components but different dynamics, and, the sort of inverse of that question, two systems that have different components but the same dynamics? Especially if we think about the Markov boundary shielding internal states, so
black box around some level of analysis, all we're able to see are the emitted states. So the dynamics of the emitted states may be similar even though there are two different underlying realities: two different computers, different hardware, but able to run the same program at the same speed. And then the other side of the question would be: okay, you can have the same computer with the same underlying componentry, but it might be involved in multiple different kinds of dynamics. So how does that fit with the Markov blanket? What is the Markov blanket really tracking? Is it tracking the inputs and the outputs of the objective system? Is it tracking the inputs and the outputs of us measuring the system? So one thing to highlight, just before we really get started, is that I think too much emphasis has been placed in the literature, in part, I mean, in my own work also, on the Markov blanket on its own. Markov blankets are very common and explanatorily uninteresting just on their own, for the most part. So, I mean, a Markov blanket: you have a set of random variables, and you want to see whether this one and that one are independent; well, the Markov blanket is the set of random variables that renders them independent, in the sense that, conditioned on this other set of variables, these two are statistically independent. So that's cool, but there are Markov blankets everywhere; you can think of the present as a Markov blanket between the past and the future, for example. It's a very general thing. In the context that we're talking about here, Markov blankets get their explanatory purchase when you combine them with the other kind of moving parts of the free energy principle, which are the non-equilibrium steady state slash generative model thing. So basically the Markov blanket tells you which set of states kind of belong to the system and which don't, and then the non-equilibrium steady-state density tells you, well, what is the set of correlated values that these states can have such that the phenotype is maintained. And it's the two together that really get you the explanatory power of the free energy principle: where the system is and what it is, the where kind of answered by the Markov blanket thing, and the what answered by the generative model aspect of it. And the Markov blanket itself, in the free energy principle stuff, you get from sparsity constraints on the flow. I know it sounds technical, but it basically means that all of the states of the system are flowing; they're changing dynamically, they're moving around in the state space, the system is moving around in the state space. And when we say sparsity conditions on the flow, it's that some variables don't affect the change of other variables, so that the rate of change of one variable with respect to some other set of variables is a relevant kind of way of measuring the influence of one set of variables on another. And the general idea is: if, conditioned on some set of variables, the rate of change of one variable relative to the other is zero, then you can say that there's an independence, statistically, right? So the way that the blankets are constructed is by analyzing the dependency structure of the flow and then saying, okay, well, the dynamics themselves seem to carve out these kinds of independencies. So it's a dynamical notion, and I'm emphasizing that because the dynamics themselves are premised on the whole generative model aspect of it, so it all kind of moves together. Yeah, and you don't get the free energy stuff just out of the Markov blanket alone. Shannon?
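That definition can be made concrete with a minimal, hypothetical sketch (the graph and its node names are illustrative, not from the paper): in a directed graphical model, a node's Markov blanket is its parents, its children, and its children's other parents, and conditioned on the blanket the node is independent of everything else. In a toy action-perception graph, the blanket of the internal states comes out as exactly the sensory and active states:

```python
# Hypothetical sketch: computing a Markov blanket in a directed graphical model.
# blanket(node) = parents(node) | children(node) | co-parents of its children.

def markov_blanket(node, parents):
    """`parents` maps each node to the set of nodes it directly depends on."""
    blanket = set(parents.get(node, ()))                      # parents
    children = {c for c, ps in parents.items() if node in ps}
    blanket |= children                                       # children
    for c in children:                                        # co-parents
        blanket |= set(parents[c]) - {node}
    return blanket

# Toy action-perception graph (one step of the loop, unrolled):
# external -> sensory -> internal -> active -> next external state.
graph = {
    "sensory": {"external"},
    "internal": {"sensory"},
    "active": {"internal"},
    "external_next": {"active", "external"},
}

print(markov_blanket("internal", graph))  # {'sensory', 'active'}
```

The blanket of the internal node is the sensory and active states, which mirrors the partition used in the free energy principle literature: internal states are shielded from external ones by sensation and action.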
Yeah, following up on that, I think there's a way to rephrase these two questions. The first question, comparing systems with the same components but different dynamics, is asking how the same set of components, if you give them a different set of initial conditions, can exhibit completely different behaviors, or come to embody different phenotypes in a certain niche. And then the second question, what if they have different components but the same dynamics, is a question of multiple realizability, where you could have the same sort of computation or something enacted with different components. And these questions together, I think, highlight the importance, if you have this Markov blanket over time, sort of a system of interest over time, of the way that system is structured and the function between, we could say, the different nodes underneath the Markov blanket; that's more important than the actual physical stuff it's made out of, maybe. And, back to the first question, the role of the environment in affecting the internal states of that system. Cool, thank you; nicely stated, and tied back to these other questions about multiple realizability. The follow-up question to that was just: in which ways is the blanket tracking just the pure observations of the system versus the causal relationships? And you gave a great example about how, depending on the initial conditions, the conditional dependencies in the systems may be very different, and so in those cases the Markov blankets are different. But in some senses this is an implication of saying that the boundaries of cognition are context-dependent, because it shouldn't surprise us that the Markov blanket, and we will get to the generative model and internal states soon, but it shouldn't surprise us that the boundaries change, because there are times when the cognitive process is happening in different causal ways or at different spatial scales.
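The first point, that the same components under different initial conditions can exhibit completely different behaviors, can be illustrated with a deliberately simple, hypothetical sketch (the logistic map is not discussed in the paper): two copies of the same dynamical rule, started a hair apart, quickly end up on very different trajectories.

```python
# One rule, two nearly identical initial conditions: in the chaotic
# regime (r = 3.9) the tiny initial difference is amplified at every
# step, so "same components" does not imply "same dynamics".

def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-7          # same system, almost the same start
max_gap = 0.0
for _ in range(40):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)                  # far larger than the initial 1e-7 gap
```

The inverse point, multiple realizability, is the observation that a completely different substrate (different hardware, different `logistic` implementation) would trace out exactly the same trajectory given the same rule and start.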
Stephen? I think also this might be where that challenge with the environment comes in, because we talk about what's a mental process or what's an environmental process, but what are the constituents that are being put together, in terms of the morphology or the actual, you know, am I engaging my muscles in my hand, and how the hand is constructed; it's my feeling of my phenotype about how I should act, it's not my knowing of my phenotype, it's my actual biological engineering that is going to enable certain things to happen, you know, unlike a chimpanzee or something like that; there are certain things I can just do. So there's this middle bit, which, you know, we're able to adapt and adjust through our cognition, what gets brought online between the environment and the senses. So that's the bit that really complicates things, but obviously seems to be really relevant, though I think it sometimes gets lost when things get more theoretical. Yep, let's look at this very provocative quote: in this sense, cognitive science might be understood as the study of generative models; it is the business of modeling the correlational or causal structure of actions and observations of the organism. The generative model, then, is not the vehicle of something like content or mutual information; instead, it is the tool that we use to study cognitive systems, as an explanatory model, and indeed, perhaps more speculatively, the guide or path living systems entail and follow to stay alive, as control systems. So in both of these quotes there are some pretty interesting notions from a cognitive science perspective, and it also hints at this way that the Markov blanket, or the active inference idea, is not only how organisms are in their environment, and so that's referring to this path that living systems follow to stay alive, in other words, that that's what the animals are doing or that's what the bacteria are doing, but also the tool that we use to study cognitive
systems. And I thought it was pretty interesting that it was first and foremost and unambiguously presented as a tool that we use to study systems, and then, perhaps more speculatively, a guide that living systems follow to stay alive. So, Maxwell, yeah, I'm curious just about that breakdown. Yeah, to give you some context: this paper on multi-scale integration, the Answering Schrödinger's Question paper, and A Tale of Two Densities were all originally the same paper, which we split up into different lumps, with Answering Schrödinger's Question being the more kind of theoretical, mathematical statement, and these two other papers being kind of philosophical explorations of the consequences of all this. So over the past few months it's become clearer to me what's going on here, and I have a paper on this now, which has been published in Entropy with Inês Hipólito and Karl Friston; it's called Is the Free Energy Principle a Formal Theory of Semantics?, and yes is the answer to that question. And basically I've been playing around with the idea that there are kind of two levels of instrumentalism at play here. So instrumentalism, broadly speaking, in the philosophy of science, is the view that, well, the work of science happens by leveraging scientific models and getting them to do some interesting explanatory work, explaining the variance in our data and stuff like that. So an instrumentalist would claim that scientific models aren't literal descriptions of the universe, but a kind of useful tool that we as scientists use to explain the main causal features of the phenomena around us. So there's a first level, then, at which you can be instrumentalist about the free energy principle, and I think that on a good day, when I wake up on the right side of the bed, I'm an instrumentalist about scientific models; science is about comparing models of data with theoretical and formal models, ultimately, and these can be read in a very, very broad way. So on a bad day, I'm a realist, or I have a realist
bent, and I really think that these models are getting at something like the true causal structure of reality; but that's, I think, scientific realism maybe, or whatever. But anyway, so there's this kind of meta level, this philosophy-of-science kind of instrumentalism level, and I think that there's a debate there; some of my close friends and collaborators think that the FEP is finally a literal theory of how brains process information, so it's not just a metaphor or an as-if thing, but this is really how it happens. All this is independent from what the free energy principle itself says. And so the more controversial claim, but, I think, increasingly the only way to really interpret what the math does coherently, is that the free energy principle is in a sense a statement about how organisms are able to exploit, use, or leverage the statistical structure of their bodies in motion to generate patterns of adaptive behavior. From that point of view, regardless of what your position is at the meta level, the philosophy-of-science level, at the theoretical level what the FEP is telling you is that organisms are exploiting a model, effectively, that they are a statistical model that they embody, in order to generate patterns of adaptive behavior. So, that's to clarify. Increasingly I think that double instrumentalism, and I'm all about this nested stuff, right, so instrumentalism nested within instrumentalism, is probably the right way to approach these questions; but definitely, for sure, at the theory level, the free energy principle is an instrumentalist theory of cognition, regardless of whether we're realists about the ontology that it might entail.
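One way to make "organisms embody a statistical model" concrete at the toy level is the variational free energy quantity itself. The model and numbers below are purely illustrative, not from the paper: for a discrete generative model p(o, s) = p(o | s) p(s), the free energy F = E_q[ln q(s) - ln p(o, s)] equals KL[q(s) || p(s | o)] - ln p(o), so it is minimized exactly when the embodied guess q matches the true posterior.

```python
# Hypothetical discrete sketch of variational free energy under a toy
# generative model p(o, s) = p(o | s) p(s). All numbers are illustrative.
import numpy as np

prior = np.array([0.7, 0.3])           # p(s): two hidden states
likelihood = np.array([[0.8, 0.1],     # p(o | s): rows = observations,
                       [0.2, 0.9]])    #           columns = hidden states

def free_energy(q, o):
    """F = E_q[ln q(s) - ln p(o, s)] for observation index o."""
    joint = likelihood[o] * prior      # p(o, s) at the observed o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

o = 0
posterior = likelihood[o] * prior
posterior = posterior / posterior.sum()        # exact p(s | o)
uniform = np.array([0.5, 0.5])                 # a worse "guess"

# The exact posterior achieves the lower free energy, F = -ln p(o):
print(free_energy(posterior, o) <= free_energy(uniform, o))  # True
```

At the posterior the free energy collapses to -ln p(o), the surprisal of the observation, which is the sense in which minimizing free energy bounds surprise.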
Thanks for that clarification, and kind of reading between the citations there. I also really agree with that. It's sort of like, well, we've tuned into the idea that it really is about our model of the world, and about us in relationship with the world, and now people just say, well, how is the world really? And the free energy principle says, well, it's really about the feedback between the organism and the environment, and we still want something to pin down, that's really how it is. But that's the instrumentalism of science: whether you use optimal foraging theory or an economic theory or some other theory, that doesn't change what the system is doing. The bacterium doesn't know, doesn't care, what version of which theoretical framework you're using, or, you know, what papers you have or haven't read; it's doing what it does. And the free energy principle is saying that what it does is, as you had just described, leveraging the body in motion, though I also have some questions about where thought fits into that. And that's really it: it's really feedback, understanding our environment to guide skillful action, and you can't just cut the loop there and say, well, but then what is really happening? It's really what's happening, and the real implications of that are as you described. So if there are no other comments on this slide, I'm going to go to the next few points about what is internal and what is generative. First, there was a very interesting quote in the latter part of the paper, where they wrote: first, we take issue with the claim that, under the free energy principle, the generative model is something internal to the organism. So they're dissenting from the opinion that the generative model is something that is merely internal, i.e.
that the generative model comprises neuronal vehicles, or any other vehicles, I guess, like cars. Rather, the generative model is a mathematical construct that explains how the quantities embodied by the system's architecture change to transcribe, that is, update, beliefs about the causes of the system's sensory observations. What should be at stake in the debate between the internalists and externalists is the status of the guess that the organism embodies, namely the posterior beliefs encoded by internal states, and whether this guess does or does not constitute the limits of cognition, understood as the avoidance of surprise, or informational homeostasis. So my question here was: if the internal states are not generative models, or, another way to say it, if the generative model is not something that's just internal to the organism, what are the internal states? And the other side of that question is, well, then what or where is this generative model playing out? So, I can answer that; I think it's a lot simpler than it might seem initially. In machine learning, if you have a bunch of variables, some of which are hidden, some of which are observable, and you write a joint probability distribution over all of those variables, then that joint probability distribution, or density, is called a generative model. It's called generative because, by representing all of the statistical relations between all of the different variables as this big joint density, you're effectively able to generate the outcomes, or the data, that you would expect, contingent on these causal relations actually holding between all of the different variables that you're talking about. So, just from a mathematical point of view, this joint density is never represented explicitly anywhere in the brain. In the free energy, active inference approaches, in all of these more formal models, what you'll typically see is one big probability distribution written on the left 
hand side of an equation, you know, P over all of the states that you're trying to look at, often these will be hidden states and observations and so on, and then it is equal to some big product of likelihoods and priors. So what's actually encoded in the brain has to do with these likelihoods and priors. So the idea is that there is a generative model at play, but it's only at play in the dynamics. The generative model is sort of like the wave of falling dominoes: the wave itself is only present in the motion or dynamics of the domino pile falling. So what we're proposing here, and we'll be discussing this a bit more in the next weeks with A Tale of Two Densities, the paper which is specifically about this, is that what body states actually encode are posterior estimates, posterior estimates of the value of states. Like, your body is basically your best ongoing guess as to what's causing your sensory states. And yeah, the generative model itself is sort of like the point of reference for the dynamics. So, if you think about it, what is this joint probability distribution thing? Well, it's essentially a kind of surface that the system is moving around over, or equivalently, that describes the probability of finding the system in a given state at random if you're just sampling it. And what it's telling you is the allowable co-variations between all of the values of the variables that make up the system, such that the phenotype is maintained. So, speaking a bit loosely, perhaps this generative model thing is the phenotype of the organism, in this kind of broad statistical sense of harnessing all of the possible combinations of states that are compatible with the continued existence of the organism. So I mean, it's a bit hard to grasp, I'll admit, but at least the mathematics of it kind of looks like that. Thanks for that explanation, and it's why we're here: to really dive into some of these key issues. I think, just reflecting on the pieces 
that I'm taking away from this is, one, that the free energy principle is describing how scientists, or investigators of any kind, investigate systems, and then, as was put perhaps more speculatively, what is actually happening in the system. But in some sense that's secondary, because it's so important to be clear about how we're approaching the system. So instead of saying it's an optimal forager, it's like, no, I'm studying the ant's foraging within an optimal foraging framework; no one can take that away from me. And then whether the ant is doing optimal foraging or some other thing is almost a secondary question, because it's so critical to be clear about our perspective. And the second piece is this internal and external debate, or whatever it is, tension between those two: to be clear about where the generative model is and what it is and isn't. And also it's just a map and territory distinction. So let's look at a few more quotes from this paper. Here's a quote: the generative model is a statistical construct, that is, made by people, that transcribes the expected sensory causal regularities in the process generating sensory states. The generative model is used to model the set of viable phenotypic statistical relations, such as preferences and action policies, that must be brought forth by the organism in active inference; in short, a model of a viable state of being for the organism. Through active inference, internal states are tuned, and this tuning changes its posterior belief and hence the organism's best guess about what caused its sensations, which usually include its own actions. In other words, a generative model can be used to understand how organisms are able to track or infer their own behavior. And one key part there is that the best guess doesn't mean the one that's closest to reality per se; it means the most action-oriented or evolutionarily relevant guess. So the best guess for a loud noise isn't just doing inference on the location of the noise or on the 
exact type of object that caused it; the best guess, when you're doing inference on policy, is actually about how the body should change its dynamics. And then the question that I raised here is just: how does a multi-scale generative model work, in the brain or in a social group? And can there be a generative model without action, or behavior, or external states being influenced so heavily? Maybe these are things that we can return to later if no one has any comments now. So yeah, Stephen: one thing I've been interested in is, if you're looking at someone's generative model, what's the best place to look at it? In a way, if I was to look at someone's physical states and how they're compatible with their existence in the environment, is that as good a read on their generative model as looking in the brain? If the states that I'm in constitute my generative model's best guess of how I should be in the world, in a way that is a window into what my organism-level cognition is, I don't know whether thinking is the right term. I agree with that, I think. Well, if you look at a lot of the active inference modeling that's based on the partially observable Markov decision process implementation, for the most part it's a behavioral modeling framework: what you're trying to do when you're tuning your generative model is to get your behavioral profile to align with some data that you're generating, so behavior on some kind of task of some sort. Yeah, and the connection to the neuro stuff is a bit more elaborate; we rely on sort of auxiliary hypotheses and other work that suggests that, oh, these precisions are being estimated here, these prediction errors are being generated here, and so on. So, if you want to, you can use a fitted generative model to make predictions about the kind of neural substrates that you would expect to be engaged in this or that task, and that can help guide fMRI and EEG experimental design to probe into specific brain 
regions that you think might be involved. So you get into this kind of circular experimental design thing if you want to do the brain aspect, but actually these techniques are more easily used for behavioral modeling, because you don't have to make all these assumptions about brain structure. Yeah, just one comment on there, and this was clarified by reading the SPM (statistical parametric mapping) textbook: rather than just doing descriptive statistics on the very complicated data structure of neuroimaging, what was being done in the SPM approach was that a generative model of brain states was being used as a generative model for the data. And so again, that's that instrumentalism of the scientific kind. Just from the purely statistical, how-am-I-going-to-write-this-paper perspective, having a generative model, having the brain regions connected in a generative model that generates data given certain hypotheses about how brain regions are connected, turns out to be a lot more tractable to study, and it allows you to do model comparisons in a very direct way. And then maybe the activity of what those brain regions are doing is itself making a generative model, or a Cartesian theater, or something like that. It'd be wild and amazing if true, and also wild and amazing if the BOLD, the blood oxygen level dependent, signal captured that. But at the very least, and again this is sort of retreating to the scientific instrumentalism, it's a way that we can investigate the changes in the dynamics of those systems. Absolutely. I mean, this is the duality of techniques that we use, right? On the one hand there's dynamic causal modeling, which, as you're pointing out, doesn't assume that the process that you're modeling is itself an active inference process, although it could be. So, you know, this is the technique that we use in fMRI data analysis, for example, and it's also the technique that has been used recently by Carl Friston's 
group to model the spread of coronavirus. You could think that maybe corona itself is an active inference type thing, but you don't need to make that assumption to use this modeling strategy. What the corona studies do, essentially, is take a bunch of raw data, like the raw number of infected and the raw number of deaths from corona, write a generative model of that data, and fit the generative model to the time series, effectively to extract the causal structure of the disease process, in this case, that's leading to the data that we're observing. Active inference modeling can be combined with this, but makes the assumption that the agents that we're modeling are all performing a kind of active inference, variational inference thing. Nice. Stephen: I think this also links nicely. I do a lot of spatial cognition, how we think about, oh, I see something, when I see something within this direction. There's actually a recent paper I saw where, in people who've lost their hearing, who are deaf, the brain region for vision actually maps out wider and has less resolution in the middle, because they can no longer hear things coming in from their sides; they can't perceive things by listening to something that comes into their auditory field, so they have to expand out their visual field. And so there's this sense of how we bring in our senses: we can hear all around us, but only for a certain distance; we can see a long way, but only in a certain direction. And another paper I've heard about, where people touch their nose, it could be because you're trying to think through smell; it's like, you know, you sniff out a problem. So you've got the modality thing: when you try and feel your way into a problem and ground yourself, you don't ground yourself with your eyes, right, you ground yourself with your bottom or whatever it is, and you hear the field, or you see 
the future, or you sniff out wrongdoing, or you taste, there's a bad taste in my mouth about it. So this is kind of interesting, because these are mechanisms that are just pure sensorimotor, but there's also the way those sensorimotor systems relate spatially to the environment and couple to it, you know, what sort of statistical parameters are they relating to. So yeah, I thought that kind of fits in with what you were saying. Yep, thanks for that, and just to give one last comment on this generative versus descriptive model distinction. With what reality hands us, the CSV that it hands the scientist, or the images that our sensory observations hand us, we could go just from the data and keep on going upstream until we have a model of the world, and that's sort of correlation all the way down. You can say, well, it was big and red, and you can keep on carrying that correlation between bigness and redness, but it's always just going to be correlational. The other approach would be to go back and forth between the observations and an actual generative model, where, let's just say, big and red cause each other, or there's a hidden cause that causes big and red to exist. And then, through expectation maximization (EM) algorithms or other approaches, fitting that generative model gives us a lot of power, and it's very computationally tractable to fit the causal relationships that could generate the observed data. So the expectation maximization algorithm is a good starting point for people who want to learn more about this from the scientific instrumentalist perspective. For the next, yeah, Alejandra. 
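The EM recipe just mentioned, guessing the hidden cause (E-step) and then re-fitting the generative model's parameters to those guesses (M-step), can be sketched for the "hidden cause behind big and red" case with a two-component Gaussian mixture. The data, the initial values, and the fixed unit variance are all illustrative assumptions, not from the discussion:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: a hidden binary cause generates each observation from one
# of two Gaussians (the hidden cause that makes things "big and red").
data = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

# EM for a two-component Gaussian mixture with known unit variance.
mu = np.array([-1.0, 1.0])   # initial guesses for the component means
pi = np.array([0.5, 0.5])    # initial mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each hidden cause for each point
    dens = pi * np.exp(-0.5 * (data[:, None] - mu) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-fit the generative model's parameters to those posteriors
    pi = resp.mean(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)
```

After a few dozen iterations, the fitted means land near the true hidden causes (about -2 and 3), which is the sense in which fitting the generative model recovers causal structure rather than mere correlation.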
So yeah, I was thinking, I don't know how to say it, maybe there's a boundary between autopoietic processes, thinking of a cell for example, where we can talk about a generative model, and allostatic processes on the other side, or only homeostatic processes. So when you observe a cell, are these autopoietic processes driven by a generative model, or are they just more reactive and homeostatic? I don't know, but I was wondering whether there's a boundary or any distinction between making an inference about metabolic changes so as to maintain desired states, versus just reacting to and compensating for them. I don't know, but I'm quite confused when taking this autopoietic theory about living things and then thinking about generative models inside it, so I don't know if I'm clear. I thought it was pretty clear. I should provide the caveat that most enactivists, I think, vehemently disagree with me about this, but my hot take is that the free energy principle completely subsumes autopoietic enactivism, meaning that everything that an autopoietic enactivist can do within their framework, we can do better. So autopoiesis is self-production, and my understanding, at least, of this, I consider myself a kind of reformed enactivist; this was my main thing for a while, until I stumbled upon the free energy principle. So enactivists kind of start from the idea of autopoiesis, autonomy, and then work out how it works and what you can say on the basis of autopoiesis. The free energy stuff is cool, I think, because it explains to you how the autopoiesis gets established to begin with. I mean, I haven't written about this, but I feel like what active inference is doing is putting the adaptive loop first. Rather than just saying, hey look, there's this Markov boundary, or, like an enactivist would, hey look, there's this autopoietic boundary that's kind of producing itself, the active inference framework is asking what is the set of processes that have to be in play 
such that a boundary can be maintained over time. And then the generative model is really just a probability density over all of the states that are secluded behind the Markov blanket: what are the allowable values of these states behind the blanket? An example of this: let's consider the boundary of my skin as the organismic boundary for a second. Well, one of my internal states is core body temperature, and my generative model has a kind of normative density over this state which says 36.5 degrees Celsius with a very small variance, and when I'm detecting a deviation from that preferred data point, I initiate action to bring myself back to it. So, you know, it's getting cold now, so I might put on a parka if I'm trending in the negative direction. I mean, the point about allostasis and homeostasis is interesting; I think it's increasingly been worked out by people like Andrew Brown and Jakob Hohwy. Homeostasis is all about what I just said, right? You have a preferred data vector and you're trying to get yourself as close as possible to that data vector, where that data vector is often going to be something like, okay, well, my body temperature or whatever. Allostasis is about the control of that data vector, especially in response to stress and environments that are basically frustrating your desire for certain kinds of data. And I actually have a paper that we've been preparing for a year and something now, with Irene Arnaldo and Casper Hesp and Andrew, precisely on this issue; the paper is about depression as an allostatic disorder. So the idea is, if you need to continuously be involved in an allostatic process, where you're always away from the default data distribution that you would prefer, then it leads to basically a kind of inflammatory condition, where you're accumulating chronic inflammation, effectively, from having to stay far 
away from your preferred kind of homeostatic set point. Yeah, just adding something about that, and thank you, Maxwell. Actually, I am now a reformed enactivist also, and thank you for that paper, A Tale of Two Densities. But I don't know what you all think: is there still any space for reactive processes without inference? Maybe I'll stick to the metabolism angle, on the reactive side and on the predictive coding side, or, sorry, not predictive coding, just predictive modeling. And just quickly, the one-second response to this is, well, this here just means tuning statistical structure. That paper, A Tale of Two Densities, tries to make this point: the free energy formulation, it's a bit counterintuitive, but it doesn't necessarily require you to explicitly compute and represent a prediction error in the way that predictive coding might, for example. If you look at the math of it, the free energy only ever exists in the dynamics, and by that I mean updates, right? So you really just need a target data structure and some sensory data, and that's enough to engage in active inference if you're able to compute the difference between the two. And then it's just gradient descent: if you have a mechanism in place that can track and keep a lid on that discrepancy, then you're engaging in active inference. It doesn't need to be this explicit kind of computation. So, from that point of view, I've always been kind of shocked at the response of enactivists to this kind of framework, because I think it's everything that an enactivist would want, right? It doesn't involve explicit computation; whatever information-theoretic quantities are involved only exist in the dynamics. Nice, and just that last closing point on the metabolism, and also to tie it to semiotics. So a sugar molecule touching the tongue, it's through a receptor which transmits, through neural signaling 
that sweetness, and that might elicit, in a reactive way, the secretion of insulin. But insulin also gets secreted in advance of a meal, as well as in a circadian rhythmic pattern that is adaptive over evolutionary time. And so there's this meaning-making of the sensory inputs: the sweetness of sugar isn't necessarily related to an impending spike in blood sugar. In fact, artificial sweeteners are an example that the sweet taste, or the activation of that receptor, doesn't necessarily entail a subsequent increase, although in our evolved history it did. And so in that whole mechanism, with the tongue and the blood sugar and the pancreas, there's space for reactive integration of stimuli through semiotics, as well as for the predictive secretion of insulin in advance of a meal. I'm gonna go to the next slide, because we're coming along pretty nicely. So here I just found some funny memes about changing your mind and mind over matter and stuff, and what I put on here was first a quote and then two questions. The quote was: under the free energy principle, the system's posterior belief is refined or tuned under the generative model through a process of variational approximate Bayesian inference, and it becomes a tight bound on the true posterior belief it aspires to. And so there's this sort of aspirational way of phrasing what systems are trying to do: it's not just that it wants to change its body temperature to be healthy, it aspires to it. I was wondering what people thought about how these Bayesian posterior beliefs might be related to consciously held beliefs or thoughts, and then where things like affirmations, meditation, education, or propaganda might variously fit into this. All of these involve sensory as well as thought processes. How do these Bayesian beliefs, which may or may not be consciously held, relate? Why can't we just change our mind to understanding the free energy principle, or change our mind to be different? And these memes sort of 
suggest that it is possible, and there are entire industries built around telling people: if you can change your mind, you can change your life. But what actually enables someone to change their mind so that they can change their life, and how should we think about that in this multi-scale cognitive framework? Stephen: this actually ties quite nicely into some of the work around mental space psychology, because a metaphor can be a construct, in a way, which is a bit like the body; we have a mental model of how things fit together, and if that metaphor is still the same, I can update my beliefs fairly quickly, right? If I'm still in my scientific chemistry paradigm or whatever, I can update my beliefs. The challenge is when I have to transform or update my paradigm and reintegrate into a new metaphor. So you've got this, in the way you were saying here: being able to do approximate Bayesian inference is quite quick and quite easy if you're recalibrating posteriors against sensory data, in the sense that, you know, we're all working together, we're all going to try and work as this team. And then if it shifts to being, I'm the boss now, I've never been a boss before, suddenly everyone has to do what I tell them; now my whole mental model of what it means to be at work has shifted. That can be something that I then have to work through; like doctors, you have to learn your practice and then relearn what that means, because you have to reconfigure, and then you can start to work at speed again. So I think this is quite a nice way to tie that together. Yeah, these are really just thoughts to raise, and we'll move on, but I just wondered, okay, well, if my arm is aspiring to be this length, or aspiring to have this ratio of muscle to fat or something like that, where do our conscious beliefs fit into that? Are those upstream of our phenotype, are 
they downstream, are they themselves a phenotype, how do mental states influence our physical states in action? I think it depends what you mean by conscious. What do you mean by conscious? Well, look, if by conscious you mean a form of reflexive access, then what you need there is a temporally or parametrically deep architecture. By temporally deep, I mean that you don't want just a generative model that anticipates the next data point that you'll be sensing, but a generative model that's capable of entertaining temporally deep counterfactual observations contingent on courses of action. So you might think that if a system is able to entertain possible counterfactual futures contingent on courses of action, then it would be more conscious than something that couldn't do that. Our preferred approach recently is parametric depth; we've been exploring this since a paper that was actually just accepted in Neural Computation, called Deeply Felt Affect. In that paper, what we do is effectively introduce a hierarchical model that is able to make inferences about its own inferences. By that I mean that, at a second layer of the generative model, this model takes as its data for inference, as its observable states, the posterior state and precision estimations at the lower level. So from that point of view, what these systems are able to do is have a rudimentary form of self-access, where they are able not just to make inferences about what's causing their data patterns, but inferences about their own inferences and how much they trust them. And from that point of view, that might also get you slowly closer and closer to something like consciousness, in the access consciousness sense of having a kind of nested or reflexive access to states of your own being. If you're talking about phenomenal consciousness, I think all bets are off; I have no idea, no idea whatsoever, it's a very hard problem. Yep, well, it's just 
things to raise. Let's go on to Alejandra. Yeah, I was reading this image that says if you can change your mind, you can change your life; maybe I can read this sentence the other way around: if you can change your life, you can change your mind, in terms of this active engagement with the world. So if you want to think differently, behave differently; it's not just a desire, you have to do it. So then, considering that cognition is relational, all these Markov blankets within Markov blankets within Markov blankets, going up to your consciousness, I think bodily interactions are the only way you can change your mind and be conscious about it. So yeah. Sure. Sasha: I like that re-statement of that fun phrase, if you can change your life then you can change your mind, because that just reminds me that the reason we have all these sayings is that it is really hard to enact behavioral change. And it shouldn't be easy to change your mind, because then you wouldn't have learning and memory, and you wouldn't have these patterns that your life depended on. And so, thinking about what it means to change your mind or update your priors about something: if I want to have a different lifestyle, or exercise more, or something like that, it's not just as simple as deciding that; you actually have to follow through. And so, how to make that loop easier to complete, going from changing your actions to then updating your beliefs? It's like starting to take the first step before you know which direction you're moving in. Shannon: so, like Vygotsky or Fijer's work, where you have your cognition scaffolded by an external teacher role, and once you're developed, you're not a child anymore, the scaffolding role comes through propaganda or education or affirmations that are put forth by whatever our socio-cultural group is, or whatever our social media algorithms are throwing at us. And as much as changing your behavior or changing your life can change your mind, also changing the way that you 
interact with the particular regime of attention, like the particular social group that you engage with, or that you are nudged to engage with through interactions on social media or even just regular news media, all of these nudge, in less volitional ways, your Bayesian posterior beliefs, in terms of conscious, higher-level beliefs about the world. Agreed, well said. It's not enough just to say, I'm the kind of person who runs every morning, because if you're not running, then your sensory input isn't going to be aligned even with your stated beliefs; it actually takes the action to make that connection. Stephen: yeah, I think this is also like, you know, when you walk into a cathedral: if you want someone to feel a sense of awe and start to take their consciousness up to the heavens, walk into a 400-foot-high cathedral and it will do a lot to you. And it's not necessarily anyone saying you should believe in God or not, not straight away anyway. Yep, let's go through these last few slides, because we'll also try to get that figure in. So, just some brief notes on free energy minimization. A quote from the paper was: under the free energy principle, cognition is what the recognition density, or living system, does, i.e. changing to elude surprises and maintaining informational homeostasis by minimizing free energy. And the way one studies cognition, so there we're again returning to this double instrumentalism, and we're going to return to this again in our subsequent weeks' discussions on A Tale of Two Densities, and the way one studies cognition, i.e. 
what the system does, is by developing, simulating, and analyzing the possible generative models that explain how the recognition density of interest, the system of interest, changes so as to attain the minimal free energy. The mechanics of belief is the only causally relevant aspect of the variational free energy; the free energy may or may not exist. What is at stake is the causal consequences of the action-guiding beliefs of organisms and groups of organisms, which are harnessed and finessed in the generative model. What matters is that organisms are organized such that they instantiate such a prior to guide their actions, and it's not an accident that organism and organization and organ are all related. And just to show this, from a very nice active inference tutorial: here we can see that there's a generative process on the top, it's a forest, I guess, and it is moving through true states, but those true states are hidden. So that's, whether or not it's real, what's happening, and observations, the O's, are being emitted; that's the partially observable Markov model, which we will again return to later. And then what the organism is involved in is changing the state of its own generative model through time, with all these caveats and richness that we've been discussing, and the actions end up influencing the state, or not, and then that, in the next time step, provides new observations that update the model. And then, just to look for a second at where free energy fits into this, so that in each of our discussions we can bridge from the experiential components to the practice to the scientific theory and a little bit of the math. What is being shown here is that if the organism wants to be minimizing its surprise, and Maxwell, you can fill me in or correct me here, if the organism wants to be minimizing its surprise, which is on the y-axis, and the less surprising the better, given an adaptive generative model of the world, what it can actually do, instead of just, on the left side of 
this function of expectation, that's the blue line, the way to minimize this is actually to try minimizing the free energy. The free energy is a lot more tractable to compute, and it turns out that it is strictly bounded, in a way where, by minimizing the free energy, you go quite a long way toward minimizing your surprise, because they're very closely related in a mathematical sense. Well, I mean, we have to keep in mind that what's going on here is that the organism is trying to sample the environment such that the new data it's receiving is consistent with its preferences about data. So it's not just a retrospective, let's-try-to-fit-a-model-to-this-data thing; it's about a forward-looking sampling strategy. This is why, in this other paper that we'll be discussing next week, we argue that this whole framework is enactive, fundamentally: basically, you can cast any action whatsoever in terms of trying to bring about a preferred kind of data structure. If you want to wax a little philosophical, you can even relate this to phenomenology. I mean, I said I have no idea how this active inference stuff relates to phenomenology, but my suspicion, and this is pure speculation, is that phenomenal properties kind of correlate with the observable states, so that it's really the patterns of observation that you're receiving at any time that kind of feel like something, and then there's an interpretation of that feeling, which is more like the access dimension. So that might go some way towards resolving some of these views. But yeah, from that point of view, the organism is trying to get to the phenomenology that it prefers, which I think is amenable to some cool, properly phenomenological accounts of what's going on. I mean, you know, Deleuze makes this point, and I forget in which one of these, but, like, what is desire, right? Well, desire is the desire for a certain constellation of things, right? It's not just like 
for a thing at least in Dolores's thing like it's like you know you're looking for like you're looking to an active inferences you're looking to sample a data structure that's sort of consistent with the way of being that you are you know yeah and if you people want different things and yes agree it's a constellation not just obtaining that singular object Stephen yeah I could ask you a quick question Maxwell I was wondering is there like a a shift from like active inference dynamics to as a kind of a way of thinking to active inferences sense making like does sense like is it active inference all the way up and sense making all the way at the organism level or is there a transition when you'd use it depends what again it depends whether you mean this more kind of explicit sense making or this more if by sense making you just mean like interacting intelligently and by that I mean like adaptively appropriately or whatever with an environment then it's all the way up and all the way down in effect like cells and you know cells in a body and people at a dinner table are trying to do the same kind of thing I think for the more kind of reflexive explicit forms of sense making that you're concerned with practice you would need some kind of layered system so sense making in that sense has more to do with our narrative stuff that was discussed a few weeks ago it has more to do with creating these kind of overarching inferential structures that make sense of your phenomenal flow effectively actually that makes sense in the sense that I've one thing that bugs me at the higher level not higher levels but these people just use the word sense making they're using the media at the moment to talk about wiki problems and all of this but what does sense making mean and it's not an actual thing that someone's doing you either have to say like narrative sense making I'm taking narratives and making sense of it tends to just be this word now that's become trendy in fact some 
inactivists are even pushing back on it to say that well it doesn't really mean very much Dan Huto and Eric Nien have these really nice critiques of like the sense making paradigm Sasha my follow up question to that is what isn't sense making oh well again it depends I guess on how we want to define it so basically active inference describes more specifically the behavior of systems that are actually driven in some non-trivial way by the beliefs that they encode about an external environment and you might see this as a continuum thing but I mean there's a sense in which like if a rock is trying to predict its environment and act in a way that you know minimizes this divergence between well it's not a very good model right I mean it doesn't do very much so I mean this might be a graded thing in the sense that like you know a like things that are not sense making are things that don't seem to reflect this kind of intelligent you know adaptation to the constraints of an environment like a rock right but you know for people like Steven who means something very specific and I think like more kind of psychological and meaningful and loaded if that's what we mean by sense making then the vast majority of what we would call sense making under the other definition doesn't fall under it so like you know you really need a very specific kind of system that's able to do engage in a very specific form of inference unfortunately I'm going to have to leave at this point so I'll see you soon. Thank you very much everyone this was very fun. Bye. Steven, a thought there? 
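[As an aside for readers: the point made earlier — that free energy is a tractable upper bound on surprise, so minimizing one goes a long way toward minimizing the other — can be checked numerically. Here is a minimal sketch; the two-state discrete generative model is purely illustrative, not from the paper.]

```python
import math

# Illustrative generative model p(o, s) with two hidden states.
p_s = [0.7, 0.3]           # prior over hidden states
p_o_given_s = [0.9, 0.2]   # likelihood of the observed outcome under each state

# Surprise: -log p(o) = -log sum_s p(o|s) p(s). Tractable here, but the
# marginalization is what becomes intractable in realistic models.
p_o = sum(l * p for l, p in zip(p_o_given_s, p_s))
surprise = -math.log(p_o)

def free_energy(q):
    """Variational free energy F = E_q[log q(s) - log p(o, s)]."""
    f = 0.0
    for q_s, l, p in zip(q, p_o_given_s, p_s):
        if q_s > 0:
            f += q_s * (math.log(q_s) - math.log(l * p))
    return f

# For any recognition density q, F >= surprise ...
for q in ([0.5, 0.5], [0.9, 0.1], [0.99, 0.01]):
    assert free_energy(q) >= surprise - 1e-12

# ... with equality exactly when q is the true posterior p(s|o).
posterior = [l * p / p_o for l, p in zip(p_o_given_s, p_s)]
assert abs(free_energy(posterior) - surprise) < 1e-9
```

[Driving the recognition density toward the posterior therefore drives the bound down onto the surprise itself, which is the sense in which the two are "very closely related."]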
Also, there's the sense of sense-breaking or sense-finding. It could be that sense-making works at the shorter time scales — participatory sense-making is this ongoing knowing-how-to-act — but at the higher levels, sense-making is really retrospective, and that's what people mostly mean by sense-making at scale: how do I make sense of how I should have acted? Normally that means someone is doing something habitual, or drawing on tacit knowledge while in the flow, and you're trying to unpack the sense-making that allowed them to do it — but that's not accessible, precisely because they're so in the flow. So you need some form of sense-breaking. And then there's sense-finding, which I would say is more like active inference, because you're more consciously trying to repurpose your posterior beliefs and your sensory data to assemble some understanding of what's even going on. It might be that point of stuckness — they talk about stuckness — where you're trying to get to sense-making, but while you're stuck you may need to go off and do a bit of foraging.

Interesting. Cool, let's go through these last few thoughts and then turn to figure six, since we talked last time about covering it. On this slide we're asking: are there limits on the kinds of cognition we can study as humans, because of our own individual cognitive processes, and where does collective cognition play into this? Which systems are we in a position to understand, and how would we know? How would we do metacognition, as individuals or as groups, about which approaches will be valuable for which systems? And then, looking at future experiments and implications: now that we've moved beyond internalism, externalism, and all these other -isms, what is that going to mean for our practice as scientists, or as investigators of cognitive systems, and what would be the implications of knowing? I'm just raising these so we get them on the record, maybe for another time or for the closing thoughts.

Then there was this idea in the paper: the relevant boundaries of cognition depend on the level being characterized and the explanatory interests that guide investigation. So we can return to the general question: where are the relevant boundaries for cognition, and how would we know if we're being too internal or too external? In the paper they wrote: "In other words, drawing the bounds of cognition means defining the recognition density of the system of interest and identifying a generative model that explains changes in that system that follow variational Bayesian inference." And I believe it was Sasha who asked: isn't this the same argument that is used for holism or reductionism? For example, they'll say we'll know when we've gotten to too large or too small a scale of analysis because our model will break, or will be inadequate or intractable. So how do we get beyond that paradigm of "we'll know it when we see it" — or can we ever get beyond it, guided as we are by a certain framework, given that we are in feedback with our models? Just things to think about.

Now let's briefly look at figure six and then have some closing thoughts. Figure one was about Markov blankets. Figure two went into the action-perception loop and introduced some of the equations of the FEP, which were unpacked a bit in figure two B. Figure three was operational closure in graphs. Figure four was a schematic illustration of autocatalysis, which is like autopoiesis for metabolism. Figure five was the fractal Markov blanket, showing that as you zoom into the broccoli it's Markov blankets all the way down. And here we are at figure six. If anyone wants to comment on it — since we said we would approach it in this discussion — just raise your hand.

The way I would walk through this figure is first by reading the caption, which suggests something like a developmental morphogenesis scenario. These are not naive cells; it's really important to take the evolutionary perspective. The organism is an arrangement that is a developmental outcome — not an arbitrary developmental state, but one that has lasted through many generations and been finely tuned. So these aren't naive little cells wandering around and spontaneously arriving at the arrangement shown in panel F. Rather, each cell carries a carefully selected set of expectations about which extracellular target signals — like diffusible morphogens — it expects to see. By simply pursuing its expectation of the extracellular target signal, the cell takes part in a larger developmental outcome that is evolutionarily adapted; the cell doesn't need any cognitive awareness of that. We can think of this as something like an anterior-to-posterior gradient of some diffusible morphogen.

Each of the traces in panels A, B, and C runs through time on the x-axis and shows what one cell ends up experiencing: its expectation of a morphogen — that's the sensory input in this model — or its action, reflected in its ability to move. In panel D, we see the free energy of the system first rise slightly and then start converging down, down, down, which reflects an overall increase in precision in the ensemble — the aggregate of these cells — as they jostle into place and settle into an arrangement that minimizes their surprise about which extracellular target signal they should be perceiving. And what we see is that even though there's no higher-level coordination per se, the agents converge on a solution that reflects the target morphology — in this case it looks like, I don't know, an upside-down light bulb or something; I'm not sure what shape that would be. But you can see how each of these differently colored cells uniquely determines its own position. The green cell can be thinking something like, "I want to be close to blue but far from yellow, and very far from red." The blue cell might be thinking, "I want an intermediate level of the green and yellow signals." The red cells might be thinking, "I want mostly red signal, and then, in decreasing order, yellow, blue, and green." Depending on its instantaneous perception of those diffusible morphogens, a cell can either act on the world through movement — that's the morphogenesis component — or update its generative model, which isn't actually happening here: there's no learning or adaptation within this model. But over evolutionary time, developmental trajectories that robustly enable high-fitness morphologies will be selected for. So that's my short take on figure six. Any comments or thoughts on that?

Cool. Since we're basically out of time, I'll just say thanks, everyone, for participating, and we'll have closing thoughts as well. We're going to provide a follow-up form for live participants — I'll put that in the chat in a second. We request feedback, suggestions, and questions from all of our viewers and participants, and we hope people stay in communication with us and keep watching. Next week and the following week we're going to be talking about "A Tale of Two Densities," the paper we mentioned a few times today, so it's going to be a great discussion over the next two weeks as well. If anyone has closing thoughts on the discussion while I post the link, feel free to go ahead. Otherwise, that was a great discussion.

Thank you for the discussion, that was great.

In that case, I'm going to end the live stream. Thanks, everyone, for participating and for watching. We'll see you over the next two weeks for number six, "A Tale of Two Densities."
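[For readers who want to play with the figure-six idea, here is a highly simplified sketch of morphogenesis as surprise minimization: cells with fixed expectations about a morphogen gradient sort themselves into a target arrangement purely by acting to fulfil those expectations. The 1-D linear gradient, the target concentrations, and the gradient-descent action rule are toy assumptions for illustration, not the model from the paper.]

```python
import random

# Toy 1-D morphogen field: concentration falls linearly from the
# anterior end (x = 0) to the posterior end (x = 1).
def morphogen(x: float) -> float:
    return 1.0 - x

# Each cell type "expects" to sense a particular concentration; these
# targets play the role of evolutionarily selected expectations.
targets = {"red": 0.9, "yellow": 0.7, "blue": 0.5, "green": 0.3}

random.seed(0)
positions = {cell: random.random() for cell in targets}  # scattered start

def step(lr: float = 0.1) -> float:
    """One round of action: each cell moves to reduce its squared
    prediction error; returns the ensemble's total squared error."""
    total = 0.0
    for cell, target in targets.items():
        error = morphogen(positions[cell]) - target
        # Sensed concentration too high -> move posterior (x increases);
        # too low -> move anterior. Gradient descent on error**2.
        positions[cell] += 2.0 * lr * error
        total += error ** 2
    return total

energies = [step() for _ in range(200)]

# With no central coordination, the cells sort themselves along the
# axis (red most anterior, green most posterior), and the ensemble
# error collapses — qualitatively like the free-energy trace in panel D.
order = sorted(targets, key=lambda c: positions[c])
assert order == ["red", "yellow", "blue", "green"]
assert energies[-1] < 1e-6 and energies[-1] <= energies[0]
```

[Each cell acts only on its own local prediction error, yet the aggregate converges on the "target morphology" — the same point made above about the absence of higher-level coordination.]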