Hello and welcome, everyone. It's June 15th, 2022, and we're here in ActInfLab Livestream #46.1. Welcome, and please mute the stream if you have it playing in the background. Thank you. Welcome to the Active Inference Lab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links on this slide. This is a recorded, archived, transcribed, edited, and published live stream, so please provide feedback so we can improve our work. All backgrounds and perspectives are welcome, and we'll be following good video etiquette for live streams. Head over to activeinference.org to learn more about live streams and a lot of the other projects that are happening in the lab. So we're here in Livestream 46.1, in our second discussion of the paper Active Inference Models Do Not Contradict Folk Psychology by Ryan Smith, Maxwell Ramstead, and Alex Kiefer. And we're going to go, I expect, in many fun and interesting directions today. So thanks, everybody, who's joined the panel live; and for those who are watching live, thanks for adding your comments in the live chat. We'll begin with some introductions, and then I believe that will introduce many interesting areas we can continue from there. We can start by just introducing ourselves and then passing to someone else who has not yet spoken, and we'll close with the authors who are here. So I'm Daniel. I'm a researcher in California. The .0 discussion was very interesting with Jacob and Dean and Ryan, and it raised many questions that I was excited to discuss today. Just to offer one: perhaps the relationship between formalisms of any kind and natural language, and just the regular concepts that people have. How do we draw the connection, or evaluate connections that are drawn there? And I'll pass to Ali. Oh, sorry, Ali, wait, wait.
Sorry, just the audio is set up a little bit differently. So if you could just start, you can just reintroduce yourself, but go for it. Thank you. Okay. Is it okay? Yes, yes. Thank you. All right. Yeah, I'm an independent researcher from Iran. And as I said, I also have a couple of questions I'd like to ask if they're not addressed during the discussion, especially when we get to section nine of the paper. And yeah, I'm very glad to be here. That's it from me, and I'll pass it to Dean. Morning. My name's Dean. I'm here in Calgary. I really like this paper, which I kind of indicated in the last live stream. My curiosity, because I really don't know, comes from my background in education. When I'm looking at that qualification-quantification piece, and I'm thinking about desires as something that we can see as, say, 0.1 and 0.9 in a distribution, I often think of some of my previous conversations around things like Likert scales. I know this paper doesn't address those things specifically, but the idea of being able to translate into a numeric form has always been interesting to me. And that's something that maybe today, or maybe in the .2, we can think about the implications of. And I'll pass it to David. Hey, David Kelton here. Thanks for having me; it's my first time here in active inference. I go by Duvid as a streamer. I'm a civil engineer by education and practice, with a background in linguistics, physics, and psychology. I spent many years in Yeshiva rabbinic study, and I've been a practicing Hindu. I'm more on the mystical side of consciousness, and I've been studying it actively and streaming for a few years, but also the mathematical, materialistic side. So this paper was very interesting, especially dealing with sense perception related to motor control, which touches my perception of dualism and the question of what the material does actually control.
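Dean's point about representing desires numerically can be made concrete. In discrete active inference, preferences over outcomes are typically encoded as a probability distribution, so a Likert-style rating can be mapped into one. This is a minimal, hypothetical sketch of that translation; the function name and the temperature parameter are my own illustration, not something from the paper under discussion:

```python
import math

def likert_to_preferences(scores, temperature=1.0):
    """Map Likert-style ratings to a categorical preference
    distribution via a softmax: higher rating -> higher probability mass.
    `temperature` controls how sharply ratings separate the outcomes."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two outcomes rated 1 and 5 on a Likert scale; the second outcome
# ends up strongly preferred, and the values sum to 1 like Dean's
# 0.1 / 0.9 example.
prefs = likert_to_preferences([1, 5])
```

The softmax here is just one convenient choice; any monotone mapping that yields a normalized distribution would serve the same illustrative purpose.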
So I like the model, I like the paper. I was interested in looking at some of the more philosophical and even mystical precursors. What could actually be explained? Asking what assumptions are being made in this paper is definitely a good start. Thank you, David, and on to Alex. Feel free to give any introduction, and then provide any context on what brought about the paper from your perspective. Yeah, thanks. So I'm Alex Kiefer. I'm one of the authors on the paper. I'm affiliated with Monash University and Jakob Hohwy's Cognition and Philosophy Lab there. And currently my main gig is at VERSES Research Lab in LA. We've been doing some really interesting research there that plugs into active inference, among other things. As far as context for the paper: this issue of how and whether active inference, and theories in the vicinity like predictive processing, map onto folk psychology is something I had been interested in for a long time before I spoke to Ryan and then Maxwell about this paper. I won't try to give Ryan's side of the backstory, but basically he was, I think, independently planning on drafting something along these lines. We got in touch, through Karl I think, and it was pointed out that I was drafting something on the same topic. And then I ended up just hopping on board to work on certain sections and to help carry through the argument. So that's a little bit of background. Awesome. Which sections did you work on? We can look at the roadmap. And do you have any thoughts on the overall structure: which sections to include, and why order them that way? Yeah, I mean, really we share credit for all of it. We contributed to the whole paper, but the sections that I leaned on more, and that I contributed more to, were the part about quantifying the motivational force of desires.
Section seven. And then we also had a bunch of back and forth about the last few sections, the free energy principle and direction of fit. Ryan, being the first author, definitely did a lot of the writing, but those are the sections I had more to do with. Awesome. And then just one more question, Alex, so that we can go back to our open list of topics. What would you characterize the big question of the paper as? What do you think would bring somebody to this paper because they were wondering X? Yeah, I think it's pretty clear in this case, right? Fortunately, it's pretty easy. The question is just whether the active inference formalism, particularly a certain strain of it, the discrete decision-model flavor of active inference, is consistent with or compatible with folk psychology, and the BDI model in particular. Great. Okay, and then also, what would you characterize the BDI model as? Then we'll open up for more questions, but just to get some of the prerequisites. And you're right that it is great that it is clear; many papers have much more esoteric big questions, and this one is very down the middle. Yeah, one thing I liked about working with Ryan is that he wanted to just make the title of the paper a declarative sentence that says: this is what we're claiming. And I really like that; I'm trying to follow it in my own work. But the BDI model, I don't think I have a whole lot to add here either. It's fundamentally just the idea that beliefs and desires interact to form intentions, and, I guess maybe also, that that's a legitimate way to explain behavior for some classes of systems: intelligent systems, people, agents. All right, awesome. So Ali, Dean, or Duvid, if you have any questions you'd like to raise at this time; otherwise we'll go into one of the pieces we've raised. Yes, Duvid. It's just, I'd never heard of Hilton or the folk-psychology BDI model.
Is that a prominent model you purposely chose, or could it be a fill-in for basically any non-scientific or introspective or even dualist model of consciousness? Or is it specific, this Hilton model? Are you claiming that it fails as a materialistic model? Yeah, so I think part of the reason we referred to the BDI model was to bracket, for the purposes of the paper, although I'm happy to discuss them here, a lot of questions that might arise about folk psychology and what exactly is meant by that. If we just say we're talking about the idea that beliefs and desires interact to form intentions, then that's a clear target. And then, of course, there are many different conceptions; maybe there are many different folk psychologies and views in that vicinity. But I guess the BDI model is maybe supposed to be like an essence that you can distill from many of those different perspectives. Now, maybe that's not true. We can talk about whether the BDI model is really common to different conceptions of folk psychology, because I'd have to think about that; I kind of assume that it is. I'm curious about the materialism-dualism issue. I don't directly see how it plays into things at this level of description, because I guess we're not claiming necessarily that beliefs and desires are material or not at this stage. But anyway, happy to get into that. Yeah, feel free to unpack that, David. Yeah, I mean, classically, people have obviously had minds for as long as humans have existed, or maybe, if you believe Julian Jaynes on evolution, there was an origin to consciousness in the mind at some point.
But historically, most of the Western tradition up till a few hundred years ago believed that these things originated from a non-material source, whether it's the soul, however you want to explain it, as the controller: from the Greek model of the soul, ethos, logos, pathos, or any model where beliefs and desires, which would be the controller, which would be decisive in these cognitive functions, don't arise from the material. So if you have a materialistic model where you're saying that these things do, in fact, arise from the brain, you could say that the ancients were just incorrect in ascribing non-material origins to things that have material origins, or you could say, from an eliminativist perspective, that they were completely incorrect and there is no such thing as belief or desire, or no such thing as a unified 'I'. So I wonder whether Hilton assumes some sort of material position. Even with Freud, just a hundred years ago, on the question of the psyche and the subconscious, whether that has material origins or not is unclear. Right, yeah, so I guess my perspective on this: I'm sort of a methodological materialist about these things. In terms of the scope of the kind of explanation that we discuss in this paper, and that active inference in general works with, I think anything that you can explain about the causes of intelligent behavior, you can probably put a material gloss on. But personally, I'm not really committed to the claim that you can explain everything about people in terms of input-output mappings, you know. But insofar as you can conceive of some function, whether cognition, perception, whatever, as a sort of input-output mapping in the space of behavior, then I think you can explain it using these frameworks interpreted as descriptions of brain processes.
So it's interesting that you suggest that maybe there's an argument for eliminativism in this: if you were thinking of mental states as having a root or a source in something non-material, then maybe if you reject that idea you'd be pushed toward eliminativism. But, coming back to the idea, I'm starting from a sort of methodological materialist standpoint, and I'm definitely not an eliminativist. Actually, one thing I wanted to get into today is that I really don't understand how eliminativism is a tenable position with respect to folk psychology if you're a materialist, but I'll hold off on getting into that. I guess I'll just make one last comment on this: I think the question of whether beliefs and desires and intentions and other mental states have any non-material aspect, and the question of whether they exist and whether they should be eliminated from our ontology, are distinct. Thank you. I think this is a fine place; we'll have a lot of time to explore other avenues, but it would be awesome if you could describe eliminativism. There was also a quote in your paper that says: if the worry is founded that there's a tension between active inference and folk psychology, specifically the BDI model, it could be taken either to imply that active inference models are implausible, or, if and to the extent that the models are successful, to imply that desires should be eliminated from scientific theories of psychology. So what is eliminativism, and how did you interact with that topic? Right, so eliminativism, as we're using it here, and I think this is the mainstream sense, is just a view... well, okay, the most general version of the view, with respect to anything, is just that that thing should be eliminated from serious discourse or serious theory, maybe scientific theory.
And the idea that the Churchlands have in defending eliminativism about folk psychology is that it's like a failed theory. The basic idea is that theories, I suppose, survive on the basis of their predictive and explanatory power and some other criteria, theoretical virtues: the ability to evolve and to explain apparent anomalies that crop up. The Churchlands basically think that folk psychology has failed this test. So the theory of folk psychology has been falsified and we should just move on. And, to be precise, that's a version of eliminativism applied to mental states, but that's the view we're really talking about here. Okay, yes, Duvid, please. Yeah, I guess there are different schools of eliminativism, but if you want to take it to consciousness studies in totality, or questions of free will, or, as I mentioned, the ancient dualistic understanding of consciousness: you could apply eliminativism to consciousness in totality, the view that man is a mechanistic computer. And even as a mystic, I would argue possibly against the strong interpretation of active inference as an atheistic, eliminativistic model. And you might, for philosophical reasons, not want to be an eliminativist and therefore try to defend against it, but then what do the mathematics, what do your claims, actually say? The strong versions say that there is no belief, there is no desire, there is no unified 'I', and we're just an automaton that has some sort of threshold interaction with the environment; and whatever perceptions we have, or voices in our head, that might make us think that we're conscious beings, might make us think that we have self-control or beliefs or even free will, are essentially illusions of the mind that have nothing to do with our actions. How do I do a follow-up? Is it the author's page? Author's page, yes. Great.
Yeah, I think it's a good question how consciousness fits into this picture. And I'm not sure whether the version of eliminativism that I took myself to be targeting here includes consciousness or not. It's mostly been a view focused on the idea of discrete mental states that play these functional roles: beliefs and desires and so forth. So yeah, I definitely think there are further questions about that. And I can see how you might interpret active inference, and not just active inference but almost any mainstream theory in neuropsychology in that vein, as implying a form of eliminativism, if you think that these mental states are by definition not material, or not entirely material. But again, I'm happy to get into that further. Yes, there's the idea that any kind of modeling, by making things specific, is approaching, or on a slippery slope toward, or has already reached the point of eliminating the need for appeals outside of that description. It's also possible to consider that an entire category error and say: we're describing brain processes, which is, I believe, what you said earlier, and no description of a sunset is the territory of the sunset. The math is not the territory, as we hear often. And so how can one be an eliminativist about a process via any kind of description, whether it's predictive processing or active inference or any other kind of model? Yeah. I'll raise my hand here; I'll just raise my actual hand. Another thing, given what you just said, that I should bring up here is that, strictly speaking, I suppose active inference isn't committed to these being directly descriptions of brain processes. They're computational models of decision-making processes, with beliefs and all that. And of course there's this whole background of the free energy principle that links this to physics and the physical nature of systems.
And I tend to read these models as pretty transparently mapping onto brain processes, but it's not necessarily the case that you need to be committed to that interpretation of these processes as material processes when you use active inference to describe some system at the psychological level. And there's of course still a lot of question about how the psychological description maps onto the physical description, which I have views about, but it's an open question. Awesome. Dean and then Ali. So in the paper itself, on page seven, I think there's a paragraph that refers to all of this that we've been talking about. I'll just quote: the relationship between conscious and unconscious computation therefore remains an open question. That said, some specific DAI models have been used to describe higher-level cognitive processes that can generate conscious beliefs and verbal reports. And then it says who the authors are. In these cases, the inferential prediction error minimization process can still be seen as sub-personal or unconscious, but the resulting beliefs and decisions themselves are assumed to enter awareness. As a general rule, however, sub-personal processes in DAI should not be expected to map one-to-one with consciously accessible or personal processes. Great. So I've copied that up on the slides. And Dean, what do you think that spoke to, or what made you want to share that? I just think the first author and the other authors were essentially saying: let's make sure we don't conflate things here, that we focus in on what we're trying to do, which is point to the potential of these formalisms explaining something if a person can self-report. If they can't, there are still aspects of this that might apply. I think they were just trying to clarify exactly what they were pointing to and not pointing to.
And Ryan Smith has clearly made a line of research, as well as multiple live streams and exhortations on social media, about this distinction: that one can take a cognitive modeling approach to report behavior ("Did you see that or not?"). And that can be very clearly, although subtly, distinguished from the actual awareness of seeing something. It's not the same thing to have report behavior as it is to have the experiential underpinning. They're potentially adjacent questions, or there's a space where they're linked in a meaningful way, but it's quite possible to bracket, literally, one's uncertainty about the question of experience, even in a visual or multimodal way, from the question of: well, let's just say we had the animal or the computer program in the chair, and now we're making a cognitive model of its report behavior. That's a different question from whether the report-behavior model can be one and the same as what is actually happening, because there could be an interleaving stage where they just choose to lie, or all these other kinds of interesting things happening. Ali? Yeah, just as a brief side note, I think the term folk psychology here can also be related to the distinction made by Wilfrid Sellars between the manifest image and the scientific image of human beings, the manifest image being our understanding of ourselves as personal and social beings; and obviously the scientific image speaks for itself. So I think the term folk psychology can possibly be equivalent to this manifest image of ourselves, and yeah, that's just a side note I wanted to share. How do you see that, Alex? Yeah, I'd almost agree. I don't think I would say folk psychology is equivalent to the manifest image, but I think it might be a part of it, which is probably nitpicking, but there just might be more to the manifest image of ourselves than folk psychology. But anyway, I basically agree. Yeah. Hmm, yes, the question.
Dean, anything you want to bring here? No. This reminds me of the conversations we've had on linguistics: how there's one view of language as an entity out there, a body of knowledge, and then there's also the person's self-constructed sense of language, where they only have a partial subset of the words, or they use them in a certain way. And there's an analogous distinction where there's folk psychology, the academic field, versus people's inner sense: how do people generate their inner sense, or manifest image, if that's appropriate? And how can that be connected and mapped to, again, any descriptive model, disciplinary or transdisciplinary? On to Alex and then Duvid. Yeah, so I really should take the opportunity to say that my conception of how folk psychology works is thoroughly Sellarsian. I think he got it exactly right. Basically, and it's been a long time since I've tried to recite this, the idea is that there's verbal behavior, right, that people enact and engage in, that we end up associating with other things that are going on internally. And through this process, we end up, sort of inadvertently, in the course of evolutionary history at some point... there's this whole Sellarsian myth about this stuff, but basically the idea is that folk psychology arises that way, and introspection, things like that. And I may be putting a bit more into Sellars than he actually says right now, again, it's been a while, but basically the idea is that introspection and our sense of self-awareness could have developed out of, first, outward behavior; then we become sensitive to correlations among these outward signals, and that develops an inner sense in that way.
But the fundamental idea that's relevant to this discussion, and this paper, I think, is that folk psychology is just the name for this sort of thing that naturally developed in the course of human history. I've got to say, of course there isn't just one folk psychology, right? For one thing, people speak different languages. But the piece of it that I think is pretty safe to say is true, and that we meant to capture, and this was my take on why we were using the BDI model, is that pretty much anyone, and this isn't entirely true, there are different dialects and things, but basically, if you want to understand why somebody did something, you can say: oh, they believed X and they wanted Y, right, and they intended Z. And that's the level at which I mean to endorse folk psychology. And I've got to say, for me, my motivation for wanting to defend folk psychology isn't really anything to do with introspection or consciousness. It's just that we obviously use these terms to explain behavior, and we do it very successfully, and so I don't think it could be the case that these terms fail to carry information about the states that people are in. So, you know, this is why I don't quite get the eliminativist argument. Anyway, that's gone a bit far afield from where we started. And then, anyone else?
Yeah, I think, as, say, a spiritualist, the eliminativist argument is the natural conclusion of what you're saying when you're drawing from the dualistic tradition, when you use these terminologies. Say that all of the thousands of years of Western tradition believed in the soul, and then you want to borrow that terminology from a materialistic point of view; it may not be accurate. Follow the evidence and think of, you know, Jerry Fodor and the modular mind, or Benjamin Libet's experiments. What is active inference actually saying? That a threshold is reached that causes activity, like in Libet's famous experiments, where the threshold is reached and can be measured in the brain before what you'd call the active, conscious part of the person recognizes it. And if it's like a modular mind, you have multiple conflicting parts of the person that could have beliefs and motivation, but what we would call the module of the mind that creates the narrative for us is only one of those. And that's just what's accessible to the person. So when I interact with myself, or you interact with another person through language, you only have access to that one part of the brain that creates a narrative. But, you know, the Libet experiments and more recent experiments demonstrated that that's not likely the part that controls action, and active inference didn't exist then. I mean, Bayesian mechanics did. But you're saying there's a threshold, and there are all sorts of things that cause a person's threshold to be reached, and active inference could potentially model that. And you even put in your paper that you could include things like, you know, beliefs or desires as higher-order rules.
So even at that point, there may not be a mechanistic understanding of the origin of these higher-order rules or of how they operate, but it would be some sort of free energy principle, some sort of mathematical reduction of a threshold being reached, where the sum totality of conscious and subconscious factors causes the action, and even the person themselves, through introspection, isn't going to have full access to that. If I can just respond here: first of all, just to be respectful to my co-authors, I should say, because I might get into my own views here, that they don't necessarily share them at all. But I think this paper is really targeted at explaining, or clarifying, the connection between folk psychology, whatever its implications, right, whether folk psychology carries implications of soul-type things because of its roots, maybe it does, and active inference as just a sort of formal theory of decision-making. So it's technically neutral on the issue of whether active inference is true, or is an exhaustive or true description of what we are and what we do. Now, maybe there's still an issue there. If you really think that the very terms of folk psychology carry dualistic implications, and if you think that active inference carries materialistic implications, then there's still a problem, I guess. I don't really know if I agree with either of those, but I don't see how active inference as such implies anything one way or the other about whether there are non-material origins to behavior. Okay, let me try this, and see if I get your argument, David. So we're saying that active inference, or let's say active inference as a framework, says that this is how decisions happen.
You do a search over policies, which are these discrete things, and you figure out the expected free energy, which has to do with comparing expected sensory data to preferred sensory data. And I guess if you think all of those components are pieces of a material thing, right, that they're all beliefs or desires or something like that, which you take to be... I mean, actually, I think I can answer this, sorry. I realize I'm not doing the best job of this right now; this is difficult. I'm glad you're bringing this stuff up, because this actually gets quite challenging. So active inference, well, DAI, is this theory of discrete decision-making that appeals to these distributions, things that are like belief distributions, right? A belief distribution and a desire or preference distribution. I don't think it says anything about where the preference distribution comes from. And in particular, I don't think it says that it doesn't come from the soul or some deeper thing. So that switched from me trying to rationally reconstruct your argument to me trying to rebut it, sorry about that, but I'm just trying to make some progress. Let me try to walk through some of these steps, because I agree it's an important distinction. The focus of the paper, the regime of attention of the paper, was very clearly delineated, because there are many open threads related to, for example, different cultural folk psychologies, various folk-psychology models, various metaphysical claims. The paper was focused on identifying one possible area of incompatibility, which would be the relationship between the BDI model entities, the nodes in the BDI graph, and the assignment of terms for specific parts of the active inference formalism; we'll come back to that.
Now, as has been pointed out today, one could ask whether folk psychology carries a dualist implication, in the sense of there being material and non-material types of things, whether implicitly, axiomatically, or historically. If folk psychology has a dualist implication and active inference carries a purely materialist implication, implicitly or explicitly, then there could be another battlefront of tension. That would not be based around mapping the nodes in the BDI model to the nodes or formalisms in the active inference formalism, which is like a theory-theory ontology-mapping question; it would actually be a metaphysical incompatibility. And then, Alex, you walked through some steps that said: if active inference is a framework that says this is going to be a model of how decisions happen, there's a search over discrete policies, there's the calculation of variational free energy and/or expected free energy, and then you added "as part of a material thing," then it is going to be incompatible with anything immaterial. However, one could also simply say: active inference is a descriptive framework saying, here's how I will model how decisions happen, as a search over discrete policies and the calculation of free energy. Then there might be one, two, three, four kinds of things in the universe; it could be anything. This is where the materialism enters: as a clause in the assumptions of that claim, rather than materialism arising as a consequence of the neutral, descriptive, formal basis of the framework. What do you think about that, Alex? Yeah, thanks, Daniel, particularly for that solo. I think that's basically right. And just as a specific example: even if you go so far as to think that you can map this formalism onto brain states, you don't have to think that brain states get their status as desires or beliefs just from their material constitution, right?
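The "search over discrete policies plus a free energy calculation" picture summarized above can be sketched as a toy computation. This is a deliberately simplified, hypothetical illustration: it scores each policy only by the risk term of expected free energy (the KL divergence from the outcomes a policy is predicted to produce to the preferred outcomes), omitting the ambiguity term that the full formalism also includes; all names and numbers here are invented for illustration:

```python
import math

def kl(p, q):
    """KL divergence between two categorical distributions p and q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def policy_posterior(predicted_outcomes, preferences, gamma=1.0):
    """Score each policy by the risk term of expected free energy
    (KL from its predicted outcome distribution to the preference
    distribution), then softmax the negative scores into a
    probability distribution over policies."""
    efe = [kl(pred, preferences) for pred in predicted_outcomes]
    weights = [math.exp(-gamma * g) for g in efe]
    total = sum(weights)
    return [w / total for w in weights]

# Two policies over two outcomes: the first predicts outcomes close
# to the agent's preferences, the second does not, so the first
# receives most of the probability mass.
preferences = [0.9, 0.1]                      # desire/preference distribution
predicted = [[0.8, 0.2], [0.2, 0.8]]          # per-policy predicted outcomes
q_pi = policy_posterior(predicted, preferences)
```

The point relevant to the discussion is that nothing in this computation says where `preferences` comes from; the formalism consumes a preference distribution as an input, which is exactly the neutrality Alex describes.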
I mean, you could believe that free will arises in microtubules, for example, or something. I don't know how that theory works, by the way, and I don't mean to disparage it either; I'm just using it as an example. Maybe you think that free will somehow bubbles up from free will at smaller scales or something, right? So I just don't think that even if we take this as a description of the brain, we're committed to anything as heavy as materialism. Thanks, Alex. Thank you very much, David. Yeah, man, I appreciate your research. And I know we're not here to, like, debate dualism. I'm just saying we could recognize what the assumptions or axioms are. So you have questions. And that's why I like your focus on motor control connected to sense perception. So if there's your perception of a unified 'I', and even Daniel and his research on ants or something like that: okay, there's David, Alex, Daniel, all the panelists, Ali and Dean, talking to each other. We might refer to ourselves by a name, but is there a unified 'I'? And okay, that's an axiom that is based on a dualistic tradition that may or may not be accurate. And if you say, well, there's a sum total of forces, modeled through the free energy principle, that we refer to as a unified 'I' that makes decisions, then that's more a lexical misnomer. And even in Daniel's research on ants or something like that, a question like: is there a unified ant that avoids death, that tries to survive? Why does the ant have some sort of will to live? I'd argue that that's also non-materialistic, the ant's will to live, or the idea that there's a unified force.
So, I mean, you could have a materialistic explanation, like higher-order rules, and just leave it that these higher-order rules stem from genetics, or some sort of parallel, call it reverse anthropomorphization from machine learning, where an ant has some sort of will to survive, and therefore if you try to step on an ant it's going to move in the other direction or something like that. But is there actually a unified force in the ant that is telling it to survive, that causes it to act in a unified way, or is it just the free energy principle and a reduction of all of the various sense perceptions? And so if we are using these axioms from a dualistic perspective, specifically what I called the unified I, that makes more sense as a non-material force that chooses: the I, which is not my body, chooses to make this action. And so active inference is saying no, it's a sum total of sense perception and some sort of higher-order rules that are acquired through a combination of genetics and other machine-like processes, and we use the language description of "I," but the mathematics is really saying that's not accurate. And Alex, to you; I'll also just recall way back when we discussed your paper, Psychophysical Identity and Free Energy, which very directly tackles this question of identity, materialism, physicalism, and various other processes. Just want to drop that note, and feel free to reply to that. Yeah, that's great. Yeah, the reason I get so excited about this stuff is that I care about it, and I definitely don't take, like, a sort of default materialist position on this stuff. So I guess, I mean, overall, David, my view here is sort of about the self, the Self with a capital S maybe, in some, like, Buddhist-type traditions. I'm actually fairly far on the mystical end of the spectrum myself.
And for that reason (now, I'm sorry, this is maybe not addressing what you said in all its nuance) the gist of how I see this is: you shouldn't expect that thing, the really special thing that we don't want to eliminate, to show up in our descriptions of things, because you can't describe it anyway; it's transcendent. So even if the brain is identical with the mind insofar as we can describe the mind, the mind isn't identical with the mind insofar as we can describe the mind. It's the thing, the big Self, that we can't ever pin down with words. And actually, out of respect for that thing, I am happy for it not to show up in the descriptions that we give of how the material stuff works, or even how anything works, right? Because insofar as you can reduce it to a description, it's not that thing. Thanks, Alex. And to add some thoughts there: the paper clearly delineates two variants or applications of the active inference formalism, which are MAI, motor active inference, and DAI, decision active inference. These two applications of the framework differ in at least two important ways. The first is the phenomena that they are targeting to describe: whether discrete decision-making and planning behavior, or motor behavior. And then there's the question of discrete versus continuous modeling. For reasons of biology and historical precedent, the motor models have come from the continuous category, while many of the decision-making models, which often map onto things that are legitimately categorical in the world, like a T-maze, a planning task, or a sentence generation task, have a discreteness or categoricalness to their basis. And that made me think about linear regression, which I really like to think about, because in the history of linear regression no one said: you used a linear regression to model the stimulus intensity and the behavioral response,
so, do you think people are a linear regression? And active inference is another mathematical formalism that in some ways (although I'm happy to update my model on this) can be interpreted, though it's not the only interpretation, as having an equal epistemic claim to the territory as a linear model, because they're both formalisms. And the least-squares error in a linear regression, the objective that is fit (which is not the only norm one could choose, but whatever function is used as the imperative to fit that linear regression), could be playing an analogous role to the calculation of variational free energy. It could be purely understood as a modeling heuristic that's used to make statistical comparisons within a model. And the linear regression framework can also be used to make discrete decisions, like breakpoint regression, where you contrast two alternative models and ask: is the data more consistent with one regression, or is it possible that there's some breakpoint, a model with extra parameters, that we're going to identify? And so I try to always be thinking: we're on the freeway, and here's the active inference framework and here's the linear regression framework, and if there's going to be some claim about active inference or the free energy principle at all, about what the formalism means, I always like jumping from car to car and asking: would people, or do they, or have they, said that about linear regression? And if they can't, there had better be a good reason why they can't. For example, people who are committed to taking an interpretation of the territory based upon anything in the active inference map: do they think that that is something that's true of all different statistical modeling frameworks, and would they apply it to a linear regression, or is there something different in the active inference formalisms that legitimizes or warrants that kind of interpretation beyond the linear regression?
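As an editorial aside, the model-comparison move Daniel gestures at here, contrasting a single regression line with a breakpoint alternative, can be sketched in a few lines of Python. Everything below (the data, the breakpoint grid, the AIC penalty) is an illustrative assumption, not anything from the paper under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the response changes slope at x = 5.
x = np.linspace(0, 10, 50)
y = np.where(x < 5, 1.0 * x, 5.0 + 0.2 * (x - 5)) + rng.normal(0, 0.3, x.size)

def sse_linear(x, y):
    """Sum of squared errors for an ordinary least-squares line."""
    coeffs = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coeffs, x)) ** 2)

def sse_breakpoint(x, y):
    """Best SSE over candidate breakpoints: one line per segment."""
    best = np.inf
    for b in x[5:-5]:  # keep a few points in each segment
        left, right = x < b, x >= b
        best = min(best, sse_linear(x[left], y[left]) + sse_linear(x[right], y[right]))
    return best

def aic(sse, n, k):
    """Gaussian AIC up to a constant: n*log(SSE/n) + 2k, penalizing extra parameters."""
    return n * np.log(sse / n) + 2 * k

n = x.size
aic_line = aic(sse_linear(x, y), n, k=2)       # slope + intercept
aic_break = aic(sse_breakpoint(x, y), n, k=5)  # two lines + a breakpoint
print(aic_line, aic_break)  # the breakpoint model should score lower on this data
```

The same least-squares objective plays the "fitting imperative" role in both models; only the model space differs, which is the point of the analogy to comparing policies or models under a free energy objective.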
That was the first point; the second point was about what Alex very beautifully said, about not expecting the transcendent, that which cannot be named (or however we want to think about that space), to appear in the immanent, in specific realizations. I think it's very important that the axioms are stated in self-contained scientific argumentation and rhetoric, although unstated axioms come in at least two types. One is the secret axiom that got snuck in: we didn't mention this issue, but we're going to use it as if it were an axiom in our deployment of a broader argument later, in section nine. Alternatively, an issue might not be addressed because the argument is agnostic, orthogonal, or blanketed with respect to that axiom. Like, you didn't mention what color shirt you're wearing, not because you're going to slip it into the argument later, but because the argument being made is uncorrelated with, or not related to, the shirt color. So I think that the potentially even deliberate avoidance of a metaphysical-baggage precursor exploration in this paper is not to say that the issues are unimportant, as various other work by all the authors actually speaks to, but rather that this paper can be understood as a specific sequence of thoughts, rhetoric, and symbols that updates belief on the relatively narrow (although still quite broad) question of whether the concepts and categories of B, D, and I can map to aspects of the active inference framework, just like the linear regression slope could be mapped onto the concept of the influence of X on Y. Not that that still couldn't be critiqued in the future or nuanced by future modeling, but you could just say: the linear regression model is not inconsistent, does not contradict (not that it proves, but it does not contradict) folk regressionology. So active inference models do not contradict folk psychology, and linear regression models do not contradict or negate folk causal inference.
Anything that's going to break beyond that epistemic map is going to need to make a special case, beyond "people have interpreted it this way" or "somebody who wasn't paying attention might make the mistake of believing X." So sorry for the extended piece there, but I think there are so many important aspects that this paper either guides us on or takes us to a vista of. Dean? I'm not going to be able to articulate it as clearly as you just did, but here's my thing. Coming at the paper with my background, my question ended up being: are we comparing against something that adds two things up, a belief and a desire, and out pops an intention? And then last week, with Alex and Ryan on here, we started moving to the idea that with expected free energy, the ability to see the difference between the ambiguity and the risk was itself a difference. So it's not an adding up to come to some conclusion; it was an actual knowing that we want to try to move things that are apart closer together. And so from the standpoint of writing the paper, now the question is: can those things work in concert? Can one approach, which says these things add up and then we behave accordingly, work at the same time as this other way of quantifying, which says there's a variational part that we're trying to minimize and an expected part that we're trying to juggle as well? And I think when you get closer to the end of the paper and you talk about the world-to-mind and the mind-to-world, that isn't necessarily always additive; that's still a differential. But I don't think you as writers of this paper disqualified what's going on with "well, we can add these two things together and this is what pops out." All I think, and it's said throughout the paper many times, is that under these particular kinds of conditions, the difference seems to also determine what degree of desire or belief, or doxastic weight (I can't quite say "doxasticity"), might pertain to that expected free energy difference.
So again, that's not as eloquently as Daniel or you can describe it, but it's from somebody who's way down here, just trying to make sure in my own head: that image right there that's up there, where suddenly the circumstances have changed, and I've gone to a desire that says, well, I can't change gravity, so I'm going to make the biggest crater I possibly can, instead of the outcome I'd like, which is to continue to exist. Maybe my entity, my identity, now is how big a hole I can make here. So that's a difference as opposed to an adding up. Thanks, Dean. Alex, feel free to respond, and also we'll look at expected free energy and the simulation; it'll be great to explore those. Yeah, cool. So I think I got the point about halfway through; I see where you're going with it, and it's interesting. My first answer was: well, okay, if we shouldn't think of belief plus desire equals intention as literally involving a sum (I've been doing a lot of writing code lately for work), I would think of it as, like, we'd define the sum operator for belief and desire as just the thing that active inference does, right? Essentially, belief plus desire means: take the KL divergence. But a more informative answer is: if you want to think of it as a mathematical operation like a sum or a difference, I think you're right in that it's much more like a difference. It's much more like intention equals desire minus belief, right? So you have your model of what you want the world to be like; you see what the difference is between that and your model of what the world is like, and that gives you the expected free energy.
But, and I know you said we'd get to this in a second, Daniel, I think you can pretty much literally see this in the expected free energy, in the KL term there, right? You could expand that KL divergence between Q and P; you can write it out in a way where you're subtracting one entropy term from another, basically. And so you can see it as literally a difference. So if you want to map it onto the difference concept, it's pretty straightforward. And that's one feature of the logarithm function, as a modeling heuristic rather than a metaphysical truth: it turns division into subtraction, so the log of a ratio is the difference of the logs. There are various reasons why log likelihoods are used, and the logarithm is also a core function deployed in the KL divergence as well as in entropy terms. And then, yes, duvet, in a moment, but just to get through the variables here: it's written out in the paper, but a KL divergence is describing a divergence, not a distance, between what are on the two sides of the double line. Here that is the Q variational distribution over observations across a time horizon, conditioned upon policy (which also serves a very important function heuristically, especially in the discrete case), and the divergence is between those observations conditioned on policy and the prior preference distribution over observations through time. So that's what's in the KL term, and we explored this more in the dot zero and can return to it as well. I just wanted to clarify and contextualize this relative to some of our earlier discussions: this is the y = mx + b of this framework, and the least squares part is like fitting the minimum of something.
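To make Alex's "subtracting one entropy from another" concrete, here is a minimal numerical check that the KL divergence is literally a difference of entropy-like terms, with the log turning the ratio into a subtraction. The two distributions below are made up purely for illustration:

```python
import numpy as np

# Hypothetical discrete distributions over three outcomes:
# Q: predicted outcomes under a policy; P: preferred (prior) outcomes.
Q = np.array([0.7, 0.2, 0.1])
P = np.array([0.4, 0.4, 0.2])

kl = np.sum(Q * np.log(Q / P))          # D_KL(Q || P), the "risk"-style term
cross_entropy = -np.sum(Q * np.log(P))  # H(Q, P)
entropy = -np.sum(Q * np.log(Q))        # H(Q)

# The divergence is exactly a difference of the two entropy terms,
# because log(Q/P) = log Q - log P inside the expectation.
assert np.isclose(kl, cross_entropy - entropy)
```

Note also that swapping Q and P changes the value, which is the sense in which KL is a divergence rather than a distance.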
And so again, going beyond the interpretation of linear modeling here (not that this is a linear model) sometimes requires either describing the specifics of this model and how it differs, or taking a vaster, potentially extremely untenable position about the naive interpretation of variables in arbitrary equations, which I think it's fair to say is untenable. That puts the onus on people making metaphysical interpretations or claims about this line to address the specifics of this line. Dean and then David. Just real quick: I think, for me, it's hard to determine whether or not a relationship, or a divergence, or the difference, is material or not. And so for me, sometimes that just depends; sometimes I can actually see the edges of the relationship, and sometimes it's very, very fluid, and it's hard to determine whether there's something material about it. And that's actually baked into this formula too. So I don't know what others think about that, but there is an aspect of this that is written in, and I don't know if we can actually lock it down and say that that relationship there is material or not. Thanks, Dean. David? Yeah, I think active inference, you know, just looking at it as a statistical model, could be used to model any higher-order rules with a feedback mechanism, whether the origins of them are material or not. So we don't have to, you know, go around in circles talking about the origin of these higher-order rules. And I was just thinking about the phenomenon of folk psychology applying to humans, and the belief-desire-intention model, if we were looking at a simpler organism. You know, obviously Daniel studies ants, but at what level is there sense perception and motor control, and does that need a controller?
Or is it just a pure feedback mechanism, like action and reaction, where the decision is made based on higher-order rules? I don't know what you'd consider for an ant if you just simplify it: eat, survive. And even if an individual ant doesn't reproduce, like in an ant colony where only the queen reproduces, possibly you could say the only difference between a human and the ant is that you add reproduction, but you still have these higher-order rules of basically: survive. And maybe there's only one higher-order rule, or maybe you want to give a whole bunch of higher-order rules, and then you define that in your free energy principle: okay, so you have the sense perception, and then you have the movement, and, you know, however you define the variables in the feedback mechanism, the active inference mathematical framework could model it no matter how you define them. And, you know, you were obviously focusing on humans, but it might be simpler to focus on, you know, the most simplified species, even plants; maybe they don't have a survival instinct, or maybe they have some sort of survival instinct. But, you know, an insect clearly tries to survive: if I try to squash an insect, it's going to move in order to try to survive, and it's a clear example of sense perception. The insect senses me, you know, trying to kill it, God forbid, and then moves out of the way. What exactly caused that? You're just going to hypothesize: well, of course it has some sort of survival instinct, and the sensory perception feeds into its motor control movement, versus if it's just seeking food, or like a Maslow hierarchy of needs. And, you know, I like the terminology: are we anthropomorphizing, you know, the ant, or are we reverse-anthropomorphizing the machine? The fact is, in terms of sense perception and motor control, we understand how the robot or computer works much better than the mind, and so
are the active inference equations really more suited to machine learning and robotic automation, and then we're reverse-anthropomorphizing that onto the human mind? But, you know, certainly the equations are valuable, because largely they can be interchanged and defined however you want them to be. Thanks. I'll connect some ant thoughts to the skydiver. So Dean has returned to this example of the skydiver as being like: well, if I can't change gravity, my policies, whether mental attention policies, pray or no pray, you know, do this motor behavior or do this speech behavior (which is a motor behavior as well), conditioned on policy, are not going to be able to influence gravity. And then once one updates their prior that, also conditioned upon various action policies, they're not going to be able to open a functional parachute, then there's this discrete shift, as you've described, to making the biggest crater. And hearing about the ants over the years, it makes me think about folk biology, like the notion that people have, in terms of their received understanding, that there is a drive to survive, or a drive to reproduce, or a drive to do various numbers of things; it's almost like the BDI of evolutionary biology or of ecology. And I'm thinking about the case where a nestmate is out foraging for the colony, and then they're being, like, messed with, by somebody with a magnifying glass, or being touched, and at first it's, like, evading. Now, that behavior looks a lot like a solitary insect that might be trying to evade and survive, ostensibly to reproduce and increase its individual offspring, if one is still within the folk biological realm. And then it's like there's a switch, where the nestmate is actually like: no, the evolutionary unit is the colony, and so it goes for the biggest crater possible. And that's even embodied in the anatomy of some insects, for example with a hooked stinger that can't be withdrawn; they're committing six-legged death to compose a broader, high-fitness entity at
the colony scale. And so that's a behavioral shift, from trying to open the individual nestmate's parachute to making the biggest crater on the attacker possible, for the good of the colony. And no matter how hard you try to map that onto the behavior of a solitary insect, it will be along a potentially didactic or interesting but ultimately facetious route, because the colony is the evolutionary unit for these obligately eusocial species, and so one will be left scratching their head at various of the behaviors. And that's why the value of a descriptive framework is so important: we're able to describe various computational qualities and not be inconsistent with or contradict folk insect psychology, while also remaining absolutely impartial on questions of what ants think, for example. And so we can have computational representations of ant decision-making and cognition, perception, action, and impact, and we can link them to natural language descriptions through the active inference ontology, as well as through potentially other ontologies like ant folk psychology. And we can take that yes-and, and other people can add other yes-ands if they want, but they're going to have to take a different tack to uncouple the relationship between these computational quantities and try to interject a wedge and say: no, it does contradict folk psychology. That's a different argument from: that relationship is violating some other, third belief system that I'm bringing to the table. So I hope that addresses some of it, and thanks, Blue, for your questions in the chat there. Ali, would you like to raise anything, or take us to a question or area that you want to focus on? Yeah, one more thing about this discussion is bringing Peircean semiotics to the table, I think.
Well, active inference, I believe, is actually a kind of iconic sign, as opposed to an index, because you see, in Peircean semiotics, for an index we necessarily need some kind of causal relationship between the sign and its object, but for icons we only need some isomorphism (I won't say similarity, because similarity is a very vague term); there is some isomorphism between the sign and its object. So ideally active inference is a kind of, let's say, structure-preserving mapping between the behavior and the model. So I guess if we could differentiate between icons and indices, some of these confusions could potentially be well resolved. So: icons and indices, from Peircean semiotics. Thanks for sharing that; it connects to some of the discussions we've had on process ontologies as well. Alex? Yeah, just really quickly, I mean, I've got to say I don't know that much about Peircean semiotics, but that way of thinking of active inference, and of theories in general, as iconic representations is pretty much how I think of theories. In the most recent thing I published with Jakob Hohwy, we basically said that you can think of scientific theories as relating to the world by a relation of sort of structural similarity, with isomorphism being the limit case. So I agree with that, and I think it comes right back to a lot of what we discussed. Like, as I get older, I start to think that maybe there's something to the essential indexical idea, that "I" has a special role to play, the capital letter I in English. But other than that, I think things pretty much function descriptively, and I think description functions via isomorphism or similarity. So, totally agree. Awesome. That reminds me of the book Anthem by Rand, where the word "I" is not included in the language, and then at the end of the book (you know, spoiler alert) the word "I" is rediscovered.
And so it's kind of an allegory that traces some of these questions about the potentially special role of bodily identity. But it's very interesting that these frameworks, with this augmented perspective that Ali has raised, for example, can be seen as already having been engaged in a discourse that people might not expect. Like: predictive processing is what the brain is doing, do you agree or deny? What's the evidence for predictive processing being active in the brain versus not; is this neural signal recording that we made more consistent with a predictive or a non-predictive model? It takes that meso range of what the system is doing, and it partitions it, as we heard also from Ryan previously, into: well, predictive processing can't be invalidated, nor can the linear regression framework, nor can the active inference framework. Any specific model is now engaged in a specific model-evidence comparison, and then one can choose what kinds of model comparisons they want to make, knowing that they're not going to be able to exhaustively consider all possible cognitive architectures, but also that any invalidation, or relative model fit or adequacy, amongst different models at this specific level speaks only rhetorically to the truth or validity or adequacy of the higher-level, inviolable framework. There's a lot there to explore; duvet? Yeah, maybe we could tie this back to the eliminativism and the anthropomorphism. So if you ascribe an I to the computer, you say that's a logical fallacy of anthropomorphism, because there is no I; it's just following code. Or even with the ant (with mammals, maybe, but insects do communicate: the ant sends pheromones), if you want to anthropomorphize an I to the ant, like the ant is saying "I want to communicate with my fellow ants for a purpose," someone says: no, no, that's anthropomorphism, a logical fallacy.
The ant is just following genetic code. And then you say: well, really, for humans also we're anthropomorphizing the mind, and there is no I, no unified I related to the "me," so that even right now, when I am communicating with you fine gentlemen, that's a misnomer; it's just me following my predictive coding, and just for the purposes of language I refer to it as "I" talking to you, but that's just like a computer sending code to another computer, or an ant sending pheromones or whatever method of communication it uses; that's just the language and lexicon that we use. Yeah, it's a dialogue that has played out not only in Gödel, Escher, Bach but in many college dorm rooms and many late-night sessions: one participant or perspective believes that they have, for example, the ability to choose and have agency, and the other disregards that position; or one believes that there's something, and there's a divergence in their perspectives. I think this is just the heart of the map-territory debate in the distributed cognitive setting, as all cognitive systems ultimately are; even what is called individual behavior is just referring to collective behavior at another scale. For example, one person can say: this conversation can be modeled using active inference, or this conversation can be modeled using predictive processing, just like it could be constructed with a linear regression or a family of linear regressions. And somebody else could say: this conversation is nothing but linear regressions, nothing but predictive processing, nothing but active inference. Someone might believe that, and if they believe it, it would be true that that is their perspective. I don't know if that would make it true, but they would be stepping into a territory beyond the person who remains with both feet in instrumentalism: I'm choosing to model it this way, or I'm testing a portfolio of different models around how we could model our difference in perspective.
Maybe one can say that they have been too timid, and they haven't actually engaged with the real question, which is what's really happening, not just how you're modeling it, but that's a whole other debate. Alex and then duvet. Yeah, I just think this might tie into the stuff in section nine that maybe we'll get to; we don't have to jump to it. I'm just saying basically I agree: you could raise these questions for any of these systems, people, machines, ants, protons, whatever, and I think there's an argument to be made that there's not a clear place to draw the line, but it could go either way with respect to all of those. I think you could be anthropomorphizing in all of those cases, or none of them. I don't think it's easy to draw a line somewhere along the sort of evolutionary continuum or anything like that, although there's an argument about the temporal depth of generative models that's interesting, that's touched on in a couple of papers here and there. But I think the cool thing about active inference and the free energy principle stuff is that it applies across scales. So to me that means you're faced with a similar problem at whatever scale you consider it. I've just got to throw out there that I'll probably have to leave around 12:30 Eastern, so I'll say that now so that I can just abruptly close my window. So we're in the summary of the main argument, section nine. So Alex, what is the main argument, and also, in the contexts and audiences that you've presented it in, how do you feel like it lands, or what cognitive updates are you aiming to achieve with this argument as constructed and communicated? Is this section nine? I think I actually may have meant section eight, I'm sorry. Okay, section eight, desires and affective states. No, no, no, let me see; I meant 10. I don't want to totally bypass your question; I just think nine is kind of a summary of the argument thus far. All right, how about 10? Let's talk about direction of fit.
Yeah, so the point here was just that there's a straightforward way to generalize this beyond systems that you would intuitively describe as having beliefs and desires, and this is certainly relevant to a lot of what we're discussing. We can just talk about any system that's described by this framework, or that's describable accurately by this framework. You can talk about the difference between Q and P, and this, I think, also ties into some of the work that Maxwell Ramstead has done on the idea that the generative model is sort of a representation embodied by the system, if it's a representation; we've disagreed about that. Anyway, the generative model and the approximate recognition model are sort of always there, even if you apply these frameworks to describe very simple systems. And you can see something desire-like or belief-like in those. And we remain very neutral in this paper on the question of whether there really are beliefs and desires in these sort of extended cases; by extended I mean not necessarily paradigmatic, right? So if beliefs and desires paradigmatically apply to, like, people, and then maybe to ants, and I would say definitely to dogs, then this stuff in section 10 is about what you can say without necessarily committing to beliefs and desires in the edge cases. But you can still say that there's at least an ingredient of beliefs and desires, which is a direction of fit. That sums it up. So how could this be used, in one case for a human entity in the folk psychological way, and in another case for some entity that people would maybe say that type of folk psychology would be inadmissible for? I mean, as far as how it could be used in the case of people that have beliefs and desires, or that we think do, I think it's pretty straightforward, right?
It's just saying: any kind of folk psychological explanation you can give of a person's behavior is consistent with the description of that person in terms of active inference. I guess then the only question is: in the case of a system that you wouldn't necessarily think of as having a psychology in the same sense, what kinds of things would you want to say about it? I think there's an example that made it into the paper about a plant reaching towards the light, or these simpler sorts of feedback systems, right, where there's a set point and the thing is approaching the set point. You can still think of the prior distribution as having a world-to-mind direction of fit, right? Because it's the piece of the system that brings it about that things change so that that distribution describes reality. So yeah, you still have those. That's one example. Or maybe a bacterium following some kind of nutrient gradient or something; I don't really know about that stuff, but that's another example: you could ascribe directions of fit to that creature. And, okay, my last thought on this: I don't know why it would be useful to do that, necessarily. There might be reasons, in studying these systems, that you'd want to. But if you did want to do it, you could understand it from within the framework. Just some brief thoughts, and then Ali. So first, there's a paper by Calvo and Friston, 2017, which specifically applies predictive processing and active inference to the vegetative case. And I think from the perspective of behavioral analysis, there's also precedent in the 1989 paper, A Framework for Plant Behavior, by Silvertown and Gordon. So suffice to say that plants are included within this cognitivist modeling perspective. And then it made me think about how the surprisal, the self-information, is an energy function; it's what dampens the spring.
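The set-point reading of direction of fit that Alex describes, with surprisal as the energy function being dampened, can be sketched numerically. The Gaussian prior, the set point, and the step size below are all hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical set-point system read through active inference.
# The prior preference (world-to-mind direction of fit) is a Gaussian
# belief that the sensed state "should" sit at the set point.
set_point = 25.0     # preferred state (e.g. a light level or temperature)
state = 10.0         # actual state of the world
learning_rate = 0.1

def surprisal(state, set_point, sigma=1.0):
    """Negative log-probability of the state under the Gaussian prior (up to a constant)."""
    return 0.5 * ((state - set_point) / sigma) ** 2

# Acting changes the world so that the prior comes to describe reality:
history = []
for _ in range(100):
    gradient = state - set_point       # d(surprisal)/d(state) for sigma = 1
    state -= learning_rate * gradient  # act to descend the surprisal gradient
    history.append(surprisal(state, set_point))

# Surprisal falls as the world is made to fit the prior.
assert history[-1] < history[0]
```

The same loop with perception updating the belief instead of action updating the world would have the opposite, mind-to-world direction of fit, which is the contrast the section draws.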
And so maybe could you say, like, with folk ball psychology, the heavier ball wants to drop faster, or it wants to make a bigger crater, or all these other things. Just again, it doesn't have the agency to want otherwise, given the initial conditions and the rules of physics in that situation. But it's exactly like a person wanting to make a small hole in the paper or wanting to make a big hole. And then realizing that intention would be analogous to an energy functional being reduced by some kind of purely physical system, which does again hint towards some of the potentially non-materialist implications, the legacy that does slip into different encultured conversations around mind. Yes, Alex, any last thoughts on that? Just real quick, yeah, on the ball example. I mean, I think there is a sense, of course, in which this framework applies, but I think if anything the ball wouldn't have a desire, though the system involving the ball might, right? Because the term that would pull the ball towards the ground, or whatever, wouldn't really be in the makeup of the ball. And so whether that larger system has a proto-belief or desire, I don't know. Ali, and again, Alex, head out whenever you need to, but Ali, go for it. Well, yeah, I'm not sure if we have enough time to get into this question. I mean, if not, we can discuss it in the dot two, but I had a question about aesthetic experience and the pleasure or positive valence we get from engaging with works of art. In particular, I'm talking about unexpected plot twists in movies or novels, or different kinds of intentional ambiguities in some artworks. For instance, the works of M.C. Escher and his impossible geometries immediately come to mind, and the sense of excitement and joy they tend to induce in the audience.
In fact, playing around with these kinds of defamiliarizations, as the literary critics like to call them, seems to be more of a rule than an exception in any creative endeavor. On the other hand, outside the safety of the world of art, we generally don't like to encounter too much unexpectedness, as we've seen. So I wonder whether, or how, these kinds of aesthetic reactions can be integrated within the formalism of active inference. Are they special cases of information-seeking drives, or is there a whole other story going on here? Alex, go for it. And then thanks again for your time here. Yeah, yeah, I'll check out after this. Thank you so much. I love these discussions. I knew it would be an interesting crowd and interesting topics. I've got to say, I don't know much about what you just asked about, Ali, but I love this topic. I actually come from a background in aesthetics as well, I should say. But anyway, I know there's a lot of work on this. Under the banner of predictive processing there's a whole lot of work in aesthetics these days, I think suggesting something very similar to what you're saying: that there's clearly a relationship between surprisal, the unexpected, and predictions, and the joy and other emotions that we get out of artworks. And I would love to know more about this. I haven't had time to look into it, but I think Daniel has put up a link. There was also a predictive-processing-type paper with an incredible title, "Move me, astonish me, delight my eyes and brain," and then there's the subtitle, a sequel of a title. Perhaps in the dot two, let's sketch this out. Let's look at the formalisms and let's do some mappings. So someone might say: I believe that Mondrian is an amazing painter, and I want to go to a space where I do see beautiful art, and I want to have this kind of experience, or something like that. But let's explore that in the dot two with some of the BDIs and the aesthetics of encultured experiences.
Thank you, Alex. Sure, if I could just really quickly: I think the challenge there is going to be that sometimes prediction error is fine and sometimes it's not, and we need to figure out why, right? There's probably some complicated way of saying what the difference is between those cases. But that's as far as my intuition goes. Yeah, I'd love to discuss it. Thanks again and see you later. Thank you all. Bye bye. Great. Dean? Yeah, I just wanted to kind of jump on that point that Ali was making. I just want to make sure I've got the right section of the paper here. Sorry, it's the free energy principle, under direction of fit. Yeah, to tag onto what Ali and Alex were saying, there's a statement here that says: holding the value of the observation constant in this equation, F decreases as the approximate posterior distribution Q(s) over states approaches the posterior. The Q(s) term therefore has a straightforwardly belief-like, mind-to-world direction of fit, in that its value changes to accommodate new observations. I think that with the Escher and the Mondrian, and every time I go to a sporting event, my openness, my accommodation to what is perceived as an uncertain outcome, is the thing that draws my attention. And I think Daniel's point of, let's map that out: let's see how long some people are able to hold that ambiguity space and not necessarily see it as a risk, which is both immaterial until it materializes and is a process that I think a lot of people want to go through, because there's something to it that isn't just attention, isn't just information seeking, but may also satiate certain desires. So I think that is a point of mapping that we should maybe spend a little bit of time on, because I don't know that when you open a space up for something to eventually happen, you're imposing what the final material form should take; you're simply leaving something open in the hopes that something interesting, i.e.
an Escher painting, might end up being in the next room. I mean, this is what happens when I go to museums in Spain: I'm not looking to find the particular Dalí, I'm looking to go into that space and be surprised. So I wonder how that actually does map out using these formalisms. Yes, it reminds me of when, in the dot zero, Ryan described the different reasons for, and the different nuances of, the types of information seeking; that's very relevant. Navigating through this museum to reduce my uncertainty about where that one piece I came to see is, that's target-driven epistemic foraging, versus a more general "I don't know what is in that other room," which is going to influence my probability of moving room to room. And certainly with eye saccading, that is how we maintain a visual field that appears to have roughly equal resolution, via our generative model of vision and oculomotor saccading to regions of maximal information, distinctly uncoupled from only the regions that are most rewarding; though people's gaze does tend to linger in certain regions that might be perceived as rewarding, that isn't the only imperative. And that's why it's so special and relevant to have a cross-system and multi-scale behavioral modeling framework like active inference, which helps us understand everything from the pure static-field, eyes-trying-to-latch-onto-anything mode, to the gaze on a candle flame during meditation, or on one region of particular beauty in another visual scene. Dean? Yeah, just real quick. And I think that also, Ali, thank you, you raised semiotics, and I'm a Peirce fan, so there's a huge piece of that involved in this as well, as to what is the signal. There's a slide in here that we didn't get to, but when the ice cream cone is looking off into the distance at that ice cream truck, where do we place that, in the foreground or in the background?
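The museum example can be sketched numerically. In active inference, expected free energy scores a policy with both a pragmatic (preference-satisfying) term and an epistemic (information-gain) term; the toy below, with all numbers assumed, computes only the epistemic part: the expected reduction in uncertainty about which of two rooms holds the piece, given one look with a sensor of a given reliability.

```python
import math

def entropy(p):
    """Shannon entropy in nats of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Assumed belief about which of two rooms holds the piece we came to see.
belief = [0.7, 0.3]

def expected_info_gain(room, reliability):
    """Expected drop in entropy of `belief` after one look in `room`,
    with a symmetric sensor that reports correctly with prob `reliability`."""
    expected_h_after = 0.0
    for o in (0, 1):  # o=1: we seem to spot the piece; o=0: we don't
        # Likelihood P(o=1 | piece in r): high if r == room, low otherwise.
        like = [reliability if r == room else 1 - reliability for r in (0, 1)]
        if o == 0:
            like = [1 - l for l in like]
        p_o = sum(like[r] * belief[r] for r in (0, 1))
        post = [like[r] * belief[r] / p_o for r in (0, 1)]
        expected_h_after += p_o * entropy(post)
    return entropy(belief) - expected_h_after

# A careful look yields more expected information than a fuzzy glance,
# and a coin-flip sensor (reliability 0.5) is worth nothing at all.
print(expected_info_gain(room=0, reliability=0.95))
print(expected_info_gain(room=0, reliability=0.60))
print(expected_info_gain(room=0, reliability=0.50))  # ~0.0
```

A fuller treatment would add the pragmatic term and compare rooms, so that gaze lingers where value and information both live; this only illustrates why the uncertain room can attract a visit at all.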
And that is huge; I think that plays a huge part in the role of the third person now looking at the observer looking at the observer, and getting that sort of recursive aspect involved in this as well. So again, I don't know how, to borrow Daniel's phrase, to be able to map that, as a 0.9/0.1 or as a 0.3/0.7, and then maybe put it on the Likert scale, because I think all of that is a potential way for this to make sense. I wasn't bringing up with Alex the idea that, like, literally beliefs and desires add up to intentions. I was saying that tongue in cheek, just to be able to show the difference when something is discrete and how we try to bring them together. And this observer-of-an-observer thing, although I don't think it's Alex's and the other two authors' primary goal here, I do think they lower the bar in terms of bringing some of the more folk aspects of this into the formalism aspects. Yes, and it was a very clever and interesting response, which is that it depends on what is meant by addition. What does it mean for Dean plus Ali? Does that mean they're both going to stand on a scale and we're measuring their mass? Does it mean that we're going to stack them head to toe and then consider the length? Does it mean that they're going to interact in a certain context? Does it mean we're going to blend them up? I mean, there isn't an intrinsic definition of what the addition operator, or any operator, means in a formal system. And so it opens up the idea that we could have, whether with specific symbols or just "here's how we're using this symbol here," symbols that refer to relatively complex and multi-step operations, or operations that can be phrased as multi-step. For example, a multiplication on the processor might happen in a different way than somebody is cognitively thinking about it, but that is, as the title hints, not contradictory.
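The point that "+" has no intrinsic meaning can be shown in a few lines. In this purely hypothetical sketch, the same symbol is given two different definitions, stacking head to toe versus blending, and the formal system simply does whatever its definition says:

```python
# Hypothetical classes, purely to illustrate that "+" means only what
# the formal system defines it to mean.
class Stacked:
    """'+' as stacking head to toe: heights (in cm) add."""
    def __init__(self, cm):
        self.cm = cm
    def __add__(self, other):
        return Stacked(self.cm + other.cm)

class Blended:
    """'+' as blending: the result carries both ingredient lists."""
    def __init__(self, parts):
        self.parts = list(parts)
    def __add__(self, other):
        return Blended(self.parts + other.parts)

dean, ali = Stacked(180), Stacked(170)
print((dean + ali).cm)         # 350: one reading of "+"

dean_b, ali_b = Blended(["dean"]), Blended(["ali"])
print((dean_b + ali_b).parts)  # ['dean', 'ali']: a different reading
```

Nothing in the symbol forces either reading; the meaning lives entirely in the definition supplied, which is the sense in which a processor's multiplication and a person's mental multiplication can differ without contradicting each other.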
Not that one validates or encompasses or proves the other, but there are so many things that are not contradictory. Active inference may be contradictory of certain things. Let's make a list: what is active inference contradictory of? What is folk psychology contradictory of? And we can make all of that mapping in the right knowledge management system. And the claim of the article, as I guess Ryan's imperative was, was to make this claim that, as they've operationalized both, active inference models do not contradict folk psychology. And that's quite an interesting claim. Dean? Yeah, just real quick. I think, Daniel, your point, and I'm rephrasing it from my own view, is that you raise a lot of different joint probabilities that by their nature are arbitrary. And I think that's important to keep in mind. This leaves open the possibility of looking at things a variety of different ways. Yeah, and just to speak to the joint probability density. The joint probability density in the Bayesian modeling approach is, like, where there's a comma. So instead of A conditioned on B, it's the joint distribution of A and B. Why A and B? Why not B and C? Why not these three? And so there's always something like a periphery that the model comes from, and we can be blanketed off from it. And that actually serves as a scientific, and really meta-scientific, way to have rigor and accessibility of the models: to respect, when we're in the regime of attention of a model, what kinds of claims can or cannot be supported, what kinds of contradictions are or are not tenable to make, and then also to have the "yes, and" of respecting that that joint distribution was not handed down from on high. It is not the only possible joint distribution that could have been selected. And so we can separate things off.
How did these real researchers in this real situation come to focus on that phenomenon and pick this joint distribution? And then, conditioned on the joint distribution, what was the actual conversation, and what were the consequences and the modeling they performed? Duvid? Yeah, it's all fascinating. I guess I had two separate points I wanted to express, and they might be more general, since this is my first time on this stream, than just this paper; they're about active inference in general and the interdisciplinary studies. My streaming partner, Jennifer Shariff, who calls herself Church of Entropy; we've been three years now talking about consciousness. And I showed her the free energy principle, and she immediately said, isn't that just the least action principle? And, I mean, it's not directly related to this paper, but I could see that in terms of my second point, what I'll call goal-directed behavior in general, if you want to say that there's only one motivating force in active inference, and that's the free energy principle, which is basically a restating of the least action principle.
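Daniel's earlier point about the comma in a joint density can also be made concrete: a joint distribution P(A, B) is just a table, conditioning and marginalizing are mechanical operations on that table, and which variables get a place in the table at all is the modeler's choice. A minimal sketch with assumed toy numbers:

```python
# Toy joint distribution P(A, B) over two binary variables (numbers assumed).
joint = {
    (0, 0): 0.30, (0, 1): 0.20,
    (1, 0): 0.10, (1, 1): 0.40,
}

def marginal_a(a):
    """P(A=a): sum B out of the joint table."""
    return sum(p for (ai, b), p in joint.items() if ai == a)

def conditional_a_given_b(a, b):
    """P(A=a | B=b) = P(A=a, B=b) / P(B=b)."""
    p_b = sum(p for (ai, bi), p in joint.items() if bi == b)
    return joint[(a, b)] / p_b

print(marginal_a(1))                # 0.5
print(conditional_a_given_b(1, 1))  # 0.4 / 0.6, about 0.667
```

Everything downstream is determined once the table is fixed, but nothing inside the formalism says why these two variables, rather than some other set, were put in the table; that choice sits in the periphery the model was blanketed off from.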
And then, a second point: last week in Metro Detroit they had the International Automation Conference, with the latest in robotic arms and technology for manufacturing. On display was a lot of what we'll call sense perception and motor control; specifically, robotic arms that don't just repeat the same action, or just have a sensor embedded, but have an embedded camera sensor that will recognize the object and then create a set of rules in order to tell the robotic arm the best trajectory to pick up the object. In old-school manufacturing it would just have been taking an object and setting it in one place, whereas these have some level of machine-learning intelligence, where the computer's algorithm can intelligently determine where to put the object; in terms of, say, stacking boxes, the algorithm can determine the best way to stack the objects and then control the robotic arm to do that. And so for humans, even just at the pure level of sense perception, there is goal-directed behavior.
How do I, you know, if we have a hundred million cones and rods in our eye, how do we direct our physical capabilities of sense perception to perceive the environment, in this thing that could be modeled by these equations, call it the free energy principle or the least action principle? And then the decision-making, the goal-directed behavior: like the automation computer, it has to have a goal coded in order to act, and so it could be action and reaction, sense perception, a feedback mechanism in order to act, and the equations are describing goal-directed behavior. And that's why I'm a fan of the active inference model, because as a dualist, I mean, as a scientist, I can describe these in purely materialistic terms, and we don't have to get too much into the theology or the origin of these higher-order rules, even if you want to assume that all higher-order rules are material in origin. I think the equations in general could model it no matter what their origin is, and it's just a good framework to understand all these theories of goal-directed behavior in all aspects. And then to put a last spin on it, like Maslow's hierarchy of needs, and what belief and desire are: I would have introspection, or education, that would give me morals, ethics. As the expression goes, the eye sees, the heart desires, the mind plots, and the limbs carry out. I have the capability; I see, my heart desires, but my mind stops me and says that's wrong. And even though my sense perception saw it and my desires told me that I should do it, somehow there's this higher-order principle of the mind that tells me that's wrong and I'm not going to do it, and there's that control factor before the limbs carry out. Thank you. I'll just make some short points and then we can close off with what we're excited to explore in the dot two.
I really just appreciate what a lot of people shared, and I think over these discussions and the years we're finding some clarity around where the different implications and consequences are: what is entering into active inference as a field and a filter, what is coming out of it, what's happening on the playground. And so there's just a lot to explore. Then, live stream 45, just in the last few weeks, and not to rehearse outside of this paper's regime of attention, but the free energy principle as it's rehearsed in that paper is establishing the particular partition that's Markov blanketing off the figure from the ground: the thing which is able to undergo repeated measurements in the quantum sense, from the thing that is not that thing. That's the particular partition, with the sensory, active, internal, external states, the autonomous states and the blanket states, from sparse coupling; then unpacking the implications in terms of Bayesian inference, in other words seeing the flow of those states as doing something that is, or can be modeled as, Bayesian inference; and then finally understanding paths in terms of a principle of least action, which has been explored since at least the early path formulations of active inference years ago, and which, I think in the last two weeks to two years, has taken on a new view through a lot of the work on geometry and analysis by Dalton, Maxwell, and other collaborators. So indeed this path formulation is very relevant. And the principle of least action isn't the principle of least motion; it's not the laziest thing, it's not the one that always works, it's none of those things; but it's a framework for analyzing behavior. And with that, what would people be excited to explore next week, or what would be something that they didn't have uncertainty or a motivational drive to explore before today, but that has come onto their horizon through this discussion? Yes: Ali, then Dean, then Duvid.
Well, it was a really fascinating discussion, and I thank you all for bringing such interesting topics to the table, but I'm very much looking forward to the dot two, and I also had a couple of questions. I'm interested in exploring the phenomenological aspects of folk psychology, especially belief and desire, and also the relation of DAI to abnormal psychology, which is just briefly touched on in the paper but not in detail. And yeah, I hope the next session will be at least as fascinating as this one. Thank you Ali; Dean. I already said what I'm hoping we can look at in the dot two, but I'll just add this: I think it's really interesting that, for the term active inference, in this calendar year alone we've had a couple of papers that have started typing active inference; we've had outside-the-skull or extended active inference, and now we've got this DAI and MAI, and I think there are probably going to be more. And so, depending on how many authors are available next week, I might be asking them a little bit about that too: does it seem like a natural rollout that when you have these scale-free ideas, you need these scale-friendly qualifiers just so that it continues to make sense? Thank you; David, and then I'll make a point on that, Dean.
Yeah, and because this is my first time, I had a lot of general stuff on active inference, and I tried to stick straight to this paper; I think it'd be interesting to cover more. I asked the question: has active inference even explained one quale? Given that there's actually a model here in the equations, could we describe a single process, a single run from sense perception to decision making? And compare the human model to a simpler life form, or even say that the automation, the machine-learning algorithms for robotic arms, is in essence using these same active inference computations? As I spoke to Brack online, who I guess is an entrepreneur thinking about, like, real estate: what would the application of this be to autonomous cars? They're more complicated, but take the most simplified version, which is probably robotic sensors and controlled movement based on machine learning, and then ask: could we try to explain a single quale in terms of what happens when I sense something and make some sort of movement? And then I was asking what all of the axioms necessary would be, in terms of the higher-order rules or goal-oriented behavior: if they're not beliefs, desires, intentions, what are they, and how are they modeled into the equations? What happens when I perceive something and that's translated into action? What axioms do we have to put into the model to make it work, as opposed to, or in comparison with, the robotic arm that's coded to do that? Thank you to all those returning and here for the first time. Our second Applied Active Inference Symposium, which is going to be in July 2022, is on robotics, so we can absolutely address a lot of the applications and also adjacencies. And the sense that I got, maybe just thinking of that robotic case, is like a little wind-up toy on a table: we want to build it, and this discussion, because of how it is constituted and enacted, is helping us understand not just all of the implications that go into that wind-up toy, but
also a lot of the consequences of the wind-up toy. And then there are people saying: but wasn't the table constructed, and wouldn't a different table have resulted in different wind-up toy behavior? And who placed it on the table, and why are tables designed that way, and isn't "table" just a functional consequence of anything being put on it? And what are we doing here? How did we decide on that? How should we decide on the next wind-up toy? It just takes it to so many amazing places. So thanks to everybody in the chat and joining live, and of course to the authors for the paper. See you in 46.2. Thank you. Bye.