Greetings everyone, welcome to ActInf Lab livestream #23.2; today is June 15th, 2021, and we're going to be having our dot-two discussion on "Embodied Skillful Performance: Where the Action Is." We'll see who else joins us live as we go through the slides and the paper, and of course, if you're watching live we'd welcome any questions or comments. Welcome to the Active Inference Lab, everyone. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links on this slide. This is a recorded and archived live stream, so please provide us with feedback so that we can improve our work. All backgrounds and perspectives are welcome here, and we'll be following good video etiquette for live streams. Again, on June 15th we're having our second group discussion on "Embodied Skillful Performance: Where the Action Is." This is sort of our jumping-off point; let's see if we can make it a skillful one. Today the goals are really just to be discussing and learning: hearing the questions and points that people found really interesting when they were reading the paper, and seeing where our discussion can spontaneously go as participants join and leave. And hopefully we get some fun questions from the chat. We're just going to walk through the paper and the slides, going where we need to go to hopefully connect some dots. With that being said, let's just go to the introductions and warmups. We can each give an introductory thought on the paper if we'd like, something we liked or remembered, or something we're looking to resolve or reduce our uncertainty on by the end of the discussion. So I'm Daniel, I'm a postdoctoral researcher in California, and I'll pass to Dean. Hi, I'm Dean, I'm in Calgary, and I'm becoming a more embedded personality in this Active Inference Lab world. And I'll pass it over to Steven.
Hello, I'm Steven, I'm based in Toronto. I'm doing a practice-based PhD through the UK. I'm interested in participatory theater, community development, and ways to understand how people interact with each other in groups and individually. So this paper is quite relevant, as it opens up some new avenues in that. And I will pass it over to, is Dave here? Dave, hello, Dave. Yeah, Dave, if you can speak, otherwise no worries. Maybe you're just listening in. So maybe just a quick question, Steven: where do we see the connection between skillful performance, which is often thought of as motor and bodily, and social practices, which are amongst bodies and use verbal or symbolic behavior? Yeah, I mean, this is the kind of question, I think, that's arising more and more as people question psychology or community psychology, which works on the individual, or sociology, which works on this sort of mass of people. But when you want to work with intersubjectivity and sort of small groups and understand the dynamics that are going on there, you could ask phenomenological questions about what it's like to be in that state, which is a bit more direct than, say, someone's psychological perspective on it, but it's still not quite working. So then you say, well, let's get into action here. Let's get into what's actually happening in and amongst people. And often if you jump straight into action, that's another abstracted area. But if you're in the area of practice, you're kind of in between. It's something that people are doing, something that people are doing between each other, but you're still able to talk about how it's happening, how it feels. So I think this question of practice takes you into the sorts of areas that get used a lot in the arts: arts practice, theater practice, all these types of areas. So I think it's quite a nice nexus that's normally a bit impenetrable. Cool.
It makes me wonder: do groups have skill, or is skill something that only bodies have? You see some football team or basketball team do some very coordinated play, or two dancers improvising. Can that be reduced entirely to the skilled embodied performance of individual bodies? Or is there a way in which we could talk about a group embodying skillful performance, especially because so many performances, except for, what, standup comedy and a few other things, are group performances? And even those individual performances are sometimes in feedback with a group; it's just different roles. That's a good point actually. I mean, I think that's really the exciting field of practice that's gonna be growing over the next while. And I sense that it's almost impossible to get that from a snapshot. You have to have something that's been progressing over time, or, I don't know whether that's true, but I sense there's more use in understanding how things change over time and getting the dynamics than there is in trying to analyze a snapshot. Dave, if you have any comments, you can unmute and I'll see it. Or Dean, any sort of opening thoughts? How does this dot-two meet you today? Well, I'm gonna come at it from a little different angle. I think that word "between" is huge; it's the thing that befuddles us, and yet it's the thing that we probably want to know the most about. And on this idea of embodied skillful performance, I was gonna ask the author some questions around how she relates it back to the parts of the brain that resolve these problems. Is it a spatial relationship thing, like what the hippocampus tends to provide us some insight on, or is it a caudate nucleus thing, which says that we sequence everything out: we turn left, we turn right, we release the ball.
And I was just curious about which, because those two things work, but when one is on, the other one is off; there's kind of a binary relationship between those two parts of the brain. And so I wanted to figure out how that toggling back and forth is part of this skill that we are witnessing. Nice point. There's the more cognitively driven versus the more bodily driven, and that handoff is where we see skillful performance and where we see choking. It's where we see all these dynamics between cognitive processes and action processes. Okay, we see another Dave. Dave, if you wanna give any thoughts, just feel free. Yeah, I'm just listening in, thanks. Nice, right on. That's the real OG way to listen in, in the Jitsi rather than in the live stream. Yeah, so we'll see who else joins in today, but that's a great question about the brain pieces. And then I always think about other species: if insects can have embodied action, and that seems pretty fair to say, can they have skillful action, and how would we know? And if they do, the layout of their brain and body is so different that it wouldn't map on; they don't have a hippocampus, they don't have a prefrontal cortex, they have other brain regions. So we don't need to get too locked into one implementation of these processes based upon human neuroanatomy when we know that there are other neuroanatomies that can deploy similar processes. So it helps us keep an open mind to the multiple realizations of active inference, or of embodied skillful performance, given the sheer diversity of evolutionary systems. Let's take an overview of the roadmap. This paper has a form that we've seen several times before, which is, in the first half, taking some non-active-inference body of research.
So in this case, it's skillful performance and optimal motor control theory, and investigating this previous body of research with a critical lens to understand what it is really saying and what its assumptions are. In this case, there's an instructionist assumption, implicit or explicit, that's drawn out through the exploration of previous approaches to skillful performance, and specifically the motor representations and motor commands that are implicit in optimal motor control theory. So that's part one. Part two goes from optimal control to predictive coding and active inference. And last time we talked a little bit about similarities and differences as you go from optimal motor control to predictive processing to active inference: what is generalized as you move from A to B to C, what is different, what kinds of issues might you sidestep or introduce. And then in the last sections of the paper, motor control is positively reframed as an interactive sensorimotor engagement with the world, which puts us very squarely in the domain of active inference rather than in the instructionist domain of optimal motor control theory. Greetings, Blue. So that's a general pattern that we've seen before: the first third of the paper is a background on previous approaches with a critical lens, the second part shows some sort of transition from theory A to B to C, and then by the third part there's a positive argument as to why active inference is more in line with how things are, or might be more useful. Steven, go ahead.
So yeah, further to our conversation earlier, I think this showing of the instructionist and then the interactive approach is really useful for having a plausible way for affect and social dynamics to interplay with action, because under an instructionist model it is kind of hard to imagine how that almost mechanical, message-passing type of approach can integrate into our social dynamics, whereas the interaction in this approach does give that. So I think that's quite good. It can frighten people off, because it brings in this other philosophical piece, which I think a lot of people would feel like running away from at first, but actually it does offer a more humanizing or biological way to engage. Cool, let's just go there. Blue, of course, raise your hand anytime you wanna add something. But this instructionism versus interactionism, I think, is a big piece that we'll walk away from this 23 with. It's something that we can take on in the case of embodied skillful performance, but I think we're gonna be able to apply it to education, we're gonna be able to apply it to team collaboration, and so we can definitely return to this distinction between instructionism and interactionism. So Steven, just from your experience with drama or other kinds of work: how do people implicitly use instructionism versus interactionism, or what kinds of scenarios support or constrain instructionism versus allow for interactions to emerge? Well, I think what tends to happen is that in the models people tend to use, they're kind of in one camp or another. Mostly it's almost implicit that there's some sort of interactive stroke flow. It probably goes even further in the performance arts world; they talk about flow and ease and that type of thing.
And what's also interesting, and I know they talk about this in the Michael Chekhov acting approach, is there's always this battle between ease and form. So a dancer gets more form, they get more technique, they get more proficient at their technique, but they might lose the ease that you have as a beginner. So you're forever trying to add more technique without becoming this kind of rigid automaton. So I think there's that kind of side to it, and in that whole area they just go off and it takes care of itself, but it becomes a little bit of a silo in a way. And then you've got people working in more straight activity theory, Vygotsky kind of stuff, much more about what activity happened when, and they are also in a bit of a silo. So I think you tend to find that different areas are a little bit siloed in their approaches to getting around some of these challenges. So I think the way of trying to break beyond what you can analyze, which often you have to look at instructionally because you kind of have to break things down into steps, doesn't mean you need to keep it in a stepwise form in terms of how we understand what people really do. So there's this kind of gap that this will help to make plausible. Interesting there how we capture the utility and the apparent advantages of instructionism. Some of them are written here, like the ability to separate different instructions. That seems pretty important. If you say, well, how am I gonna cook this multi-stage dish? And someone just says, here are all the ingredients and here are all the behaviors, but I'm not gonna tell you what order to do what in. If we can't separate and sequence events, then it's going to be hard to carry out a performance correctly. You can't just switch the order of musical notes in the sheet music. You can't just do things in a different order.
So how do we get some of the features that we're looking for, like separability and modularity, and take them into a fully interactionist framework? One distinction the authors raise is strong versus weak instructionism. And they'll end up concluding that both of these forms have similar underpinnings, but there's sort of a continuum. It's kind of like a motte-and-bailey type argument. The strong instructionism holds that it's entirely the neural representations that, on their own, specify the movements that have to be performed, versus a more dynamical-coupling version of instructionism. That'd be like: when you're tapping the gas, it's an instruction to the engine, but it's not the entire go-faster cue, given the background of ecologically normal processes that enable it to play this role. So that's kind of interesting. Blue, any sort of overarching thoughts, or anywhere you think it'd be fun to jump in? No, I'm just trying to figure out where I'm at still. Indeed. Well, there'll be a few other places, a few other notes that we wanted to discuss. Dean mentioned one thing; maybe we can even have a dot three and find out what a dot three would mean, or a dot two one or something like that, dot 21. So one piece was about the neural elements, because in the figures we do see brains. However, these are quite schematic. It's not saying that these are the brain regions where these types of things happen, which can be a little ambiguous, because it's not actually an anatomical representation, yet it's superimposed on top of a brain. Okay, so one piece we wrote down was about the neural, and I think there were a few. Yeah, oh yeah, Steven, go for it.
Yeah, just tying this in: as you mentioned, the weak and strong distinction is kind of interesting, because it's trying to give this scientific feel, but it doesn't feel very satisfactory to have to say it in terms of weak and strong, because you're left asking, well, what does that really mean, and how does it shift between the two? I mean, I can understand what's been said, but it gets into this kind of fuzzy version of instructionism. So I think that sets up some of these questions: as soon as you move away from a strong instructionism, it becomes harder to hold it together using the word instructionism. So it's quite a good illustration of that, because it's sort of unravelling. You can see how the second paragraphs are a lot longer than the first one, basically. Yep, great point. And it's also a common approach to rigorous philosophical argument: instead of making an argument against the total category, you introduce a distinction within the category, and then you might find that one of those subcategories has some feature that the other one doesn't. Instead of being against all wars, well, there's the good ones and there's the bad ones, and here's my distinction for just the bad ones. Or it could be that by making that distinction, you can then say: actually, this holds true for both of these categories or subcategories, and that's how I've made my claim against all X, by first breaking it down. And they do again find that there are, of course, differences between these two forms of instructionism, but that they end up taking on some of the same assumptions. Cool, there's probably a few other pieces we can explore. This is the section in the paper that I'm sharing on the screen, which says the formulation of sensorimotor control in terms of optimal motor control theory heavily hinges on two different but highly interconnected assumptions.
So let's see what the assumptions are, because it's probable that active inference is going to be contrasting or doing something different. One is the central specification of descending motor commands and their efferent copies in the form of detailed low-level instructions for control of the motor plant, specified in terms of an intrinsic frame of reference, i.e., extension and contraction of muscle fibers. So that's that instructionism. And two, a separation of forward and inverse models operating on complementary aspects of action planning and execution. So the first piece is that instructionism: what's being sent descending, toward the body, are instructions. And the second piece is a separation of the forward and the inverse models operating on complementary aspects of action planning and execution. The forward model is sort of a planning model, and then the execution is, okay, given what I planned, how am I going to execute that? And those are two different streams in optimal motor control. And so we've been talking about how active inference deploys interactionism, and in interactionism we have a contrast with the instructionist point one. And then, through a unified generative model in active inference and the reduction of uncertainty in that unified generative model, we're going to move beyond this separation of the two forward and inverse models. Steven, and then anyone else, or any questions in the live chat? Yeah, this is quite helpful. It gives that realization that optimal motor control theory can be very precise. You could imagine someone making a robot, because you can measure everything, you've got access to all the data, and you could make something with a thousandth-of-a-millimeter precision, which humans or animals can't do, but it does have a bit of a limitation in terms of being able to integrate.
And I think that, I don't know if you've seen any of the papers around the idea of fields being around a robot's joints. So I have a field around the hand, a field around the..., and I just move into the field rather than trying to work out what the actual joints should do. And they outcompeted other robots within like two or three months. So the idea of working with some sort of fuzzier dynamical field, and just working out what works most efficiently or best, even though you're probably adding a lot more fuzziness into the actual understanding of the measurements, actually has a lot of benefits in terms of learning how to do something quickly. Okay, I think there'll be a few more cool points to return to on this notion of field. But I think it's also helpful here to look at how they distinguish the forward and the inverse models. Here, a forward model takes a system from an intrinsic to an extrinsic frame, predicting the effects of different movements using musculoskeletal plans specified by neural activity. So again, that's that instructionism, because we're in optimal motor control. The forward model basically translates plans, which can be cognitive, into implementations: the translation into motor commands and their consequences. On the other hand, an inverse model builds motor commands by inverting the causal chain. First, that inverse model leverages a value function of states. This is where we get, in optimal motor control, the non-negotiable introduction of value as a central heuristic. States have to be assigned value, because value is what's used to determine which plans to select.
And that value function is leveraged to form a mapping from desired target states, which, again, is built into active inference in a sense through the preferences over certain types of states; but in this optimal motor control framework, the desired target state has to be specified in a slightly different way, as an actual preferred state rather than as preferences over states. And that's in the coordinate system based upon the external consequences of movements. So: I want my arm to be at a 90-degree angle. And then that is translated, or mapped, to a set of intrinsic coordinates in the space of muscle fiber activations. My arm is at a greater-than-90-degree angle, so that needs to map to activation of bicep firing patterns so that the arm can contract and the angle becomes smaller. And that's why frameworks based upon optimal motor control are sometimes characterized in terms of force control, because all the motor commands specify is muscle forces and joint torques. So that's the macro-to-micro mapping: we have preferred states, and the question is, given that there are so many degrees of freedom in the joints, how do we go from a potentially broad set of preferred or acceptable states to a specific pattern of muscular activations? How do you go from "make your hand rest where it's comfortable," which is a relatively broad plateau of accessible states, to a position where each joint is going to be at equilibrium? And these two assumptions that we've been exploring here, again, one, the instructionism, and two, the separation of forward and inverse models, that architecture rests on the assumption that value, valuable states, is what causes action. And that's what that first 2011 paper flipped on its head: in models from OMCT, sequences of action are selected according to the value function of their states.
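As an aside for readers, the extrinsic-to-intrinsic mapping just described can be caricatured in a few lines of code. This is a minimal sketch with invented arm dimensions (nothing here is from the paper): a forward model predicts hand height from an elbow angle (intrinsic to extrinsic), and an inverse model recovers the "motor command" that achieves a desired hand height (extrinsic to intrinsic).

```python
import math

# Toy sketch of the two OMCT components: all names and dimensions are
# illustrative assumptions, not taken from the paper under discussion.

def forward_model(elbow_angle_rad, upper=0.3, fore=0.25):
    """Intrinsic -> extrinsic: predict hand height (meters) from an
    elbow angle, for a shoulder-fixed two-segment arm."""
    return upper + fore * math.cos(elbow_angle_rad)

def inverse_model(target_height, upper=0.3, fore=0.25):
    """Extrinsic -> intrinsic: recover the elbow-angle 'motor command'
    that the forward model predicts will reach the target height."""
    return math.acos((target_height - upper) / fore)

target = forward_model(math.pi / 2)   # hand height at a 90-degree elbow
cmd = inverse_model(target)           # inverse model recovers the command
print(round(math.degrees(cmd)))       # -> 90
```

The point of the toy is only that the two models are separate maps that must be kept consistent with each other, which is exactly the architectural separation the paper criticizes.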
So value is something that's cognitive and then induces action, but that gets flipped in the 2011 perspective, because value is a function of the states, not a cause of them. So yeah, as written here, value is an attribute of states that are caused by movement. It's a consequence, not a cause. And when we get loops, you know, which one's a cause of which? Locally A causes B, but then you pull back a level, and maybe the relationship can also be seen in the other direction; or maybe with feedback systems or embodied systems, we move beyond any single linear framing of what causes what. But regardless of the direction, or whether value is an attribute of states, a consequence, or a cause, the point is there is still this central value parameter in optimal motor control theory. And that value parameter, which is familiar from reinforcement learning, like learning preferable states based upon their predicted value, is something that we're going to be able to build on in active inference by pursuing not just value, but also allowing for reduction of uncertainty in a generative model, which can include pragmatic or value-driven elements, but also elements that include curiosity-like drives. Steven, and then anyone else. And this ties in with Casper Hesp's work on affect. You could imagine that the affective states are kind of generated on the fly at different scales. So as you start to move, maybe slowly, as you practice, these value states are almost revealed rather than being available in advance; and then maybe there's a state, but only once the whole action is complete do you enter that value state, which maybe only appears as that degree of surprise once all is realized, and it's always going to be slightly different each time you do something. So this is quite cool. I've moved here to that kind of vector field representation of the difference between value-only learning and performance.
Here's our robot, who's only interested in taking a ruler, putting it on the hill, and then going in the direction the ruler points uphill. That's the scalar approach to pursuing peaks. In contrast, we have that peak-climbing or hill-climbing element in active inference as the irrotational or longitudinal current, but we also get this solenoidal element. So I'm wondering, talking to people about performance, for example, or training in some way: how would it be different to promote this sort of value-and-equi-value flow model rather than a comparative reward-driven model? Like, if a learner is constantly assessing, whether it's a multiple-choice test or baseball pitching or something like that, if a learner is always assessing the value of states, and everything is framed as value maximization, and you're gonna want to be producing accurate states, you want to be getting your pitches more accurate because more accurate is more rewarding, I'm just wondering how it changes training, or the experience of practice, to know that there are times when one is pursuing value, and that in the long range, pursuit of value can be achieved through this rotational or exploratory element. Blue, and then anyone else? So, just talking about pitching specifically: if you're gonna pitch a ball, you can hit the bullseye, you can hit the target, get it right over home plate, but you never explore. What about spinning the ball? What effect will different spins that you put on the ball have on the way that the batter hits it? So there's the accuracy versus the finesse of it, and I think about that in terms of dance also. If you don't explore what it feels like, if you're only trying to accurately make a movement, you're never exploring the finesse element of it. So there's no room for perfection or ultimate mastery, I think. And where does skill become mastery?
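For readers who want a concrete feel for the irrotational-plus-solenoidal picture in the vector-field slide, here is a toy sketch (our own invented example, not from the paper): a 2-D value landscape peaked at the origin, where a gradient term climbs uphill and a divergence-free rotational term circulates along equal-value contours, so the trajectory spirals in rather than marching straight up the hill.

```python
import math

# Hedged illustration with invented parameters: the value landscape is
# V(x, y) = -(x^2 + y^2), peaked at the origin.

def grad_value(x, y):
    # Irrotational (curl-free) part: ascend the value gradient.
    return (-2 * x, -2 * y)

def solenoidal(x, y):
    # Solenoidal (divergence-free) part: rotate 90 degrees, which is
    # orthogonal to the gradient, so it explores an equal-value contour.
    return (-y, x)

def step(x, y, dt=0.01, explore=1.0):
    gx, gy = grad_value(x, y)
    sx, sy = solenoidal(x, y)
    return (x + dt * (gx + explore * sx), y + dt * (gy + explore * sy))

x, y = 1.0, 0.0
for _ in range(500):
    x, y = step(x, y)
print(round(math.hypot(x, y), 3))   # distance to the peak spirals in toward 0
```

Setting `explore=0.0` recovers the pure "ruler on the hill" climber; with the solenoidal term switched on, the agent still reaches the peak, but it sweeps around the level sets on the way, which is the exploratory circulation being discussed.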
It's sort of like, at least in chess, it's the masters who appear to challenge some of the basic rules, like not castling early because there's some deeper plan, or a jazz musician who moves beyond the framework of regular timing to introduce some other element. So that play element becomes especially important. It's almost like, if you never learn how to do it correctly or normatively, you can't throw the baseball. So you gotta get it over the plate, but then the mastery level, after performance has been demonstrated, in a sense involves inventing a new way to pitch. There's a fraction of baseball pitchers who throw sidearm or throw a knuckleball, all these different unconventional approaches that aren't the value function that they learned initially; or maybe it was their biomechanics, and that was the value function that worked for them, and some other value function just wouldn't have worked with their body and brain. Steven, and then anyone else? Yeah, thinking from the perspective of coaching, because my supervisor works a lot with systemic coaching and performance coaching: this becomes a more unifying approach. It might be that for specific things, optimal control theory is quite useful for just unpacking something, but if you want to look at performance coaching more broadly, or developmental coaching more broadly, and how they relate, this could be useful. And I'm just thinking about Christian Eriksen, you know, the soccer player. I don't know if you've heard: he collapsed during the Euros and his heart stopped, and all the team were around him trying to, and then they had to play the game in about an hour's time. Now, their ability to perform was going to be very hard. It's not a performance coaching question at that point. There's these other dynamics going on.
And I think that this ability for active inference to at least unify how regimes can change, and how it could be the environment, the system, the emotions, all these things, is quite useful. Thanks. Dean? Yeah, I appreciate what you're saying there, Steven. Aside from the step-by-step part that I think the caudate nucleus tends to perform when we're trying to climb that little mountain there robotically, there's another factor I think is kind of interesting. And that's, if it's embodied and it's skill, there's probably three kinds of math at play. There's materialization, the how, sort of the engineering of it. There's the what's in, what's out, what's superfluous and what needs to be an element within the performance. And then I think what this paper does in the interaction pieces is it brings the "probably" into the conversation. And I think you need all three kinds of calculations in order to get a real grip on what's going on. Maybe not an optimal grip, but at least a fuzzy grip on what to do next. I think if you only have two of those elements, you've got the perfect conditions for an instructionist approach. When you add the "probably," whether it's Eriksen, what did he do, text or email all of his teammates and say, go finish the game, we gotta finish this game? And I think Steven's right: you can give an instruction to your teammates and say, I'm fine, but you just had a cardiac arrest, and we probably aren't going to be able to wipe that from our memory in that moment. And so it's a little bit more complicated when we add "probably," but I think we need "probably" in order for it to be interactionist. So that team, in the middle of the game, the personnel of the team changed. In the individual story of motor control, it's like, but that's not between you and the soccer ball, so why should it change performance?
But when we have the deep generative model of the individual that includes affect, that's one way to get to the loss of a teammate influencing performance. And the second way would be to genuinely allow teams to be skillful performers, in the sense that there's a distributed cognitive process on the team, just like there is in the body. And so the team actually changed: whoever steps into that person's position on the field is gonna have a different approach. It'd be like switching out one arm for another arm, or one forager nestmate for another nestmate. It's going to change the emergent properties of the group in a way that's an explanation at the group level, rather than requiring us to come back to just the optimal motor control, value-driven, instruction-guided account of a body. Blue? Yeah, "probably" is like what gives active inference its power, right? Like, probably if I don't eat breakfast, I'm gonna collapse in the middle of a field, or playing a game or whatever. So when you have that "probably," it allows you to guide all of your actions toward a given intention, I think. Thanks, Blue. Steven? Yeah, and I also think, with "probably," because "probably" is often still in that value world: probably I will win this game or that game. When the word uncertainty is put in, uncertainty is a much scarier word than probability, because it has that risk component. So it's really interesting that you say that: we tend to say probably everything will be okay, but active inference is like, you don't know. It's all unknown to some extent. Here's another nice piece from the paper that I think can continue this discussion about probability, and maybe also how active inference reframes it: the physics of flow embedded in active inference. Now, it's an ongoing discussion exactly which formalisms get us to a physics of information flow over sense, cognition, and action.
So, other live streams, other days, other years; but let's just roll with this physics of flow. It's accounting for sensorimotor behavior to show that motion in a biologically realistic state space irreducibly includes two orthogonal kinds of motion: the irrotational and the solenoidal components. So that's what we're looking at here; this is a paragraph that's representing that piece. Instead of value only, we have value and flow. And again, this is key to reiterate, because it's something that isn't included in scalar reward representations. So the former, the first one, the curl-free component, is what allows the flow to climb a gradient towards more valuable or probable states. Now, that's one of the head-scratchers in active inference: that preferred states are likely states under an optimistic deep generative model. And that is the alignment that allows us to reduce uncertainty, and by way of reducing uncertainty, achieve our target outcomes. And so when we align the deep generative model in an optimistic way, such that we believe that likely states are preferred ones, that it's likely that tomorrow I'll have a regular body temperature, it's likely that tomorrow I'll be in this preferred body position, then we actually get to pursue value as a consequence of reducing uncertainty. There's more to say on this, but Dean, go for it, and then Steven. Yeah, and I think, Daniel, what that points out is that there's a dependency, for people who want to introduce statistical propositions, on what the underlying situation is that we're observing. There's a dependency on the derivative math and the accounting math. You can't figure out what "probably" is without figuring out how and what, and having a sense of confidence around what "how" and "what" are. So I don't think it's just the introduction of active inference; active inference is also wholly dependent on a strong foundation, in terms of really being able to get from priors through that hidden state.
And how do we do that? We have to have derivatives and we have to have accounts. So I think there's a real codependency, back to the 'between' comment that Stephen brought up. Real cool, thanks. Stephen? Yeah, that idea of things going through cycles. I'm not going to go into ergodicity again, but you know, it often comes up. From what I'm understanding, that's pretty much not present in optimal motor control theory, because it's about: you go there, and then now you have a vector, and that vector is a thing, and that thing gets optimized. But you never have that in-between thing. I think, like Dean was saying, there's that kind of in-between, and you're kind of holding that in-between, but that only makes sense if you keep revisiting the in-between. Like you need to have ergodicity of whatever (I mean, that's an open question again for another day), but somehow that revisiting is understanding, because otherwise you've got no way to make any sort of information gain or information extraction unless you have a revisiting to compare. Interesting. And it's one of the key pieces again: that we reduce our uncertainty and achieve preferred states through that process. I'm just imagining, you know, something from organic chemistry. It's like, you wanna study up, cram for your final in O-chem, you wanna pass. So are you going to take a reinforcement learning perspective or an active inference perspective? Well, the reinforcement learning perspective is like saying you'll get the big reward when you pass. So what's the most rewarding problem that we should be solving, or what's the most rewarding chapter to review? But you don't know, because if you knew, then you'd pass O-chem. So you're kind of right back at square zero with value, because you framed learning as a value question. In contrast, the active inference learner would be like, oh, sure, my preference is to pass. How am I going to reduce my uncertainty about the critical path of passing?
Well, I haven't even opened the textbook. Let's go to the table of contents first, or something like that. Like, reduction of uncertainty gives a first step, and then it even allows for: okay, maybe that wasn't even the right first step, or maybe that was a good two-step path and now I'm ready to go a different way. But I'm trying to reduce my uncertainty about how to get to what I prefer, which is passing, rather than: I've a priori valued this future state, and now I'm going to backpropagate value at every single step of the line, and then I'm gonna stress myself out if this specific chapter isn't maximally rewarding, because everything again has to get reduced to value in this value-is-everything framework. But in the uncertainty reduction framework, we see sort of this mist of uncertainty, and then regularities, and especially dependable ones, emerge out of that. And so we're planning amidst uncertainty, with preferences in mind, trying to figure out what to do. That's planning as inference amidst uncertainty, which is just a world apart from planning to maximize value through time, given what model, again, of how value is assigned. It's just a recursive question, and value learning doesn't give us that higher level answer for how we should actually go about planning to obtain values, let alone critiquing value as a pervasive framework. Steven? Well, actually that's very interesting, talking about organic chemistry, because of experiences at university: I had an absolute nightmare with my organic chemistry.
I've since given my textbook to someone. I'd got behind on the organic side, and it was this, like, two-inch-thick textbook on organic chemistry, and every time you go into it, it's got so many words, it's long names and all of this. And it was really interesting, because we had this class with a new professor, and he got a few people to go up to the board, and it became clear that absolutely no one knew the organic chemistry, and no one had seemed to pick up on this. So we're now in the second year, and no one knew what they were doing, or I think it was even the third year. And he recommended a little thin exercise book, with exercises not on all the different chemical reactions and all the structures, but just the key reactions, and said just to go through these exercises in this thin book. And basically that got me through the exam and got me on track. Not going for the big book and the memorizing and the optimal control; it was just: work out the five types of reaction and do the exercises. And I think that speaks to what you were talking about there: which ways do the electrons move? And that suddenly really, really changed everything. There's also some fun drilling happening, of course, all around me. Welcome back, Dave. So this section right here, it's a nice challenge in increasing the difficulty of speaking, but it's fun. This is a challenge for other frameworks: this idea that the two flows together provide a richer framework for expressing skillfulness as a process whose characteristics go beyond a simple gradient ascent on value. So we saw that 'Reward is Enough' paper recently, and some participants at our lab were kind of curious about writing a response, because that's the whole argument: they said, hey, reward is enough.
The kernel can be reward, and that's how you get all of these intelligent behaviors, provided that you have these second level tweaks on the model or these bolted-on modules. And then the contention of the authors here is that value functions, and indeed any motor scheme based upon functions that return scalars, are not up to the task of modeling the variety of skillful acts describing human behavior (any kind of behavior, not just human), because by construction they cannot account for the solenoidal aspect of flow. So that's pretty interesting: when we have to bottleneck into a single number, we lose a lot of the expressivity that allows explore and exploit behavior to almost coexist, because we're collapsing onto a number line. So there's extremely limited expressivity. What do you want for your main dish and your side dish? And you just say seven. You can't express two things with just one number. But if you could say seven and three, you could express something that's more in a two-dimensional space. Blue? So I think you bring up an interesting point about any behavior or just human behavior. And I think about: there's dolphins that use tools to crack open, not fish, but sea urchins. So there's dolphins that use tools for eating, and then there's dolphins that don't. And this is, like, hereditary; it's an example of cultural evolution in dolphins, actually. So it's hereditary, but only because it's passed on through learning and social learning, right? And then we saw the motor plant, right? Like the motorized plant. Does the plant learn skillful performance? I mean, is this a skillful performance on the part of the plant? So it's a really interesting question. I mean, we think about active inference as applying to all non-equilibrium steady-state systems, but does this maybe apply only in the human sense of movement, or what about, like, tribes of humans?
Like, what about in a nomadic kind of population, would this apply there also? Cool. Here's another nice piece in this section that I think will be useful for us to learn about. So technically, it's easier to read when it's already been written, when you're getting the drilling experience. That's when it's the hardest to skillfully improvise, but reading is a little easier. Technically, active inference extends popular predictive coding models used in neuroscience, where, crucially, only perception is cast in terms of prediction error minimization. So that's the classic Rao and Ballard '99 paper. That's sort of a mega citation, and there's a huge amount of subsequent work, and predictive processing or predictive coding is just looking at the relationship from sense to updating of a generative model of sense. So it's a signal processing framework, and this is something I know that Dave is a big fan of; I know that Dave is a fan of talking about the signal processing and the audio processing roots of active inference. So this is just on the inbound, just from sensory information to estimates of that generative model of that generative process of the sensory data. The active inference framework extends this account of predictive processing to model motor control, and explains action selection by appealing to the minimization of divergence between predicted sensory data and actual sensory data. So here we have the exact same divergence minimization (it will sound better on the live stream, my Jitsi colleagues, don't worry). It's the same divergence that's being minimized, which is the difference between observed and expected sensory data. So in predictive processing, it's about sense and perceiving sense. And so you want to minimize the divergence between what sensory data you expect and what you're actually getting.
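That perception-side loop can be sketched in a few lines. This is a hedged toy, not the paper's model: the generative mapping is just the identity, and the learning rate and step count are arbitrary choices.

```python
# Toy predictive coding step (illustrative only): perception as inference,
# updating a belief mu by gradient descent on squared prediction error,
# i.e. shrinking the divergence between predicted and actual sense data.

def perceive(mu, sense, lr=0.2, steps=25):
    """Repeatedly move the belief toward the sensed value. With the
    identity as generative mapping, the predicted sensation is mu itself."""
    for _ in range(steps):
        error = sense - mu        # actual minus predicted sensory data
        mu = mu + lr * error      # belief update that reduces the error
    return mu

belief = perceive(mu=0.0, sense=3.0)   # belief converges toward the data
```

The action side that the discussion turns to next would close the same loop the other way: instead of changing mu to match the data, the agent acts to change the data until it matches mu.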
But active inference adds planning as inference into the loop, and explains action selection also in terms of the exact same divergence goal of predictive processing, which is, again, to have your observed sensory outcomes come into line with your expected sensory outcomes. Dean? Yeah, one of the things I used to share with people: so today I'm wearing my murder-of-crows t-shirt, because I used to study a lot of corvid behavior. And there's still some question as to what sort of verbal cues there are, what instructions corvids are able to share, because they are social animals and they are problem solvers. But you don't have one corvid standing to the side as another corvid solves a problem, giving them instructions. So I think it's kind of interesting, when we're looking at this behavior question, whether we're the only species that are actively inferring, because I think that there are other species out there that you could make a case are doing that 'probably' form of questioning and answering as well. So, yeah, it can be tools. It can be based on social constructs. It can be based on caching, so deep memory. And again, there's a dependency, I think, for behavior that uses active inference to also have some grounding in how and what. So I don't think that active inference in and of itself will allow us to minimize anything unless we have the derivatives and the accounts. So I think it's a virtuous triad here, but I think what this paper is trying to do is get away from the idea that all problems can be solved with only two kinds of math. Cool, thank you. Stephen? Yeah, further to that point around what humans do, or animals do in general: you could say, with optimal control approaches, maybe it's only really humans that can apply that extra level on top, as this kind of constraining, deductive idea. And every other organism is really working with this kind of flow.
I mean, if you see a squirrel trying to run up a tree, and between trees, and chase another squirrel around the trees, it's quite phenomenal just how agile they are and the speed they can work at. So there's some sort of skillful performance that they've got. So there's this interesting thing whereby we see optimal control as more straightforward, but basically optimal control just isn't implementable in practical terms. It's, you could say, a non-ergodic, deductive extra that humans bring to the table to explain the mechanical world that they're bringing to bear in some way. So it's kind of interesting; it just happens to be that was the first explanation we had for this type of stuff. Nice, so let's think about what active inference is, and then see who might be doing it or what might be doing it. So again, the core idea is that rather than selecting commands, the organism in active inference is inferring what it must or should be doing, under this same imperative of predictive coding, which is to minimize the divergence between observed and expected outcomes through time. Crucially, this brings perception and action together in the same functional profile. We have them on a common sort of trading market, almost: we can reduce our uncertainty through updating our model, and we can reduce our uncertainty through action. So updating and action, or inference and action, are put on common footing. So at the sort of computationalist or algorithmic level, we're making a sensory motor model that doesn't just have arrows connecting pieces ad hoc, saying, look, they're connected because I drew a picture. We're bringing perception and action together in a common modeling framework. And also on the neurological level, it explains some of the similarities between the functions of sensory and motor cortices.
So that speaks to Dean's points earlier, and to a lot of these brain regions that might have been discovered or initially characterized as having a more cognitive or a more motor role, but then it's shown that they play other roles. And so cue the endless clickbait: oh, this brain region associated with this also does that. Should that be surprising? Should that be salient? Or should we have another view where that is to be expected? Now, optimal motor control, that's the pure action side, and it tends to not concern itself too much with uncertainty of measurement or deep generative models. And then you have the pure sensory processing side; that's kind of predictive processing. This move from a problem of control to one of inference, in terms of active inference, does not make the problem mathematically easier in and of itself, but it offers a different model of skilled action which respects the neurophysiological evidence. So in other words, we're taking the sort of problem of going from sense data to a generative model, the predictive processing model, and then the forward model from internal states to action selection. And it's easier, of course, to subset to just one of those directions and say that one has a tractable model, but active inference is combining both directions, or integrating across these two different ways of looking at sense and action, and saying: let's have one model of sense and action with a common functional profile, as they describe it here. Now, which systems have it slash do it? That's the realism question. And the instrumentalism question is: which systems can we model as if they're doing this? So I think the instrumentalism question, which as always is about us, so it's easier to answer, points to the kinds of systems that have sensory motor input-output dynamics.
So everything from a self-driving car to different kinds of animals, we could model as if they were getting input, generative model, output. So the things that people model with OODA, or that classical sandwich of cognition, those are the kinds of systems that are apt to be modeled with active inference. The kinds of things that people were modeling with optimal control, going from the model to action, and the kinds of systems that people were modeling with predictive processing, from sensory inputs to generative model: both of those, and especially ones that are doing both, those are some of the most amenable systems for active inference. Stephen? Yeah, I was wondering about this idea of functional profile. I haven't heard that one before. It might be another one to think on for the ontology that's been talked about. Have you seen that being used anywhere else in terms of active inference, functional profile? Good question. It doesn't seem like it's used anywhere else in this paper. And I don't offhand recall what other papers it might have been used in, but that's a great point and something to ask about. I think that the notion of just function is a key one to follow. And how do we follow function? Yeah, go ahead. Yeah, I suppose because this is talking about actions, skillful performance, function starts to mean functional as in how to achieve an action of skillful performance. So maybe that's how it comes in here. Whereas in a lot of the other papers, there's no moving part, so to speak; it's all information based. But I think it's interesting in terms of what that does, or what it's trying to do. It's another way of trying to speak to something which is a bit bigger. That's one thing I've noticed with active inference: normally things are trying to define what everything is. This one, it gives these pullbacks. They're like, okay, we've got a functional profile now that allows us to do X, Y, Z.
But like I say, it does provide that challenge of trying to fit that into these other pullbacks that we seem to keep finding appearing, like the affective charge that came up in one. So yeah, I think that's kind of an interesting question about how that fits in with other types of models, or ways of translating between spaces or frameworks. Dean? Thanks. Yeah, so I wonder, let's just take that to its next logical place. So let's start with a functional profile, not necessarily work back, but is there some way for us to be able to identify what's in that profile that would allow for active inference? And should we? Is that something that's dangerous, or is that legitimate? Nice question. Here's a little bit where they're getting at this realism and instrumentalism discussion. So even some of the most forthright defenders of the active inference framework themselves have backed away from making such strongly realist claims about the causally efficacious character of generative models. So here is where we see hardcore realism. So it's nice, you know, as we work towards the terms and the ontology: how can we classify different senses or uses of terms? Friston in 2013 has advanced the view that an agent does not have a model of its world, it is a model. Now, that's an interesting claim, but the authors bring this up while also qualifying Friston's use of model from a different paper, saying: crucially, if we accept the generative models of the active inference frameworks (not sure why that's plural; usually it's framework, but will there be multiple active inference frameworks?) in line with these proposals, we must embrace the idea that generative models are causal in the sense of an agent's enactive attunement with the world, rather than part of the causally efficacious machinery used by an agent. So that's an interesting claim. Blue? So I was thinking about, like, you know, that an agent doesn't have a model of the world, but it is a model.
And I think about that in terms of people that have traumatic brain injury and then have, like, dramatic amnesia, like can't remember how to walk or talk or eat, and have to relearn all of these things, right? So in that case, it seems like you have a model and then you suddenly don't have a model. But it's like, if you are a model, are you suddenly not a model, or is that like a temporal disruption? I don't know. So that to me, and this is the instrumentalist-realist debate, I realize, but if you are a model, is it like a temporal disruption and suddenly you have to start over? Or what does that look like in those circumstances? Nice question. Steven? Yeah, I mean, in a way, I suppose, when someone has to relearn, the fact that they are, at some level, that their body is part of being that model, gives them access to either some priors or what they call the morphological computing capacity: the body's capacity to then form those priors. However, if certain parts of the body are compromised, like with a stroke, that can be kind of challenging, because maybe the actual way the body moves is compromised; therefore, the ability to use it to recreate is diminished. I mean, the question of how much it's diminished, I don't know if anyone could quantify that, but it's interesting. It's a good point. An instrumentalist would say, well, yeah, the brain isn't, you know, per se doing active inference; we're going to model it as such. But then, when there is an injury to the physical system, do we say, oh, we're still just modeling it with active inference, we're just changing our active inference model? Or has the actual system itself changed?
So will we just always say that, even when the system changes in the physical world, we're just going to update our model and always stay in instrumentalism land? Or are there times where changes in the world make the instrumentalism verge into realism, as if it's actually describing something about the world? That's going to be a very delicate walk. And they are arguing that, again, we should think about these generative models as purely formal and epistemic tools. So just like a linear regression, you know: you can do a linear regression on GDP or on some other feature. You can do a t-test in a double-blind study. It doesn't make the people a t-test. It doesn't make the GDP a linear model. So it doesn't make something active inference just to use it in this statistical framework. That's what's meant by the formal and epistemic tools. Then this framework is, we argue, compatible with the sensory motor approach to understanding skilled action. It's interesting to me, because the sensory motor approach to understanding skilled action itself could have a realist or an instrumentalist reading. The instrumentalist sensory motor view would say: I'm not even saying this system has sense and action; I'm just saying we could model it as if it had sense and action. Whereas the realist sensory motor view would say: this system actually does have sensory and active components. And also, this is another important note: the sensory motor view reflects a spectrum of ideas, which includes simple open loop sensory motor correlations, closed loop systems, regularities given a goal, and optimal sets of regularities according to certain performance metrics. These can be understood using the tools of dynamical models of cognition, capturing the brain-body-environment interactions in terms of dynamical systems, as opposed to assuming the agent is involved in symbolic computation.
So it's almost like we see a lot of the isolation of agency, or the delineation of agents, with the agent in the niche and then the niche itself. So we see, like, agent and external, internal states and external states. And now when we think about brain and body, that's one sort of inseparable dyad. And then it is also inseparable from the niche. So how are we gonna take all these dynamical systems, which have different physical and causal connections amongst them (brain, body, and niche), and work towards a framework that's actually going to say what we want to say, and be useful for what we want to do with these kinds of models, like reducing uncertainty about how organisms will act, or designing environments that work for different sorts of agents? How are we going to do that effectively, drawing from what we know about dynamical systems more broadly, rather than bringing on assumptions such as that the agent is performing symbolic computation, which comes along with the instructionism and the representationism and all these other pieces? Stephen? You know, this is a useful sentence here to show that dynamic that's happening between the kind of participatory sense-making world and the sort of enactivists, and the active inference world. Because there's these useful distinctions that have been coming up in enactivism and this participatory sense-making, yet they struggle to give the ways to actually calculate how that comes about, outside of these kinds of metrics that are looking from the outside of the system. So this is kind of quite a dance that's going on, but it seems to be yielding some good progress for active inference, because they're laying some of the ground for active inference to then see what could be plausibly modeled. So that's quite useful. Yeah, another point to that, and also what you said earlier.
They're suggesting that active inference is going to extend causally linear accounts of motor control, which tend to focus on the contingency between new actions and their consequences. So the loop there, which is hardly a loop (it's actually a line), goes from action selection to a new state. Like, I want to grasp the coffee cup. How am I going to grasp? I grasped. That is the domain of many previous models, which go from some sort of delta that needs to be accomplished to achieving that action, and that's where the model ends. Active inference in fact proposes a complementary view, where predictions of expected proprioceptive states are not just seen as passive reactions to new motor signals, like in the coffee cup grasp example, but as also triggering adjustable dynamic reflex arcs to generate new actions. So here, instead of starting from action to a new proprioceptive state (like, I want to be in a different state, so I want to select an action; okay, now my elbow is in the correct predicted proprioceptive state, so we're good to go), we're starting with the updating of our proprioceptive state, moving to action selection, inferring our state. So action and inference, like the two-stroke engine, are now working together, in a way where we can pull out a little local stretch that has the same functional features as this linear causative model above, because there's going to be some stretch where that action was performed and where the functional characteristics were carried out. But we're embedding that in a deep temporal model, and the temporal depth of this model confers a more active, anticipatory role to proprioception, now seen in a causally circular model in line with the enactive and embodied approach. So that's that sort of rummaging around. That gives an anticipatory and active role for proprioception, not just a state estimate role for proprioception, so that it can get relayed to the central headquarters, transmuted, and sent back as an instruction.
It's not like a central intelligence model of action selection. It's like an anticipatory deep temporal model that allows sense to be doing something slightly different. Steven? Just a question. So it says new action, and new action in active inference is the new, well, guess, I suppose, of what the action state is. And then the new proprioceptive state: this is the interesting thing with proprioception, is that almost a guess as well, like based on a mix of sort of interoceptive data and other data, or data from other pathways? So is it like different types of guessing, or is the action one still being seen as perception in that loop? Good question. I'm sure there's a lot of ways to go about it, but one thought is: even perception is inference. That's predictive coding. So our visual perception is clearly a generated inference. There's no blind spot. There's resolution and color in the periphery, where we know the anatomy of the eye doesn't directly support that, and so on. There's smoothness, even though our eyes are constantly saccading. So whether it's a proprioceptor or whether it's visual input, that is also inferred as part of the deep generative model. And so when there is ambiguous visual input or ambiguous proprioceptive input, for example, that, in active inference, is an opportunity for uncertainty reduction. Rather than getting out over our skis with then saying whether that's a valuable or a less valuable state, we actually don't need to judge the value, in a scalar framework, of ambiguous versus unambiguous states, because we can just pursue uncertainty reduction and then also cast planning as inference and perception as inference. Does that make sense, Stephen? And then Blue. Yeah, that's really useful. And I think, like what you're saying there, that also brings in the idea of temporal scale.
So with vision and with this, there's what's going on below, at greater speeds and flux than we can know, like the saccades, and the way those saccades then get put together to be this kind of joined-up, smooth perception of vision. And this new action, new proprioceptive states, that's almost kind of at the speed of everyday awareness. And once things become longer and happen over the period of, like, ten-minute intervals, you can no longer track that in your working memory or your working phenomenological consciousness. So this is kind of like the Goldilocks zone, in a way, where we're able to, when it's talking about new action, new proprioceptive states, it's kind of sitting at a kind of averaged-out perceptual awareness, in a way that we can engage. Interesting point. Blue? Yeah, Daniel, you already kind of said what I was gonna say right after Steven, but proprioception really is the expectation. Like, where do I expect my body to be? Instead of asking, like, where is my leg, it's like, where is my leg supposed to be? Or where do I think my leg actually is? And you're not always right, right? Like in the phantom limb case. And even, like, you know, if you're stuck with a pin, right? And this is not proprioception, this is just generally where your nerves are. So in some places they're very close together, and you can infer with great accuracy where exactly the pin was poked into you. But in some other places in your body, the nerves are more far apart, and so you can't tell: you poke a pin here, you poke a pin there, and it feels like the same place. It's like, the pin is here, but you're not always right. So there's this inference that happens in proprioception as well. Nice. It's like there's one mode where you have a very high confidence prior of where your arm is located. You know it's right by my side, let's just say.
And when there's a high confidence prior, data that don't support that prior are only going to move it slightly. Someone's very convinced that something is the case; they say it's 100% the case that this is this way. Well then, you know, there's no point in providing alternative evidence, because they're going to see it in terms of their prior. So in that case of the phantom limb, it's like there's a high precision prior that the limb is in a certain location. In fact, it's such a strong prior that sensory data, like proprioceptive data that your arm is in a different place, or visual data that, for example, somebody no longer has that limb, still doesn't manage to update the experience or the generative model of the limb being there. So that's one case that we can frame in an active inference way. And another case would be like: where's my leg? I have low confidence. And now, instead of framing this just as an estimate that there's an uncertainty on, and then we're going to have to tag a value on, we go from that estimate with uncertainty to planning as inference: what motor control or what motor behavior is going to reduce uncertainty on the leg? And sometimes that's like looking at it (well, that's a motor behavior with the eyes). Other times it might be doing a motion that then kind of gives us some sensory data. So it's kind of, I don't know, like checking your phone and seeing what notifications are there when you're uncertain. Maybe sometimes, like, a shake or something like that. It's like just reactivating a bunch of sensors, just to say, oh, yep, we're here, just checking in, reducing your uncertainty. And so the state estimate, even when there is uncertainty about it, we can reduce that uncertainty through action selection, rather than just being trapped in a framework of, like: okay, I'm not sure where it is, and now I'm not sure whether that's rewarding or not, and now I'm not sure which path to take that's going to be most rewarding.
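The precision story just told can be sketched as a one-dimensional precision-weighted Bayesian update. All numbers below are illustrative assumptions, not measurements from the paper.

```python
# Illustrative sketch (numbers assumed): a precision-weighted Bayesian
# update, the mechanism behind the phantom limb story above.
# Precisions are inverse variances; the higher-precision source dominates.

def posterior(mu_prior, pi_prior, obs, pi_obs):
    """Fuse a prior belief with evidence via a precision-weighted average."""
    pi_post = pi_prior + pi_obs
    mu_post = (pi_prior * mu_prior + pi_obs * obs) / pi_post
    return mu_post, pi_post

obs, pi_obs = 0.0, 1.0   # sensory evidence: the limb is not where believed

# High-precision prior (phantom-limb-like): evidence barely moves the belief
mu_hi, _ = posterior(mu_prior=1.0, pi_prior=100.0, obs=obs, pi_obs=pi_obs)

# Low-precision prior ("where's my leg?"): the same evidence dominates
mu_lo, _ = posterior(mu_prior=1.0, pi_prior=0.1, obs=obs, pi_obs=pi_obs)

print(mu_hi, mu_lo)   # roughly 0.99 versus 0.09
```

The second half of the point, acting to gain more sensory data, would show up here as raising pi_obs: looking at the limb, or giving it a shake, is a way of making the evidence term more precise.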
We don't go down the reward rabbit hole, because there's a deep generative model, and for evolved (slash created, or whatever) organisms, those priors are fit. So it's not like we're dealing with the space of all possible generative models. These have been evolutionarily selected to be functional within our physical constraints; underwater, or in a different gravity, it'd be a different set of priors that would need to be learned or utilized. In our last piece here, what do you think were some of the big pieces that we leave this 23 strange attractor with as we move forward? Like, what kinds of distinctions or insights here do we want to see how they apply in different settings? Dean, and then anyone else? Okay, so this is gonna be a strange twist, but that's okay. So I came away from the paper, and I thought about some of the things that Casper did bring up, in terms of: so what happens when we talk to instructionists? Like, instructors are all around us. There are people that spend a great deal of their life preparing to become an instructor, and we come along and say, yeah, that's great, we know what you do, but we're gonna kind of, I don't know, maybe bring you down a peg, because we're gonna introduce this other stuff that you weren't asked to train yourself up on, and I'm not sure if that's a threat or not. I guess it depends on the personality the idea is introduced to. But if you know the movies Independence Day and Thelma and Louise, I'm gonna bring those into a juxtaposition here. I think the instructionists don't necessarily see that, for the person who's living out the adversarial Greek myth, us against the aliens, or us against the people who are chasing us down and trying to take our convertible back, there is no set of instructions per se that those people wanna follow. They're living the interactionist state, and you can start out with "kick the tires and light the fires," à la Harry Connick Jr.,
and still end up in a rapid-deceleration moment à la Susan Sarandon and Geena Davis. And I think, for people who are gonna look at this problem, in terms of how do we introduce interactionism in a way that doesn't threaten the top-downness of most situations, where we want people to be able to take up an embodied skillful performance without putting them in a situation where they behave like aliens, or behave like police chasing somebody down with a helicopter, we're gonna have to, I don't know if it's a slow rolling, I'm not sure exactly what it is, but those people who are currently instructors and instructionists by profession, we're going to have to give them something, so that they don't cling desperately to a model that I think the authors of this paper demonstrated requires a rethink. I don't think we throw out instructionism, but I absolutely do think that its place on the top of the mountain, with all these little robots climbing up it, has to be reviewed and revisited. And so, as a go-forward, I don't know how we put people in a place where they're comfortable with that kind of, I don't wanna say it's disruptive, but I think it is. Well, I think we have to be able to come at this in a way that doesn't make them circle the wagons and try to protect something that obviously is up for a bit of evolution before a revolution. Nice points there. Steven? Yeah, I like that idea of, yeah, this challenge of having this mountain that we've kind of ascended and placed these optimal control theories on. And maybe it's more like there's a mountain of active inference, and there's a little rock structure on the top, which is optimal control theory, which we're able to utilize on top of that. You know, but it's a lot more rickety, and it's not the main game in town. The main game is our intuition, our dynamical balancing of all these different free energy parameters.
But we do, and particularly humans, have this ability to impose this extra stack of tools on the top that we now think is the main game in town. Nice, so to Steven's and then to Dean's point. As to which one's the mountain, which one's the molehill, which one's the anthill: here, Friston says optimal control can be cast as active inference with three simplifications. So as we increasingly clarify our understanding of these different models, we'll be able to see which models are nested within one another, or are alternate framings of one another, or are, you know, more overlapping than different, or what are those key distinctions, or what are the variables where you can expand them and all of a sudden you've gone from model A to model B. So it remains to be seen. And by him saying that optimal control is a thrice-simplified active inference, along those three specific dimensions, it does suggest that the bigger picture might be active inference, and then you get special-case optimal motor control, with a value function, as a derivative model within an active inference universe. Even if those other models were developed first, that's often the case: a certain model is introduced, and it's later generalized in several ways, so that in the future it's seen as a special-case model, even though at one point it was the only game in town. Certainly for a lot of biology models, that's the case. And then, Dean, I really like this idea that there's a job title called an instructor. And if somebody says, I'm an instructor in, you know, organic chemistry, or in tango, or baseball, people know what you're talking about. So how could interactionism be so broadly participated in that people say, yeah, I'm an interactor, I'm a baseball interactor, I'm an O-chem interactor? Maybe the job would be called that, or not. So how do we, like you were saying, not kind of threaten or disrupt their modes, but give a "yes, and" for what they're working on?
And maybe part of it is that, for the interactor, it's like we're framing their interactions with their student, let's just say, as a cultural scaffolding of an interaction. That doesn't make it a symmetric interaction. It's a culturally scaffolded moment of Bayesian model updating. And so it's like: yes, you do give instructions. That's the verbal term for what you say when you say pull, or breathe, or, okay, slow down. You can still give instructions; we're not saying that that's not part of your toolkit. But let's more broadly look at your interactions with this dynamical system as doing what you want to do. And maybe it'd be enough just to push them to that edge of the hill and say: interactions are a broader frame than instructions. Would you want to reduce everything you do to just barking instructions? Sometimes you lead by example; other times you show wordlessly. You don't need to frame your interactions as instructions. Recognize that you're just dynamically interacting systems, and a lot of people are working on a framework that suggests this is a broader way of thinking about training, one that might be able to help you, on one hand, with the high-performance elements, and on the other hand with accessibility, and with making your work more communicable. Not sure about that, of course, but just one way to go. Steven? Yeah, with the idea of roles and how people look at performance in roles, this really comes back to coaching again, which I mentioned earlier. Coaching came out of the need to supervise therapists, and how they could be guided on their path as they were supposedly helping the people that were needy; they needed guidance. And that same idea now of coaching, rather than teaching or instructing. And while they have their role, providing the formal documents that you might need, and providing the bigger narrative, and giving certain tools.
Within that, there's this idea of: okay, what types of developmental or performance interactions can be nurtured and cultivated? And I think this actually really can tie into the kind of pragmatist, social-constructivist approaches that coaching is trying to work with. It actually does give it a bit more meat without becoming too complex, whereas previously a lot of the active inference stuff was maybe a bit too abstract. But this is starting to show a bridge there, of why coaching at different levels, and positive psychology more broadly, can have a role. Okay, great. Thank you, Steven. Dean? Yeah, Steven, I appreciate exactly what you're saying. I think, to get to a place where interactionism is no longer threatening, people have to want, or appreciate, that you can be a subject matter expert and a prediction matter expert. And I'm not sure. I think there maybe are some coaches who are both. But I sort of transcended that and got into the wayfaring thing, because wayfaring imposes a prediction aspect on this. I can't predict for you whether you make that running jump shot. I can't be in your ear. I cannot coach you to take that shot in that moment, with that much time left on the clock. In fact, if I do that, I'm probably adding impediments, and I'm actually being an optimal controllist, micromanaging you, and whether you make it or not, it's not about me. It's about whether the ball and the basket arrived at some sort of outcome that we both want. I think that's part of how we get people to a place where their dependency on their subject matter expertise doesn't overwhelm the fact that, for these performances, there is a predictive element. And so we have to give that equal billing. I don't think we have to overbill one or the other. It's back to the between. And that's a prediction matter expert.
And again, I don't know if it's a role, being a wayfarer, because you're in the boat with that other person, right? It doesn't matter whether you're the one saying, okay, I'm getting a sense of the direction of the current right now, while the other person's looking up at the stars; you're working together. And so everybody brings their expertise. And I think that only materializes if we can include something alongside the subject matter expertise. Culturally, yes, absolutely, there's a huge cultural piece to it. But I think you can be a coach and not necessarily be a wayfarer; whereas if you're a wayfarer, you have to bring a predictive element into it, otherwise you're not being a wayfarer. Interesting: having an integrated way to talk about goal and process and interaction with a niche, interaction with your co-wayfarers. Just like your example with the movie: is the map app gonna tell you which directions to take? That's kind of on the instructionist end. Versus, are you on the run? So anywhere you go, you're not just following instructions; you're negotiating sequential interactions, and reducing uncertainty in a way that isn't just looking down at the application and seeing which way to turn when. Nice. I'm glad you picked up on the map going out the back, over the trunk of the car. Exactly. Oh, you're reading it. Oh, no. Right. Cool. Well, Steven, yes? Yeah, just one question. Wayfarer, is that like a particular sort of approach? Is it orientation-based, or? Well, yeah, you're on your way, and how do you fare, and how are you feeling? Okay. Right? It's not just wayfinding. It's not just the discovery piece; it's, how does it make you feel, being in that uncertainty space? Because again, if we're gonna close the loop, discovery will eventually loop back around to recapitulation. And we're open systems that are constantly closing these loops. So how does that make you feel? Right, got you.
To connect that also to John Boyd and second-order science, as well as Bucky Fuller and the trim tab: when we're at the rudder, we need to act in an anticipatory fashion. So that's kind of like first-order cybernetics. The rudder has to turn before you hit the iceberg. You know, you gotta find a path. So pathfinding and that whole notion of anticipatory control come down to the rudder. And the trim tab is the little rudder on the rudder. So it's anticipating even the anticipatory nature of planning, because you have to turn the trim tab even before you turn the rudder. And so here, instead of a mechanical model with recursive levels of rudders, we actually have hierarchically nested priors. And so it's a little bit like higher- and higher-level priors that, in one sense, when you look at one iceberg and try to tell a story by backfilling what happened, do act in increasingly anticipatory ways. But when it's seen as just part of the wayfinding of the ship, they're all in a cyclic feedback with each other, and it's not like one is really upstream of another, though there are certainly pieces doing different things. Cool stuff. Any final thoughts or questions? I don't see any other questions from the live chat, but another interesting and surprising .2, as usual. Does anyone have any final comments? Well, great. We'll look forward to some awesome upcoming streams. Let's just look at those really fast. Next week, on the 21st, we're going to have our symposium with Karl Friston. So if you've been helping to co-organize this event and develop the questions with us, you'll be with us live. Otherwise, we're going to be recording it, not live-streaming it, and then providing the recordings on our YouTube channel. And then for the last two weeks of June, we're going to be doing a more technical or statistical paper, "Empirical evaluation of active inference in multi-armed bandits."
Then in early July, we're going to have a Fields and Levin paper, "Scale-free biology: integrating evolutionary and developmental thinking." Then we don't have any papers for the end of July or the beginning of August, but we'll figure out what we want to do there. And then we'll be turning to some industrial engineering and some other areas. So, as always, a pretty fun journey. Thanks, Dean, Blue, Steven, Dave, for participating live, and everyone for watching live or on replay. We'll see you on a future stream.