Greetings. Hello everyone. Thank you for joining live or in replay today. It is May 27th. Is that true? No, not even close. Not even close. It's June 8th. It's June 8th, 2021. We're here in ActInf Lab Livestream 23.1, and we are discussing the paper Embodied Skillful Performance: Where the Action Is. So welcome everyone to ActInf Lab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links that are here on this slide. This is a recorded and archived livestream, with all of the advantages and constraints that that provides. Please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here, and we'll be trying to follow good etiquette for livestreams. From this short link, you can access the calendar of events for livestreams of multiple different kinds. Highlighted there with the blue arrows are today's discussion on June 8th and next week's discussion that will be on the 15th. They're both going to be on this paper, Embodied Skillful Performance. So we had a good time making the dot zero, learning about this, and kind of opening up what this was adjacent to. And now we'll look forward to continuing the discussion and just seeing where it goes. For those who are watching live, it would be, I guess, especially appreciated to throw us some questions, whether they're related to the paper or not. The goal for today is to discuss and learn. The paper is Embodied Skillful Performance: Where the Action Is, by Hipólito, Baltieri, Friston, and Ramstead from 2021. And today in 23.1 we'll be ambling skillfully, we hope, through the paper again and flagging areas where we want to be talking with the authors next week. Greetings, Dave. Welcome. And we'll just see where we get to, opening up threads and kind of enjoying the middle part of the dot zero, one, two sandwich. So we can begin with the introductions and the warm-ups. 
Today, there's a few of us so we can go around in the introduction and just give a short introduction or a check in. And we can also then pass somebody who hasn't spoken, taking a look at the warm up questions. So I'm Daniel. I'm a postdoctoral researcher in California. And I'll pass to Dean. Yeah, hi, I'm Dean. I'm up here in Calgary. And I'm getting more and more involved with this idea of sort of building up the active inference profile. I'll pass to David. Dave is there. But if he isn't, then I'll pass it back to Daniel. Dave, you're welcome to join in whenever you would like. I hear you tapping a little bit, but maybe whenever you want to speak. Otherwise, what is the active inference profile? That's what I think we're trying to build out here. I think we're trying to give that some anchors and some leverage. So we haven't figured it out completely yet. It's like everything in active inference, it's a generative model. It's growing. And I don't know what its final form will be or if it takes on a final form even. Are there other areas that have profiles or are there other profiles of note? Yeah, I think there's convention that that's already an established profile. And I don't know that this has got to that place yet where we can say it has a convention or something that's predictable about it. And I think that's what makes it kind of, well, in my mind, it makes it curious. Where is it going as opposed to where it's been and what it's established so far? Nice points, agreed active inference, especially in these nascent times. It's like a strange attractor where people come in from so many perspectives, and then they get slingshot out in a different direction or with a different memory or different view. So Dave, no worries about the hardware mystery. Everyone is welcome to just type in the YouTube chat if they have questions. 
So I think for the rest of this stream, we're going to walk through again, maybe thinking again about walking as a skillful action and all the fun similarities and differences between motor actions, which are what are discussed here, and that sort of space where motor and knowledge come together, like with skilled action. Yes, Dave, your different embodiments are muted indeed. So the paper laid out their aims and claims, and the central aim of the paper, according to the authors, is to discuss critically the limitations of instructionist, control-theoretic models of skillful performance. So the big dichotomy or dilemma that's going to come up again and again is this difference between interactionism and instructionism. So maybe, Dean, where are you coming at that from, or how would you describe the difference between instructionism and interactionism? Well, first of all, even including interactionism as a formality, I think is a huge step. Metaphorically speaking, it's opening both of your eyes. You have a second eye available, but maybe you weren't aware of it. And now, with the idea of turning that into sort of a structured or formalized approach, as opposed to just the instructionism piece, which is traditionally what we've formalized: it's not just a doubling of what we might be aware of. It's actually placing ourselves where we can see the depth of what we're observing or what we're analyzing or what we're addressing. And I think that that would be a huge step if we could get people comfortable with that. I'm not saying that people will necessarily get comfortable with it, though, because for a lot of people, let's be honest, just following along and being instructed is difficult enough. And now you're introducing a whole second lens of information. And very quickly, you'll have a bunch of people screaming, no, you're going to overwhelm them. You're going, well, I'm not. Just because I have both my eyes open doesn't mean that I'm going to be. 
I hope it doesn't mean I'm going to be scared. I hope it means that I'm going to feel a little bit more grounded and centered. But we'll have a conversation about that today, especially when we include the whole body. Yes, the entire corpus. One thought there. Instructionism being like instructions are passed from A to B, you know, water flowing downhill. Instructionism has a directional context, and also the idea that what's being passed is a sort of packet of meaning. Like, here's the envelope with your instructions. Now carry them out. And what I think is exciting about interactionism is that it's not just a bi-directional instructionism. It's not like, well, you instruct me and then I'll instruct you. That's sort of the low bar for interactionism. But where interactionism really blows the lid off is when we open into that third space of interaction and improvisation. So I think that's what's kind of interesting to draw out. And they are going to highlight in this paper how control theory models of skillful performance, implicitly or explicitly, do have instructionist assumptions. Specifically, this idea that motor representations convey or harness instructions about how to perform a specific task. So even if the authors of those control theory papers didn't explicitly use the word instructionism, they're still playing into this idea that instructions are being passed, for example, from the brain to the body in order to be carried out. And we'll be looking at that from a more formal perspective in a few minutes. And the big claim that they're going to be reaching is that active inference doesn't need to posit that instructionist assumption. There's a way to frame control and skillful performance in the active inference framework that doesn't take on these instructionist assumptions and the content-based, directional message passing that they entail. And that will be something fun to explore. Also, Dave has a nice comment in chat. 
Dave wrote, as I recall, Piaget and colleagues counted and described something like 400 distinct embodied concepts used by the walking skills of a typical 15-month-old. So that just shows how there's a lot going on, even with behaviors like walking. Cool. One other thing too: the assumption that somehow the muscles and the nerve endings are sending signals that are in symbolic form is quite a stretch. I mean, we talked about that last week when we got to that inverse directionality slide. But I thought about that a little bit after, because I went back and watched after last week. I know for me, I was incredulous. A couple of times we went through the paper with Blue, but even going back and looking at it, you'd have to build a really powerful argument that says that the pressure signals on my fingertips are coming back to me in verbalized form. Exactly. Exactly. It's this idea that, you know, like files around the computer, what's coming up from the fingertips is pain and what's going down is an instruction. That's sort of the Cartesian dualism mapped onto instructionism: body and brain are two disjoint pieces and they pass special symbolic information to each other. Right. And we'll see how. And our brain and our bodies having a conversation. So what is it if not a conversation, or could it be beyond a conversation? Right. Right. Nice. Cool. We covered the abstract last time. All we'll point out here is just that they start, as good philosophers, with an examination of instructionism and what that entails. The second and third sections of the paper characterize the representationalism that's related to these motor commands, kind of what we were just alluding to, with a special focus on optimal motor control theory. And then the final sections take it to predictive coding and to active inference, which are formal frameworks for modeling behavior that are related to control theory formalizations. 
But the goal of the paper is to show that some of the instructionism is left at home, which allows us to go out with interactionism. Here is the roadmap, framing a similar outline and also just giving a few more signposts along the way of the figures and the box and whatnot. The keywords provided here were helping us understand what the authors wanted to have their work indexable near in the semantic net of literature online. How did the authors want their work to be grounded? It was in skillful performance. That's in the title. Optimal motor control. That's the sort of scapegoat or foil. That's the contrast. Maybe even bringing us back to contrasting divergences. Instructionism, again, tied up with the optimal motor control. Motor representations, leading us into action-oriented representations, and finally taking us to active inference, where we kind of begin and end. So skillful performance can mean many things, and the authors defined it: as opposed to bare movements such as breathing and blinking, skillful performances are intelligent bodily activities, which harness knowledge about how to perform certain movements expertly. It reminds me of the eternal debate: who slash what is intelligent? If an insect knows how to walk, or if it is trained to do something, versus if it's something that its body provides the training for inherently. What is skillful? So definitely we'll want to ask the authors, and anybody in the chat, and anybody who joins us next week: What is skillful performance? What is unskillful performance? If somebody tries to do a skillful performance and fails, is that still within this category of behavior? I saved this up for this week because I wanted to get to the part after the comma, the "which harness knowledge." I kind of hinted that I wanted to ask a question, and I still do want to ask a question. Well, what does that mean, to harness knowledge consciously? Or can you get to a place? I mean, Blue talked about this. You talked about this. 
Can we get to a place of doing it automatically? Does that mean it's harnessed just because we're not thinking about it? And how much time did they spend looking at that piece? Because that kind of demarcates out what skill is. If it's a dependency on harnessing, then what? How do we define that? How do we define what's captured and what isn't? And the harnessing brings up kind of like harnessing or yoking, you know, a buffalo or a horse or something like that. And that's kind of hinging on that word. It's like the knowledge is that wild stallion. And it's actually the bodily activities which harness that knowledge, because when you quote know that, your body could move in many different ways. But skillful performance is when that is harnessed. It's when it's connected to an instrument, or whether it's its own instrument and its channels, in a way, maybe, that's productive or expressive. Well, at a fundamental level, we have to again be careful that something going on at our fingertips is being harnessed by something going on three and a half feet away. Yes, connected, but is that an assumption that's verifiable? Is there an actual harnessing going on even? Because lots and lots of times things happen at my fingertips that I'm not sending. I'm not even aware of what my hand is doing. My wife tells me, stop jiggling, right? Just because I got nervous energy or whatever. I think that's going to come back also to the predictive processing and predictive coding frameworks, which is sort of: when the body is getting the signals it expects. Like if you're not consciously bringing your attention to your foot, if it's just your foot in the shoe, it feels like nothing, or there's no attention paid to it. And you can kind of shine that light of attention on it and it will feel normal, or maybe you can start to zoom in on other sensations like tingling or other feelings. 
But the sort of default state, when you're not paying attention, things that are happening the way that you expect, is neutral. That's why that active inference piece is really interesting, because in order for it to be active inferencing, there has to be some kind of attention or awareness being paid. And it's like attention as a resource: got to pay the cost of attention or suffer the regret of not paying attention. And Dave wrote, trying and failing, especially if part of the problem is choking, which was brought up by the authors in the paper as well, is likelier to involve more neocortical activation and more access consciousness than smooth performance of the same routine. So with trying and failing, there's a lot more inner dialogue or thought-process-like awareness happening. Whereas when something is performed skillfully and smoothly, it just sort of goes off without a hitch, and just, you know, you whip out the instrument, make your shot or your swing or whatever it happens to be. And then maybe you can narrate and retrospect, but in the moment, you're not having as much awareness. So this is really interesting to me because, again, is it skillful to be in the peloton and be pulled along by what we would describe as the intention of those in front of you? Is that considered to be a part of this? Because there's a lot of things that we're aware and conscious of, and some things that happen in these skilled performances, they're just because they're part of the collective, they're caught up in the inertia of the moment. And how do we factor that in when we're talking about active inference? Because I think that's what instructionism doesn't attend to. There's many, many things that are going to get dragged along here just because they're drafting. And how do we address that when we're talking about skilled performance, skillful performance, and interactionism? Because interactionism at least says, I can be caught up in an inertial moment. I don't have to explain it. 
Absolutely. Instructionism has to pigeonhole everything into that symbolic information theory. What is the instruction from the front of the pack, of the flock of birds or the peloton of cyclists? What is the instruction that's getting passed back? And so we have to go from what we know is just happening, which is like the physical inertia. There's also a narrative inertia. There might even be just an air current going over the person that actually stabilizes them in their spot. We don't need to fall back to referring to those as instructions when they're clearly interacting phenomena, different types of interactions. You could have somebody breathing heavily next to you, and that could give you a signal, but it's about the unpacking of that interaction, and then where that takes the collective, rather than, again, just thinking about everything as connected with wires and passing secrets. Right. Cool. We can look again at the ActInf plants. So this was just asking, where are the limits of skillful performance? If the niche, and the fitness to the niche, is what counts for different organisms, our human cultural niche has all these awesome kinds of skillful performance and otherwise. But what about other creatures? What about other phenomena? The kinds of things that we might want to also model with active inference, but that just don't quite have that same level of, maybe, thinking through other minds as we have with people, where we try to put ourselves in the shoes of somebody else. Do you think, Daniel, that that plant has got an embodied technical skill? Welcome, Blue. Hey, Blue. Hi. We're just wondering whether plants have... this plant. Does this plant have a skill, now that it has the tools to be able to move from side to side? It's a great question. I wonder: if it could be shown that it got better at moving from side to side, that would start to look increasingly skill-like. 
And I think it's that idea that molecular changes could occur that would allow the plant to respond faster. Or you can imagine selecting on a whole greenhouse of plants, and the ones that are able to switch faster, those are going to grow faster. And then you select the ones that grow faster based upon their ability to learn in that generation. So it kind of shows how you get this development of skill within a life. That's development. But then between generations, you can also have the entrenching or the canalizing of learned skill, like language. And skillful performance, in this paper and in our slides, it's kind of like we're hinting a lot at this full-body type performance. High jump and things like that. But of course, language is maybe one of the examples of skilled performance. It's a motor behavior. It involves a huge number of interacting motor subunits, the voice box and the mouth, and speech therapists and others go deep into that. But speech is almost like the case where you say, well, surely instructions are being passed. That's maybe the high watermark for where interactionism could go: to have a framing of speech that moved us beyond instructionism. Because this seems like clearly the transmutation of semantic or symbolic information into not directly related motor behavior. So that would be kind of a nice case to investigate. And Dave wrote in the chat: It seems odd that dogmas that have been known to be false for decades are still premises of so much funded professional activity. I'm thinking of B.F. Skinner's insistence that all control activity must be strictly cortical, even though it's been known since at least the 50s, purely on the basis of speed of nerve conduction, that that would prevent graceful walking, let alone faster and more intricate movements. So that's pretty interesting. 
We have almost just anatomical reasons to know that not everything can go through this sort of control and cognition in the loop. There's just not fast enough transduction along the nerves. So there has to be this sort of distributed function, more consistent with an interactionist framework than an instructionist framework. Yet it's easy to fall back to instructionism, maybe especially for professionalized areas. Let's go to optimal control. Blue, if you want to, you know, feel free to add anything that you'd like. Optimal control theory is a nice bridge point, because in the paper, and as we'll be going into, we're again thinking about motor behavior, but control theory is something that applies to the electrical grid, to logistics, to robotics, which are, I guess, motorized. But optimal control theory is a framework that extends to many non-motor systems as well. So that makes me wonder about whether we can also apply that instructionism versus interactionism contrast to control-theoretic situations that are implicitly instructionist. Like, there's an algorithm, and then it sends the buy order to the unit that makes the purchase on a market. Or it sends a certain piece of information to a different computer that then enacts that, like, suggestion. And then even in systems where we know instructions are being sent, like computer systems, maybe there's an interactionist way to think about that too. Any other thoughts? Does optimal control essentially place all of our chips on the top-down aspect of the relationship between top-down and bottom-up? Does optimal control say optimization happens exclusively from a top-down directionality, that there is really no place for the bottom-up part? I think it's a good question, and I'll skip to this Friston 2011 paper, which is entitled, What is optimal about motor control? I wonder if that optimality does arise from the top, or what is optimal? 
Once we move beyond instructionism: under instructionism, there's a perfect set of instructions, there's a perfect way to do it, and then that is either carried out faithfully or it's not. So philosophically, if you said optimal, there would be some openness to both an ability to recapitulate and an ability to discover, to be able to put those two things into some sort of coherent relationship. I'm not saying that that's what is being said here, but I'm just not sure why we always assume that optimal is always about being able to create copies. Even in skillful performance, even compared to a standard, there are things that happen when a gymnast does their routine where some element of the variability that applied to that situation is considered when something is being judged. I think the only thing that doesn't factor in variability is the clock, right? Like, you either ran 9.8 seconds or you didn't. But many of the skillful performances that we pay attention to aren't always about recapitulation. Interesting. I guess it's all how you define it, but we actually want to get on to doing something useful here. Yep, and also here's a line in that Friston 2011 paper that reminds me of Dave's comment, which is that these estimates, which are coming from a predictive processing or active inference framework, can finesse problems incurred by sensory delays in the exchange of signals between the central and peripheral nervous system. So the generative model can include space for that delay, like when we think about two people who are talking and accounting for each other's delay. In that kind of an interaction, versus the send, process, wait, send-back model, there's actions that happen faster. Another example would maybe be baseball, where there's such a small amount of time to decide whether to initiate the behavior of swinging. 
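To make that delay point concrete, here is a toy sketch (entirely our construction, not a model from the paper or from Friston 2011): the agent only ever receives observations from a few timesteps ago, but because its generative model knows how its own actions move the state, it can roll the stale observation forward and act on a prediction of the present rather than on the delayed reading.

```python
def predict_forward(delayed_obs, pending_actions):
    """Roll the internal model forward across the actions still in flight."""
    state = delayed_obs
    for a in pending_actions:
        state = state + a  # model assumption: each action shifts the state one-for-one
    return state

# The "world": position responds to each action one-for-one, but the
# senses only report the position from `delay` steps ago.
position, delay, target = 0.0, 3, 1.0
pending_actions = [0.0] * delay
for _ in range(20):
    delayed_obs = position - sum(pending_actions)  # what the senses report now
    predicted_now = predict_forward(delayed_obs, pending_actions)
    a = 0.5 * (target - predicted_now)  # act on the prediction, not the stale reading
    pending_actions = pending_actions[1:] + [a]
    position += a
print(round(position, 3))  # homes in on the target despite the delay
```

With the prediction step removed (acting on `delayed_obs` directly), the same loop overshoots and oscillates, which is the classic delayed-feedback problem the quoted line is gesturing at.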
And this paper from 10 years ago laid out just such an interesting difference between value learning and active inference, which allows for that pragmatic and epistemic component. The optimal suggests multiple things. First off, it kind of suggests that there is an optimum that is being achieved. So that's one easier point, just to say, OK, well, maybe there's an optimal way, maybe there's not an optimal way. But what is being optimized is the value, just like in reinforcement learning. What's being implicitly optimized here is a value function. And just as it says here, the replacement of value and cost functions with prior beliefs about movements removes the optimal control problem completely. So this idea of the value and cost, it's kind of like an economic analysis of motor behavior. So then let's just imagine any number of circumstances, like somebody has a belief about how a performance will be viewed socially. Well, you could couch that in reward, and you could try to break down from the narrative and the social all the way down to the movement of the fingers on the piano, about the value. And well, if you mess up, it's a lower-value action, but you're getting really far away from the embodiment of the performance, which maybe doesn't have that same sort of cost all the way down. So it's like, will we have cost functions and economics all the way down, and value and cost? Or will we have generative models and an instrumental way of thinking about action? Blue, anything overall? Otherwise, also, anyone who's watching live is welcome to ask a related or an unrelated question. I'm good. Anything after last week's discussion that you'd been thinking about? I'm trying to remember last week's discussion. Trying to refresh my memory. Yep. That's what this first part is. 
Let's go to figures one and two of the paper and take a more formal look, and we can go to Friston 2011 if we want more details on the similarities and differences, because this is kind of the core of the contrast between optimal motor control theory and active inference. It's going to come down to the differences in how these models are framed, because there's an interpretive layer, but at the heart of it, it's the differences between these two kinds of models, between figure one and figure two here. So we'll go to the juxtaposition. In figure one and figure two, we have in both cases the plant kinetics, which are defined with the exact same equations, and the sensory mapping. So this red box is the same between both of the models, between optimal motor control on the left and active inference on the right. And there is still an optimal control module in active inference. So that is interesting. But if we look a little closer, we can see that in figure one, the motor commands, that u with a tilde over it, come from a minimization. So it's optimization: minimization, maximization. Sometimes by minimizing the cost, you maximize the profit, or it's flippable; throw a negative sign in there and a min becomes a max. So here, what's being minimized is an integral of the cost function c of x and u through time. So cost is being minimized through time, and that is optimal control in that framework. When we look at the optimal control in the active inference model, we see a really different structure, even of that optimal control module. We see that u, which is playing a similar role, it's not a command, but it's still being passed; this u of t function is still related to a minimization. But what's happening inside of the equation is very different. And we can go to the figure caption in Friston 2011 to see what is happening there. 
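As a rough rendering of the two objectives being contrasted (our reconstruction from the discussion, so the exact notation in Friston 2011 differs in detail): optimal control selects motor commands by minimizing cost accumulated through time, whereas the active inference scheme selects action by suppressing (proprioceptive) prediction error, with no cost function appearing anywhere:

```latex
% Optimal control: motor commands minimize cost accumulated through time
\tilde{u}^{*} = \arg\min_{\tilde{u}} \int c(x, u)\, dt

% Active inference (sketch): action minimizes proprioceptive prediction error,
% the gap between observed and predicted sensation
u(t) = \arg\min_{u} \; \tilde{\varepsilon}_{p}^{\top} \tilde{\varepsilon}_{p},
\qquad
\tilde{\varepsilon}_{p} = \tilde{s}_{p} - \tilde{g}_{p}(\tilde{x})
```

Here \(c(x,u)\) is the cost function over states and commands from figure one, and \(\tilde{\varepsilon}_{p}\) is the proprioceptive prediction error from figure two's caption; the structural point is that \(c\) drops out entirely on the active inference side.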
That ε, the curly epsilon, marks the distinction between exteroceptive (ε with a little e subscript) and proprioceptive (ε with a p subscript), reporting on hidden states in extrinsic and intrinsic frames of reference, respectively. These prediction errors are the difference between sensory input observed and predicted. So going back to here: what's being minimized isn't the cost function through time. Philosophical disagreements with Homo economicus aside, and putting aside the challenge of assigning cost to different kinds of states, in optimal control what's being minimized is the cost through time. I mean, don't businesses try to do that? Everyone wants to figure out how to minimize cost or time. What's being done here is different: the minimization of the proprioceptive prediction error, with a multiplication, potentially through time. I'm not sure exactly what that multiplication does in the model. But it's just so interesting that the minimization here is just like we've talked about in so many of these discussions. It's about your generative model of action and sensory outcomes. Sensory outcomes are emitted by hidden underlying states that you can't directly observe. Like, you don't know exactly whether it's night or day, but you are getting photons or not. So there's a hidden state that's being evaluated. That's the sort of hidden Markov model component. And the hidden state is, in the model, emitting outcomes that are being observed as sensory data. And in active inference, the depth of that generative model is actually such that instead of having a really rich generative model of cost functions, and then trying to reverse engineer your cost function to find the cheapest behavior, the generative model is of expected sensory outcomes. Like, every time I move my shoulder here, it starts to hurt. And we don't need to introduce a concept of reward or cost into that equation. It suffices to say that there is a generative model of the sensory outcomes under different motor policies. 
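The loop just described — a hidden state emitting noisy sensory outcomes, and action driven by the gap between preferred and received sensation, with no reward or cost term anywhere — can be sketched minimally like this (all names and numbers are illustrative, not from the paper):

```python
import random

random.seed(0)  # deterministic run for the illustration

def sense(hidden_position, noise=0.05):
    """The hidden state emits a noisy sensory outcome; it is never read directly."""
    return hidden_position + random.uniform(-noise, noise)

def act(hidden_position, preferred_sensation, gain=0.5):
    """Nudge the hidden state so as to shrink the sensory prediction error."""
    prediction_error = sense(hidden_position) - preferred_sensation
    # No cost function and no explicit state estimate: the update is driven
    # directly by the discrepancy between sensation and preference.
    return hidden_position - gain * prediction_error

position = 0.0
for _ in range(50):
    position = act(position, preferred_sensation=1.0)
print(round(position, 1))  # settles near the preferred sensation
```

The "preference" here plays the role the transcript describes (e.g. non-painful states): it enters only as an expected sensation, never as a value or cost to be maximized or minimized in its own right.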
And the organism has a preference for, let's just say, being in non-painful states. It sidesteps that question of cost, in a way where we can see, even from the side by side, that even where there is an optimal control element, like a minimization element, what's being minimized is something that is very, very different. And that's one of the key aspects of active inference: what's being minimized is the divergence between expected outcomes and realized outcomes. That's what, of course, makes it so related to predictive coding and predictive processing, rather than drawing only on the reinforcement learning type approaches that are doing a minimization, implicitly a maximization, on reward. Daniel, can I color in a little bit too? Please. When I saw this, it was interesting, because what I saw was a replacement. So in figure one, it's doing estimation. And then in figure two, what I saw was the removal of that and the application of a rule: minimize prediction errors. That's a rule, as opposed to a step. And all I thought about was, for example, the Hippocratic oath. I don't know what I should be doing in terms of this patient, but the rule is: do no harm. So I think that's what I thought was really interesting here, that an actual step was replaced with a rule. And that to me was the big differentiator, the introduction of rules. That's why I asked last week, when you changed the rules of the game for the plant. I know the affordances changed and I know the tools were introduced and all of that stuff. But I keep coming back to tools, rules, and pools: tools being obvious, rules being obvious, pools just being an aggregation of resources. And so I think if you're able to sort of coordinate all three, it gives you a different sense of what might be going on. Okay, a fun thought on that, just to think about game playing: the rules are always in effect. 
And the rules can be explicit or implicit. Like, it's a rule in chess that you have to move out of check, or figure out how to get out of check. But then there are sort of equivalents to rules, like: if a piece is pinned, it can't move. But you don't need to state that as a rule. So rules, they're simultaneous. They're always in effect, unless there's another rule, and they can breed emergence. You just say, okay, here's how the pieces move. And then here's check. Those are the rules. And then it's as if there's a rule that says you can't move a pinned piece. So rules, always in effect, allow for the space of emergence; steps have to be sequential. And so it's almost like we're moving from within a round of a game. That's the instructionist: okay, you're playing Risk. First, roll the die. Now, see whether the three is higher than the two, or whatever it happens to be. And interactionism is pulling us back to, like, the rules in the pattern. There still can be sequential instructions, or sequential information can occur. But it is very different to have rules rather than specified steps. So let's see. We're building up. Yeah, go ahead. Just one last thing. So when I would ask young people to watch a video clip of something, I would ask them to draw three columns, title them tools, rules, and pools, and find examples, find tokens, of each. What do you think was the easiest thing for them to accumulate under? It was tools. Those have got the edges. What was the second highest aggregation? Pools, because they could literally figure out what was congregating. What was the hardest thing? The parallel processing, which is what you just described. It's always going off. The sequential is much easier to account for than the parallel processing. The happening-as-co-occurrence is the hardest thing for people to take account of when walking into a novel situation. So you can say it. 
It's just being aware of it and finding examples of it. That's really, really hard for most minds. So even from a computational aspect, right, parallelizing functions speeds them up so much, but they're also so computationally intensive, right? Yeah, exactly. Dave has another great comment on this slide. He wrote: I'm looking for where time is applied in the two approaches. The intrinsic frame of reference for active inference is u-tilde of t. So we'll zoom in a little bit on just figure two. Here's that u-tilde of t passing through the intrinsic frame of reference. But there's no explicit t in the optimal control formula; we can just see that motor commands u-tilde are passed. So there's kind of a timelessness to, you know, "contract this muscle." It's not "contract this muscle now." It's actually just a timeless statement. Whereas, he says, t is inside the optimal control formula. Here is t, dt; it runs through time and is not in the intrinsic frame of reference as seen in active inference. So he asked whether that tells us where the approaches are taking account of time differently. I think that's a great question: how does time play a role differently in these two frameworks? Let's keep on looking for similarities and differences. So we have rules, not steps, for active inference. We introduce a time dependence of that downward motor signal. We have, of course, a deep generative model component. We have state estimation; that's the Hamlet joke. There's this whole module that's clearly different in optimal motor control, which is the state estimation. And that's what doesn't need to be done in active inference, this state estimation of: OK, given the sensory input we're getting, like, you know, two thirds of my proprioceptors here and one third here are firing, so let's estimate that the arm is here. OK, now pass that to the next part of the model.
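Dave's question about time can be made concrete by writing the two objectives side by side. These are standard textbook forms, reconstructed here rather than copied from the paper's figures:

```latex
% Optimal control: minimize a cost functional with time explicit,
% integrating a cost c over states x(t) and motor commands u(t).
J(u) = \int_{0}^{T} c\bigl(x(t), u(t)\bigr)\, dt

% Active inference: action minimizes variational free energy F, with time
% carried implicitly by generalized coordinates of motion,
% \tilde{u} = (u, u', u'', \ldots), rather than by an explicit integral.
a^{*} = \arg\min_{a} F\bigl(\tilde{s}(a), \tilde{\mu}\bigr)
```

So the integral makes t an explicit index in optimal control, while the tilde notation packs temporal structure (position, velocity, acceleration, and so on) into the intrinsic frame of reference.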
Given that the arm is here, where do we want to get the arm to? OK, now given where we want the arm to go, we estimate the gap between the optimal and the actual and send the command, you know, turn the steering wheel to the side to bring it into alignment with the optimal or the most rewarding state. So state estimation is in optimal motor control theory, but in active inference, you just don't have it. And that's sort of an interesting piece to drop out of the model, because we don't need an explicit representation or an explicit variable with the location of the arm. So that's a rule. It's like John Cage: everybody has the best seat. It's almost like it's enough to just say you're on the ocean, you're sailing, and then you're going to be trying to reduce your uncertainty as you pursue policies within your generative model of the world, which includes latent causes. So the deep generative model includes latent causes of the world, which are not anywhere to be seen in this optimal motor control theory. Like, there's a deep latent cause that if too much weight is on this branch, it's going to break. But where is that? The constraints of the niche, or something that's deep about the niche out there. So anyways, it's like you're in the boat. You want to be reducing your uncertainty about getting the sensory states that you want, like seeing that land look bigger, reflecting your being closer. But you don't need that GPS coordinate: where am I, okay, now where do I want to be, and how are we going to get there? It's enough to actually have a sensory preference and then reduce your uncertainty on the trajectory of reaching that sensory outcome. So that's a big difference in active inference. And I'm seeing even a few more. Let's look at the green boxes. So in the green box for figure one, we have a forward model. So they write that the function of the forward model is to improve the execution of action.
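The point about dropping the explicit state estimator can be illustrated with a toy scalar example (my own sketch, not the paper's model): an explicit Bayesian estimator and a predictive-coding scheme that just descends a prediction-error gradient arrive at the same posterior belief about where the arm is, but the second never maintains a separate estimator module.

```python
import numpy as np

# Toy illustration: estimating a scalar arm position from one noisy
# proprioceptive reading. All numbers are invented for the example.

prior_mu, prior_var = 0.0, 1.0   # prior belief about arm position
obs, obs_var = 0.8, 0.5          # noisy sensory reading and its variance

# (1) Explicit state estimation: closed-form Bayesian (Kalman) update.
k = prior_var / (prior_var + obs_var)       # Kalman gain
kalman_mu = prior_mu + k * (obs - prior_mu)

# (2) Predictive-coding style: gradient descent on precision-weighted
# prediction errors; no separate "estimator" module, just error reduction.
mu = prior_mu
for _ in range(1000):
    eps_sense = (obs - mu) / obs_var          # sensory prediction error
    eps_prior = (prior_mu - mu) / prior_var   # prior prediction error
    mu += 0.01 * (eps_sense + eps_prior)      # descend the free-energy gradient

print(kalman_mu, mu)  # both converge to the same posterior mean
```

The fixed point of the error-descent loop is exactly the Kalman posterior mean, which is the sense in which the estimator has been "subsumed" rather than deleted: the same quantity emerges from the dynamics instead of being computed by a dedicated module.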
It's kind of funny, you know: executive, executive office, the executive component of the government, it's about executing. That's where the instructions are issued from, executive orders. Improve the execution of action by helping to finesse the inferences of the state estimator. So again, it's in feedback with the state estimate, and then it's sending an efference copy; that's the outgoing efference. And so there's this feedback between the prediction of, basically, where the body is in this case, and then that gets sent. It's kind of interesting that the x comes from the state estimation, that's the where-is-the-arm, and then the forward model is the forward model of motor control that connects with this u-tilde. That's the green box, that forward model in optimal control. Whereas in active inference, we have a different forward model. It's a forward model of x, as well as v, which is the prior belief. So again, there doesn't need to be a forward model of position and motor action as happening here. It's enough potentially to have a model over actions and beliefs. We can look at the Friston caption to see a little bit more closely. Any thoughts on that? To go back to the state estimator, I think one livestream back: the state estimator says that there have to be ties between the two rails, to stabilize those ties so that some processing can take place. I think what this talks about is the fact that we know that there's a gradient. We know the train is going to roll downhill as long as there's a gradient. There's a rule, as opposed to trying to stabilize the relationship between the ties of the railroad. Two very different ways of looking at something. And I think this goes back to what I said: there are times when we do give ourselves instructions, and then there are times when we simply parallel process because we have a rule. I don't want to make mistakes, because that doesn't serve my goals or my target.
Again, I just suggest that the inclusion of both is probably to our advantage. It's just that we've been trained to give all of our attention over to making sure that the thing is stabilized. Also, one piece of Friston 2011 that's not brought over to this paper: there, figure one was optimal motor control and figure two was predictive coding and motor control. So already we see the reduction of the model, in that the state estimator goes away. You don't need an explicit state estimator, because the state estimator has now been subsumed into the forward model, which is why, crucially, the mapping from hidden states to sensations is now part of the forward generative model. So with predictive coding, instead of keeping a variable of what we think is happening, we just have a generative model of what we expect, which can be tracked through time and all that. We're simplifying that state estimator by including the expectations about the state in our forward model. That's predictive coding. How does that differentiate from active inference? So that's a good question. So here we have plant kinetics, sensory mappings, forward model, and optimal control with a cost function still coming in. You still see that integral with the minimization of motor commands u over time. Whereas in active inference, again, we're moving from "it's more valuable to be getting the signals I like, and I like rewarding things" to "I'm just bringing the sensory outcomes into alignment with expectations, and I'm selecting policies based upon which ones I think are going to get me there." And then this section that I had highlighted: in summary, the tenet of optimal control lies in the reduction of optimal motion to flow on a value function, like the downhill flow of water. So that's pragmatic value, and that's awesome, because think about how much work has been done on information flows in the last 10 years.
Conversely, in active inference, the flow is specified directly in terms of the equations of motion that constitute prior beliefs, like patterns of wind flow. The essential difference. So how do we go from water going downhill on the value function? You know, gradient descent, or up and down, they kind of get flipped around, but how do we go from just thinking about what direction gravity takes you? That's value. That's reward. There's one answer: you know, which policy for financial outcomes is going to have a higher value? That's that scalar estimate of value. The essential difference is that prior beliefs in active inference can include solenoidal flow. So this is where we're opening up the whole discussion of play. So I hope that we can think about that for a second. And Friston says, you know, he doesn't want to overstate the shortcomings of optimal control. So it's not that you can't introduce a solenoidal component to value. You can have the idea of solenoidal flow circulating on iso-contours. But how many businesses stop and say, you know, let's not pick the most rewarding number; let's look for, maybe not the most rewarding number, but the number with the most solenoidal play around it? People would say, but we're not going to do that. Why would we stop at, you know, 5,000 feet when we know that we can get to 6,000 feet? And solenoidal flow, the ability to have both the pragmatic and epistemic, the directed and the solenoidal components, opens up the space of play, hopefully. So what do you think about that? I'll put up this slide where we had separated out this optimal motor control theory with value only. The one thing that this robot is tracking is the slope and the altitude. It's following uphill as much as it can so it can get to the top of Mount Value.
Whereas in active inference, we're using the Helmholtz decomposition, which is related to the physics of flow on vector fields, and which reduces the flow to these two irreducibly separate components. So they're totally orthogonal. They don't have redundant information; they have totally separate information. There is the value component, the straight line, but there's also this equal-value solenoidal flow. Any thoughts on that? You explained it really well. Here's a comment from Dave. Well, people are thinking: giving oneself instructions can refer to something real and useful if one interprets the execution of the instruction appropriately. I'm referring to playing a piece of music without touching the instrument, which can be nearly as effective as actually playing. If the challenge is simply to touch the right stops at the right moments, this can actually be more effective. Empowered machine operators may seem less efficient than highly regimented ones, but may over time get to far more effective overall operations. Dean, what do you think about that?
I think it also has something to do with a perception that play doesn't necessarily generate value, which is kind of sad, because there are so many examples where it does. The outcomes of play cause your mind to go off in so many directions, because it's supposed to be about distribution. This is kind of a way of encouraging that distribution. And yet, for some reason, it just doesn't get the same respect. I'm just thinking about the pipeline from basic to translational to applied to sort of infused research: what was once basic becomes applied and then becomes everyday. So let's just imagine that we're going back to the shooting on that skilled performance slide. Well, implicitly or explicitly, if we're working under this optimal motor control function, we're thinking: okay, maybe people's biomechanics differ, but for each person's biomechanics, there is an optimal way to shoot a free throw. And we want people to have a more accurate modeling of where their hand is on the ball, so that they can execute the movement better, so that we can get the reward: score the points, put the ball in the hoop. And so it relates to having people practice, or maybe get input that your hand was a little bit off here relative to the coach's expectations of where the hand should be. But then there's this whole question about visualization, and about what Dave was saying, like playing the music without touching the instrument, and the ability, almost discovered in a bottom-up way, of changing beliefs about one's performance: rehearsing mentally, but also changing beliefs about performance that influence motor behavior. That is more consistent with what we see in active inference.
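The Helmholtz decomposition described a few turns back, a gradient ("uphill") component plus a solenoidal component that circulates on iso-contours of value without changing it, can be sketched numerically. The value function and flows here are a made-up toy, not anything from the paper:

```python
import numpy as np

# Toy sketch: a 2D flow split into a gradient ("value-climbing") component
# and a solenoidal component that circulates on iso-contours of value.

def value(p):
    x, y = p
    return -(x**2 + y**2) / 2.0        # "Mount Value": peak at the origin

def grad_value(p):
    x, y = p
    return np.array([-x, -y])          # gradient of the value function

def solenoidal(p):
    gx, gy = grad_value(p)
    return np.array([-gy, gx])         # gradient rotated 90 degrees: tangent to contours

p = np.array([1.0, 0.5])
g, s = grad_value(p), solenoidal(p)
print(g @ s)  # orthogonal components: 0.0

# Euler-integrate the solenoidal flow: value stays (approximately) constant,
# so the trajectory circulates on an iso-contour instead of climbing.
q = p.copy()
for _ in range(10000):
    q = q + 1e-4 * solenoidal(q)
print(value(p), value(q))
```

The two components carry non-redundant information by construction: the gradient part is the only part that changes value (the pragmatic, reward-like direction), while the solenoidal part moves at equal value, which is the formal room for "play" being discussed.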
And so there are just multiple things that previously would be seen as sort of exceptions or quirks of a pure motor control system, like incorporating these deeper elements of our generative model all the way up to even the psychological. Like, maybe your generative model is that your team is fated to lose, so then you do choke in that moment because, you know, you didn't psychologically want to win or whatever. And that's, as Dean is always joking, like the interview with the athletes right after the game. It's like, "I wanted to win," or something like that. They're not going to give you that third-person "my hand was on the wrong spot on the ball." We're seeing how now, as we have, just like Blue said, more computational modeling that recognizes interactionism and that recognizes deep generative models, it will be possible to start to slide some of these examples from being quirky outliers to actually sketching out the structure of inference and action. Instead of just being seen as this one weird trick that's going to help you learn free throws, it'll be like: that's always about your generative model. And that includes you as being a team player, for example. It always is going to include that. And then it's going to come down to maybe being modeled better by active inference rather than a reward-based framework. Blue? So I was just thinking, you said you were going through the dynamic that was exploration, and then doing something intentionally, like learning how to do it, and then perfecting a skill, right? I was thinking you left off the end of that, just from a societal standpoint, right? Like, we explore new things, not thinking as an organism but thinking as a social structure. We explore new things, we set out the intention to complete those new things, we perfect those new things, and then we forget them.
So does that come into play in this, I wonder? Because I think about scaling active inference a lot, not in the way that we talked about scaling active inference in the paper on scaling active inference, but just from, you know, small components of a system to systems. And in that scaling, are we missing something about forgetting? Right, like handwriting, if you think about learning cursive. That was something where people start out scribbling on paper as little kids, you know, they're playing and they're drawing and whatever, and then they intend to learn cursive, they perfect cursive. But as a society we're not teaching that anymore, so they don't know cursive. And it's like typing: now we have voice translation, so we don't know typing anymore. I mean, these types of things are influential in a society, so does that come into play here? Like where I've talked about before, you know, thinking about doing something, like entering the flow state, right, when we talked about flow last livestream. It's like you have the intention to do something, and you think about doing something, but when you're really a skilled performer, you enter that flow state and you forget about doing it. You just do it; it's just magically accomplished. So is that flow state comparable to this forgetting as a society? When you enter that flow state and you're doing something magically, just with your kinesthetic memory, and you're perfect at it, does that compare to society forgetting a task because we've already mastered it? Like, we forgot about that, we're on to the next thing. I wonder if that's relevant. Anyway, sorry if I rambled.
No, it's really nice. Here's what it made me think about: instructions again. Implicitly or explicitly, they have to be remembered or forgotten, because if they're not remembered along the chain, or at least in the RAM of the computer, it's like they're never carried out. If the instructor started a sentence and you forgot what the first part was by the end of the step, you can't carry out the instructions. So with instructions, we think about symbolic representation, remembering and forgetting. But with interactions, it doesn't have to be remembered or forgotten; it's actually just experienced. Like in the ant colony, the interactions between ants don't have to be remembered. An interaction updates the generative model of each nestmate, and then they continue on their path. It's actually just experienced, and that's enough in an interactionist framework, rather than being remembered. I'm not sure if that gets at it. I think it's a great point about how, as something becomes pervasive and other skills and scaffolds are layered over it, there is a forgetting that can happen. But also, it's like we don't even need to have memorization of everything. I remember hearing from different college majors: somebody would say what major they were, and somebody else would say, oh, well, that one always was too much memorization. And then for that person, whoever's major it was, it wouldn't be too much memorizing. It just seemed like parts of a bigger picture, and so they didn't feel like they were memorizing little factoids, but rather just interacting with the material. Not sure. Dean, what do you think about that? Blue, you didn't say this, but you touched on something that's mattered to me for a really long time, and that is: should we scale bottom up, or does it sort of exist in its own realm?
That's the thing that you basically said. I taught for a long time, and it bothered me greatly that all the learning from haptic movement was disappearing into a keyboard. So we top-downed that one, and then we forgot about it, because top down basically erased the bottom up. We seem to think that if we top-down it, if we give it an out, we can now forget about it, and I find that crazy, because I've had many quite heated debates with people who want to take another bottom-up exercise, where we can really learn something and get some joy out of discovery, and say, well, I'm going to scale that. And I say, leave it alone. Leave it alone; it's a bottom-up thing. What are you doing? I know you can top-down it, but why can't we just let it explore a little bit and figure things out for itself? So, Dean, you really touched on something there with the haptic movement. You know, I personally learn super well by writing things down. Like, I take notes, I throw the notes away, I never look back at them again. It's the act of physically writing something down that helps me learn. And I mean, sometimes I do look back at my notes, but mostly not; I just take them to commit things to memory. And my daughter was really struggling with learning her times tables, right? She's doing these flash cards and the Quizlet app and all this stuff. And I had her manually copy the times tables, and instantly, within a month, she had them down and just destroyed the standardized tests where they do math computation, which is just rote memorization of your math facts and how fast you can do it. She got like 100%, and she's, you know, off the charts for her grade level in testing. So anyway, it's really very real and actual, in that kinesthetic memory, and this goes back to that motor command, you know, and the disappearance of the motor command.
And so this is what I think about with these two juxtapositions, and I think there is a space where you do issue a motor command. Like, at least for me, right, there is a space where it's like, okay, I need to put my right foot in front of my left foot and, you know, walk along to learn to balance. Like, I'm issuing a command to myself to execute some task. But there are many times where you do not issue that motor command. I mean, think about if you've ever tried to learn to play the piano: placing your fingers on the keys and stuff, and then you just do it, right? There's no command in that space. And so there's a mental, kinesthetic something that happens where you transition between having to issue the command and not having to issue the command. And this is something that I'm excited to talk to the authors about, if they are able to join us on a livestream, because I don't know: where does that stuff get erased in active inference? I mean, we aren't always commandless, I think. I don't know. I'll just put one last thing on that. If I think of myself separately from the neck down, everything from the neck down giving me the bottom up, my brain is way up here. There are things that are supporting that, that are giving me that chance to bottom up. I don't want those things to disappear. I don't want to be a head on a cart. I don't want to be like that plant with just my head going back and forth between lights, right? Like, when do we stop saying that everything has to scale? Like I said, you didn't say this. But I've had so many people say, oh Dean, all the stuff that you're doing is great, and you create all these unicorns, and all these young people are able to go out and contribute in professional settings, but it's not scalable.
And I go, what do you mean it's not scalable? Every one of these things is a bottom-up example; it can happen as long as you don't try to top-down it to death. Interesting. The part about the body and the neck reminded me of Ken Wilber's centaur stage of development, which is sort of along the integral psychological trajectory. There's a stage where it's like half human, half horse. And so it's one step beyond a human in dialogue with a horse as two separate entities; there starts to be a fusion in his framework, moving towards non-dual framings. But there's that stage where it's like the top down, the bottom up, but, you know, our brain is further up just gravitationally, so it makes it seem like it's the top-down piece. But what about a person upside down? That is bottom-up signaling. Well, you've already pointed to the money on the edge of a building back there about 10 slides ago, or people on the other side of the world upside down. And I think asking about speech and sign language could be interesting, because I think those are going to be the challenge cases for the symbolic. The last bastion of instructionism and symbolic messages being passed maybe is happening through language. Whereas with this sort of elegant skillful performance model, maybe you go, OK, I can see where predictive processing is coming into play, where it's not just about value. OK, so if predictive processing is an improvement on reinforcement learning or on optimal motor control, then I guess I could see why active inference builds upon predictive processing. That's the pipeline, the active inference down-the-rabbit-hole pipeline, for some. But I think when what is being performed actually is semantic or symbolic, that'll be where, on one hand, there's going to be resistance, because it's going to seem the most instructionist.
But on the other hand, I think we're going to tap into the richness of active inference to have a deep generative model including semantic information, whereas the semantics are not in this motor control framing. So it'll be like double or nothing with active inference for some of the symbolic cases. Blue? Yeah, I just wanted to go back to what Dean was saying earlier about bottom up. Using active inference and modeling active inference in a bottom-up and nested way is super, super interesting to me. And I'm wondering when we are going to crack that walnut, because it's going to happen. I feel like we're teetering, you know, on the edge of that. What will that look like? I mean, yeah, what will it look like, or what will be a system where we can explore it? I mean, maybe Daniel will grace us and we'll get to do his paper one day. We can do it soon. Yeah, the only thing with all that is that the people who are participating in this, and really are going, oh my goodness, we're really on the cusp of something, just have to be willing to go over the edge of the cliff. And I'm not joking. It's letting go of all of the habits that we formed to get to this place, because most people didn't get here through active inference; the three of us didn't. That came after a whole bunch of things that were more instructional. And to step back would be much easier than to go over. If we've got the prospecting proposition and the provisions in place, we'll go over the edge, and it'll all be fine; it'll actually be quite thrilling. I really like that, sort of that come-to-the-edge. There's a different thought chain when you're climbing the ladder to the high jump. Like, we've all jumped off of something that was scary or something like that.
There's a different thought process when you're climbing, maybe because your motor behavior is grounded; you're on the ladder or you're on the mountain going up. That's one mode, that's actually approach, but then right at the end, the mind goes blank. And, you know, the mind will say jump, jump, jump, but there's that resistance. And that reminds me of one of the few times that I've used VR goggles. The scenario was, you walked into an elevator, and then it carried you up a building, and you could see yourself getting higher and higher up. And then the elevator doors opened and you looked down, and it was just like dropping off a building. It felt like my foot was glued to the ground, like I could not take a step forward, even though I could inch my foot forward and I could see that I hadn't moved, that I was still just on flat ground. It was like it just stopped motor behavior, to know that it would result in some unpleasant situations in the usual case. So, Dean, what do we say to those who are, you know, not sure if they want to go to the cliff? What do we say to those who are walking towards the cliff, and then what do we do with those who are on the edge? This is a fantastic question, because this is the nut of the problem. So do you want instructionalism extended into the active inference realm? Or do you realize that whatever identity you take on, in terms of learning with both eyes open, with some instructionalism and some interactionism, I don't know what that will be for you or Blue or me. I just know it won't be an extension of instructionalism alone. That's the counterfactual here. I don't know what that will feel like as different, any more than you knew what the difference was going to be in terms of releasing to gravity as opposed to pushing against it as you were climbing up those stairs.
I just know that if you considered it to be just two more stairs, an extension of what you were already doing, you're going to be disappointed. So I think that is the big question here. As I used to say to kids: as long as you don't think that a field researcher is simply an intern outside of school. I don't know what that's going to feel like, and I certainly don't know what that's going to look like, because I'm not out there with you and your sponsor. So how can you let go of what you know in order to realize this thing that nobody can give you a direct answer to? That's a beautiful question, Daniel, because for active inference, that's the rule that's going to have to be discovered. Even the people who are spending a lot of time focusing on this right now, giving it formalisms and developing the way to formalize the math, are going to have to have that themselves. Otherwise, it's just going to be an extension of instructionalism, and that would be sad. Hmm. Yep, it's like jumping off; it's not a staircase in reverse. Right. And I think the idea would be, maybe we already have the flight suit on, where it's like, if you climb up with the provisions, then you know when you jump you're going to be able to do something that is just qualitatively different. It's going to feel different. It's going to be different. It's going to be a different action. And wasn't the point of listening to instructions to change, to update, to be better or to be different? And then it's like, by being different and following that path, we become who we are. Like somebody said, I always wanted to be skilled in this martial art; so then they trained along the way to become who they wanted to be, and that was them being who they wanted to be. It's a lot of sort of loose associations, but again, I think we're working towards making it happen.
It is and it isn't, though, because, instructionalism, my point was that the instructions stop. I cannot give you a description of what that will look and feel like, so you're going to have to interact with it. It's not negotiable if you really want to be there. You have to let go of the instructionalism. So I don't think that's a loose association; that's a non-negotiable. Now, you may not like it and you may not go there. You may step back from the edge, and that's fine. But don't think that I, or other people who are actually looking at active inference with a second eye open, won't be aware of what's transpiring, whether you're going over or whether you're stepping back. That's not a loose association. That's pretty clear. Blue, any thoughts on that? I guess my question would be, let's just stay with this sort of climbing the instructions to get to the space of interactions. What? Yeah, go ahead, Blue. So, just as with climbing the instructions to get to the space of interactions, I feel like something is there that has to do with the bottom up and the scaling. Like, you know, in the systems, we are climbing the instructions as parts of a system until we get to that space as a system. The rules. Well, it's like, OK, we have the ladder going up, or the staircase going up to the jumping-off point. You could frame that in terms of instructions: go to the second step, go to the third step, go to the fourth step. Instructionism all the way. Or interactionism all the way: it gives you the rule, walk forward. So on a structured ascension, you can replace the sequential instructions with a rule or a pattern, so that it's interactionism all the way down, all the way up. And that kind of relates to active inference as a scale-free framework. Like, bottom up from where, Blue? Are you going to start with the subatomic, or where is the kernel going to be?
And maybe it's just interactions all the way down. And then at some sufficient level you can granularize, you can coarse-grain, and you can make something look as if it has instructions. So I wonder: you see this loop, right? The active inference loop, and it's an oval like this. And I wonder if it starts to be like the structure of an atom: there's this oval going one way, and then I wonder if I could superimpose one going the other way, vertical and horizontal, like one of these things. Because this happens across scales as we scale up and down. Is there a loop going between, and then a loop going this way? Possibly. Probably. I don't know; I think about it in that way. So this could be the s orbital, which is just the spherical one, and maybe there are other trajectories or flows that have different shapes. So Dave wrote in, and anyone else in this last little bit is welcome to write a comment. Dave wrote: in most effective military units, troops are told what to accomplish, not what to do. So that's the difference, for me, between push and pull. Push, like "lift the bar," is a command about what to do, versus an accomplishment, where it's the telos, the end-directedness, that pulls the person. And in that pull there is the space for the solenoidal flow: to walk straight towards the goal where there's a clear path, but then to say, oh, I'm hitting a wall, so in order to accomplish it I do have to walk around. Of course that makes sense in the context of being pulled by a goal. But that's where instruction-based robotics always fails. Because any time something requires a deviation from value, you need a whole second layer of the model to reintroduce curiosity, or epistemic gain, into the value framework.
So then it's this question: how do you balance all these second-layer bolt-ons so that you can still salvage your value-driven model, even when there are aspects that are just clearly better framed within a pragmatic-epistemic distinction? Blue. Yeah, it's like solving the problem without ground-truth data, right? If we've never been exposed to the system before, it's not programmed into our robotic function. How do we account for that? So I think that's where the exploration and the play come in. Or maybe tossing and turning: you want to be comfortable, and there's this sort of uncoordinated behavior that's just rummaging around, like searching for something in a box. In all these examples, you couldn't say that that twist to your side was skillful, at least for me, but it's also not reward-driven. Again, you need a second-level bolt-on to explain how the rummaging behavior is itself value-driven. You can say that there is a value that's pulling, that you want the valuable thing in that box. But we all agree on that: we all agree that there's something valuable in the box and that search behavior is going to be required. Are we going to have a process theory that allows for exploration and then zeroing in on a goal? Or will we have a process theory that says, well, if the end is valuable, then let's just backpropagate value? But what if we don't know what's valuable in the box? Something is valuable in here; how do we find out what it is? Okay, so we implement the search function, but then we're exploring which of, say, fifty items in the box is the valuable one. Yep. It's almost like in the simplest cases, value learning is defensible when we have a hill-climbing robot on a single hill. Okay, but now you've got a triple bottom line.
You've got financial success, but you also have the environmental and social considerations of a business. So will you reduce all three of them to one of them? Or try to generate a fourth standard that you reduce them all to, a common currency for all of your values and rewards? Or could there be a deep generative model where the organization has preferences in the financial, social, and environmental realms, and then takes stock of its action possibilities, which aren't even describable in the value mindset? Maybe every time you have a meeting, it's square zero: what can we do? Whereas in active inference, we start with the space of policies, the affordances that the niche provides. So you don't need to consider policies that cannot be done, and you reweight policies according to how likely or tractable they are. So there are just so many ways that we see the mathematical framings get expanded through relaxation of some of their attributes, just like Friston wrote in 2011: optimal control can be cast as active inference with three simplifications. And then he walked through those, which we discussed a little bit last time. But this is kind of cool, how we're building out some differences between the motor control approaches that have been used in the past and active inference as a process theory, the way that it encompasses what we've seen before and then maybe also opens up the discussion into areas that weren't even considered, like adding in affordances plus preferences. I think that embodied skillful performance in active inference implies a huge leap of faith, back to that jumping off the edge, because it is one. And that's a cliché that people can probably appreciate. What do you think the trust or faith is in? Yeah. That's it. Yourself. Yourself, right? Or lack thereof. Act or act not. There is no infer. No, there is. There is. So it's interesting. Yeah.
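The point about starting from the space of afforded policies, rather than bolting curiosity onto a value function, can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the policies, the prior `E`, and the pragmatic and epistemic scores are all made-up numbers, standing in for an affordance prior and the two terms of expected free energy.

```python
import numpy as np

def softmax(x):
    """Stable softmax: turns scores into a probability distribution."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical sketch: three candidate policies. The prior E over
# affordances gives the un-affordable third policy zero mass, so it
# never competes. Each policy's expected free energy G combines a
# pragmatic term (expected divergence from preferences) and an
# epistemic term (expected information gain), instead of a value
# function plus a curiosity bolt-on.
E = np.array([0.5, 0.5, 0.0])          # third policy is not afforded
pragmatic = np.array([1.0, 0.2, 2.0])  # divergence from preferences
epistemic = np.array([0.1, 0.8, 0.0])  # expected information gain

G = pragmatic - epistemic               # lower G is better

# The log-prior keeps un-afforded policies at exactly zero probability.
with np.errstate(divide="ignore"):
    q = softmax(np.log(E) - G)
```

Here the exploratory pull comes out of the epistemic term inside `G` itself (policy 2 wins on information gain despite a nonzero pragmatic cost), and the "don't consider impossible actions" constraint lives in the prior `E`, not in a second layer of the model.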
How can we distill the similarities and differences so that, when we see one of those limitations, a new hack on value like curiosity-plus-pragmatic, we can see it as a saddle point from which that model could move towards an active inference framing, just by rewiring some of the modules it has. Here's a great question from the live chat. Bill wrote: would a priority parameter according to depletion levels of different resources be applicable? Yeah, that's back to the pools thing. I would say yes. Okay, Dean with the pools. Here would be my thought, from multi-scale decision making. Let's just think about a bee colony. The resources that they need to forage for include pollen, which has a lot of protein and fat and is helpful for the developing larvae; then nectar, which is like high-octane gasoline, a lot of carbs, good for the adults; and then, depending on the environment, they might need water or salt or other resources. So there might be an imbalance of the nutrients. And you can think about that similarly for an organism: rabbit starvation, where if you only eat protein, your body enters starvation mode, because fats and other nutrients are needed too. So definitely multiple resources are being balanced. And part of the challenge of decision making under uncertainty, just as we've been getting at, is that there's not just one kind of outcome. It's enough to be uncertain about value; but now, when we're thinking about these multi-dimensional resource landscapes, whether it's just protein versus carbs or something a lot more nuanced, like the triple bottom line, we really do need a principled way of talking about the relative depletion of different resources. So in the bee colony case, selection has shaped nest mates so that they use the rate and the type of interactions they have to update their nest-mate-level behavior.
So if I'm having a lot of interactions with hungry larvae, maybe we need more pollen. If I'm having a lot of interactions with full food reservoirs, maybe I can slow down, because we probably have enough forage for now. Colonies that don't consist of nest mates with that productive way of interpreting their interactions, not instructions, those colonies are swept off the table. And so we end up seeing distributed systems succeed where they act as if there were a correct prioritization, amidst uncertainty, of different kinds of resources. And if we use active inference instrumentally, with a deep generative model of the body's nutrient ratios or the colony's relative balance between protein and carbs, then it's as if there were a priority parameter, where whatever is depleted is what gets prioritized. But we don't need that explicitly in the model. We can have a multi-dimensional generative model of different resources, and then actions will be selected according to their expected free energy being minimized, almost following the curve that takes us along the edge of explore and exploit, from wherever we are in that resource-depletion landscape, towards our preferences. I would be totally open to hearing other explanations, and maybe there is a priority parameter that can come into play. But one longish response would be: organisms do act adaptively to prioritize depleted resources, but that doesn't mean there needs to be a parameter in our model that explicitly prioritizes specific resources. It's enough just to have a preference vector that includes multiple types of resources. Blue. No, I think that's right. You can make it as simple or as complicated as you want, but I always think fewer parameters in the model is better. It just makes me think: how do we get those preferences?
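The "no explicit priority parameter" claim can be made concrete with a toy sketch. Everything here is hypothetical and illustrative: two resources, a fixed preference vector `C`, and actions that deterministically top up one resource. Only the pragmatic part of expected free energy is modeled (distance of predicted outcomes from preferences); there is no epistemic term and no priority parameter anywhere.

```python
import numpy as np

# Preferred resource levels: the agent "expects" a balanced state.
C = np.array([0.5, 0.5])  # (protein, carbs)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def action_posterior(state, actions, temperature=16.0):
    """Score each action by how close its predicted outcome lands to C."""
    scores = []
    for delta in actions:
        predicted = state + delta
        # Negative squared distance to preferences stands in for the
        # pragmatic term of expected free energy.
        scores.append(-np.sum((predicted - C) ** 2))
    return softmax(temperature * np.array(scores))

# Current state: protein depleted, carbs at preference.
state = np.array([0.1, 0.5])
actions = [np.array([0.2, 0.0]),   # forage pollen (protein)
           np.array([0.0, 0.2])]   # forage nectar (carbs)

p = action_posterior(state, actions)
# Most probability mass lands on the pollen action: the depleted
# resource is prioritized "as if" there were a priority parameter,
# purely because its outcome lies closer to the preference vector.
```

As the state moves around the resource landscape, the action posterior shifts with it, which is the sense in which prioritization of the depleted resource falls out of a multi-dimensional preference vector for free.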
And then what happens when the niche changes, so that preferences that were adaptive, like acting as if sugar and salt were prioritized over all other preferences, no longer are? What happens when the situation changes? So how do we integrate high-precision priors with highly accurate sensory input? That's something we've talked about before. We get the best of both worlds: we can fly blind because we have a deep generative model, but when sensory input requires a categorical shift in the cognitive model, we want the fluidity to make that shift. We don't want to be wasting our time in one mode when it's the wrong mode: be hard when hardness is a flaw, or be soft when softness is a flaw. So I wonder, for organisms like bees and birds, there's something programmed, and things are shifting all the time, right? There's a huge amount of migratory stuff that comes into play, and hibernating, and all these things. And for us, we notice that prioritized shift, like: I need water right now. If we're suddenly parched, the need for water is overwhelming. But we have offloaded so much of our needs into our niche: for the need to stay warm, we build houses, we wear clothes. We've offloaded a lot of this stuff. So I wonder if the instinct becomes less instinctual. I wonder if that happens with the instinct to sequester food, or fly south for the winter, or these types of things, just because we've offloaded our need to do them. I think that's another distinction here. Here we can include the extended and embedded aspects, the 4Es, 7Es, 11Es, however many there are. We can at least frame those situations in terms of niche modification or niche construction.
Where is that in motor control? It's nowhere. Where is the golf club with a squishy handle that is being interacted with as an instrument? We don't have those nice interfaces, whereas active inference has excellent interfaces, quantitatively and qualitatively, for being enriched by pieces that are, as Dean would say, maybe non-negotiable for the actual creature. In motor control it would be totally ad hoc. Where would you put the instrumentation? Is it a state estimator? Should I be estimating where my golf club is? Or is it enough just to estimate my hands and the angle of the hands, and then make a second-level prediction? Not quite clear; you could probably implement it multiple ways. And here's another nice comment from Dave, so it's always good to connect back to related areas. Dave said the term in cybernetics for putting drives into a model is algedonic. The definition of algedonic: it comes from algos, meaning pain, and hedone, meaning pleasure. Sorry, Dave, for the mispronunciation. Putting in multiple algedonic drives, pains and pleasures, may yield complex instructive behaviors. But there are so many advantages in having a model, and I think active inference is one, so sophisticated that it generates its drives: the drives come out of the model rather than being put in. And then he wrote, clarifying: I mean the innumerable drives emerge from the model's internal dynamics. If your robot is spontaneously curious, spontaneously attends to events, or responds to externals, it's as if it has a curiosity parameter. So it's true; maybe we could even put drives in quotation marks, like Shakespeare, thanks Hamlet: you are a great "prince." Drives emerge from the model; they don't need to be put in. So instead of the rabbit being put in the hat, the rabbits emerge from the hat, if you put in enough hydrogen and enough time. So that's interesting to think about, where cybernetic models come into play here.
So let's take just a few final minutes; write any final questions in the chat. And thanks, of course, to Dave and to Phil for the great questions. We can think about maybe dot one, Dean, as being on the edge, and dot zero as the climb. And it's a two-directional trail: there are people going up to Half Dome and people coming down. So dot zero is a two-directional highway, with lots of on-ramps and off-ramps. We try to make the topics accessible for those who are not in the active inference game but are familiar with related topics, like the keywords, usually. In the dot zero, we go from bridging the keywords and the topics of broader concern to active inference. So that's the on-ramp to active inference; but then, for those who are already in the active inference mode of learning by doing and working, it's their window out, like: oh wow, this idea that I had been working on in one context, it also applies outwards. So that's the dot zero. In the dot one, whoever and whenever it is, we are in the bathtub phase, the part between the bookends. We're at the edge, and we can look back to the dot zero and recap the steps that we took, but we also want to start looking out. And then I don't know where dot two would be, but dot two is when we take the next step, and the dot two is the no going back. I mean, I can add a twist to the fact that I'm going to be hitting the water soon, and that's a skillful embodied movement, right? We actually watch people doing the ten-meter platform add quite a few things before they hit the water and regain control. But I think this is what interactionism is, versus instructionism. The way that this dot zero, one, two is set up is interactionism. It's more about this than it is about bidirectionality. And I think you talked about that at the beginning of today's episode. Right.
We have a structure that we hold to, a rail that we could modify as well. But we hold to the rail so that we don't have to frame it like instructions. We weren't instructed to do this, and I hope that people don't feel like these are instructions, but rather interactions or engagements. And then, just like so many other things that we're seeing online, the new technological affordances allow it to actually be an interaction. We can actually have people asking questions live, being part of the conversation. Whereas if it were just recording a video and releasing it, or making a book and releasing it, it would be hard to break out of the instructionist frame, because the deliverable would be static and very unidirectional. Here we have a different affordance that helps us break out of that frame basically for free, because even if instructions are being conveyed on the live stream, it would still be within an interaction: of participants, of individual and collective, and of the community. So, any final thoughts on dot one? No, fun times. Thanks, both of you, for the dot zero and dot one fun. We've had a few hours to think about it, but the paper is the distillation of many more hours. So there's always value, dare I say, in going back and playing with it, because it's fun. They're great topics to be considering and learning. So cool. Thanks, everyone, for watching. Thanks, Dean and Blue. And we'll see you next week for 23.2. Thanks, guys. Bye.