Okay. It's December 23rd, 2021. Nice, Sarah, cool ceremonial lighting. It's the end-of-2021 paper review stream. So welcome to Active Inference Lab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at some of the links here on this slide. This is a recorded and archived live stream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here, and we'll be following good video etiquette for live streams. This page shows a snapshot of our website, which I'll also put in the YouTube live chat; head over there if you want to participate in 2022, and check out the Coda site if you want to see more information on our past and upcoming live streams.

Today, the ActInf 2021 paper review, we're going to just be reviewing and reflecting on some of the fun times and interesting things that we've talked about this year. And just like a dot-zero is just an introduction, not a review or a final word, this is going to be even more so, because we're going to spend literally two to three minutes, two to three memes per paper, and we talked about them for several hours. So this is just going to be an off-the-cuff thought per paper, not a total rehearsal of the paper. And this should be really fun. So Sarah and Dean, thanks a lot for joining. Maybe each of you would like to say hello and share what made you excited to participate today, or one kind of overall thought to lead us in here. Go, Sarah.

Oh man, the pressure. Okay, I haven't been around for a while, so my brain is not fully dialed in. But I actually was just reviewing some Michael Levin videos, and, oh boy, my brain is about as exploded rewatching them as it was the first time. So, you know, looking at life through an agential point of view versus looking at it through this homeostatic one. My brain cannot hold it all.
That's where I'm starting. Oh yeah, my name's Sarah, by the way.

Cool, Dean. Hi. Yeah, hi, I'm Dean. The reason why I wanted to do this is because when I was doing programming, pre-retirement, one of the things I found to be really, really helpful was what we called the learning exhibition, where there was a timeline, a chronological review. And there was a lot of time given over, in front of what we described as a significant audience, to give people a chance to take the entire breadth of what they've done, go through it themselves, and pick out the highlights. And then a lot of light bulbs went off in the learner's mind. And so for me, even though I haven't been participating throughout this year (I've only been at maybe 30 to 40% of the live streams), I still find that review process to be really helpful in terms of getting me some sense of where I am. Because, as we said in, I think it was paper 34, yes, the 34 paper: it matters where you came from. So this gives me a chance to look at that and give myself some sense of where the directions are going next.

Great. Yeah, we don't have any papers at all selected for 2022, so it'll be a fun fresh start. So for each of the papers, well, first we're going to look at the videos that are not paper live streams and see if there are any we wanted to recall. But then for each of the papers we're going to talk about: what were our memories, if we read the paper or went to the discussion or listened to it; how did our perspective or generative model change from that discussion or the paper; third, what ideas or themes in the paper came up later or touched upon something we went into later; and last, how does the paper, or the concepts or claims in it, change how you thought about something or did something?
So we'll start with just looking through the non-live-stream videos. We had 13 guest streams, so either of you can bring up one if you want to highlight a guest stream that you liked, or we can both pass. The guest streams are just research related to active inference, and we hosted a really global range across many different sectors: one-off guest streams from 13 different people. And these are always open, if somebody wants to do one themselves or wants to recommend that somebody do one. So do either of you want to highlight one of these streams?

Dan, can you remind me, who's the gentleman from Australia that was talking about... Philip Gerrans. Philip Gerrans, okay. Depersonalization and Cotard syndrome. Yeah. Okay. That one I found particularly interesting, just because he brought up things that I wasn't even aware existed. Okay, Sarah? Okay, we'll pass, no worries.

Okay. We had the model streams, which are math and technical walkthroughs or exposition. The first four, 1.1 through 1.4, were Ryan Smith and Christopher Whyte. That was awesome. I mean, it's like a 10-hour series, so it's kind of a fun course there. And then a few others. So these were really fun and we hope to have more of those. And then the last category was the org stream. We had one of those, with Richard Bartlett, about all kinds of organization. And then we had three math streams. Sarah, we had great times in the math streams; I think it was maybe even your idea to do the first one. It was different for me. Yeah. Yep. Fun times.

So we're here to talk about really the live stream series. In the live stream series, we met once per week, basically every week, and we spent about two weeks per paper. So a dot-zero video would happen at some point before the dot-one, and that was background and context.
And then at the regular weekly times we had a dot-one and a dot-two, which were group participatory discussions. Sometimes the authors joined; other times they didn't. And we had a few dot-threes or higher, where we really made an extended series. And this allowed us to have a lot of time to go into different ideas. So again, today we'll just take one favorite meme each; this is like the haiku form of the papers. But it's just awesome how many perspectives people brought, and the papers that we got to read, and just exploring it as a new format. So cool times there. Any other comments before we jump into the marathon?

Just one quick one. I really liked, on the previous slide, Richard Bartlett coming in, and the idea of that org stream, because I think he really wasn't sure what we were on about, and I think we brought a lot of things to his attention; that felt like a very reciprocal conversation. And the other thing is, when we had the dot-zero, dot-one, dot-two, and when we had authors with us, I can't speak for them, but I think that when we had those kinds of interactions with people, they got a sense that they weren't just writing to the clouds. They were writing to real people who really wanted to have a real conversation with them. As the year went on, I think maybe we gained a little bit of a reputation for making these things a little more available and a little more interesting, because we can have a conversation about it, not just leave it on the script.

Hey, great. That'd be awesome: anyone who wants to come on and visit or discuss their paper or share their perspective. So we're going to jump into the papers. Either of you, or anyone in live chat, totally feel free to ask a question or give an observation. And for all of the papers, if you want to follow along, go to the Coda, which I'll put into the YouTube live chat again.
At the Coda, you'll see the link to all the papers, and each paper will also be shown with its title and authors. Okay, so here we go. Rewind your clock, spin your arrows of entropy and time: it's now January 2021, early yonder. And we started with paper 13, Integrating Cybernetic Big Five Theory with the Free Energy Principle: A New Strategy for Modeling Personalities as Complex Systems, by Adam Safron and Colin DeYoung. The aims and the claims of the paper were essentially connecting FEP (the free energy principle) and ActInf (active inference) with another integrative theory, itself a fusion of cybernetics (goal-directed systems) and the Big Five (the psychological descriptive features). So CB5T had already integrated cybernetics with the Big Five, by seeing the Big Five axes of trait variation as being part of cybernetic systems rather than some other kind of system. And then this paper extended that into the free energy principle and active inference domain. And just to show one figure or table, which we'll do for each paper: this was Table 1, and it makes a very clear laying-out of the correspondences between the traits and the terms that are used in CB5T, and then the terminology and ontology of active inference, and how those terms exist. So it was a fun beginning of the year, because it had to do with personality dynamics and with the connections of domain-specific and domain-general fields, and how active inference could play a role in all of them. Do either of you want to add a thought on this paper? Okay.

So that was paper 13. Paper 14 was The Math is Not the Territory: Navigating the Free Energy Principle, by Mel Andrews. This was a single-author philosophy paper, so a very different format than some of the ones that we read at other times in the year. And this paper does an awesome job with the memetics of the title, but it really makes awesome historical contributions.
There were also later versions of this paper that were updated after our discussions and through other interactions in the community, and also other works by Mel; so, like many things, it's part of a bigger picture. But here the project is to trace out the historical and philosophical threads that constitute the free energy principle and active inference. And it's kind of interesting, because these terms will come up in later papers: thinking about, for example, the Helmholtz decomposition of a gradient field into a solenoidal curl component and an irrotational divergent component. So this idea came up in the historical antecedents and the philosophical threads that play into the free energy principle, but it was being approached from a philosophy perspective. So it was a fun discussion. We talked about the history and philosophy of FEP, about what is science, and about how the work of scientists enters into discourse with philosophers. So it's a fun paper. Any comments on this paper?

I got a question for you, Dan. Can you give me a spoiler alert: did you resolve the second bullet on this slide here when you guys were discussing this? This one: certainly not anything like a theory, a law, a hypothesis, or a research program. Yes. So I forget the terminology of the paper, but it's like target models versus targetless models, and there was some discussion about how it's a source of hypotheses in a different way than research programs of the past. And yeah, it was a cool paper.

And it feels like the only thing my brain can retain about that was that the free energy principle is like an API, and how you apply it is how you apply it. Yes. I think this introduced a lot of the "API isn't the territory" idea: it's an interface, agreed. And for other domains, how do philosophers interact with this object? What is their intellectual API for all of this? It's a programmatic interface, so it's just a protocol.
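As a quick aside for readers following along: the Helmholtz decomposition mentioned above can be illustrated with a tiny numerical sketch. This toy is ours, not from Mel's paper. For a linear flow dx/dt = J x, splitting J into its symmetric and antisymmetric parts gives the dissipative, gradient-like (irrotational) component and the divergence-free, rotational (solenoidal) component.

```python
# Toy illustration (not from the paper): a linear flow dx/dt = J x can be
# split, Helmholtz-style, into a symmetric part (irrotational, gradient-like,
# dissipative) and an antisymmetric part (solenoidal, pure rotation).
def split_flow(J):
    n = len(J)
    S = [[0.5 * (J[i][j] + J[j][i]) for j in range(n)] for i in range(n)]
    Q = [[0.5 * (J[i][j] - J[j][i]) for j in range(n)] for i in range(n)]
    return S, Q

J = [[-1.0, 2.0],
     [0.0, -1.0]]
S, Q = split_flow(J)
# S = [[-1.0, 1.0], [1.0, -1.0]]  symmetric, dissipative descent
# Q = [[0.0, 1.0], [-1.0, 0.0]]   antisymmetric, solenoidal circulation
# S + Q recomposes J exactly.
```

The antisymmetric part contributes no divergence (its trace is zero), which is the sense in which the solenoidal flow circulates on level sets while the symmetric part does the dissipating.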
So yes. Yeah, let's look at that: application programming interface. It's calls, so you can call a certain function from a different program.

Okay, so that was 14; we've moved from personality to history and philosophy of thermodynamics. We then had a very important discussion in 15 with Free Energy Principle, Computationalism and Realism: A Tragedy, by Thomas van Es and Inês Hipólito. This paper differentiated what they called a realist approach and an instrumentalist approach. And it's actually amazing that these terms came into play, entering from stage left very early in the year, because it was a theme that certainly came out again and again during the year. And to get to their figure, here's the conclusion of their paper: in this paper, we have defended the instrumentalist take on the FEP, arguing that the realist approach is a non-starter, regardless of whether it is representationalist or not. So they kind of make this two-by-two, which is a table in their paper, but we illustrated it as quadrants here. The left-to-right axis is realist versus instrumentalist: we're really talking about the real state of the system, versus we're using a statistical instrument to measure and model systems. And then there's an orthogonal axis with representational and non-representational. This paper comes out ambiguous, or neutral, or system-specific on the representation axis, but it disfavors the realist side and says that the instrumentalist side is going to be possibly very productive. So in other words, some of the claims that the FEP and ActInf make might be consistent with us using them as statistical instruments or models, but not necessarily describing quote-unquote real features of the real world. So that was definitely a theme that came up again and again in the year. And one of your favorite themes too, Dean.

Yeah. So again, I'm going to go back and look at this one, because I didn't know that you'd done this.
But a question for you: when they talked about realism, how are they describing what is real? I think the realism and instrumentalism were in respect to the term "generative model". So, realism: the generative model is literally implemented by a human brain to calculate the potential states of the environment; that's like "brains are doing active inference" or "brains are doing predictive processing". Versus it could be just an insightful statistical description that doesn't need to be the same as the basis of the underlying process. Like, you could use a linear regression on brain data without the claim that that's what the brain is really doing; it's not really doing linear regressions, it's just that we use that statistically. So when we use an active inference model to describe data, is that what the system is really doing? That's realism. Or is it just a clever statistical tool that we're applying? That's instrumentalism.

So can I ask you one more quick question? Because I know we got a lot of papers to go through here. Have you seen the movie Sully? No. Okay. Well, Sullenberger had to land the plane on the Hudson River, and he had to make a calculation when the bird strike hit. So how would these authors view his calculation in terms of picking a place where he had to land? Would they say that it was instrumentalist, because he was using instruments? And he didn't land in New Jersey, and he didn't land back at LaGuardia. So how do you think they would play that scenario in terms of their argument in this paper?

So the idea of using instruments: it's also related to extended cognition and humans using technology, and instrumentalism can still be at work when there are real lives on the line. So let's just imagine that the pilot was using some sort of heuristic, some back-of-the-envelope calculation based upon a falling parabola.
That's instrumentalism, because it's being used just as an insightful statistical description, rather than necessarily a description of the system itself, which would be realism. Maybe I need to see the movie, though. Yeah, you should. A, it's a great movie, and B, I actually used it in my program, because I actually thought that it was some combination of the two: there's the real situation, and then there's the instruments to tell you how far you can go. So I'm just curious; I'm gonna go back to the paper now.

Friston and others, I believe, have called that kind of position a structural realism. There's not a denialism that there are statistical regularities in the world; it's not a sort of pan-denialism. It's just a qualified recognition that we're only going to be able to use observations, and we're not going to be able to map hidden states. And there might be special cases where we can, but we need to prepare for the situations where we can't. And in the chat, in respect to Sully, Sid wrote: his decision was mostly intuitive; instrumentalism, over years of internalization, is indistinguishable from realism. Very based, Sid.

So let's go to 16. In 16, we switched gears to Wanja Wiese and Karl Friston's paper, The Neural Correlates of Consciousness under the Free Energy Principle: From Computational Correlates to Computational Explanation, and Wanja joined us in these discussions. This was another example, kind of like 13, of taking a topic that's an important debate or an ongoing area of development in some area (personality science there, neuroscience here) and then seeing what can be gleaned or added by fusing it with FEP. So the neural correlates of consciousness is people looking at how different anatomical, functional, and informational aspects of systems are associated with their capacity for awareness. And it's a big area; we reviewed a few different ideas about it in the videos, to summarize the argument and the contribution of this paper.
They wrote: in short, it may be more useful to consider the system from the point of view of the extrinsic information geometry (probabilities encoded by internal states) than from the point of view of the intrinsic information geometry (the probabilities of internal states). So we talked about what that meant, and what the implications were about whether the systems with conscious awareness were those that had potentially some kind of informational signature in how they worked. Like a slot machine: there's going to be some sort of statistical description of it. And is there something that might differentiate a slot machine's actions from a human pretending to be a slot machine? Are we just in a world where we can never tell, or might there be, with the right statistical analysis, some sort of distinguishing feature? And of course that leads us to questions like: what is alive? What is awareness? What is consciousness? Suffice to say they were unanswered, but we had really cool discussions and explored how a few different formalisms from ActInf and FEP could play into this basically eternal discussion. So any thoughts about this paper?

17. We stayed in the kind of trippy and interesting science domain with the paper by Fields and Glazebrook called Information Flow in Context-Dependent Hierarchical Bayesian Inference. Chris Fields came on for 17.1, and it was really a memorable discussion; 17 was also where we met Shanna Dobson, Chris Fields' collaborator. And this paper was really using ActInf in the discussion more as a concordance or a harmony, and less as a fundamental building block of the paper. In this paper they used the concepts of quantum contextuality, category theory, and Bayesian inference, putting them all together into a new way of modeling systems.
It doesn't look like I pulled out a second image for this one, but they had the cones and the cocones, and it was all about frames and meaning and semantic flow, and it was just a very interesting discussion. Any memories from those discussions? Yes, Sarah?

Yeah, this was the first time (and this was in my first year of this graduate program that I'm in, philosophy of science) where I learned about the frame problem. And it's like, once you see it, you cannot unsee it, and now I see that constraint in everything, and I'm not sure if I'm thankful or if I hate him for bringing that up, because it's terrible, it's a tyranny. If you were to summarize the frame problem, or why it matters or how it relates to ActInf, what would you say? I mean, a really layman way of putting it, but it's like contextuality: you can only get so much information; with a particular perspective, the information-versus-context ratio in some sense is fixed. That's kind of a woo-woo way of putting it. But I think about it even with the paper we'll cover later, like the 11th problem: you can slice the problem on the time axis or the state-space axis, but you're always slicing it. You can never get the full picture, is what it feels like.

Dean, what does that make you think about? I walked in out of the wilderness on this paper. This was my first encounter with you folks, and I'm still putting my head back together, because my head blew up and I had to pick up all the pieces and reconstruct it. So again, I always try to bring it back to the simplest examples, and I agree with Sarah: for me, this speaks to why there's such a thing as change blindness; why you can't count the basketball passes and see the dancing bear at the same time. No matter how much you try to take in, there are upper and lower limits.
And this is when, as soon as I saw this, I was like, oh my goodness. Sarah said you can't unsee it. I don't know if there's going to be a slide here with the moth and the flame, but I just flew right into the flame at this point; that was it, it was over, because I didn't realize that people were talking about this stuff. And it was really cool.

I just pulled one of the slides so we can kind of recall it. So it was about these graphical images. It's kind of like a way of looking at programs; it's a structure that represents programs, but other things too. With the two spaces, now it's a whole discussion, right: what is the scope, what kind of systems are we talking about here? And then there's the sort of coherent system in a given perspective. And then there's the case where two perspectives are looking at a given set of classifiers, the A-subscript letters. And that could be, say, a person at one time point and the same person at a future time point. If the memory is in a different frame and it doesn't have the same semantics, then the memory changed; but if the memory does have the same semantics, there was some invariance. So then, what is the structure of that invariance? And this isn't a math review stream, but with Shanna Dobson, this is where we were going the whole time: time crystals, perfectoid diamonds, infinity categories, sheaves. I was just glad to have heard it from her. I hope that we can fill in some of the structure underneath that, as individuals and as a group, to better approach those ideas, because they were interesting.

Okay, so on to 18. All right, so in 18 we met back up with Christopher and Ryan, who had done the four-part model stream, the step-by-step guide to active inference.
And here they joined us for more of a research discussion on their paper, The Predictive Global Neuronal Workspace: A Formal Active Inference Model of Visual Consciousness. This was fun, because it was some colleagues that we knew, Christopher and Ryan, and they knew how we operated a little bit. And then we were talking about consciousness, but in a very operational or instrumental way, whereas the previous Wiese and Friston paper was much more about the metaphysics of consciousness, even though that was also being approached in somewhat of an operational way. This paper was very much focused on the neurophysiological observables, and specifically on access consciousness, which is just the parts of consciousness that we can report on. And then, kind of like paper 13, they took a big idea in neuroscience, the global neuronal workspace (GNW) model, already fused with the anticipatory predictive processing paradigm, and combined that prior fusion (like cybernetic Big Five before it) with active inference. So we're kind of seeing some scaffold structures, mad-lib structures, by which people are contributing to active inference research while staying integrated or bridging with other fields.

One of the figures that really summed up this paper was Figure 3. This is a Bayesian network description of the generative model, with arrows showing the dependencies between the hidden state factors and outcomes. Each one of these nodes is a variable of a different type. The level-one outcomes are visual, on the left side. And then on the right side there are things that you say, verbal report: "I see a square" or "I didn't see anything". It's kind of like a linguistic state machine for action and a visual state machine for input. And then there's this attention, modulating the precision of how the visual level-one outcomes get mapped through that A matrix to the level-two outcomes.
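That attention-as-precision mechanism can be sketched in a few lines. This is our illustrative toy, not the paper's actual model: a common ActInf move is to raise the likelihood (A) matrix elementwise to a precision gamma and renormalize the columns, so low precision flattens the outcome mapping and high precision sharpens it.

```python
# Hypothetical sketch (ours, not the paper's code) of attention as precision:
# raise each entry of a likelihood ("A") matrix to a precision gamma and
# renormalize columns. gamma = 1 leaves the mapping unchanged, gamma -> 0
# flattens it (inattention), and large gamma sharpens it (high attention).
def precision_weighted(A, gamma):
    rows, cols = len(A), len(A[0])
    out = [[A[i][j] ** gamma for j in range(cols)] for i in range(rows)]
    for j in range(cols):
        z = sum(out[i][j] for i in range(rows))  # column normalizer
        for i in range(rows):
            out[i][j] /= z
    return out

A = [[0.8, 0.2],   # P(outcome | hidden state); columns sum to 1
     [0.2, 0.8]]
precision_weighted(A, 1.0)  # unchanged: [[0.8, 0.2], [0.2, 0.8]]
precision_weighted(A, 0.0)  # flat:      [[0.5, 0.5], [0.5, 0.5]]
```

With gamma around 5, the mapping is already nearly deterministic, which is the intuition behind attentional gain on the visual-to-workspace mapping.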
And so the sensed states of level two: the outcomes are the hidden factors relative to level one. That's very much the kind of hierarchical Bayesian modeling that we've been talking a ton about. And then there's the inference combining the level-two factors as well as information about the trial number. And that is what allows the model to swing back from sensing, say, "square" and "red", to making inference on it being a red square or whatever it is, and then that arcing back into the inference on linguistic structures. So it was a very integrative paper, with cool topics that we had brought up already, like consciousness and visual awareness, and using specific models to test hypotheses and make unique predictions and experiments. So any thoughts on that?

I think what I took away from this, because this was the second paper that I participated in, was that I realized very quickly that in order to get some traction and be able to show how things are working, there were really, really small windows that the researchers were looking at, really short timeframes, relatively. Otherwise you don't get any traction. So I quickly understood that if you're going to be able to do something with this, you have to take it one piece of the puzzle at a time. You can't take all 1,000 pieces at once, if you want to be able to say something that you can actually defend. Otherwise you're inferring instead of being able to show the empirical evidence.

Yeah, it's almost like: you can infer, you can predict, you can even look 20 miles ahead, but you've got to take one step at a time with your body, and that can be slow and painful, because it's easy for your mind to race way out in front. "Oh, I could have been here already, if only this had happened," or "I can imagine myself at the top of the mountain." Yeah, but you've got to act. And so this was sort of the kind of work that we love to see. So, paper 19.
This was Deeply Felt Affect: The Emergence of Valence in Deep Active Inference, by Casper Hesp, Ryan Smith, Thomas Parr, Micah Allen, Karl Friston, and Maxwell Ramstead. A 2021 paper. And this was fun because we were discussing papers just weeks after they were published; fun to be in the loop at that speed. And I remember that for this paper, because it combined a few different things that people had said about active inference for a long time: first, that it can deal with affect and anxiety and metacognition; and second, that there could be deep temporal planning. So not just instantaneous gradient minimization, like moving your eyes to the dimmest or most uncertain part of the image, but moving to the part of the image that wasn't the most uncertain now, but was predicted to be the most uncertain. That is what is called temporally deep active inference, and this paper did that.

They had some beautiful images. And they specified and encoded generative models for affective active inference, where anxiety was an affect. The beta and the gamma here are the inverse of each other: gamma is like the precision, and beta, its inverse, is like the ambiguity. So whether it's high precision or low ambiguity, or vice versa. That uncertainty variable plays into the free energy minimization, given C, the preferences, and the affordances on pi, the policies. And that's what allowed the organism to do these temporal rollouts. So at time point one, it was doing prediction on time points one, two, and three; at time point two, it was still doing it on one, two, and three. And that could be it doing a kind of retrodiction, or it could just be doing some time horizon of forward-looking, or some combination thereof. But they showed some really provocative patterns.
Like when there's a deep temporal rollout with high variance in possibilities: things could go well 90% of the time, be amazing 5% of the time, and be deathly 5% of the time. That can lead to anxiety, because every path will have some lethal possibility. But that's like walking in a city: you can't do every possible rollout. And so that's a theme that came up: when we're applying active inference, how do we do principled rollouts so that we can do computation that's effective? Whether that's because we're more on the affect-management side, or because we're on the computational-efficiency angle, in either case having adequate rollouts is really important. So it was a cool paper. Dean or Sarah, any thoughts?

I got a quick one. I think where this was really interesting for me is that even though things were laid out in a step-by-step, stepwise way in order to explain what Casper and the other authors were looking at, it introduced the idea, as you mentioned already, Daniel, that there was an availability factor as well. Feelings determine what you make yourself available for, not just how precise or how much attention you can give over to something. So although the way that they showed the model was step, step, step, sequentially, the affect piece of it said: so what, even if you're present, and even if you're counting out the number of time steps, what else is going on here? And talking with Casper about that was, I thought, quite enlightening. Yeah. Awesome session.

So, paper 20. We discussed The Emperor's New Markov Blankets, by Jelle Bruineberg, Krzysztof Dołęga, Joe Dewhurst, and Manuel Baltieri. This paper was cool: it was on our Markov blanket theme, as well as our qualified criticism and improvement of ActInf and FEP. This paper did two things, as they lay out.
They also did something similar to paper 14, the Andrews paper, with a bit of a history-of-science recap of how we got here, which came up again in Axel's paper 34; but this is more of a philosopher's how-we-got-here history story. They trace the development of Markov blankets from Bayesian statistics through to where they are in active inference, though I believe much more will be learned about the actual citation history than we know at this point. And then they talk about the philosophical implications of different perspectives on what Markov blankets are. So they conclude: either you stick to the original, innocuous but metaphysically uninteresting formulation of blankets, or it's bolstered with novel metaphysical, and probably also perception- and control-theoretic, premises. As we saw, some of those premises were indeed taken up in 2021 with paper 26 and paper 32; some but not all of the criticisms are still really important, though.

And we talked about the distinctions between the generative process (the actual causal structure of the environment that gives rise to observations, like the actual ecosystem where it's raining) and the generative model (the agent's generative model of that ecosystem, for example), and also the difference between a posterior density and a variational or recognition density, which we're not going to go into. But this plays into the difference between doing inference with a model versus within a model: the scientist applying the linear regression is doing inference with a model, but the universe that they're in, if you're thinking of that, is the model itself. So it's also related a little bit to realism and instrumentalism. So it was a great philosophical, and important socio-historical, critique paper. Also, Jelle was the first person I ever heard about the FEP from, from him and Tobias in the summer of 2015.
So that's kind of cool that he was able to join and that we were able to reconnect, and that he's still working on those topics and sharing his important perspective. Any comments on this one? Yeah, so the with-versus-within part is really interesting, because inference with a model kind of lends itself to the traditional academic perspective of a framework. But if you're within a model and you're trying to see yourself, not just within your focal limits, but knowing that you're embedded, that changes things. At the time that we took this paper up, I wasn't really paying much attention to this, maybe because I already was thinking of active inference in both ways, being sort of inside and also framing something. So I was selecting out what I wanted to pay attention to. But one thing I will say about Jelle: I've read three of his other papers, because we've made them available, and he's not just a prolific author but brings up really good ideas. Great. So on to 21. This was a fun series. This was with John Boik, who will be an Active Inference Lab advisory board member in 2022 and got more involved in the dot tools organizational unit and really contributed a lot. It was awesome to get involved with him this way. He had written a three-part science-driven societal transformation series. It's really like a small book across these three papers. They talk about applying active inference across multiple scales and socio-technical structures. And I picked out this slide because for this one, instead of a regular dot zero, where a few of us collaborate and do a quick run-through of the paper, we did a four-part dot zero: dot zero one, zero two, zero three, zero four, with a long Jamboard sequence where he really broke down the whole paper. And we walked through the whole Jamboard and did a really long lead-up.
And he talked about social transformation and about how active inference plays a role in that in multiple different capacities, from the integrated systems modeling to how we think about social systems. I don't think we can even go into the sub-details, but it was also cool because it was a work in progress, as any proposed R&D slash transformation platform would have it. So this was a fun discussion with John Boik, and it's cool that we got to work with him a lot this year. Any comments? Okay. No, that was fun though. Okay, so 22. I think I remember, if not both of you, then certainly you were there, Dean. So 22 was with Alex Kiefer, a single-author paper from 2020 called Psychophysical Identity and Free Energy. This paper had a target thesis: that biological systems are those whose internal states come to encode statistical models as a result of spontaneous self-organization in response to environmental pressures. It's kind of like adaptive self-organization; it's not an FEP-only take. However, this is where the real FEP hypothesis comes into play, which is that those systems learn and make use of their models by minimizing their physical free energies. And that brought us to a discussion about minimization of free energy and life. So we talked about how there are systems that are at equilibrium, and then there are systems that are sort of infernos, and then there are the controlled burns. And all of these systems are kind of playing the free energy minimization game. So we talked about what kind of equilibrium is living versus non-living. What is the boundary that we're looking for order or disorder to be shaped within? What time scales, phases, temperatures, metacognitive processes matter? And Alex was a very willing conversant. Do either of you have any memories from this discussion?
Other than the fact that he was bringing identity in, which I think sometimes we focus on only the continuation aspect of; what he brought up was how can we see identity playing along, or playing nicely, with the free energy principle. Yep. Sounds good, paper. All right, 23. So this is kind of the middle of the year. This was a 2021 paper in Synthese called Embodied Skillful Performance: Where the Action Is, by Inês Hipólito, Manuel Baltieri, Karl Friston, and Maxwell Ramstead. So, Dean, why don't you just describe the paper? Well, no, because, you know, as a parent, you're not supposed to pick your favorite children. Oh, but it's pretty obvious what my favorite was, so rather than be biased, I'll let you talk about it a bit. All right, I'll give the bad cop version. No, it was a great paper, and the authors really brought a lot to the live discussion. What they did was talk about control theory and several historical frameworks for control theory, including what they called instructionism, the idea that motor commands are passed like telegraph messages to the motor plant, which is the operative physical actuator. They also talked about theories of information transmission and control, and about theories of optimality, specifically optimal motor control theory. They then contrasted those traditional, or at least non-active-inference, models of control with active inference. And they argue that active inference does not need to posit the instructionist assumption. Rather, it can be interpreted as an interactionist account, which was something we went into a lot. And just to dig one layer into detail before Dean, who I'd love to hear more from: we talked about how active inference challenges optimal motor control theory and the way that it frames sensorimotor active behavior.
And one cool meme that I think stayed with us: traditional OMCT accounts of behavior can only specify performance using a single number, a scalar that is defined and consequently tracked by a value function. So here is the greedy hill-climbing robot, just trying to follow the gradient; but of course if you just go up the gradient, then you get trapped in local maxima, and how do you balance reward and learning and development? And so yes, reinforcement learning and reward learning have all kinds of heuristics and totally functional things that they do to get around some of those traps, like local maxima and learning bias. But what was very fascinating in this paper was to take an embodied, skillful, and encultured perspective and combine that with a different way of thinking about that surface, that landscape, and the agent doing inference on the landscape: thinking about it in terms of the irrotational curl-free component and the solenoidal divergence-free component, aka the Helmholtz decomposition. And then thinking about what that decomposition meant and did in the context of this interactionism that active inference brings, rather than the instructionism that is implicit and explicit in these optimization-driven optimal motor control theories. What do you think about that, Dean? Yeah, I can remember going back through the paper and just cheering them on, because I think they laid out a really good argument. At the end of the day, what this paper asked for me, without getting into the technical parts of it, was: how soon do you bring the material into the interaction? By the material I mean something pre-designed or pre-formatted or already somewhat organized.
And even when we talked to Carl in June, when he was talking about the EDU aspects of active inference, there was still some question around: are there consequences for bringing plans forward too quickly, in terms of what that means as a cost if we are building a generative model? How does that affect things? Are we helping people by making things, in finger quotes, simple, or are we in some ways manipulating them instead of allowing them to be the manipulators of what happens next? So the way that we operated this year, or the parts that I participated in, which include the dot zero and dot one and dot two, emphasized interaction, as opposed to simply somebody preaching and another person being in the choir. I thought this paper was reinforcing some of the methods and ways and processes that we've enacted, so I was really appreciative of that. Yeah, a lot of the discussions about education, about the course, about onboarding, accessibility, and learning of active inference, and the ontology development, really brought this idea back. Like, are we interacting with people who know different or less or more? Or are we instructing? And how does that change the system's design? And it changes everything. Okay. Oh, I thought we said no spoilers, right? Okay. So 24. This was a paper by Dimitrije Marković, Hrvoje Stojić, Sarah Schwöbel, and Stefan Kiebel. The paper was called An Empirical Evaluation of Active Inference in Multi-Armed Bandits. So Dimitrije and Sarah joined for the discussions, and that was really awesome. What this paper did was a lot more like a traditional machine learning paper in its structure. They provided an empirical comparison between active inference and two other state-of-the-art machine learning algorithms: the Bayesian upper confidence bound (UCB) and the optimistic Thompson sampler.
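For readers who haven't met those two baselines, here is a rough sketch of what they do on a stationary Bernoulli bandit. This is an illustration of standard UCB1 and Thompson sampling, not the paper's implementation, and the arm payoff rates are invented for the example:

```python
import math
import random

def thompson_pull(successes, failures):
    # Thompson sampling: draw a win-rate sample for each arm from its
    # Beta posterior, then pull the arm whose sample is highest.
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return samples.index(max(samples))

def ucb_pull(successes, pulls, t):
    # UCB1: empirical mean plus an exploration bonus that shrinks
    # as an arm accumulates pulls.
    if 0 in pulls:
        return pulls.index(0)  # try every arm once first
    scores = [s / n + math.sqrt(2.0 * math.log(t) / n)
              for s, n in zip(successes, pulls)]
    return scores.index(max(scores))

def run(policy, true_rates, steps=2000):
    # Play a stationary Bernoulli bandit; return per-arm pull counts.
    k = len(true_rates)
    succ, fail, pulls = [0] * k, [0] * k, [0] * k
    for t in range(1, steps + 1):
        arm = (thompson_pull(succ, fail) if policy == "thompson"
               else ucb_pull(succ, pulls, t))
        r = 1 if random.random() < true_rates[arm] else 0
        succ[arm] += r
        fail[arm] += 1 - r
        pulls[arm] += 1
    return pulls

random.seed(0)
rates = [0.3, 0.7, 0.5]  # hypothetical rates; arm 1 pays off most often
ts_pulls = run("thompson", rates)
ucb_pulls = run("ucb", rates)
print(ts_pulls, ucb_pulls)
```

Both policies should concentrate their pulls on the best arm over time; the non-stationary, switching case in the paper is harder precisely because these accumulated counts go stale when the reward rates change.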
And they applied these three algorithms, the active inference agent and the two machine learning baselines, to two different challenges, the stationary and non-stationary multi-armed bandit problems, which map onto basically being in a casino and figuring out which slot machine has the best return through time, kind of like investing-type problems, but it also maps onto a ton of other domains. And here's a representative figure five, where they were looking at those different algorithms, the Thompson sampler, the Bayesian upper confidence bound, and the approximate active inference agent. And they were looking at the case of the switching multi-armed bandit, so the rewards were changing and the difficulty was non-stationary. So listen to the discussions and read the paper for more. But there were certain cases, not every single case, but certain cases where the active inference agent did really well. So that was cool, because it showed a path for a kind of paper, and I hope we have 100,000 of these papers and frameworks that make these kinds of results more accessible, because it was like: hey, if you're working on this kind of problem, maybe try this kind of model out. So that could be kind of cool. And in this discussion we talked a lot about the technical aspects and some of the code side, because we had already talked a lot about philosophy, metaphysics, and enactivism, and about some of the math and the analytic modeling; this is where we talked about some of the numerical and computational elements of active inference. Okay, Sarah's back for 25. So 25 was The Computational Boundary of a Self: Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition by Michael Levin. Sarah, do you want to summarize the paper, or give your overall thoughts on it?
I don't know that I could ever do justice to a summary, but the thing I took away from it, the thing that felt like a novel kernel of thought, in the name of brain conservation the only thing I wanted to take with me, was this idea he had of a cognitive light cone as a kind of conserved quantity between different ideas of the individual, or different species, or cell level, however you want to construe the individual: that this creature, living form of some sort, has an ability to imagine versus an ability to enact, and that those two kind of opposing or orthogonal considerations are somehow possibly conserved between different levels of organism. And I just thought that was brilliant. So that was the big thing for me. Yes. So there was the one figure with the aliens and the ant and the flea and the human with the different cones. And we talked about time cones. And here was a slide from the questions for 25, too. So there's this example of the ant colony. And it's like, which one of these levels is alive, or scales, or however you want to call them: a cell, the nestmate, the ant, the colony? And it's like, well, the cell will survive if you put it in the right media for this amount of time. But the nestmate will survive if you put it in this media for this amount of time. Okay, but then the colony survives lifted from its environment only for this amount of time with this kind of stability. It's like, where are you going to stop? Are you going to just, you know, pull out New Mexico and put that into a test tube? So what is the living system? And how do we talk about systems that have radically different cognitive architectures? What is the role of bioelectricity in rapid basal cognition, the kind that might be influencing the formation of cancer, and development, and biofilms? So, yeah, you and Bleu did an awesome job on these streams. Yeah, Bleu did an awesome job, but I really enjoyed it.
Oh my God, his ability to extrapolate from a thousand different points of view. Just wow, it was great. And I really like that, you know, it might seem in the paper that he's making a commitment to what an individual is or whatever, but in fact what he's trying to propose, or how he's trying to think about it, is resisting that temptation to commit to, you know, this is living, that's not, and things like this. It's wonderful. It's a good way to jump into a different paradigm, I think. And also, maybe at this point, a little more than halfway through the papers, thinking of multi-scale individuality: everyone who came on was awesome. People did really well. No one had any massive unforced errors. That's one low bar to clear, and it didn't really happen, even though I had literally only two hours of sleep that night. I mean, it was like the worst night of my life. No, not of my life. But I was really tired, and I was actually looking at that old video and I'm like, did pretty well for two hours. Yeah. So, beyond just no blundering, people really showed up and blazed away in a different kind of format. Where were these journal clubs five years ago? I don't remember them. Yeah. So everybody who really stepped up and did it is amazing. And I hope that they inspire others. And if you feel like being on a public recorded live stream is not the exact affordance for you, that's totally fine. But if you think it is the affordance for you, then you should do it. I just want to put a plug in for getting more, like, if I could get Michael Levin and this guy Scott, I can't remember his name, I'm sure he knows about him, but he's actually studying termites, and the relationship between these living forms and, you know, niche construction, how they affect their environment.
I have this dream of getting people that have these complementary or even slightly opposing views in a room and then doing live streams with those people, more than just one person, so they're not just inundated with their own point of view. We'll facilitate that. We'll help anyone design that, and then we'll make the email thread and make that event happen if the people want to participate. 28, or 26, sorry. So 26. Nice, you know, a good number. This was Bayesian Mechanics for Stationary Processes by Lancelot Da Costa, Karl Friston, Conor Heins, and Grigorios A. Pavliotis. So this paper, as well as 32, which we'll talk about soon, was kind of a left-right one-two punch of two very impactful papers that were submitted in mid-2021, June-ish, and then came out at the end of the year. But these papers, let's see in the future, looking back, how much they changed the game or in what ways, because of course there's so much more to be done, and so much different, too. So it's not about evaluating which ones were super impactful or not. But suffice to say that 26 and 32 took a big step forward by taking a big step back: they took a first-principles look at internal and external states, and did some very important work to formalize, if not prove to the nth degree and demonstrate in every system and tighten every bolt, then at least give a unifying formalism and way of talking about this idea of the sigma, the mapping function between internal and external states. Because the blanket concept alone, as we heard from several very important guest streams and other critical papers in the community, just having statistical insulation of states, of course, does not entail that one side of that blanket or the other is living or doing inference or cybernetics or doing anything interesting at all.
And those who thought otherwise were very optimistic, but they didn't think through some of the basic implications of mere statistical insulation. And in 26, the mapping function, the sigma, brings in the idea that even though the Bayesian graph representation might have statistical insulation between internal and external states, in fact that's their definition, there still may exist, for certain classes of systems, a map sigma that means internal states could at least be acting like they're doing some sort of inference on external states. That's one of the big pieces required for the active inference formalism to have, not just biological validity or realism, we kind of burnt that bridge back in the Hipólito days, but just statistical coherence. So it's like, if the linear model doesn't have internal coherence, the program doesn't run. It's not like we're having the debate, is the economy a linear model or not. We're saying it's not, or it could be, but we're not going to use a linear model to make that claim. This is just at the level of the linear model working. So this is getting the theory working in a way where it wasn't previously. That was a fun discussion. Dean, any thoughts on 26? I don't know if you remember when we had Conor back for the 32 dot three, and I asked him straight up, I said: so, when you're looking at this mapping thing now, are you looking at it at a sort of tight focal point, or are you looking at the big picture? And he said, we're looking at averages. This was what I was asking, because this paper here, you said game-changing. I think what this did is it brought high definition. I think the game is still being played out, there are still some uncertainties. Thank goodness, because that's why we're here.
But I think the definition, the high definition and the contrast that this paper and then the 33 paper brought, the level of detail they were able to pull through in those formalisms, trying to speak to what that synchronizing map is, what they believe it is mathematically, that's what I appreciate. With the high-res and the game-changing, it makes me think about watching tennis when the TV is better. So it's like, nope, that tennis ball is just fuzzy. Exactly. Can't get your way out of that one. Right, right. And I think that matters because there's a part of me that wants to explore, and there are also things that we don't need to go over again and again and again just to prove that we can. And with some of the papers we had the good fortune of having authors come back and say: I think we can lock this part of it down. Right, they're not saying that the whole thing is done and complete, go to bed now, but there are aspects of this that we can be pretty confident in now. Of course, a housekeeping rule was introduced, okay, so it's not completely locked down either, but there are certain parts of this now that we don't have to guess at so desperately, I guess, is how I would describe it. Yep. So, 27. Here was another single-author paper of a quite different type. This was Active Inference: Applicability to Different Types of Social Organization Explained through Reference to Industrial Engineering and Quality Management by Stephen Fox, a researcher unafraid of long paper titles. And this was a great set of discussions with Stephen on the streams. So we talked about organizational active inference, and some of his later work was even on autoethnographic active inference, which is a really fascinating topic for a lot of us.
And we talked about the quality management system and about perception-action in these extended cognitive systems. And lest anyone think that he was coming on to speak for the factory farm approach, there was a very interesting discussion about craft, about how you have mobile and flexible producing qualities in systems and have them be regenerative and participatory, bringing the high-reliability attributes of, yes, car factories, but also organisms' high-reliability biochemical factories, and then take that integrative perception-cognition-action approach, like we do in active inference, to this system. So any comments on 27? Yeah, I think it's time we mentioned two of our colleagues in particular, because they really stepped up on this one: Stephen Sillett and Bleu Knight. They both came in and asked some pretty good questions, and I think Stephen Fox did a great job of talking us through his paper, but I won't forget Stephen Sillett's question around the scaling of dead things. That was one of those moments where it got really interesting really fast. Not to say that there is never drama in holding live streams, but that was a really good moment, because not every one of these papers is there complete agreement on. And I think that was one of those times where a couple of our colleagues really stepped up, and rather than attacking, just said: please tell me more, because I'm not really sure this is how we've evolved the idea of active inference so far. In some respects it felt, coming into it, like there was a preformed framing in order to make a case, but I thought this author did a really good job saying no, no, no.
As you mentioned, it's not just about grand scales and mass production; there are aspects to this that are very much craft-based, and I thought he did a brilliant job of speaking to that. He shared his screen and brought up a lot of other resources, yep. We can't name everyone who was on a live stream, go to the Coda page to actually see everyone who was, but Stephen, Bleu, Dave, Scott David, many of you and others really made it super fun. We would have other research meetings with a very different tone, but we had a ton of fun in the streams, and we also, I think, kept it kind of on the rails, which is that balance, right, because it is recorded speech. So 28, speaking of: this was Lars Sandved-Smith, with Hesp, Mattout, Lutz, Friston, and Ramstead, a late-2021 paper. So this was again a fresh-off-the-presses discussion with Lars, who came on, and the paper was Towards a Computational Phenomenology of Mental Action: Modelling Meta-Awareness and Attentional Control with Deep Parametric Active Inference. We had talked about deep parametric inference previously, deep active inference with the rollouts through time, and about the anxiety and metacognition. We had also talked about attention with the Smith and Whyte paper about visual attention. So here we kind of combined those ideas, and we talked about multi-level Bayesian modeling of deep temporal systems, time points one, two, and three, and that same rollout structure. And now we still have those uncertainties, those variances, but also the second level, the attentional level, is now nested inside a third level, which is this meta-awareness: how aware am I of where my attention is? So again, that first level is the perceptual level, then the attentional on the perceptual, and then the metacognitive on attention on perception.
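One toy way to picture that nesting, a sketch under my own assumptions rather than the paper's actual generative model (the function names and numbers here are hypothetical): meta-awareness at the third level sets the precision that the attentional level assigns to perceptual prediction errors, so the same sensory stream registers very differently depending on how aware the agent is of its own attending.

```python
def perceptual_update(belief, observation, precision):
    # Level 1: nudge the perceptual belief toward the observation,
    # with the step size set by the attentional precision.
    return belief + precision * (observation - belief)

def attentional_precision(base, meta_awareness):
    # Levels 2 and 3: meta-awareness (0..1) scales how strongly the
    # attentional level weights perceptual prediction errors.
    return base * meta_awareness

def settle(meta_awareness, observations=(1.0, 1.0, 1.0, 1.0)):
    # Run a few perceptual updates under a fixed meta-awareness level.
    belief = 0.0
    for obs in observations:
        belief = perceptual_update(
            belief, obs, attentional_precision(0.5, meta_awareness))
    return belief

attentive = settle(meta_awareness=1.0)   # fully aware of attending
wandering = settle(meta_awareness=0.2)   # "mind wandering"
print(round(attentive, 3), round(wandering, 3))  # 0.938 0.344
```

The same four observations move the attentive belief most of the way to the input but barely move the mind-wandering one, which is the flavor of the nesting: the higher level doesn't see the world directly, it tunes how the level below sees it.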
So that was the probabilistic graphical model, a deep generative model with three hierarchical levels of state inference. This was just a fascinating discussion. It showed how the active inference structure, even just the structures we've seen now, which are by no means all the structures we will have, allows itself to be nested and composed to make really recursive, strange-loop agents. So this was a very interesting paper. Yeah, we got into the statics and the dynamics, and again, I don't know if I was the only one, but there was also some time given over to: what am I paying attention to, versus what am I available to? Not just focusing on all the attention aspects of it, but what's maybe just slightly outside of that attentional frame yet still factoring in and influencing what the go-forward or not-go-forward move is. So again, Casper had a couple of pretty impressive papers last year, or this year, I guess we're not there yet, it's still this year, 21. We'll do the little Auld Lang Syne karaoke at the end if you're still here. Okay, paper 29. So I'm 29 years old, and we read my paper for 29, with Alec Tschantz, Maxwell Ramstead, Karl Friston, and Axel Constant. The paper was Active Inferants: An Active Inference Framework for Ant Colony Behavior. In this paper we adapted a model that had previously been used for visual epistemic foraging on culturally patterned pottery motifs, and we adapted it to the case of ants who could do stigmergy: they could modify their local environment with pheromone, then use their antennae to sense pheromone, prefer pheromone, and select actions that would realize those preferences. And then we simulated those ants in a multi-agent setting, the T-maze test, which is a common test used in all kinds of foraging paradigms in the lab. And just like paper 28 showed that it was possible to do these recursive, composed, nested agents, here,
we actually showed for the first time that it was possible to use active inference in a stigmergic context. So not just an embodied, enactive agent, and not just one whose cognitive policy selection element was active inference, but a multi-agent setting where there was also modification of, and feedback with, the environment. So it was pretty cool to talk about the Inferants paper, and Bleu did an awesome job as the facilitator on this one. And yeah, anything you wanted to say on 29, Dean? Well, I was getting ready to go over to Europe, so I didn't get a chance to be on the live stream, but I did watch it, you and Bleu going through it. And I'm glad you mentioned the recursive piece: not only was there a function that was looking back on itself, not only was there stigmergy, which was something new to me, there was also a kind of fractal element to this. As I said, I didn't participate in the interaction aspect of this, but what I came away with, interestingly enough, is that after reading your paper I wasn't really sure whether active inference is actually a framework or a process. And maybe that's something that we'll revisit in 2022. Cool question. All right, 30. For 30 we read a single-author paper by Matthew Sims, How to Count Biological Minds: Symbiosis, the Free Energy Principle, and Reciprocal Multiscale Integration. This paper did some philosophy of biology and then had a case study of a squid-bacteria symbiosis, which we'll just use to get at the big idea of multiscale integration. So multiscale integration is kind of like emergent systems, multi-scale systems. However, the author here makes the contribution of differentiating uniscale, or sorry, unidirectional, from reciprocal multiscale integration. So unidirectional multiscale integration is like: you have the entity.
It's the user, and then you have the resources, and the resources are inanimate. So the user could be some sort of emergent computational phenomenon invoking dumber, deader computational elements. So there can still be multiscale integration with emergence, but the activity only flows one way, even if there's bidirectional causality. Still, there's only a building up in one direction, which is actually what makes these systems very easy to talk about in terms of top-down emergent priors and bottom-up emergence and things like that. And that's in contrast with systems where both ends are each other's user; they're more interactive, so this is a bit more of the interactionism instead of the instructionism. This also challenges notions of purely unidirectional integration from the dumber and more particle-like to the more abstract and general. Well, that's how some systems are, the unidirectional integrators, but the bidirectional reciprocal integrators, it's kind of like, I don't know, a rainbow, but not just anchored on both sides by something of one type; it's in a feedback in a different way. So that was a cool paper, a lot of philosophy and bringing some previous theories into play; here I remember that Dave brought up Gordon Pask and conversation theory. And that was also a cool thread, hearing about previous systems and cybernetics theories and how active inference gives us a new light on some of those topics. What do you remember from this paper? I was watching you guys and really enjoying it, because I had some barley courage, I was enjoying sunsets in exotic places. And what I really liked was the fact that, as you say, the reciprocal multiscale integration isn't just feedback, it's feedback and feedforward.
And, you know, that whole parallel process of bidirectionality, I think it was really important to make that explicit and say: this is what sophistication means. I mean, it can be less than that, but when it's at this level of bidirectionality, as you said earlier, that's a whole different game. So yeah, I kind of wish I was there, but I also was pretty happy with where I was. You were in a good spot, my friend. Yeah. Okay, so home stretch. All right. We read the paper from 2021 by Patricia Palacios and Matteo Colombo called Non-Equilibrium Thermodynamics and the Free Energy Principle in Biology. This paper investigated the statistical physics and dynamical systems bases of the free energy principle, both historically and conceptually. They focused primarily on the 2013 Life as We Know It paper by Karl Friston, which is kind of a big target; it's like the Death Star of the FEP. A lot of fighters have approached it, even though things have also changed a lot in the last eight years. And this paper approaches three areas that, as we found out in the discussions, are not death knells for the FEP; they're critiques, areas where it's good to have our attention, but that doesn't mean they're lethal criticisms, nor that they've been solved either. And those were basically the challenge of the pre-stated phase space for open-ended biological systems, the role of ergodicity in biology, and the challenge of associating homeostatic and allostatic behavior with attractors in those phase spaces. So many of these themes, like the generality and applicability of the FEP, and ergodicity, came up a bunch of times in the year, with some good discussions. I remember Maxwell came on this stream and was on others as well. What do you remember about it? I was doing molecular testing and wasn't really participating in this one. Yes. Good. Okay. 32. So 32 was awesome.
I'm sure it was one of our favorites. Yeah. It was "Stochastic chaos and Markov blankets," a 2021 paper by Friston, Heins, Ueltzhöffer, Da Costa, and Parr. So for this paper we just had the dot three a few weeks ago, or last week, right? With Conor, I think. And it was just awesome. It got at that Chronos-Kairos difference. It's always dot three somewhere. That's the new 5 p.m. Because we can always do another discussion, and we can always have the author share their perspective, or how they're looking back at the work now. Or if anyone ever loved one of these papers, they could just do another discussion on it or share their view. It's always possible to click that Kairos up by one again and build on the past, like a blockchain a little bit. And there were so many figures that were fun in this paper. It was about the same coupling that we've seen a bunch of times, but it did a few different things here. First, it introduced that "unity is plural and, at minimum, two" notion of the coupled systems, the internal and the external states. Then it took on the sort of granddaddy complex chaotic system, the Lorenz attractor, and did some things with it I've never seen, which was to approximate it instrumentally while retaining the chaotic signature in the Lyapunov exponents, and to reframe this vector-field dynamical system in a decomposed and then recomposed, partitioned and reintegrated way that was just really exciting. What did you think about it, Dean? Well, I don't have the math background, and I keep bringing that up. But the thing that I found interesting is I learned about the Langevin, I learned about the Lorenz, I learned about the Laplace. And, I'm going to pump your tires for a second here, you took the reins on this one and tried to explain it in a way that was accessible to people who may have been watching.
It wasn't just for people who already have a solid foundation in the math. You tried to explain it in a way, and I think if there were two papers this year, if you wanted one on the philosophical piece and one on the explanation of what the math is telling us, it would be the Hipólito paper and this one. I don't really have a favorite, because I think each of these papers brings out an important aspect of this whole active inference realm that we're diving into. But this paper in particular, wow. In terms of trying to explain what the math formalisms are telling us, if you don't get scared off by that, and don't get scared off by the arrows-going-off-in-all-directions piece, it's very informative. I don't know that it's a takeoff point, but if you've been looking at this formally for as long as I have, which is about six months. Again, this was a really big moment in terms of filling in a lot of the stuff that we'd been hypothesizing about and making it a little bit more tangible. When I read papers like this, I think, I don't have a math background either. That's why we need a disclaimer. Yeah, but I was able to have conversations with people, and after going through this, I was like, no, this is the Laplace piece, and this is the Lorenz piece. Right? It wasn't just a Langevin formula anymore. I mean, I wouldn't be able to write it out myself, but actually seeing it now, it didn't look like a bunch of squiggly lines or some kind of abstract painting. There was actually some coherence to it. Yeah, like looking at this and thinking, oh, this is the chaotic flow, and the arrows of the flow, and the blue arrows that are pointing in different directions. Here's the flow of my strategy, or of my approximation. Of course, if I were over here, it would be a similar flow to if I were very close.
And then how do we hold that highly operative, chaotic real flow, even one that's analytically defined so nicely, like a Lorenz? How do we have it be tractable yet retain its essence, and not just cut up the cat and then go, okay, well, now cats don't move. Well, I got a better set of pullback attractors. I mean, while we were going through this, I was imagining things like: maybe there's a math to desire lines out in the Serengeti, or maybe there's a math to the termite mound. Not just in terms of how we can graph it, but in terms of how it makes sense, in terms of realizing those patterns in nature as well. Anyway, I guess I'm in full geek-on-display mode, but if you like to geek out on this stuff, this one was a really good paper. Yep, it was a classic. It was a classic. So, fun paper. Thanks to Conor also for that dot three. Yeah. Paper 33, good old 33. We turned to one of our lab colleagues, Avel Guénin-Carlut, for his paper, part of a really broad and creative research program into social and historical dynamics that he's a part of. The paper was "Thinking like a state: embodied intelligence in the deep history of our collective mind." As with many other streams and papers, Blue did awesome work on this paper and presented some of the work at conferences, so awesome job, Blue. This paper made the argument, on historical as well as conceptual bases, that states can be understood as hierarchical control systems, essentially similar in their core physical architecture to brains. And Avel uses active inference to ground collective intelligence across different systems and scales. The paper hits on a few different topics, which are outlined by this map figure that we used in some of our presentations, and it covered topics related to multi-scale embodied intelligence.
Some of the anthropology and historiography about states, areas related to people studies, urban metabolism, and the collective mind. The memorable moment for me was with Dave, Dean, and Stephen, when in the dot two we were talking about the cafe next to the park with the red light and the preference for light, and we just started walking through how we'd apply active inference. So that was just a cool discussion. And this is an example: cities and states are things that people care about. Wouldn't it be useful if we had a better way, or a different way, to talk about them? So it was just a fun experience with you all. And also it's clearly a very relevant topic. Yeah, and when you started doing some of that table manipulation stuff, in that moment, taking and manipulating and moving things around, I think that's active inference. And the other big thing for me was, I wondered if I was going to start some kind of conflict around the idea of "scale friendly," but I didn't. There were actually people who took that into their lexicon and considered it. And here's the thing: I think sometimes in this setting, one of the things that the participants have evolved around is the idea of introducing new things. I mean, if they're ridiculous things, we'll all give each other a hard time. But if we're introducing new things, they aren't just shot down out of hand. People will try to find ways to incorporate them. And again, that is active inference. So that was my big takeaway from this. I was really kind of impressed with the idea of bringing something in that isn't necessarily citable, but that I think maybe has to be included, kind of like a housekeeping rule.
And I'm no Karl Friston, but there were other people on those live streams who kind of went, oh, okay, well, maybe we need to consider that when we're talking in this context. Yeah. Oh yeah, not mentioned during this stream was the symposium, where we just published the transcript with a DOI. So that was very exciting. And yeah, I liked "scale friendly" because it gives agency to the doer instead of passivity to the thinker. With "scale free," it's like, oh, great, it's in a museum and you pay money to see it and it just figures stuff out for you, right? It's scale free. It's perfect. Whereas "scale friendly" means you can use it actively, but you've got to be friendly to it as well, and use it like you'd use any dynamic resource or asset. It's like Mount Everest being climbing friendly: as far as climbable peaks go, it's the one you want. Yeah, it's a Hudson River moment. Honestly, it is. I mean, that's not a landing pad, but it's friendly enough, right? And in a moment of crisis, that's an example, I think, of what we were at least entertaining. It doesn't mean that we have the math for it yet, but we were entertaining it. Yep. So yeah. And everyone else who brought things in, that was the fun of the participatory approach. And I know that we're only beginning to hear the voices and perspectives that we'll hear on the streams of the papers that haven't been written yet. That's what's fun. The beginner's question and the domain-specific criticism and the philosophical and the personal, and the way that it can all come together. And we can do that spontaneously and align with our values, and also just meet each other and help others who are learning. It's really special that we got to do all these streams. So, 34, the last paper that we read this year, which was "The free energy principle: it's not about what it takes."
"It's about what took you there," by Axel Constant. So, what a paper to end the year with. Axel is a long-time collaborator, with me on the inference paper and some other work, and I love his writing and the way that he's talked about active inference and contributed to it in the philosophical, the ecological, and other areas. The writing was engaging, the examples were clear, and the labeling of what was called the entailment problem was one of the centerpieces of the paper. What's the entailment relationship between minimizing free energy and life? Is the strong claim, the sufficiency claim, valid: that if you're minimizing free energy, that's sufficient to say the system is alive? Or, for systems that are alive (by whatever definition), must they necessarily have done something that looks like minimizing free energy? And just that nexus, and the slides, and the discussions that came out of that in the last few weeks with you and Stephen and others were great. Sid wrote, "Paper 34 did a number on me. JK." Yeah. It was a very interesting paper. So, any comments on 34? Well, I think the best compliment I can pay 34 is just to ask people to make sure that they get a chance to look at the paper. Agreed. Agreed. All these papers, I think, are worth reading. And we hope that you'll raise up important papers to read next year. So I'll just close with this slide. Yeah, for 22: What are you looking forward to? What papers should we read? Who would like to join in? Who can we invite? What else could we do in dot coms or other units? What other functions could the lab serve? If anyone's looking to contribute or get involved, they really should, all backgrounds and time zones. Even if you just want to show up for one hour a month, or more, there will be an awesome place for you. And if you want to interact, it would be cool. So, we had a great year. Dean, do you want to add anything about 22, or on any other point?
Well, I think I mentioned the other day one of the things I'm kind of looking forward to, and I hate the word "reset" because it implies that somehow something was already set. But as much as we've had the privilege of learning this year, and the privilege of having people come in and explain what the papers are and that sort of thing, I hope that 22 reintroduces a little bit more naivete. I think I used the metaphor of adding another walnut shell. The biggest thing for me is that everybody who has participated so far has come in with a really open mind. They don't necessarily come in trying to show how much they know. And I think that if we all carry that forward into 22 and really, really appreciate that learning is, first and foremost, about not knowing. Even with all the priors that you have, all the experiences and the backgrounds and the things you've been able to figure out, we seem to check a lot of that before we go live and our song begins to play. And I think that's a really, really healthy thing. And in 22, I think it will be the thing that keeps the band together, for lack of a better way of describing it. People will know that every single time, they can show up with whatever background they have, because you keep saying it. It's great in principle, and it's hard to overcome; people need to see evidence of that. And again, I'm not trying to pump your tires too much here, but as long as we can keep that front of mind and keep that culture on display, I think 22 is setting up to be something really, really amazing. So congratulations to you for doing this. And like I said, back in April of this year, and I don't want to kill the name, but I was at my cabin and I saw you guys talking to Chris White.
And I just was like, oh my goodness, oh my goodness, oh my goodness. What is this? Where did this come from? So that's neat. That said, it's not necessarily for everybody. But for those who it is for, I think it's only a matter of time before they hear about this and want to get deeply involved in it. So well done, Daniel. It was a very fun year. So big thanks to all the organizers and just everybody who was part of this year's fun, and epistemic and pragmatic value, because it really was special. To 22 and beyond. Right. Great. Take it easy. Peace.