Okay, we've begun. Welcome, everyone. It is the resumption of our cohort — update the video description — the resumption of cohort one of the Active Inference textbook group. It's our fourth. Okay. Could someone mute Giuseppe, or feel free to say whatever you want to say. Can you hear me? Yes. Yes. Okay. So, sorry, but I'm completely new to this interface of Gather and the code. So, I'm not sure I understand cohort one and cohort two here. Yes. So, cohort one was the previous one, the one that started last year? Yes. Well, from May till July, we did chapters one through five. Okay. And now we are in this cohort. So, for those who did not participate in that part — respectfully, you know, thanks for the question, but in general let's try to have the people who are in the cohort participate in it. Everyone's welcome to join and of course rewatch, but contributions and questions in the chat are probably best, so that we can respect the people who are continuing on. This is our 14th meeting, and now we're heading into the second five chapters of the book. Okay. Does anyone else want to add clarity? Oh yeah, go ahead. So, the one that is going on right now is cohort one, not cohort two. We're all learning at different stages. So, the previous hour was the cohort that is focusing on chapters one through five. Okay. And now this is a different set of people — some are joining both. That's why we made the meetings adjacent, so that those of us who wanted to do both could join, but this is a separate set of people. I see. I see. Yes. So, okay. So in this meeting right now, cohort two was first? Yes. And now it's cohort one, which started before and has already warmed up on chapters one through five. Okay, so I will leave, because I subscribed for cohort two. Excellent. Okay. Good. Thank you so much.
See you next time. Thanks for setting all this up. Yeah. Okay. Bye. Bye. Yes. Awesome. Okay. Anyone else want to add anything? We don't need to do a code overview. I would say people who are watching this recording can look at the live recording of cohort two's meeting if they want the code update that we just went through. But does anyone want to give any introductory thoughts? Otherwise, there are a few ways I think we can go for this session. We're here to rejoin our activities and orient towards chapters six through ten. We can scan through the chapters; we can have any suggestions or discussions around ways to approach them. I don't know. Rohan, and then anyone else. I was wondering when we are going to get to the projects part, or discussions about the projects. Great question. Let's look at the chapters and see where that can happen. First off, do whatever you want with contacting people, and we can make a sub-page like I see exists here — this looks really awesome. So people can request a sub-page, mention it, and contact each other now, or however. So it's up to you for a project that is under your control; it can be developed in the open here, or however. You can also participate in many ongoing projects like Active Blockference, which we might all share in here. Our Gitcoin grant was approved yesterday, so if people are familiar with this, or want to become familiar, this is a way to provide or receive support, possibly. So you can look at these projects in Blockference and contribute to that open source package. Does that address what you were thinking? Were you thinking about these project ideas, or were you thinking more about the sort of build-your-own-model approach that will be starting in chapter six? No, project ideas and the code. Awesome. Yeah, like these. These types of projects — clearly more will be added. But they're of a different kind.
They might involve building a generative model, but they are not the generative model itself, or not only that. Let's look at chapter six, because this is where generative model construction is going to come into play. And there are a lot of projects where you would only need to think qualitatively about generative models — like educational material production is related to the generative model, or could be modeled as the generative model of a learner, but you wouldn't necessarily have to make one. But chapter six is a recipe for designing active inference models. It is going to rock. Just really quick: I think Luke is here. No, he doesn't have access. He asked for access. Okay. Message in Discord — I mean, message in Gather your email address and I'll add you. Okay. So, sorry — okay, you requested access. All right, I'll add you. That's enough, then. Chapter six is going to ask some questions that drive the design of the generative model. Which system are we modeling — the system of interest? What is the appropriate form for the generative model? And also all these other questions that aren't exactly within the textbook, like temporal depth, hierarchical nesting, which parameters are learned versus fixed — all these different directions in which the models actually have to be elaborated, even before getting to the data engineering and maybe even code implementation. Just from a structural perspective, what form is it? How to set up the generative model is where some of those analytical details begin being introduced — not necessarily even the program-level implementation or an executable simulation. But like, okay, we have a preference vector: are there two states, or is there some continuous preference distribution, whatever it is? What are the actual details of the generative model? And then the other side of the coin, slash blanket, is the generative process.
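To make the structural questions just listed concrete, here is a minimal sketch of what "specifying a generative model" can look like in discrete-state active inference. All names and numbers are illustrative — a toy two-state "hot/cold room" example, not code from the textbook:

```python
import numpy as np

# Hypothetical "room is hot or cold" example; every size and value here is
# an illustrative modeling choice, exactly the kind the recipe asks about.
num_states = 2        # hidden states: "cold" or "hot"
num_actions = 2       # actions: "jacket on" or "jacket off"

# A: likelihood mapping P(o | s) -- each column sums to 1
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# B: transitions P(s' | s, a) -- one matrix per action (identity = static world)
B = np.stack([np.eye(num_states)] * num_actions)

# C: preferences over observations (unnormalized log-probabilities)
C = np.array([0.0, 2.0])   # prefer feeling "warm"

# D: prior over initial hidden states
D = np.array([0.5, 0.5])

# One perceptual inference step: posterior over states given o = "chilly" (0)
o = 0
posterior = A[o] * D
posterior /= posterior.sum()
print(posterior)
```

Most of the recipe's early questions — how many states, what preferences, discrete or continuous — are decisions about the shapes and contents of arrays like these, well before any simulation loop exists.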
So what are the endogenous dynamics and the causal mapping with action for the generative process? And there are cases where you have more of a communication setting — active inference entities on both sides. You could also have a simpler pairing, like two deterministic programs that send messages to each other, or it could be one more cognitive, more agent-like generative model, and one more environmental, nonliving type. Any thoughts or overviews? How would people think about working with this type of recipe design — should we all be designing a model, or how do we even go about reading a section of the book in a chapter that begins with a recipe? Yeah, so I spent a lot of time building what I think would be classed here as generative models, and what I find is that what I characterize as the step of framing it up is often far trickier — sometimes much harder, frankly — than building the model, depending on what the situation is. And in my experience, I find that it all comes down to making sure we understand what is the question that we're trying to answer, right? Because every modeling process, I'm sure everybody on this call knows, is the process of deciding what to leave out, right? What to put in, what to leave out, and what level of representation. Everything that's in there, if it's going to be a good model, has to be necessary — necessary to answer a question, or provide a particular insight, or so forth. So that whole issue of framing up what you're going to build, and all those different questions — you know, there's more than four; ultimately there are ten thousand — it's all about that, which often means understanding the audience. If the model builder is the only audience, this is much simpler.
If the audience is some much larger group with different perspectives and a lack of homogeneity in insight, then there almost has to be a pre-modeling or framing process of sorting out what are the questions we care about. And then we can talk about how those translate into a model. So that's just some general framing. Thank you. That's very interesting. It makes me think about this audience question for the model: why are we building the model? Why are we shaping our regime of attention to build this model? Model building can be an end in itself. It can be creative and expressive and fun and all these other things. And maybe later there's a road that has some pragmatic value too, but it can also just be fun — like, what would a generative model that could do this look like? And that can be very didactic. There are other times where you're presenting information: oh, I made a simulation of Twitter discourse, and dot, dot, dot. It's always going to be presented in a way that's not only the model distilling down from the world; you're then communicating it. And when you're not just communicating, but making an application that does something — and communication is doing something as well — then the system of interest, as Lyle mentioned, is totally nontrivial. And there's the whole framing in some of these early stages, especially because they can't always be changed easily later. Ron, and then anyone else? Yeah. So I'm just curious, when you say "who's the audience" — isn't this part of the active inference model itself, so that whatever makes sense in that framework would be the audience here? And then it has to survive, right? Your agent has to survive. So whatever generative model helps it survive — would that not be the model of choice here? I'm just curious, I'm not sure. That's awesome.
That's an awesome way to frame it. Maybe someone can share the free energy governance link in the chat — I just can't look it up right now — but it puts this very nicely, in a recent, pretty short book-slash-dissertation, as that imperative for organizations. And yes, in one sense it can be said that the right generative model, or the audience for it, is whatever its survivability conditions are. But within our textbook group, the survivability conditions are open. However, we're also learning about, or even just gesturing towards: okay, we're not just going to imagine a room with a person who can put a jacket on or take it off. We can make that a didactic model, and the audience could be this textbook group or future cohorts of the textbook group, and the function could be a sort of neutral exemplar active inference generative model. But then, of course, there are the considerations for information presentation, empirical data analysis, any kind of application. The system of interest is not just the one person, jacket on or off, in a two-state room with some assigned fluctuation. So that's a different situation for model building. So how should we interpret this recipe? Oh, okay. But is there not a general framework for what a good generative model looks like, is what I'm saying? I guess you could frame that differently. If you want to build pixel-level representations, for example, you can use a GAN, and that's a pretty decent one — but maybe pixel-level representations are not good for the task at hand, or something like that. So it should fit a general category, right? Like, this is what a good set of models looks like for active inference — that sort of is getting at it. Yeah, that's a very fascinating framing. Does anyone want to add something to that? I totally hear you on: is there a rubric? What are we evaluating in the generative model and process fit?
Because it's not like there's simply a best generative model — bigger, or whatever — no feature about it, outside of a generative-process context, is the fitness. And then what is the rubric, our gold standard? Is it like: oh, you're dealing with audio data, here's the file format? That level of standardization, certainly not. However, are there some practices or lenses that we could bring in from people's experience with evaluating model adequacy and relevancy? There probably are. Lyle? Well, yeah, that really comes right back to purpose, right? A model always has to be fit for purpose. And this also surfaces the questions of model validation, right? One of the things we have to talk about here is: what's the right model? How do we frame up the right model? What do we choose to put in, what do we leave out? Part of that is purpose. And how would I evaluate — if I build what I'm imagining the model will be — how do I evaluate its quality, its validity? Well, what data is actually available? That's one thing you always have to ask at the outset. What data is available? What data can I use? What's computationally adequate? And what parts of this — when we apply this model, if that's its role — does it fit the purpose? Does it provide good, adequate results, and prove useful for whatever behavior we're trying to address? So you can't say if it's good if you don't have a purpose. If your purpose is just to explore model building, which is something I love to do, then there are a lot of things that fit that purpose, right? But if the purpose is to do a prospective analysis of a system and what actions might be taken to intervene, then that's a different thing. Thanks a lot for sharing that.
I'm really happy, and I think it's relevant, that we're having this kind of discussion because we've gone through some of the first chapters. And we will continue having this conversation through chapter six. But does anybody else want to add something on this more general topic of how we even come to talk about, or frame, the system that we're modeling? That's in section 6.3. I mean, this is too broad to be active inference specific, but whenever I'm trying to model something — and it is contextual to the purpose; it entails a purpose and an audience — all I want to know, basically, is: what are the questions that this model needs to generate an answer for? What is the information that is therefore necessarily required to model a system that would have such answers in it? And maybe another way to say the same thing is: what affordances would the model require to answer or address whatever set of questions is posed to the system? If you asked, I don't know, how frequently does a yellow-colored car drive down this particular road, then I wouldn't need to record the number of bikes or their colors in the bike lane, or buses or trucks — those are outside the scope. What I mean is: out of all the cars driving down this road, how many are yellow, or something. This is not even exactly a generative model, but my point is that in modeling that system there is obvious data that's completely irrelevant, right? And so if you were going to build some sort of simulation of any kind that's going to be generative, there's often a lot of data that's just completely extraneous, depending on what questions you're going to ask of it. So thanks.
In 6.3, in the first paragraphs, they come out hard on the interface concept of the Markov blanket. This is how you'll do system modeling: by finding interfaces that distinguish the internal and external states of a system. Then that is critiqued, or elaborated, because there are infinite degrees of freedom in the coarse-grainings, and those are model-specific. And they suggest that that matters, especially if you want to consider embodied, extended perspectives, where purely descriptive interfacing models won't give you the whole answer. There's a great paper, "Could a Neuroscientist Understand a Microprocessor?", showing that even if you have total access to the microprocessor's activity, like an EEG, and a generated set of single and double lesions, a lot of the interpretations would be strongly misleading. Yet people want the wiring circuit of the brain, which is dynamic and also involves diffusive molecules. We have all 302 nematode neurons mapped, but that too, as an interface diagram, is not even close. So if you want to describe the nematode, that kind of model building can be fit for purpose. However, there might be very simple models that could do a lot better at predicting observed nematode behavior. Then the form is discussed. There are three main design choices. First, discrete versus continuous time. We had some awesome discussions in the previous months about their similarities and differences — where they differ in theory, and then where they differ in practice — and the whole question of discretizing continuous systems. The second choice is between shallow models and hierarchical models, where inference operates at multiple time scales. The third choice is the consideration only of the present — kind of the variational free energy — versus models with temporal depth.
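As a tiny illustration of the first design choice, and of the discretization question mentioned above, here is a hedged sketch of binning a continuous signal into discrete hidden states. The three-state labeling and the bin edges are my own assumptions for the example, not choices from the book:

```python
import numpy as np

# Illustrative only: discretize a continuous temperature trace into three
# hidden-state bins ("cold", "mild", "hot") for a discrete-time model.
temps = 20 + 5 * np.sin(np.linspace(0, 4 * np.pi, 50))  # toy seasonal signal

bin_edges = [18.0, 22.0]                 # chosen by hand for this sketch
states = np.digitize(temps, bin_edges)   # 0 = cold, 1 = mild, 2 = hot

# Empirical occupancy of each discrete state -- one crude way to seed a
# D vector (prior over initial hidden states)
D = np.bincount(states, minlength=3) / len(states)
print(D)
```

How many bins to use, and where the edges sit, is exactly the kind of modeler's choice being flagged here: the coarse-graining is not given by the data.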
Temporal depth consideration could be purely passive, or it could also include some element of affordance — which, if we remember back to the B matrix, is actions influencing how states of the world change through time. So that is how active inference unifies policy comparison: by embedding action into estimates of how things could evolve probabilistically. Those three design decisions are explored in slightly more detail in the subsections. Section 6.5: how to set up the generative model. After specification, the specific variables have to be included. Okay, the latent state, temperature, is going to be continuous. The observations from my thermometer I'm going to take in as just integers. And then there's going to be a person who is going to be in a binary state of whether it's hot or cold — or maybe I want hot, neutral, cold, or maybe I want them to have a 10-point scale, or a scale that's doing dot dot dot. That is the stage of actually setting up the generative model and giving a little bit of color to what had been just outlined: it's going to be a nested model with this continuous variable and this kind of discrete variable. Here we return to the frog example from earlier in the book, though it's being generalized a little. It was introduced with the whole Bayesian equation walkthrough. Now we're returning to it and thinking more about multimodality — you could even consider this as sensor fusion — because there's some hidden cause, the jumping or the action of a frog, that induces several domains of impact, potentially, in the generative process. And that leads to specific observables from the perspective of the cognitive entity. Those could again be two communicative cognitive entities, or one and not, or however. So this is like, okay, we're going to be modeling forest fires.
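The multimodal, sensor-fusion idea just described can be sketched very simply: one hidden cause, two observation modalities, and per-modality likelihoods that multiply under a conditional-independence assumption. The frog/sound/splash naming and all the numbers are hypothetical:

```python
import numpy as np

# Illustrative sketch: one hidden cause (frog present = state 0, absent = 1)
# generates two observation modalities (a sound and a splash).
D = np.array([0.5, 0.5])                 # prior over the hidden state

A_sound  = np.array([[0.8, 0.3],         # P(sound obs | state)
                     [0.2, 0.7]])
A_splash = np.array([[0.9, 0.4],         # P(splash obs | state)
                     [0.1, 0.6]])

def fuse(o_sound, o_splash):
    """Posterior over the hidden cause given one observation per modality.

    Modalities are assumed conditionally independent given the state,
    so their likelihoods multiply."""
    likelihood = A_sound[o_sound] * A_splash[o_splash]
    post = likelihood * D
    return post / post.sum()

print(fuse(0, 0))   # both modalities point toward "frog present"
```

The same pattern extends to the forest-fire example below: three thermometers would be three likelihood arrays over the same hidden state, rather than one averaged reading.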
So forest fires: they change the temperature, they change the chemical composition, they change the vibrations in the ground, or something. Okay, and then maybe there are three sensors — maybe you have three thermometers — and then what about them? Are you going to average them? Are they going to be three parallel modeled time series? That is where some more specification of the generative model comes into play, keeping the active inference ontology in mind. What are the affordances of the frog? What are the latent states that are changing through time — sound, physical presence, photon emission, or something? And then what are the data that would be the observations? This box looks like it's about different sensory modalities. I'm sure there are a lot of interesting discussions we could bring in here: priors and empirical behavior, complete class theorems — any statistical decision procedure may be framed as Bayes optimal under the right set of prior beliefs. Then the discussion of fixed and learned variables. One can imagine the pros and cons generally of fixed versus learned variables: computationally, it's simpler to fix, to hard-code, a variable. However, if there's a setting where the model's adequacy depends on being able to update it, maybe you want it learnable. But then you have to deal with questions about the kinetics of learning — instant switching, or slow switching, or trend averaging. So there are a lot of discussions around learning. How is attention regulated? What is the precision on learning? It very much complexifies the model. And there are many variables that you can bring into a learnable setting. When it's framed as just static variables, with only the hidden states changing through time, that's the simplest case. But more and more, cognitive entities are probably going to entail considering learning dynamics. Setting up the generative process.
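Before moving to the generative process: one common way the fixed-versus-learned choice shows up in discrete active inference is to treat the likelihood mapping not as a fixed array but as Dirichlet counts that accumulate evidence. This is a hedged toy sketch — the sizes, learning rate, and update rule are illustrative, not the textbook's equations:

```python
import numpy as np

# Illustrative "learned vs fixed" sketch: a 2x2 likelihood mapping held as
# Dirichlet counts, updated from (observation, state-belief) pairs.
a_counts = np.ones((2, 2))     # flat Dirichlet prior over the A matrix

def update(a_counts, o, state_belief, lr=1.0):
    """Accumulate evidence for P(o | s); lr sets how fast learning moves."""
    a_counts = a_counts.copy()
    a_counts[o] += lr * state_belief   # credit spread by the state posterior
    return a_counts

belief = np.array([0.9, 0.1])          # posterior over states at this moment
for _ in range(10):                    # repeatedly observe o = 0
    a_counts = update(a_counts, 0, belief)

A = a_counts / a_counts.sum(axis=0)    # normalize counts into a likelihood
print(A[:, 0])                         # P(o | s=0) has shifted toward o = 0
```

The `lr` knob is one crude stand-in for the "kinetics of learning" question above: large values switch beliefs quickly, small ones trend-average.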
The reason we postpone the design of the generative process — a little sleight of hand: entity first, not niche first — is that in many practical applications discussed in this book, we simply assume the dynamics of the generative process are the same as, or very similar to, the generative model. We generally assume the agent's generative model closely mimics the process that generates its observations. That is clearly a simplifying case, because it doesn't address: what if you're paying attention to predicting width, and the world depends on height? There are so many complexities there, which is really that broader what-is-being-modeled, what-matters question. But in some way it's very upstream, or very close to the stream. It is like the rabbit being put in the hat: if the rat knows that there's a five-by-five maze, that's a huge amount of information, versus if it only has access to pure local information. And that's just something for us to know as we look through models: are we modeling the generative process? Oh, the seasons oscillate, so it is a sine wave; so then the generative model of the agent is also going to be fitting a sine wave. Well, then it kind of collapses to a stochastic regression, and maybe the fit looks very good. Active inference modeling brings in another complexity, with action influencing outcomes, which prevents some, I guess, fallacies of modeling — but by no means all, and it probably introduces many others. Eric. Yeah. Can you hear me? Sounds good. So this might be a time to discuss the question I posted — and I apologize, I didn't put it in the right place the other day; I just figured out today that it goes under this particular location — but this is the question about why modeling the generative process is a necessary step for building an active inference model. So let me just read it, because it's a lot of words.
So, you know, we know that an active inference agent employs a generative model as its model of the external world, and it uses the generative model to perform inferences about perceptual input and to make decisions about actions. In general, the external world is complex and unknowable to the agent. The generative model may or may not be able to learn; but if learning is supposed to put the generative model into better alignment with the world's generative process, whatever that may be — I can see how one might model the generative process if one is building an artificial agent in a laboratory, where you have access, you control how the world behaves with respect to the agent, and then you might want to study how well one agent design works versus another under controlled conditions. But this is not the same as designing an agent to go out and cope with some real-world environment, where you really don't know what that generative process is. I think that the recipe for designing active inference models would include formally setting up the generative process only in that kind of laboratory situation — but not if you're really trying to build something for the real world, because you don't know the generative process. You might create an artificial, laboratory kind of situation by trying to build a model of how the world works and say, well, does my agent work under those assumptions or not? And that might set you up for putting the agent in the real world. But in fact, you don't know the generative process if you're really trying to build something to work in the real world. That's my question. That's a very insightful question. Thank you. Lyle?
I think if you're modeling something that, as you say, goes in the real world, then, you know, perhaps it's a, you know, some sentient creature or being. Then, then it could obviously there's the real world, even if we knew everything about it is too complex to model, right? So, so that's, that's fool's gold, you know, to assume you know everything, right? There, and so I don't, I'm just gonna say, yeah, that's hard. You know, typically, I guess we would say what are the parts of the generative, the real world, the generative process that we're going to consider relevant and deal with that and knowing that we're making a leap of faith. But there's another case, I think, which is more generalizable just to a complex adaptive systems that operate in what I call the built world, right? So, I work with a bunch of people who build digital twins. They build digital twins of human infrastructure, right? So buildings and all this stuff. There, the generative process that we're concerned with is largely knowable, right? And largely representable. And so, and what's interesting about that is there are a bunch of blind spots and inadequacies in what we know about that generative process. And the ability to do inference about what the real state of that system is then becomes the value of this approach in that setting, right? So, so I think I don't think there's a general answer. Once again, I think it really comes down to purpose and what the specific framing you're looking to do. To me, this ability to say something about what I like that idea of decision agents, right? Decision agents that are going to potentially take action or infer actual states is much richer than what a lot of the modeling, historical modeling paradigms have given us. So, I think that's really interesting. Thank you, Lyle, Ali. Yeah, and I also think that the distinction between generative process and generative models become significant if we want to simulate learning rather than just inference. 
I mean, that's the answer I got from Ryan Smith, because I asked him almost the same question: why do we need the generative process in order to simulate the agents? Because, you see, if the generative model exactly maps onto the generative process, we only need that capital D matrix, right? That's when we're just trying to simulate the inference side of the agent. But when learning comes along, then we need to separate that capital D from another matrix — probably a lowercase d — which maps onto the generative process. So one of the situations where the distinction between the two might be necessary is when those two processes, I mean learning and inference, are taken as two separate phenomena to model. Thanks. One note: there's also work on symmetries in the generative process and generative model — like, I think, the bidirectional paper with Matt Sims and Blue discussing it. And then here would be a defense, in response to Eric's excellent question, of the entity-first modeling approach that's taken by this chapter. One can say: I don't know whether I'm going to a forest or a desert, but I'm going to have these sensors on my designed interface, and I'm going to do extensive modeling of interoception and homeostasis. And if I can learn policies that maintain homeostasis, then I've survived; if I fail, then failure happens. That doesn't mean it works, or that it's doing the right thing, fit for purpose, any of those aspects. But one of the insights from the 4E, embodied side is that a vast amount of what is modelable as cognitive isn't necessarily the declarative, "I think therefore I am," possibly even symbolic, linguistic-type thinking. There's a lot going on, and much of that lot going on can be simplified by not even knowing about the external process — maybe the as-modeled kidney not even knowing what the as-modeled liver is doing.
So this type of focus is like: we're going to define our entity types, and the kinds of observations that they can make, and the kinds of affordances that they have or do empirically, or that they conceptually could do. That approach is consistent with embodied biological systems and the persistence of organizations. And so I'm wondering how that piece comes into play with this very important discussion — and also their earlier discussion of embedded and extended cognition, which kind of blurs the line between what is a generative model and what is a generative process. Hence there being a lot of thinking and attention on what the system's model is: what are the known unknowns and unknown unknowns going to be? Let's just see. The chapter ends with 6.7: simulating, visualizing, analyzing, and fitting data using active inference. Standard routines for active inference that provide support for all these functions are available in SPM, and Appendix C walks step by step through a model — it's a great look-through. This is MATLAB code. One amazing piece of work that we could all iterate on would be porting the MATLAB, with the ontology, into Python and/or Julia, with PyMDP or Active Blockference. But there's a lot in SPM, and some of it is neuroimaging-specific, some of it very variational-Bayes-specific; not all of it is going to be state of the art or the most useful in every situation. However, I wonder if only offering certain routines in MATLAB — which is a traditionally very academic software platform, and potentially non-free — can be a limiting factor. So, for those who feel like they want to learn more or contribute this way: helping structure some of the code architecture so that cohorts coming through can work in multiple languages. Maybe first we can just move some of these routines into a folder, and then people can start to organize those types of aspects. Summary: it's a four-step recipe.
We walk through it: from the consideration of the system of interest broadly — the entity and the process — focusing in on the entity specifically, thinking structurally about what it is going to be doing slash composed of, specifying that to a few levels of detail, then doing the same for the generative process. And once that is specified — it's not quite there for most people today, but it's where they seem to be laying out a path towards, and where we probably can and will be — after that level of setup, in some potentially graphical editor or parameter file, then, as SPM and MATLAB have done in many, many papers, the generative model specification almost prepares the working script. But there are still a lot of details and pieces to add in, especially in a modern environment, and for all of these non-standard and novel cases that people want to consider. But once we have a notebook that goes through the recipe, then the recipes can be forked and so on, and they can be written in different languages: PyMDP, Blockference. Ali. Yeah, just one thing that I think is not made explicit in this chapter: that first step, which system are we modeling — especially the identification of the Markov blanket part — is, at this point of active inference research, basically more like a hypothetical step. Because I also had the misapprehension that we can somehow identify the Markov blankets from data. But as I went on through my project, and by consulting Maxwell and others — actually this is the comment Maxwell made about my proposal — he said that identifying the Markov blanket from data is still an outstanding issue that is not resolved yet in active inference research. And one of the most active areas of research they're doing at the VERSES lab is focused on just this part of the modeling recipe: how we can automatically identify, or somehow learn, the Markov blanket from pure data points. Awesome.
And that ties in with Dalton S.'s work on the blanket index, and with continuous variables for dynamic assemblies of blanketing. This is going to get really nuanced and empirical: the discovery and thresholding of Markov blankets and causal inference in real-world systems with many different kinds of sensors, doing structure learning on the generative model and process, everything empirically. Which is why, as a simplifying step, it actually really reminds me of how Dave Snowden situates the difference between systems thinking and complexity thinking. Systems thinking often uses a lot of nodes and flowcharts to represent system elements, relationships, and interfaces, whereas in complexity thinking, and especially in the Cynefin framing, it's more like self-organizing sand dunes. Causal relationships might be very much in flux, and it puts a higher focus on action rather than system description, especially in situations like moving your head, where the relationships among different things are going to be heavily action-dependent. And so for now we're kind of in the systems world, where it's like: a priori, here are the system interfaces. And then you can take that as a heuristic, or you can take a literal interpretation, but you don't have to; you could just say, I'm taking, heuristically, the approach of the systems mapping. But that's not to say that that generative model couldn't be trained in a systems-mapped context and then placed into a different setting where there was a mismatch or a different generative model. And so it's going to be an ongoing discussion how and when these lines are drawn. One other point, aside from not needing to use last names to refer to specific concepts: are we talking about the blanket that statistically insulates every partition of nodes in these Bayes graphs, which are going to be vast?
As Ali and Jacob and I were discussing on the branching-time active inference call from just a few days ago, yesterday even: these graphs are going to be vast. And so, rather than just saying there will be a Markov blanket, in fact there will be many Markov blankets, with different semantics that you might assign to them or not. How does that map onto the real world? That's the realism, instrumentalism, pragmatism, abduction conversation that we've been having. Do you come in with these kinds of boundaries and distinctions heuristically, or how many levels of uncertainty and nesting? I don't know how many categories of blanket there are; I don't know if it's a category or a continuum. You can actually still take on multiple levels of uncertainty in the modeling and then fit it with empirical data, if its fit-for-purpose use is to do some sort of statistical inference, or to provide meaning or action. But didactically, when the model's purpose is educational, there are a lot of things in architectures we might play with. And then the recipe, and also this conversation, helps, I hope, hold space for the educational and curiosity-driven model development, where we're going to have a lot of fun and interesting GMs, and from people's applications, working with different real-world constraints and different settings, we'll all be learning a lot. Hopefully some of it will be in repos we can reference. Other people may want to play around with inversion within folders that we can make; anybody who wants to can be added as a collaborator in this textbook repo, which is pretty much empty except for, I guess, a script from Jakob. And there'll be more to say, probably on GitHub, as we continue through. But in these last minutes, does anyone have any thoughts for our continuation of the cohort?

I have one question, Daniel. You referenced a paper earlier in the discussion that I'd love to get the reference to, about persistence of organizations and survivability.

Oh, yes.
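Since Markov blankets keep coming up: when the graph structure is given a priori (as opposed to learned from data, which, as noted above, is unresolved), the blanket of a node in a Bayesian network is well defined as its parents, its children, and its children's other parents. A small hypothetical helper, written from that textbook definition:

```python
def markov_blanket(node, parents):
    """Markov blanket of `node` in a DAG.

    `parents` maps each node to the set of its parent nodes.
    The blanket is: parents, children, and co-parents (spouses).
    """
    blanket = set(parents.get(node, set()))                  # parents
    children = {c for c, ps in parents.items() if node in ps}
    blanket |= children                                      # children
    for c in children:
        blanket |= parents[c]                                # co-parents of each child
    blanket.discard(node)                                    # a node is not in its own blanket
    return blanket

# Toy graph: a -> c, b -> c, c -> d
graph = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
```

For node `c` in the toy graph this returns `{a, b, d}`; the hard open problem discussed above is recovering `graph` itself from raw data, not this bookkeeping step.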
All right. I will put it into resources. Here's the work: it's a PhD dissertation that was published as a book, and I'll upload the PDF as well. This is extremely interesting work. Yeah, it just came out. In our last minutes, it's too fun not to share. Karl Friston has written the foreword. What is a firm's purpose? What is an organization's purpose? It's as difficult to answer as "why am I here?" Free Energy Governance: the FEP inverts the question. Instead of asking how must a system behave to survive, it asks how do systems that survive behave. The answer is rather deflationary: they survive. Their purpose is just sustainability. So, some classic Fristonisms, some very nice framings; on page 95 there are some great juxtapositions. It's always awesome to see this kind of quality work, and who knows what people here will see. And I don't see any scripts here; that's the textbook. But who knows? Maybe we can do some generative modeling of organizations, or we can specify it and iterate on it, realize it in Blockference and PyMDP and ForneyLab and JF's symbolic suite. So there'll be a lot of fun things to explore this cohort. Any other questions or ideas that people want to share? Thanks and congratulations to everyone for joining this continuation; it's a great one, and I know this part, the next three months, will be very interesting.