Hello, everyone. Welcome to the Active Inference Lab. This is ActInf Livestream #31.0. It's October 14th, 2021. Welcome to the ActInf Lab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links here on this slide. This is a recorded and archived livestream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here, and we'll be following good video etiquette for livestreams. At this short link, you can see some of the past, present, and future streams that we'll be doing in the lab as part of the comms organizational unit. The first tab has the regular Tuesday group discussions, the .1 and .2 streams. Right now we're at .0, an ad hoc, scheduled contextualizing livestream. The other tabs have other kinds of livestreams that can be scheduled at different times on different topics, so just let us know if you want to get involved or co-organize an event with us. Today, in ActInf Livestream #31.0, the goal is to learn about and discuss the paper "Non-Equilibrium Thermodynamics and the Free Energy Principle in Biology" by Patricia Palacios and Matteo Colombo. Like all the .0 streams, you'll get the disclaimer that this is just an introduction to some of the ideas of the paper and its context; it's not a review, a judgment, or a final word. In 31.0, we're going to start by looking over the key aims and claims, the abstract, and the roadmap of the paper to get the big picture. Then we'll talk about some of the keywords; maybe you're really familiar with some of them, or maybe it's the first time you've thought about these topics. Then we'll walk through some of the figures and formalisms and get set up to discuss this paper in the coming weeks.
So we'll start with some introductions and warmups. We'll say hello, and then maybe mention something that we're excited to talk about or learn about, or something that we noticed while reading the paper. I'm Daniel. I'm a researcher in California. This is an exciting paper and topic because it's at the intersection of chemistry, physics, and thermodynamics, but also active inference. It's a nice starting point for a conversation about the precursors, tenets, and principles of the free energy principle and active inference, and then what the consequences are and what that means for the big picture. Blue? I'm Blue Knight. I'm an independent research consultant in New Mexico. I was really excited to see this paper bring up some of the points that I've raised as I've been reading more about active inference, specifically with respect to ergodicity, what that means, and its potential implications, and also other minor topics I've mentioned in the past. It's cool sometimes to see the threads of our previous discussions, where a participant might have said, hey, this isn't making sense, and the answer is: don't worry, it's not just you, these are open research questions. We're seeing that, and we'll be enacting it in the coming weeks. Some of the big questions in the paper, framed non-technically and not in the authors' words: What fundamental ideas from physics are utilized in the free energy principle? How and why are they utilized? And what are some implications or tradeoffs associated with the use of these ideas, both in the ways they have been used and in the ways they could be used?
So, in the spirit of positive contributions to the literature: how are we taking ideas from physics and other fields and bringing them together in the free energy principle, and what are the fundamental, or just spuriously contemporary, tradeoffs? Let's first go to the aims and claims of the authors. I'll read, and then Blue, you can give a thought. The paper is "Non-Equilibrium Thermodynamics and the Free Energy Principle in Biology," on the PhilSci-Archive at Pitt, so check out the paper and read it if you want to. A few of their aims and claims that we can extract: the free energy principle has received a lot of attention, but its foundations in statistical physics and dynamical systems theory have not been probed yet. This lack of attention is unfortunate because the theoretical adequacy of the free energy principle, as well as its practical utility for the study of key biological properties such as homeostasis and robustness, depend on the validity of those foundations. Here we begin to fill this gap. So that's one of their contextualizing claims, and their aim. And one of their claims about their arguments: as we are going to argue, the foundations in physics concepts allow models built from the free energy principle to achieve maximum generality, but these assumptions also decrease the ability of these models to include enough factors to provide biologically plausible representations of the causal networks responsible for living systems' dynamic equilibrium. Any thoughts on the aims, or we can continue? I think that the aims are good. Sorry, I just realized I was muted. I think these are good aims. One of the fundamental claims is that in making the FEP more generalizable, it's not as specific to biology as it could be. That was the key tradeoff, maybe. Yes, between specificity and utility and generalizability.
So one mindset is that as we generalize, we're going to be explaining more systems, different systems, explaining them better, doing more, and it's just all up, up, up with generalization. A contrasting opinion is that as we generalize, we of necessity abstract or average out over particulars, and so we might actually lose utility, or potentially, even disastrously, lose the specifics of the system that we're talking about. So all of a sudden we have this free-floating generalization or abstraction that maybe was initially inspired by wanting to explain some key biological property, like homeostasis and robustness in the authors' view, but then we end up throwing out the kernel, the essence of what we wanted to explain, because we generalized too much. So we'll be exploring that. Okay, do you want to read the abstract? Sure. According to the free energy principle, life is an inevitable and emergent property of any ergodic random dynamical system at non-equilibrium steady state that possesses a Markov blanket (Friston 2013). Formulating a principle for the life sciences in terms of concepts from statistical physics, such as random dynamical system, non-equilibrium steady state, and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems. Thus far, however, the physics foundations of the free energy principle have received hardly any attention. Here we start to fill this gap and analyze some of the challenges raised by applications of statistical physics for modeling biological targets. Based on our analysis, we conclude that model building grounded in the free energy principle exacerbates a trade-off between generality and realism because of a fundamental mismatch between its physics assumptions and the properties of actual biological targets. Cool. There's a lot to say; that's just the opening salvo from the authors. How do they make good on this abstract? We can look at the roadmap.
There are not too many sections in this paper, but there's a lot of meat within. There's a general introduction, and then a specific introduction to the free energy principle, which is described in the context of self-organizing, dissipative systems. Then there's a little discussion of the explanatory scope of the FEP and the introduction of the winged snowflake thought experiment, which we'll return to soon. Section three talks about biological systems as random dynamical systems, which isn't something that the FEP invented; it's not something that ActInf invented. This is a broader question about how to model biological systems with dynamical systems. Section four models biological active states as random dynamical attractors and talks about three big physics ideas: if, per section three, we want to model biological systems as dynamical systems, then we're using this physics ontology, and these are the other key terms relevant in physics for discussing random dynamical systems. We will go through them. They are the idea of phase spaces in biology, ergodicity, and random dynamical attractors. There's then a discussion and a short conclusion. So that's the overall roadmap; that's where we're going to go. We're going to be exploring that trade-off, the implications of introducing physics fundamentals into the generalization of biological systems, and maybe even non-living systems as well, and how it all comes together and influences how we research going forward. The keywords are FEP (free energy principle), dynamic equilibrium, homeostasis, phase space, ergodicity, and attractor. And we can ask, where's active inference? Not that it has to be a keyword for everything, but as we discuss, let's keep in mind: what are the differences between the FEP and active inference, and what are the implications of these claims about the FEP for active inference? All right. How about the first keyword, dynamic equilibrium?
So interestingly, you said that you like how this paper brings together physics and chemistry and biology, and I thought that chemistry was kind of blatantly omitted. In my heart, I'm a chemist; if I hadn't studied neuro, I would have really gone into o-chem. I just thought that working in the lab was a little scary, too much ether, like, whoa, when I came out. So when I look up dynamic equilibrium, the context that I'm familiar with is chemistry. A dynamic equilibrium occurs when you have a reversible chemical reaction. So it's not that the reaction is over, right? Here's an example: water ionizing into hydroxide and protons. At equilibrium the reaction is still occurring; it's just occurring at equal rates in the forward and reverse directions. So this is dynamic equilibrium, and it only happens in a reversible chemical reaction; there's no other way that I know of that dynamic equilibrium happens. It is an example of a system, particularly a chemical system, at a steady state. So I looked through the paper, and I don't know if you want to flip to the next slide, but I looked at how the authors are using dynamic equilibrium, and I found this phrase, which I'll read here: the core of this account is a free energy principle, according to which all biological systems actively maintain a dynamic equilibrium with their environment by minimizing their free energy, which enables them to avoid a rapid decay into an inert state of thermodynamic equilibrium. And then there are some citations: Friston, the Ramstead et al. 2018 paper, and Parr and Friston 2019. I looked back through those papers for dynamic equilibrium, and it's not there in the references that I was able to find. So I was curious what the authors meant by dynamic equilibrium in a living system.
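The equal-rates picture described here can be sketched in a few lines of Python. This is a minimal toy model of a generic reversible reaction A ⇌ B; the rate constants are made-up illustrative values, not measured data for water ionization:

```python
# Sketch of dynamic equilibrium in a reversible reaction A <=> B.
# Rate constants and concentrations are illustrative, not measured values.
kf, kr = 0.3, 0.1          # forward and reverse rate constants
a, b = 1.0, 0.0            # concentrations of A and B
dt = 0.01                  # integration time step

for _ in range(10_000):
    forward = kf * a       # rate of A -> B
    reverse = kr * b       # rate of B -> A
    a += (reverse - forward) * dt
    b += (forward - reverse) * dt

# At dynamic equilibrium the concentrations stop changing, but both
# reactions still run, at equal rates: kf * a == kr * b.
print(round(a, 3), round(b, 3))   # a -> 0.25, b -> 0.75
```

The stable point is exactly where forward and reverse fluxes balance (kf·a = kr·b), which is the "reaction still occurring, at equal rates" idea from the discussion.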
As far as I know, maybe they meant dynamic coupling between the organism and the niche, or maybe they mean the non-equilibrium steady state of homeostasis, which is not a dynamic equilibrium; it's a non-equilibrium. So I just found this phrasing kind of curious. I'm assuming, just based on context, that they're referring to homeostasis, but homeostasis includes plenty of irreversible chemical reactions, and that's why this was a little curious-sounding. It makes sense: instead of thinking about the water ionizing and the H+ and the OH- coming back together, with the overall values on both sides staying the same, let's think about the biological case. Moment to moment, our mass is staying the same, let's say, or some people might maintain the same mass for 30 years. But underlying that stability are many, many syntheses of fatty acids and breakdowns of fatty acids. So, okay, I get that both sides are going at the same rate, so there's a stable point, but it's constituted by its own irreversible thermodynamic reactions sometimes. So it's a good question: does Friston et al. use the term dynamic equilibrium? And are we talking about dynamic couplings, which might be at steady states, or are we talking about the non-equilibrium steady state of homeostasis? Okay, that brings us to homeostasis. Sure. So homeostasis is the steady state of internal physical and chemical conditions that are maintained by living systems. And there are a lot of variables that can be included here. Mass, I think, actually changes at night, like the carbon that you exhale: you actually weigh less in the morning from sleeping all night and emitting carbon, I heard. I don't know if that's really true; I guess it depends on what scale you're measuring. But I think that pH has to be the same, or within a pretty defined range.
Osmolarity; sodium, potassium, and calcium ions; blood sugar; and body temperature, which is a huge one. And we do things throughout the day, like eat and drink and use the restroom and put on jackets, to help maintain our homeostatic conditions internally. Well, unless we're eating at night, we're losing water and some other molecules, so definitely we're lighter in the morning. One discussion that we've had in a few different streams is: when do we think of homeostasis as reducing uncertainty around a set point, like a desired temperature, where we just want to reduce our discrepancy, versus when do we see allostasis, almost anticipatory processes by which biological systems take qualified excursions from set points? And how do we think about that within the broader history of cybernetics? It's been said that cybernetics had some challenges dealing with anything past homeostasis: it's easy enough to see how a cybernetic system seeks a set point, but once you start asking what process moves that set point itself around, all of a sudden that challenged a lot of historical cybernetics frameworks. That's definitely what we're exploring with ActInf. Let's take another dip into a physics and mathematical idea: the phase space. I'll read the yellow parts and then you give any description. In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space. So every point on this spiral has two variables that describe it: its value on the x and its value on the y. The phase space trajectory represents the set of states compatible with starting from one particular initial condition, while the full phase space represents the set of states compatible with starting from any initial condition. So, just a quick comment or thought on that.
So even just in the maintenance of homeostasis, where your osmolarity is the same and your blood glucose is the same, or just intracellular glucose or ATP levels, and not even including the possible actions that I could take, like moving in a three-dimensional environment, it very quickly becomes an expansive multidimensional space, something beyond the 3D-ish visualization we have here. In all these dimensions, it very quickly becomes computationally intractable because of the dimensionality of the options we're presented with. Even just maintaining a homeostatic set point is intense. Yes. Phase spaces are just descriptive; a phase space is literally just the space that something can happen in. And then there are trajectories through phase space. Well, there are points, which are static values like (3, 4), and there are trajectories, which are linkages among sets of points. And then also out there in phase space are attractors. In the field of dynamical systems, an attractor is a set of states in the phase space towards which a system tends to evolve for a wide variety of starting conditions of the system. System values close enough to the attractor values remain close, even if slightly disturbed. A mental model there is a rubber sheet where someone pulls the middle down a little bit. That is now an attractor, because things from near or far are attracted towards it, and if they're at the bottom of that well, they're going to resist being moved away from it. That's a simple ball-rolling-into-a-bowl attractor state, and there are simple equations that govern that type of movement. But in higher dimensional phase spaces, there are different geometries of attractors.
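The spiral trajectory and the ball-in-a-bowl attractor can both be sketched with a damped oscillator. This is a toy illustration with made-up parameters: each state is a point (position, velocity) in a two-dimensional phase space, the integrated path from one initial condition is a trajectory, and the origin is a point attractor:

```python
# Sketch: a damped oscillator traces a spiral trajectory through a
# two-dimensional phase space (position x, velocity v) and settles onto
# a point attractor at the origin. Parameters are illustrative.
k, gamma, dt = 1.0, 0.2, 0.01   # spring constant, damping, time step

def run(x, v, steps=5000):
    trajectory = [(x, v)]
    for _ in range(steps):
        accel = -k * x - gamma * v          # restoring force plus damping
        x, v = x + v * dt, v + accel * dt   # simple Euler step
        trajectory.append((x, v))
    return trajectory

# Trajectories from a wide variety of starting conditions all spiral
# toward (0, 0); states that start near the attractor stay near it.
for x0, v0 in [(1.0, 0.0), (-2.0, 1.0), (0.05, 0.0)]:
    xf, vf = run(x0, v0)[-1]
    print(abs(xf) < 0.1 and abs(vf) < 0.1)   # True for each start
```

The same "near or far, states are attracted; disturbed states return" behavior from the rubber-sheet picture falls out of the two update equations.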
And then also, for biological systems, it is a challenge to model using attractors. But that's what this paper is really about: can we use the model of an attractor to talk about biological systems and how they regulate? Yeah, I pulled this image out; I think it was brought up in the paper. I don't remember if the original author who outlined these four types of attractor was Kauffman — I feel like it was Kauffman — but it was cited in this paper as well. So that's what's shown here: the strange attractor, the point attractor, the cycle, and the torus. Just in case you were wondering what those kinds of attractors look like, each is drawn in its own phase space with its own x and y axes. Okay, ergodicity. So this is a quote from the paper, and then Blue, any thoughts would be cool. The ergodic density is an invariant probability measure that can be interpreted as the probability of finding the target system M in any state x when observed at a random point in time (Friston 2012). The assumption of ergodicity is important to get the FEP off the ground, since it ensures that organisms can be modeled as having invariant characteristics over time. So, to relate ergodicity to the phase space: the concept of ergodicity is that, in a potential phase space, all of the potential points in that space are visited at one point or another, and the probability of finding the system at a point in that space corresponds to how much time the system spends at that point, if that makes sense. It's kind of a circular loop, but this is just one slide; we're going to return to ergodicity later on. But this is a topic that Blue and I like learning about.
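The "time spent equals probability of being found there" reading of ergodicity can be checked numerically. This is a toy example with a hypothetical two-state system and made-up transition probabilities, not anything from the paper:

```python
import random

# Ergodicity sketch: in an ergodic two-state system, the fraction of time
# a single long run spends in a state equals the probability of finding
# the system there at a random instant (the invariant measure).
# Transition probabilities are illustrative.
random.seed(0)
p_up, p_down = 0.2, 0.1    # switch rates: state 0 -> 1 and state 1 -> 0
state, n, time_in_1 = 0, 100_000, 0

for _ in range(n):
    if state == 0 and random.random() < p_up:
        state = 1
    elif state == 1 and random.random() < p_down:
        state = 0
    time_in_1 += state == 1

# The invariant measure gives state 1 probability p_up/(p_up+p_down) = 2/3;
# the long-run time average recovers it.
print(time_in_1 / n)       # close to 2/3
```

One long trajectory's time average matching the invariant probability is exactly the property that lets FEP theorists model organisms as having invariant characteristics over time.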
However, if you know more about this, it would be great to have your perspective, and also if you're just learning it and have questions to bring, because there's a lot you could learn about ergodicity and the ergodic hierarchy. So take a dive and then let us know how the water is. Also, "getting the FEP off the ground": I'm thinking, does the FEP start on the ground? Does it start underground? Is there a snowflake with wings? Has it always been that way? Exactly; it's like the water cycle. Okay, so on to the FEP, which is of course the target, the system of interest, of this paper. In ActInf Livestream #14, and before and after that, we've had a lot of discussions: What is the FEP? How does it relate to active inference? It seems like we'll be working on those questions for a long, long time, but a few pieces that we've pulled out as good cues to bring up alongside the free energy principle, seeing them as cousins or in a network together: multi-scale systems; ensembles, like the collective behavior of multiple particles or other system subunits; the nexus of information and thermodynamics, which is really information, thermo (like heat), and dynamics (dynamical systems) — often thermo and dynamics aren't really separated, but we're getting at that delineation; and then action policy selection as variational Bayesian inference, so using information dynamics to study how policies are selected. And then there's the FEP as it is today, versus where we could go and develop with this framework, which is what we'll be exploring today. Any general comments before we get into the paper? Only that thermodynamics has classically been one word, so I'm starting to just use infodynamics as one word, to be non-discriminatory, or just dynamics: it's all information, it's all thermo. Yes, exactly. Okay, here we go to the paper.
This is the first paragraph of the paper, and it's very informative. Life scientists use the term equilibrium in various ways, and so we're going to look at three different definitions of equilibrium. This is one reason why it's important to have ontologies and ways of using terms that we can, if not distinguish, at least be clear about in their use. Sometimes life scientists use equilibrium to refer to an inert state of death, where the flow of matter and energy through a biological system stops and the system reaches a lifeless state of thermodynamic equilibrium. That's sort of the ultimate equilibrium. More often, they use it to mean homeostasis, which is the ability to keep some variable in a system constant or within a specific range of values, going back to the old citations for where the term actually arises from. That's a living system: there are parts of it that are consuming energy, hence not at the ultimate equilibrium, but there are also other elements of it, at least over certain time scales, that are invariant. Other times, the term equilibrium is associated with the concept of robustness, which refers to the capacity of systems to dynamically preserve their characteristic structural and functional stability amid perturbations due to environmental change, internal noise, or genetic variation. So it's like a low bar, a medium bar, and a high bar of equilibrium, ranging from the molecules not doing any motion and a carcass on the ground, to the molecules churning with something invariant about the living system, up to the idea of robustness, where the system is doing something to maintain its equilibrium, its balance, amidst noise — maybe seen as a type of homeostasis, but not exactly the same. So, any thoughts on why these distinctions matter, or why start the paper this way?
You know, I'm curious, Daniel. We've been in the life sciences for 20-some years, maybe more, and I've only used equilibrium in the life sciences in the first way, as death. So I'm just curious, and perhaps this is the inner chemist in me, but are you familiar with all of these uses of equilibrium in the life sciences? I think I've seen the second one; people might not call it equilibrium, but like a buffer equilibrium, the carbonate and the CO2 in your blood maintaining a buffered pH equilibrium. So that's not death; it's a living system that's doing it. But it's referring to some chemical process that's being kept at a dynamic equilibrium within the scaffold that a living biological system is providing. And I'd also be curious — I don't think we dove into this Kitano 2004 — homeostasis as keeping something within a range of values sounds a lot like robustness. So how are the second and the third definitions different? Antifragility is something different, where perturbations actually strengthen the system or cause it to develop, for example, but robustness is a little bit more like homeostasis. I'm open to anyone's thoughts; we'll see. Now, on to Friston et al. "Over the past 20 years" gives us some historical context, since the new millennium. Theoretical neuroscientist Karl Friston and collaborators have developed an account of the conditions of possibility of a certain kind of dynamic equilibrium between a biological system and its environment. The core of this account is a free energy principle, according to which all biological systems actively maintain a dynamic equilibrium with their environment by minimizing their free energy through inference and action, which enables them to avoid a rapid decay into an inert state of thermodynamic equilibrium. Here are a few of the citations: Friston 2012, 2013; Ramstead et al. 2018; Parr and Friston 2019.
So here are the papers; if people want to dive into the references, those are always really good places to start. But whether you take the authors at their manifold, dimensionally reduced word, or you go into the original papers, or you send a tweet to an author, or you come into a livestream and talk to them: what are the key claims of the FEP? From a rhetoric perspective, what does the FEP assume, and what does it claim? Which papers are key to the FEP — not what was important in 2005 from 2002, but today, what is important? How do we respect that different citations from the past might have a partial picture or use a different formalism? And which papers are contradictory, even if only temporarily? Somebody might claim in a 2010 paper, "you can't explain situation X," and then two weeks later Friston et al. write the response, and some people have bought the rumor but haven't heard the news. So how do you keep a coherent narrative of the literature as there are errors, contradictions, discordances, all kinds of things that make it less easy than it could be to understand what is happening with the FEP, its proponents, and its detractors? Just a thought on this, something I was thinking about as I was reading the paper, and a lot of this is footnoted throughout: the FEP has evolved over the last 20 years. It didn't start off in the state that it is now; it didn't start off as a fully elaborated theory. So it would be really cool to develop a timeline of the FEP: the first concept, 2006 or whenever; at what point did the Markov blanket come into play; when did active inference separate from the FEP — to get a good idea of the situation as it's gone through its several iterations. Great idea, Blue.
I'd hope that in the ActInf Lab and other groups, we could collaborate to really make that happen, because having a timeline of developments, cross-linked with the core terms (which we'll talk about later today) and with who was involved and what the contributions were, would make it tremendously easier than going back to square one with a Scholar search whenever we're curious, hey, has there been something written about free energy and X? Okay, so that was sort of the neutral context, or at least not so barbarous. But here's where the authors bring in the gap, the free energy differential in the literature, and their contribution: what actions are they going to take to reduce the differential? "The free energy principle has received a lot of attention." No citations. From whom? A lot relative to what? I'm not saying it's incorrect in the paper; it would just be awesome to learn what it looks like to have a lot of attention. What does a Google Trends or Books Ngram Viewer search result in? Are we on the up, or going down from some period of time? So what regime of attention has the FEP received, and from whom? "But the foundations of the FEP in statistical physics and dynamical systems theory have not been probed yet." That's a claim; maybe some people will think that they have, so we'll look forward to learning about that. "This lack of attention is unfortunate" for two reasons: one, the theoretical adequacy of the FEP, and two, its practical utility for the study of biological properties, depend on the validity of the foundations. It's like we want the house to be beautiful and useful, and if the foundations are not strong, it will be neither. So the work of this paper, as well as all the work around it, is related to that gap. A few big questions: what does it mean for a principle, the FEP?
And we can, of course, have some subtlety about what counts as a principle, a framework, a theory, a hypothesis; let's just roll with it together. What does it mean for a scientific principle to have foundations in statistical physics? What does it mean for it to be grounded in dynamical systems? Why does that matter — why does it matter how it's grounded, or who thinks it's grounded in what — for the FEP and in general? And then, always trying to connect the generalities with minute particulars: what does this paper actually do, and what work is still needed? Because it's easy to get caught up in these big sweeping, multi-decade, many-person research trajectories, and it's helpful sometimes to separate out what the paper is doing from the bigger picture, like the grand synthesis that has to be done between the FEP and some other field. Anything here? Okay, here's from a footnote, and it's "to forestall any confusion." Okay, great: all my confusions are about to be forestalled. I love footnotes like that. Our focus is on free energy theorists' (not principalists', by the way — theorists') assumptions that all biological systems at any scale can be modeled within the framework of ergodic random dynamical systems, and that homeostasis and/or robustness can be defined and studied in terms of dynamical attractors. I don't know if that's an assumption of all FEP colleagues; maybe it's not an assumption but rather a consequence. There are assumptions that people use, and then they have come to this claim as a consequence, not as an assumption or an ungrounded axiom. And: while our discussion should suggest that concepts and mathematical representations from statistical physics and dynamical systems theory are sometimes abused in biology, it should not suggest that those concepts and representations have no use.
So it's saying, in fact, you can have good explanatory, predictive models that involve dynamical systems; it's not saying math in biology is hopeless. And then there's this 2004 paper: who do you cite for saying that math gets abused in biology? This is Robert May, the famous theoretical ecologist, in 2004, and it was very interesting, on understanding nonlinear dynamics in the case of a virus and health situation. Extracting some of the key memes: we're going to have to learn across systems, we're not just going to be able to have one default system. We need to think about multi-scale systems for applying and educating in transdisciplinary contexts; an understanding of viral dynamics is going to require population and ecological understanding as well as protein-level understanding. And then: "I venture to predict that the corresponding immunology texts will indeed look different in 20 or even 10 years' time." Well, we're right between 10 and 20, and maybe immunologists are starting to say, oh yeah, you can't just talk about the white blood cell touching the pathogen; there's a bigger picture, there's a society. And then, what are those abuses? They're not always easy to recognize, especially because few have the expertise, or the confidence, or the personal narrative, to ask questions across disciplinary contexts. So it's easy sometimes for something to be transdisciplinary, and then the biologists don't question the physics and the physicists don't question the biology, and all of a sudden something is not really getting questioned. So it's totally a great point. And this is a really important fallacy, essentially, where a mathematical model is constructed with an excruciating abundance of detail in some aspects, while other facets of the problem are misty or vital parameters uncertain within a massive range. And this happens in so many places, it's unreal.
We made a mathematical model of this — oh, well, you know, of course we didn't consider other factors A, B, C. So it's not that everything has to be a theory of everything. But we have to recognize that extremely partial mathematical models of biological systems may be less helpful than people claim them to be. And it might be better if people pooled their efforts and worked towards better models. Any thoughts on that? Just, you know, I mean, I think I'm definitely guilty of, like — I remember back to Mike Levin's livestream when we were abusing physics, so guilty as charged sometimes. But it is important to, you know, have an open dialogue and to ask questions — like, if there's a concept or something that you don't perhaps understand, maybe you feel like you're a beginner. I mean, I've definitely felt this way, like maybe I'm just being dumb, but then there's actually a very valid question at the end of that sentence. So if you don't understand something, be sure to ask until you do, I think. Yep. Imagine just all the times somebody has said, oh, FEP — does that apply to living systems? And, oh yeah, it's a done deal. Well, you know, it's not a done deal. So the conversation is still happening. Here we get to section two, introducing the FEP. The free energy principle (FEP) says that unless a biological system minimizes its surprise, it will rapidly die. Now, what do those citations say? Do they talk about system death? Or do they talk about system persistence? More specifically, the FEP presupposes a view of biological systems as essentially persisters — different authors here — and foregrounds the conditions of possibility for the persistence of biological systems. So that's their top-level take on the FEP. And then, to connect a few of the keywords, Hohwy 2020 refers to a system's periodic, phase-attractive dynamics — so attractors in the phase space that have periodicity — in the state space, to define what it is for a biological system to exist.
Hohwy writes: biological non-existence is marked by a tendency to disperse throughout all possible states in state space, e.g. a system ceases to exist as it decomposes, decays, dissolves, dissipates, or, alliteratively, dies. In contrast, to exist is to revisit the same states or their close neighborhoods. So this is kind of bringing together: what is life? — Schrödinger's question, Ramstead et al. 2018 — with dissipative systems and systems that resist dissipation, systems that have a dynamic equilibrium. What is the phase space we're talking about? These are the big questions we're getting at. And how does one disperse oneself, exactly? Yes. And how? I think we heard that from somebody earlier today. Somebody was getting dispersed during a practice talk. It happens. And where does surprise come into play? Not just the psychological — like, wow, I was surprised, it was a surprise birthday party — but how do we go from attractors and dynamical systems to this statistical Bayesian concept of surprise, and attracting sets, and lower-dimensional manifolds in attracting sets? Make that quote big one more time for me. So something I thought was just unique, or maybe remarkable: it says free energy theorists relate the notion of entropy to the information-theoretic quantity of surprise. But I think this was done before the free energy principle arrived. In information theory, it's Claude Shannon who makes the relationship between information and entropy and surprise. So I don't think that's something the FEP started doing. Well, two themes we'll return to today and again and again: first, this shouldn't be a paper-versus-paper battle. This isn't a battle royale for whose hot take will reign supreme. We should be collaborating on bigger research agendas.
And so clarifying the contributions that different people have made will help demystify and make active inference and the FEP more accessible — because some claims or contributions were made, like, 100 years ago, not that that makes them more valid, but they weren't made recently. Sometimes when we only see recent citations, it can be easy to get swept up. But actually, the authors of this paper do a good job of going back to some original citations. These are big, eternal questions. Here's a fun paragraph in the paper — I reformatted it slightly — because it shows that we're in synchronous and asynchronous discussion with Friston et al. So here is, like, him thinking-slash-saying — this is a pseudo-Friston claim, not actually his quote: by finding a suitable attractor in the dynamic model, life scientists would be able to gain understanding of the workings of dot dot dot dot dot. Basically, this is the dream of active inference: that if we had attractors, phase spaces, all these terms coming together, then we could just make it work for diverse biological systems. And then the authors of the paper write: to better understand what this means, let's examine how such idealized models are developed and why free energy theorists opt to build their models starting from concepts in physics. So this is, like, the dream of active inference and physical groundings for biological systems. And then the authors do the right thing, which is to steelman the argument — to investigate what its actual strengths are, and why somebody would want to make that argument, or why this is something that we're motivated to pursue, instead of just immediately critiquing it as, like, not an important thing to want. Okay. All right. So here is, in the red, a quote from the paper. So Friston et al. 2006 — way long ago, way, way long ago; were you even born yet? Oh, please. Oh, please.
They ask us to imagine a snowflake endowed with wings. This winged snowflake could use its wings as solar reflectors or fans and exchange energy with the environment for a much longer period than we would expect of an inanimate piece of ice to keep going under identical circumstances. Unlike familiar snowflakes, the winged snowflake can choose actions, which are the means by which different outcomes are brought about in different states of the environment. Its wings allow the snowflake to bring about outcomes such as lowering its core temperature in response to an increase in air temperature. So let's go to Friston 2006, and this is figure one. On the left is, like, the traditional snowflake model. It doesn't have any action affordances, so it's just being tossed around in the soup, and it freezes when it's cold. And then when the temperature crosses this critical phase boundary, it melts, and the system fails to persist. So it's like, you know, checkpoint: are we good with systems persisting versus dissipating? This is kind of our simple physical example — a piece of ice freezing in the sky and melting. All right, now let's imagine the ice froze in a way where it had wings — slightly different colored wings; I wondered what the symbolism was here, but it has these two different wings. And that allows it to engage in action on the environment. So for example, if it detects or predicts — or acts as if it's predicting — that it's going to head towards a warmer area, like, you know, lower down or something, all of a sudden it can flap a few times and get into a colder area. So here this snowflake just falls, but this one is able to flap and stay cold. And so through action, it's able to persist longer. It's precluding a phase transition from solid to liquid through this active exchange with the environment. So that's the winged snowflake model.
That's kind of a nice thought experiment from Friston to help us understand the steps from mere physical system to systems that start to engage in actions on the environment, especially actions that keep the system persisting. Anything here? Okay, here's figure three from that same Friston 2006 paper. And this is really an awesome figure, because it shows some concordances with active inference, even though it's not really framed in those terms. So that's why we need the timeline — like, is active inference even mentioned in this paper in 2006? It might not be, actually. But we can see there's this perception-action cycle. And this is related to a few other kinds of action-perception cycles — footnote, paper incoming. We can think of this as like a bowl, and the ball is rolling to the bottom of the bowl; it's an image we return to again and again. Now, the y-axis is the free energy, and here's the minimum possible free energy. The system starts here; there's a change in the environment; that causes a mismatch. That mismatch can be reduced through two different types of updates of a model: one is an action and the other is a perception. Which order they happen in — details, details, details. But the idea is that at each loop, there's an environmental change, which is then renormalized, or returned to something, through perception and action. Which relates to active inference: Hohwy says that the two ways free energy can be minimized are through updates of the generative model — like learning or parameter updating — or through action. How do we reduce surprise? Inference and action. Why reduce surprise? Because if we have an optimistic generative model, we don't want to be surprised given our model of where we're living. So we act and we infer so that we persist. Any comments here?
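That perception-action loop can be sketched numerically. This is not the authors' model or Friston's figure, just a minimal toy, assuming a 1-D Gaussian generative model with unit precisions: free energy is a sum of squared prediction errors, perception does gradient descent on the internal state, and action does gradient descent on the sensory input itself.

```python
def free_energy(s, mu, prior):
    # Variational free energy for a toy 1-D Gaussian model (unit precisions):
    # sensory prediction error plus deviation of the belief from the prior.
    return 0.5 * (s - mu) ** 2 + 0.5 * (mu - prior) ** 2

def perception_step(s, mu, prior, lr=0.1):
    # Perception: gradient descent on F with respect to the internal state mu.
    dF_dmu = -(s - mu) + (mu - prior)
    return mu - lr * dF_dmu

def action_step(s, mu, lr=0.1):
    # Action: change the sensory input itself toward the prediction.
    dF_ds = s - mu
    return s - lr * dF_ds

prior, mu = 0.0, 0.0
s = 2.0                       # an environmental change pushes the senses away
f_start = free_energy(s, mu, prior)
for _ in range(200):          # each loop: perceive, then act
    mu = perception_step(s, mu, prior)
    s = action_step(s, mu)

# Both update routes drive free energy back toward its minimum.
assert free_energy(s, mu, prior) < f_start
```

Both the learning rate and the Gaussian form are illustrative choices; the point is only that the same scalar quantity falls whether the model moves toward the data or the data are moved toward the model.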
But it's just cool — like, the input could be the state of chemoreceptors. Already we're going from these sort of physics formalisms and dynamical systems formalisms towards, oh yeah, you know, it could be chemical and biological. But was it justified in 2006? Well, looking back on it, not as much as it is justified today. And what will they say in 2033, looking back on this? So it's all work in progress. Okay, so this is from page six of the paper. I won't read the full quote, because we've kind of gotten at this trade-off between utility and accuracy and generalizability. But the big questions here are: what systems is the FEP modeling or describing? Where has it been used? Where could it be used? Where shouldn't it be used, or can't it be used? How good is the FEP for modeling or describing these different systems? How good could it be? And what would it take to get it there? And then also, an important question for those of you who are FEP or ActInf fans: when you have a challenging conversation, take a step back and ask your colleague, what models do you prefer with similar or different explanatory scope? Just, what else is out there? Because it really helps ground the discussion in what that person's understanding or preferences are. And a lot of times, people will come up blank. They'll say, I'm critiquing your unified model of perception, cognition, action. Okay, great, critique away. That's what I'm doing too. What do you prefer? Oh, nothing. So it helps us stay positive about the contributions that we're making to this field, while also recognizing the discrepancy. And that combination of recognizing discrepancy and taking action is active inference. This is funny. This is a good contrast to the Sims paper that we just did. And I feel like at the end of the last livestream, we were just talking about thermodynamics and information theory and the relationship between those two things.
But I always want to take the FEP and use it for, like, non-biological systems. I think I veer toward panpsychism more than most people. And I have a hard time with the line between mere active inference and adaptive active inference — or, like, the late-evolutionary story, whatever: you know, the brain as the ultimate source of cognition, the organ capable of cognition. And so, like, would the FEP apply to a set of molecules in a beaker? It doesn't have preferences. Do molecules prefer to be in an equilibrium state? Or how does that work out? Like, I want to take the FEP and use it in systems that maybe it's not designed for. I don't know. I tend to want to go the other way. Makes sense. Here's that adaptive versus mere active inference slide from livestream 30. Perfect segue. And so here's a question from FEP-landia: what makes biological cognition special or different? We're not going to go too much into it here — we talked about it a ton in livestream number 30, and it was just super interesting. To what extent does it matter that it can engage in counterfactuals? Does the temporal depth matter? But if we're on the outside, how do we know what the temporal depth is? Is it the temporal depth it acts as if it's cognizing? Where does energy dissipation come into play? So these are big questions. What is cognition? Is there just one kind of cognition, and biological systems do a special combination of it — different at, like, the decimal point, but not at the top level? Or are they really different in type? Or is it such a great quantitative difference that it becomes a difference in type? Big questions here. More from the paper: active inference models are phase space representations of biological systems. So that's like a full-stop moment. This isn't what biological systems are; it's a phase space representation of systems. Here we're talking about biological systems.
As forming expectations over observable external states and inferring policies — state-action mappings — that minimize the expected free energy of those states under a generative model in some predefined Markov decision process. By minimizing expected free energy, the modeled system would attain a non-equilibrium steady state, and so it would maintain, in some sense, a low entropy probability distribution over its states. So what is active inference? What topics is it related to? What is it based upon? We look at that ball in the bowl, and it's easy to say, well, the imperative that's pushing the system forward is that the ball wants to be as low down as possible, and it's going to be minimizing its potential energy that way. Another story about the bowl is that the ball is going to end up — or act as if it's going to end up — with the tightest bound through time on how far left and right it goes. Now, that also takes you right to the bottom of the bowl, but you didn't have to mention that you want it to go down. It was just that the desire to minimize uncertainty led to the bottom of the bowl in that structure. So how do we think about some of these different ways? And what are the physics metaphors? And then, an assumption from the paper: an assumption made by the active inference models developed by free energy theorists is that target biological systems can be represented as random dynamical systems. What does this assumption mean, exactly? And what are the assumptions of ActInf? What are the assumptions of other frameworks? Does ActInf have the same assumptions as other frameworks? Does it have fewer, or different? Does it have more assumptions? Are there assumptions of ActInf that are consequences of other ideas, or vice versa? So these are just awesome questions that the authors are raising and that their work is, like, a jumping-off point into. Good. Good.
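The "inferring policies that minimize expected free energy in a predefined Markov decision process" part can be made concrete with a toy sketch. All the numbers here are made up for illustration, and the decomposition used (expected free energy as risk plus ambiguity) is one common textbook form, not necessarily the exact functional any given paper uses:

```python
import numpy as np

def kl(p, q):
    # KL divergence between two discrete distributions.
    return float(np.sum(p * np.log(p / q)))

# Toy generative model: 2 hidden states, 2 observations, 2 actions.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])      # p(o | s): a mildly noisy likelihood
B = [np.array([[1.0, 1.0],
               [0.0, 0.0]]),    # action 0: transition to state 0
     np.array([[0.0, 0.0],
               [1.0, 1.0]])]    # action 1: transition to state 1
C = np.array([0.95, 0.05])      # preferred distribution over observations
qs = np.array([0.5, 0.5])       # current belief over hidden states

def expected_free_energy(action):
    qs_next = B[action] @ qs    # predicted states under this action
    qo_next = A @ qs_next       # predicted observations
    risk = kl(qo_next, C)       # divergence from preferred outcomes
    H = -np.sum(A * np.log(A), axis=0)   # entropy of likelihood per state
    ambiguity = float(H @ qs_next)       # expected observation uncertainty
    return risk + ambiguity

G = [expected_free_energy(a) for a in (0, 1)]
best = int(np.argmin(G))        # policy selection: the EFE-minimizing action
```

Here action 0 leads toward the state whose likely observation matches the preferences C, so it scores the lower expected free energy; full treatments also handle multi-step policies and belief updating, which this one-step sketch omits.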
So one question that I always have: active inference and the free energy principle presumably have the same assumptions, but do they have to? And so that's always something I wonder. How has active inference developed? This is a fun paragraph. Active inference modeling evolves rapidly. Eats, shoots, leaves. As Dr. Maxwell Ramstead helpfully reminded us in conversation, up to roughly 2012, the dynamics of internal and active states in active inference models were determined through gradient descent on variational free energy. So let's call that phase one, before 2012. We need a real timeline for this. This is just informal, but this is why we need the timeline — to figure out if we're truly on the best timeline. From around 2012 until recently — like 10 minutes ago, or 2019 — active inference models have been equipped with algorithms for policy selection, which evaluate the average free energy expected under each policy. So expected free energy, not just variational free energy. In their latest work, the active inference community have endowed their models with a recursive expected free energy functional, which enables the models to represent target systems' higher-order counterfactual beliefs — beliefs a system has about the beliefs it would have as a consequence of action. So that's deep parametric active inference, sophisticated active inference — that's what we spoke about with the mental action paper, Sandved-Smith et al. Okay, so those are three waves that are important to distinguish in the development of active inference models. Because these modeling advances all seem to make assumptions about the ergodicity of biological systems and the biological meaning of dynamical attractors, the points we make in what follows may help explain why hashtag-all active inference models thus far tend to trade off realism for maximal generality.
So they're respecting the developments in the framework and encompassing it — summarizing it maybe accurately, maybe not — and then saying: because they all make some common assumptions, they're all subject to this trade-off of realism for generality. Or is it utility for generality? Which one was it again — realism, utility? What are the trade-offs? So what are the waves of active inference? How does ergodicity come into play in these different waves? I know there's a lot on there, but this is really an important piece: to understand that there have been recent developments, and yeah, it's still in progress. Okay, here there are some technical details that it'd be awesome to just learn more about from the authors, or from anyone else who sees this and gets excited by it. This is about some of the physics formalizations of noise and dynamics and phase spaces. And a term that I was curious about is, like, a cocycle. So it's kind of taking that idea of returning to an attracting state and maybe connecting it to some other ideas, like a cocycle. But these are just things to learn about and discuss. And then this is sort of the classic Bayesian network, Markov blanket. Is there a Markov blanket that picks out the mind? Is the network of autopoietic processes in biological systems identical to the Markov blanket of the system? What is the metaphysical status of Markov blankets? And then the authors write: these debates in the metaphysics of the FEP can be left to the side, since we're not interested in metaphysics here. Hmm, I thought we were having a discussion that was meta to physics. So where is your metaphysics now? Okay, anything else there? Action in the loop. Given this setup, free energy theorists are interested in finding the active states such that the system is confined to a bounded subset of states and remains there indefinitely. So we want our internal states to be at a certain level — like thinking about our temperature being at a certain value, or anything else.
And now we want to talk about the control theory question, which is: what active states should the system engage in, conditioned on internal and sensory states, such that the internal states do remain bounded within a subset of states? More technical details. "If one assumes that the dynamics of the target organism is ergodic" — that's grammatically incorrect. It should be: if one assumes that the dynamics of the target organism are ergodic. That's why you couldn't read it. That's why the compiler failed. Then random dynamical attractors can be associated with the ergodic density, a distribution p(x | m), which is proportional to the amount of time each state is occupied by the organism. So here's where we return to their definition of the ergodic density as the invariant probability measure. There are a few other terms that I know some listeners know about — whether they'll help us is another question. But what does it mean to talk about the Lebesgue measure of the attractor? How is that connected to dynamical systems and entropy, and whatever this integral is calculating in the limit of time approaching infinity? This is kind of the thing we see in space and in time — like, one over space: what's the tiniest infinitesimal volume relative to all of space? Or what's the tiniest, tiniest moment relative to all of time? So how does that connect to action and control theory? More on that later. But even this slide raises a question. Here it says "such active internal states in an organism." And maybe it's the way the diagrams are usually drawn for active inference, but I don't ever think of active states as internal states. There are active states, sensory states, internal states, external states. And I always think about the sensory and active states as on the boundary between internal and external. So absolutely, there are internal, external, sensory, and action states.
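The claim that the ergodic density is "proportional to the amount of time each state is occupied" can be checked on a toy system. This is my own illustrative example, not from the paper: a two-state Markov chain whose invariant distribution is known in closed form, compared against the time-occupancy fractions along one long trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-state Markov chain as a toy "organism" wandering its state space.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transition probabilities, rows sum to 1

# Invariant (stationary) distribution, solving pi P = pi: here [2/3, 1/3].
stationary = np.array([2 / 3, 1 / 3])

# Estimate the "ergodic density" as the fraction of time spent in each state.
T = 100_000
x, counts = 0, np.zeros(2)
for _ in range(T):
    counts[x] += 1
    # Sample the next state from row P[x].
    x = 0 if rng.random() < P[x, 0] else 1
occupancy = counts / T

# The time average matches the invariant measure, as ergodicity requires.
assert np.allclose(occupancy, stationary, atol=0.02)
```

The chain and its numbers are arbitrary; the point is only that, for an ergodic process, occupancy fractions along a single long run recover the invariant probability measure, which is exactly the p(x | m) role in the quote.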
So what are the active internal states? There are a few ways of thinking about it. The particular states are the sensory and action — the blanket states — and the internal states. So maybe that's what's being referred to, or we can — well, when we're talking about mental action, I guess, then the active states are internal. Yes. Between the past and the future, but still internal. Like in mental action, right? Yes. With this definition of active states in hand — so I believe we're now working with the Friston ontology, with action being not the internal states, but who knows — Friston 2012 finally formulates the principle of least action as follows. The principle of least action: the internal states of an active system — so that wouldn't be the action states; that would be the internal states of a system with action on the interface — minimize surprise L, such that the variation δS of the action S with respect to its internal states r(t) ∈ R vanishes. And then, classic future Friston. Look at that. The 2100s are going to be just so good. The FEP would suffice to derive the principle of least action, and is defined by Friston — in the past and the future — as follows. The free energy principle: technical, technical, technical. If the internal states minimize free energy, then the system conforms to the principle of least action and is an active system. This will be good to unpack and learn more about from the authors, as well as from those who study these areas. Because it's something we all ask: wait, least action — but then they're engaging in action. Couldn't they just do less action? Wouldn't that be more of a fulfillment of the least action principle? But what is it actually? It's a stationary principle; it's an invariance. So there's a little bit more nuance than just staying on the couch. And I haven't heard this term "active system" before.
I've heard, like, active inference system or enactive system, but this "active system" is new to me. And indeed, Friston was a very forward-thinking guy — he is a very forward-thinking guy, all the way into the 2100s. When expressed in these terms — in the terms of physics — the FEP provides us with an account of biological persistence as a generic type of stability of random dynamical systems. So then they talk about Friston 2012 and 2013, some simulations. And then here's where the authors pose their challenge and say: let's then put into better focus some of the difficulties involved in this challenge and ask, is there any reason to believe that all biological systems at any scale can be modeled as random dynamical systems? And how does this maximal generality bear on the degree of realism of the models? So, a few discussions ago we had realism, we had utility, we had instrumentalism — we had all these different axes. And how are we going to manage that trade-off? Because sometimes the most, quote, realistic description of a system isn't the most useful, or it's not the most general. So there are a lot of trade-offs happening. How does model realism relate to generalizability? And how does that relate to the utility of the model, as well as other attributes, like its simplicity? That's like the relationship between the map and the territory. If you wanted the territory, then you wouldn't need a map. Yeah. And then it's almost like the compass, or a piece of paper that just says "you are here." It's so general — you can always look down, and that map is at least not going to be telling a lie. "This is your left," or something like that. But is that useful? It generalizes really well, but have we lost the whole question of navigation in the desire to find the simplest map? So something that I wonder about: are biological systems random dynamical systems? Like, what's a non-random dynamical system?
Like, a dynamical system as opposed to a static system — static dynamics? And so what other kind of system could a biological system be? If it's not a random dynamical system, what is the alternative hypothesis? Just like what you said about, okay, well, what is the better model for cognition — if not active inference, then what model is better, right? But what about for a biological system? If it's not a random dynamical system, what is the better model for a biological system? Yes — dynamical through time, dynamics; random, used in a lot of different ways. But is that statistical? Is it stochastic? Where does determinism fit into play? Well, let's talk about it. So here's section four. And this contains some of the main body paragraphs and contributions of the paper. It's the section on modeling biological active states as random dynamical attractors. Free energy theorists' account of biological systems' persistence relies on three main modeling assumptions: one, ergodicity; two, the existence of Markov blankets that imply a partition of states into external and internal; three, the existence of random dynamical attractors. In this section, we concentrate on assumptions one and three, and on the more fundamental challenge of defining phase spaces for target biological systems in the life sciences. So the three core ideas that they're saying underlie FEP work in biological systems are ergodicity, Markov blankets, and random dynamical attractors. Are those the main three? That's a really important question. This paper is going to focus on the first and the third, not going as much into the Markov blankets discussion, because there's been a huge amount of discussion on Markov blankets, and we've talked about them separately. So they're going to focus on ergodicity and random dynamical attractors. That's 4.2 and 4.3. But that's going to be introduced by a discussion on phase spaces in biology. So that's what the authors have set out to do.
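For the "what even is a random dynamical system?" question, here's a minimal concrete instance — my own toy, not the paper's: a deterministic contracting flow plus noise (an Ornstein-Uhlenbeck process). The hallmark of a random attractor is that trajectories started far apart, but driven by the same noise realization, collapse onto the same noisy trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal random dynamical system: Euler-Maruyama simulation of
# dx = -theta * x * dt + sigma * dW. The drift pulls every trajectory
# toward x = 0; the noise keeps it jittering around that point.
theta, sigma, dt, steps = 1.0, 0.3, 0.01, 10_000

def simulate(x0, noise):
    x = x0
    for dw in noise:
        x += -theta * x * dt + sigma * dw
    return x

# Drive two very different initial conditions with the SAME noise path.
noise = rng.normal(0.0, np.sqrt(dt), size=steps)
a = simulate(+5.0, noise)
b = simulate(-5.0, noise)

# Convergence onto a shared random trajectory: the random attractor.
assert abs(a - b) < 1e-6
```

A "non-random" dynamical system is just this with sigma = 0, which answers the alternative-hypothesis question in one direction; the harder question in the paper is whether real biological state spaces support any such fixed equations of motion at all.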
Now, shameless lab plug: we're working on an active inference ontology in a participatory way that includes references, definitions, and translations into multiple human languages. It will be many things, and it will scaffold educational content that will make this entire field more accessible and rigorous. So, terms like phase space — we talked about state space on September 13. Every week, we're bringing together different perspectives, sharing resources, and working towards common understandings in this document, to really get at: what are the core assumptions of the FEP or ActInf? What do these terms mean? What would somebody need to learn if they wanted to understand more? So it's really important work, and we're definitely going to draw upon the authors' contributions here as we work to scaffold an active inference ontology — so that we don't get tied up in active state, action state, systems with action. We can be clear about how we're talking about this field, and in fact, we really have to be. Anything on our ontology document before we continue back to the paper? Sounds good. It has been a fun project with many contributors. So let's talk about phase spaces in biology. We introduced this in the earlier keyword section. The phase space of a system is a giant space formed by the relevant degrees of freedom of the target system — the space of the possible. A point in the phase space corresponds to a microstate, which completely determines the system in terms of the variables and parameters required for analyzing its dynamics. And the dynamics are determined by the equations of motion, which describe the evolution of points, or trajectories, in phase space. So here's that phase space kind of model. And then the authors are going to suggest that there are at least three facts — cold, hard facts — that make phase spaces hard to apply in biology. Because first, biological systems have many degrees of freedom. It's not just a pendulum.
Second, the symmetries — which are invariant-preserving transformations — underlying the observed regularities in biological phenomena are more unstable than in physics. Okay. And third, the phase space for many real-world biological targets is much less stable than that for the kinds of target systems studied in physics. So what do you think about that, or about phase spaces in biology? Yeah, I mean, I think I mentioned the multi-dimensionality in the beginning when we were talking about phase spaces in biological systems. And it comes down to that with any system, right? You can always include too many parameters — you can have the territory, or you can have a dramatically reduced map. And finding that correct balance is hard. If we're looking at, say, states of a system and trying to predict the next state: are all of the factors conditionally independent, or is there some mingling of the variables there? Is it okay to just model a couple at a time? And what if we don't know whether there are some confounding variables in the system, too? So it's a difficult task — but how do we start to get a grip on it? It's hard. Yes. And a big question in the background here is: does this matter for applications? Yes, these are in-principle critiques of doing dynamical phase space models of biology. But we know that dynamical systems models of neural systems, of hormonal systems, of ecology — these are all really useful, and people use them every day. Gene regulatory networks — don't forget those. How could we forget? But does something having these sorts of fundamental critiques prevent it from being useful? Not always. There are really nice points here about the open-endedness of biological phase space. This is the work of Longo, Montévil, and Kauffman 2012, who said that in physics, one can pre-state the phase space for target systems based upon stable invariances and symmetries.
So we can say, like, the ball is going to be rolling around in the box — those are the boundaries, and here are the invariances. Historical processes studied in evolutionary and population biology — all these processes that we know and love in biology — involve symmetry breaking, which makes the phase spaces structurally unstable, ever-changing, and unpredictable. Now, unpredictable is a little bit of a tough one, because that's a continuum. What if you can predict 1% of the variance in great evolutionary transitions? Is it unpredictable? Or is it predictable? You know — is it half full or half empty? And then this is from the paper: because we cannot pre-state the ever-changing phase space of biological evolution, we have no settled relations by which we can write down the equations of motion for the ever-new biologically relevant observables and parameters that we cannot pre-state. More: we cannot pre-state the adaptive niche as a boundary condition, so we could not integrate the equations of motion, even were we to have them all. So it's like there's the ball rolling around on the billiards table, but now there are mutations in the genome of the ball, so that totally new mechanisms are arising in a way that might have a historicity. Oh yeah — and the table is mutating, and it's made up of other organisms. So it's easy to see how we get lost in the complexity, yet adaptive action exists and useful models exist. So how do we reconcile the fundamental open-endedness and creativity and novelty of evolution with our desire to explain some portion of the variance so we can act effectively? Well, this relates to a question that I brought up to you yesterday, Daniel, about, like, what about discretizing the space? And so I know that some of the physics principles rely on infinite time — like, ergodicity specifically relies on the possibility of infinite time.
And this is all stuff that's, like, kind of beyond me, but what if we have a temporally restricted moment? Like, what if we're just interested in now and five years from now? Or what if we're just interested in now and five years ago? And so in each moment, is this symmetry breaking happening, or at what scale? And so is there perhaps a chunk of time, a small discrete chunk of time, from a microsecond to 100 years, where these types of inferences can be used, where the equations are applicable, and where we can use the model under the, you know, specification that this model is not good for the next two million years, but it will last you 20, right? Like, so, it's a model of the phase space of blood glucose versus pH in a person as measured by the devices we have. It's not of blood sugar over a million years, because that doesn't even make sense given the system of interest. So how do we reconcile pragmatics and utility with these really important philosophical questions? So that was word one, phase spaces. Word two, ergodicity. One of the assumptions of the FEP is that all biological systems are ergodic. So, paging Ramstead et al. This is something we really want to unpack and understand: what does it mean, ergodicity, or more precisely, metric transitivity in random dynamical systems? Like, what is the math and the physics? And is that an assumption of the FEP? What does it mean to assume that biological systems, quote, are ergodic, which is different from saying that biological systems at some spatiotemporal scales can be modeled using statistics that involve ergodic assumptions? So there's a lot of nuance there, like whether we're making realist claims about how biological systems are, or whether we're using statistics in an instrumental fashion to make useful models of organisms. But the big questions are, like: what kinds of systems are ergodic, and how? What are the implications of the ergodic assumption?
Is it ever not an assumption, but actually, like, a consequence of some empirical grounding? There's the ergodic hierarchy. So ergodicity, it's not just, like, yes or no; there's a whole continuum with a lot of complex math. And then, just for everyone: how or where does this matter for modeling under the FEP or active inference in real-world situations? And they did call out in the paper a reference to Ramstead and local ergodicity, which is, for a biological organism, something like the boundaries in which the organism can exist. Anyway, I think, Maxwell, come explain it to us, please, but the authors did mention that. Here's a quote from the paper: Birkhoff (1931) demonstrated that if a dynamical system is ergodic, then the infinite time limit exists and coincides with a phase average for almost all initial conditions, little x, like the realized conditions within the big X, like the space of the possible. This is called the ergodic theorem. And that is going to get linked to everything we were just discussing: surprisal minimization as ergodicity maintenance. And then here: the proof of the principle of least action is straightforward, okay, and rests on noting that action and the entropy of the ergodic density over external states are related via the ergodic theorem. So, PNAS, Birkhoff, 1931. Pretty interesting paper, and it's amazing to think about, you know, the 100 years of mathematics after this, but this is kind of that initial formalization of: what if the space average and the time average were related to each other? So it's a cool topic. We've brought up ergodicity before, and it would be really helpful for people who know some of the technical details to help work with it. This is analogous to how they brought up, like, here are the facts that challenge phase spaces in biology; so here are some facts that challenge the application of ergodic theorems to real systems.
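To make the time-average-versus-phase-average idea concrete, here is a minimal sketch. The two-state Markov chain and all its parameters are invented for illustration; this is not the paper's formalism, just the simplest ergodic system where Birkhoff's claim can be checked numerically: the long-run average along one trajectory matches the average over an ensemble of trajectories.

```python
import random

def fraction_in_state_1(p_stay, steps, rng):
    """Fraction of steps a symmetric two-state Markov chain spends in
    state 1. The chain is ergodic: it keeps revisiting both states."""
    state = rng.choice([0, 1])
    visits = 0
    for _ in range(steps):
        if rng.random() > p_stay:
            state = 1 - state  # hop to the other state
        visits += state
    return visits / steps

rng = random.Random(0)
# Time average: one long trajectory (the infinite-time limit, truncated).
time_avg = fraction_in_state_1(0.9, 100_000, rng)
# Phase (ensemble) average: many independent trajectories, averaged.
phase_avg = sum(fraction_in_state_1(0.9, 1_000, rng) for _ in range(200)) / 200
print(time_avg, phase_avg)  # both close to the stationary value, 0.5
```

Because the chain is symmetric, its stationary distribution puts probability 0.5 on each state, and both averaging procedures recover that same number.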
And there are good reasons, the authors say, to think that the theorem cannot be realistically applied in the domain of biology. So, first off, it's not enough to show that the infinite-time limit exists; it also has to be shown that the convergence rate is plausible. We're going to talk about that in a second. Second point: comparing biological to non-biological systems, it's harder to estimate the amount of time that they need to kind of asymptote, because of the non-uniformities. Another reason, and this is going to be one that I know people will have things to say about, why ergodicity cannot plausibly be assumed in biology, is that the ergodic theorem requires the dynamics of the system to be ergodic, which means that eventually almost every point will visit every measurable region in X. We've talked about that before: biological systems never revisit the exact same state if you think about every molecule constituting the phase space. But if the phase space is just temperature versus blood sugar, yes, you pretty much will revisit some points. But will you revisit every point? Are you going to reach a value of negative 5,000 sugar and 900 degrees Celsius? No. We're talking about bounded states. And that's kind of a nuance. And then... Well, it's also that the arrow of time is moving forward, right? It points in one direction. And so, like, you know, at age 40, my skin will never have the collagen it had at age four. Like, that's never going to happen again, right? So, I mean, there are some things that just decrease as you age. Yes. So there are, yeah. Let's look at this Palacios 2018 paper. So this is, I guess, an earlier paper by the first author, so it kind of gives a little context. "Had we but world enough, and time... but we don't!" Exclamation point, colon. "Justifying the thermodynamic and infinite-time limits in statistical mechanics." Great title, well-punctuated, a bevy of punctuation. And this is just a very interesting paper that we skimmed over.
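The "visits every measurable region" requirement is exactly what fails when a system's state space splits into pieces that never mix. Here is a made-up four-state toy (not from the paper) where the walker is trapped in one of two disconnected components, so the time average depends on where you start and never matches the whole-space average:

```python
import random

def time_average_of_state(start, steps, seed):
    """Random walk on four states {0,1,2,3} where {0,1} and {2,3} are
    disconnected components: the walker hops within its pair but can
    never cross over, so the dynamics are not ergodic on the full space."""
    rng = random.Random(seed)
    state, total = start, 0
    for _ in range(steps):
        component = (state // 2) * 2             # 0 for {0,1}, 2 for {2,3}
        state = component + rng.choice([0, 1])   # hop within the component
        total += state
    return total / steps

low = time_average_of_state(0, 10_000, seed=3)   # trapped in {0,1}: near 0.5
high = time_average_of_state(2, 10_000, seed=3)  # trapped in {2,3}: near 2.5
print(low, high)  # neither reaches the whole-space average of 1.5
```

This is the "never revisit" worry in miniature: no single trajectory explores the full phase space, so Birkhoff's theorem gives no license to swap time averages for phase averages.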
And I'm sure there's a ton to say; it'd be great to speak with the author about it. But they argue that there's an important consequence for the philosophical literature on infinite limits. And that's basically that it's not just about, like, where you are; that's where you are in phase space. It's about where you're going; that's the asymptote in infinite time. And it also matters how fast you're going to get there. So it's like, the asymptote for a pane of glass is, like, a puddle on the ground, because it's a liquid, or it's a glass. But that might take so long that over some time scale it can be modeled as if it's not changing, for example. So that's kind of a cool notion: it's not just about where a system is or where it's being attracted towards, but actually the dynamics of the approach matter a lot. And that's certainly true in biological systems. The assumption of ergodicity to give a definition of equilibrium states is a controversial assumption even in the domain of physics, especially due to the existence of physical systems in equilibrium that have proven to be non-ergodic. So, I didn't find this Palacios 2021 citation in the bibliography. This is the reason why physicists and philosophers have offered alternative approaches to equilibrium that do not rely on ergodicity. Even philosophers who defend a quasi-ergodic approach to equilibrium recognize that this definition of equilibrium may plausibly apply only to a restricted class of systems, such as gases. So, sounds bad. But then, okay, the authors say it doesn't mean, of course, that ergodicity, or a weaker notion of local ergodicity, may not sometimes be a reasonable assumption to make. So, in other words, it might be useful for specifically defined systems anyway.
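The pane-of-glass point, that the rate of approach matters as much as the asymptote, can also be shown in miniature. The sketch below uses two invented two-state chains with the same long-run (stationary) average of 0.5; over a fixed finite window, the fast mixer has already converged while the slow mixer is still biased toward where it started:

```python
import random

def finite_time_average(p_stay, steps, seed):
    """Fraction of steps a symmetric two-state chain spends in state 1,
    starting from state 1. Both chains share the same stationary value,
    0.5; they differ only in how fast they converge to it."""
    rng = random.Random(seed)
    state, visits = 1, 0
    for _ in range(steps):
        if rng.random() > p_stay:
            state = 1 - state
        visits += state
    return visits / steps

fast = finite_time_average(0.6, 5_000, seed=1)  # mixes within a few steps
# Slow mixer: it flips only about once per 10,000 steps, so 5,000 steps is
# not nearly enough; average over runs to smooth trajectory-to-trajectory noise.
slow = sum(finite_time_average(0.9999, 5_000, seed=s) for s in range(50)) / 50
print(fast, slow)  # fast is near 0.5; slow is still pulled toward its start
```

So knowing the attractor, or the stationary distribution, tells you little about what a finite observation window will show; you also need the convergence rate, which is exactly the authors' point.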
However, the fact that one cannot extrapolate the assumption of ergodicity to the behavior of all relevant systems in biology casts doubt on the project of formulating a principle that depends on this assumption and is both maximally general and biologically realistic. So, ideas that we've heard a little bit about: how are we going to finesse realism and generality and grounding in physics, when biological systems might be grounded in things where those assumptions are not fulfilled? So that's 4.2. All right, 4.3, random dynamical attractors. The application of the FEP to account for the real-world behavior of biological systems is even more challenging, beyond 4.1 and 4.2, because one also has to show the existence of an attractor, and also justify why certain attractors, but not others, possibly denote homeostatic states in the target. There might be multiple attractors, and then there's this question of chaotic systems. Do you want to add anything on these? Basically, it's non-trivial to say that there's an attractor, and that it's the right one, and that it results in adaptive behavior. There's a lot that goes into it. So, but we can learn more; we'll hear more questions later. And then, just to go quickly to five and six. So, five, discussion. In the previous sections, we've analyzed three challenges involved in the justification of the free energy principle. One was pre-stating phase spaces for most biological systems. Two was the lack of warrant for making ergodicity assumptions in biology. And three is the challenge of identifying homeostasis with attractors in a phase space, even if one and two were accomplished. Based on our analysis, one overall conclusion is that, because of a fundamental mismatch between its physics assumptions and the properties of biological targets, model building grounded in the FEP achieves maximal generality, yay, for minimal biological plausibility. So, how is that going to work?
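Whether an attractor exists, and which one, is what's hard to establish for real biological targets; in a toy system you can simply build one in and watch trajectories collapse onto it. Here is a noisy discrete-time relaxation toward a single point attractor, with all parameters invented for illustration, not drawn from the paper's random dynamical systems formalism:

```python
import random

def settle(x0, steps=2_000, pull=0.5, target=5.0, noise=0.05, seed=0):
    """Noisy discrete-time relaxation toward a point attractor at `target`:
    each step pulls the state partway back and adds a small random kick."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += -pull * (x - target) + noise * rng.gauss(0, 1)
    return x

starts = [-100.0, 0.0, 42.0, 1e6]
finals = [settle(x0, seed=i) for i, x0 in enumerate(starts)]
print(finals)  # all end up hovering near 5.0, regardless of starting point
```

The hard part the authors flag is the inverse problem: given only observed trajectories of a real organism, showing that such an attracting set exists, and that it corresponds to homeostatic states rather than, say, death or some other absorbing regime.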
As we raised earlier, how does generality versus plausibility get traded off for real systems in the FEP, in active inference, and in what other frameworks? Let's always have another framework in mind, if people want to or can think of one, so that we're not just asking whether it's, like, as good as it could be, because, here's a hint: it's not as good as it could be. We're always going to see ways to improve the theory and the literature, if only by organizing other work and making it more accessible. So, what is its relative biological plausibility? What is its relative generalizability? And what are we relativizing to? Because, as they say, like for other idealized scientific representations, even if the FEP involves simplifying distortions, it does not follow that it must be minimally realistic and have limited explanatory or predictive power. So they're saying, it's kind of like the enemy of my enemy is my friend: just because you idealize and simplify, it doesn't mean that you're not going to get explanation or prediction. So that's their discussion. They fall back to the three key points that they think challenge the usage of the physics ideas in biology that are postulated to be underlying the FEP: phase spaces, ergodicity, and random attractors. And they're concluding that there's a mismatch between the physics assumptions and the properties of biological targets. Here in the implications and predictions, they write: if the FEP or the active inference models it grounds make simplifying, distorting assumptions about the phase space of a target system, the ergodicity of biological systems, and the existence of an attracting set corresponding to homeostatic states for these systems, then these idealizations should earn their keep. So, you know, you've been a bad theory, but it's not all over. You could be a good theory. You could earn your keep, maybe, in the future. And then they write, we'll just read it.
These idealizations should allow life scientists to tractably draw some explanation and prediction about relevant biological observables in real-world systems, and assess those predictions against measurements relevant to understanding some aspect of homeostatic processes in actual biological systems, Da Costa et al. 2020. Our contention is that the idealizations made by free-energy theorists do not play these practical and epistemic roles. And so this is Da Costa et al. 2020, "Active inference on discrete state spaces: a synthesis." And here, from Table 1, are all the applications. So it's interesting, because this pragmatic and epistemic role, that's kind of the ontology of active inference; that's how we talk about policy selection. And then it's like, okay, looking at this list, I'm seeing a lot of pragmatic and epistemic value. But what is the contention? That the idealizations themselves do not play pragmatic or epistemically valuable roles? I think it'd be good to get a little clarification, because Table 1 has a lot of interesting areas, and since then, even more. What do you think about that, Blue? Yeah, it's interesting, like, the word choice of the authors here: the idealizations made by the free-energy theorists. So what are these idealizations? Like, the reliance on ergodicity and random dynamical systems theory, are these the idealizations? Or is it actually using the FEP and active inference for practical applications that doesn't play the practical and epistemic roles? So I'm just curious as to that word choice. And Steven in the chat wrote: the use of approximation science around ergodicity could give the potential to have sufficient revisiting of states to make meaningfulness plausible. How can we measure when a sufficient approximation to ergodicity exists in order to make variational free-energy inferences? So, a good question we can ask.
And as we kind of explored, revisitation is one component of ergodicity, but there's also this element of revisiting every point and of exploring the whole phase space. So, as we kind of addressed, what is the reality of ergodicity and applying those models to biological systems? And then, yeah, can we use approximations and approximation science to make it work, even if, in the infinite, perfect case, it doesn't exactly seem like it would? Like, the t-test assumes that there's equal variance between the two groups, unless you have a t-test with unequal variances. Yet no two groups in reality are going to have the exact same variance. Yet the t-test works. So how can we take that same approach and think about the systems that we're working with here? Here's their whole conclusion in a churning maelstrom. They're arguing that FEP theorists have pursued maximal generalization by relying on physics concepts. However, that means that they've sacrificed biological realism. And the danger of sacrificing biological realism is that you risk minimizing the explanatory and predictive power of those accounts for biological systems, which are homeostatic and manifestly far from equilibrium in their persistence. And their last paragraph: the FEP can perhaps be better understood as a maximally general definition of any system that persists. But this definition does not seem to provide us with any new insight into biological systems. To the extent free-energy theorists treat all biological systems at any scale as pre-specified generic objects with fixed, currently unknown equations of motion, their account risks missing all the features that make biological systems interesting kinds of thermodynamic systems. So it's like, you know, a city where there are all kinds of intersections, and then you have a grid map.
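As an aside on the t-test point from a moment ago: the unequal-variances variant Blue mentioned is Welch's t-test, which drops the equal-variance idealization by adjusting the degrees of freedom. A stdlib-only sketch, with two made-up samples chosen purely for illustration:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic and Welch–Satterthwaite degrees of freedom,
    the unequal-variances variant of the two-sample t-test."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Two made-up samples with visibly different spreads.
a = [5.1, 4.9, 5.3, 5.0, 5.2]
b = [4.2, 4.8, 3.9, 4.5, 5.1, 4.0]
t, df = welch_t(a, b)
print(round(t, 2), round(df, 1))
```

The spirit of Blue's analogy is visible here: the idealized assumption (exactly equal variances) is never literally true, but a principled correction keeps the tool usable anyway.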
And so we distort the system in a way where, all of a sudden, we thought we were generalizing, but we ended up losing touch with the system, and therefore with explanatory and predictive power. Any thoughts on the conclusion? No, I mean, I think that, you know, they have made their point pretty clearly. So I think that the FEP also has a lot of room to grow, being that it's, you know, relatively young, not quite 20 yet. So I think that, you know, hopefully some pragmatic and epistemic value will be put forth by the generalizations made by the FEP in some future model, perhaps. That's almost like a deep-time sociological value. Even a provocative theory which has limited epistemic or pragmatic value, not saying the FEP is one, could play a role in a broader social system that still gets the job done. So that's kind of a cool notion. What questions for 31.1 did you put up? So I think I've raised them all already. Like, what is the dynamic equilibrium of living systems, as the authors state here and as I stated in the beginning keywords? And it says we should pay closer attention to the fundamental differences between physics and biology. Like, what are the differences between physics and biology? Like, in the paper, they went to great lengths to explain why biological systems aren't ergodic. But then also, they made the point that in physics, they have a really hard time proving ergodicity exists at all in any physical system. So what are these fundamental differences between physics and biology that are so critical, such that it seems that, you know, the FEP kind of misses the point? And then, they refer often in the paper to active states as internal states. And I just wonder, like, I mean, I've really thought about active states as the interface between internal and external, and part of the blanket.
And so I just wonder why the active states are included in the internal states in their, you know, description of the model. So, just some technical things that I have questions about, and, you know, hopefully the authors will be here to answer them. Cool. Steven wrote: we may never know the whole phase space, as the phase space may be variable. Yes, that's what we talked about, like, the phase space of temperature: does that include impossible temperatures that the system cannot persist at? And then the fact that the phase space can change is the open-endedness of the, the Longo, I think 2012, open-endedness of the phase space in biological systems. So, for sure, that's why we can read the papers and then have specific citations and ontologies in mind, so we're not kind of talking at cross angles, because those are great points, and they were the points made in the paper. And then Steven asks: is biological realism based upon folk biological assumptions? So I think the sort of, like, folk-hyphenated-everything has come into the forefront with the discussion with Matt Sims. So yeah, what is biologically real? You know, what models are biologically realistic? Isn't that just, like, the question of what is real? So... Yeah, what is even real at all, exactly? Yeah, realism sounds good until you ask too many questions. So, what a fun discussion. So we ask all these questions, like: what does a good understanding enable? What are the unique predictions and implications of this paper? What are the next steps for the FEP and ActInf? What are the goals of this research? And then, what are we still curious about? It was a fun discussion. So thanks a ton, Blue, for doing it and for working a ton on the slides. We'll look forward to 31.1 and beyond. Awesome. Thanks. Bye. Bye.