Hello and welcome everyone to ActInfLab livestream number 34.1. It's December 7th, 2021. Welcome to ActInfLab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at some of the links here on this page. This is a recorded and archived livestream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here, and we'll be following good video etiquette for livestreams. All ActInfLab activities are participatory, so let us know if you want to join in or co-organize. We have three organizational units, education, communication, and tools, with weekly meetings for each of those units, and a ton of interesting projects to get involved in. At Coda.io, under Active Inference Lab slash ActInfLab, you'll find a searchable and sortable site where you can learn more about our past and upcoming livestreams. Today in ActInfLab stream number 34.1, we're going to be learning about and discussing this fun paper, "The free energy principle: it's not about what it takes, it's about what took you there," by Axel Constant, 2021. In 34.0, Dean and I had a great time talking about the paper, so check out the dot zero, and of course read the paper if you want to learn more. Today in 34.1, we can go over some of the key claims, look at some of the examples again, differentiate the Bayesian approach versus the free energy approach, and just take it anywhere that people think is interesting. So if you're watching live, ask questions in the live chat and we'll be keeping our eye on that. We'll introduce ourselves, go around, and then take it wherever we take it. So we can introduce ourselves, say hi, and maybe say something that was exciting about the paper that we remembered, or something that we'd like to reduce our uncertainty on by the end of this discussion.
So I'm Daniel, I'm a researcher in California, and I'm looking forward to seeing if anyone in the live chat has some questions to ask, because there are so many cool adjacencies to this paper, and just the beginnings of the thread were explored by Dean and I in the dot zero. So exploring the implications, and what people see as relevant in this paper, I think will be really exciting. And I will pass it to Dean. Thanks, Daniel. I'm Dean, I'm in Calgary. I'm just curious again to see, as you start pulling on a thread, how many things are actually there as opposed to just the thread. So yeah, I think today should be a pretty interesting day, and I'll pass it over to David. I'm Dave, I live north of Manila. I did natural language processing with big computers and studied cybernetic learning theory. And I'm excited that he's given us some simple math. Axel Constant is kind of bringing it down to those of us who stalled by the end of Calculus One. Just a sort of lead-in question, Dave, in the area of cybernetics: is there any analogous work? Anyone who said, well, it's not really about being a cybernetic system, it's about how you came to be that cybernetic system? Are there any parallels in the cybernetics literature for the kinds of things that are explored in this paper? Well, there are, yeah. In terms of not so much necessarily the origins. Well, yes, in cybernetic learning systems theory, a branch called conversation theory completely cut loose the notion of the physical embodiments of systems from the memetic or functional or conceptual or cognitive aspects of systems, in fact, even types of systems. Very much like what Richard Dawkins did 30-plus years ago in separating genetics from memetics, which in his case originally was just a proof of principle, not something he wanted to pursue especially, and it has been pursued.
In cybernetic learning theory, the individual is equated to the conversation, is equated with a type of system that originates, that persists, that merges into other systems that may very well function through many bodies. So you have distributed intelligence right out of the gate, as well as internal differentiation, internal self-definition of each individual as a conversation. And there's a lot of math: even going way back to the 60s and 70s, there were cyberneticians who took precise quantitative work very seriously and dug down very carefully on quantified attention, quantified uncertainty, resolution of uncertainty, and the qualitative consequences of deciding what's really happening, what my plan really is. Very much the sort of sudden, abrupt, and often irreversible changes that the world looked at in a lot of depth when catastrophe theory burst upon the world: a sudden, catastrophic, perhaps an improvement, but a catastrophic, suddenly relatively irreversible change. And here it is, still being worked on, in deeper and deeper detail and wider scope. So good for Dr. Constant. Pretty interesting. It makes me think about a friendship, and how maybe it's not just about what the friendship is or what it takes to have a friendship, but it's about the specific conversations that take you there. So let's look at the main aims and claims and parts of the paper, and then either of you raise your hand of course, maybe some others will join, and anyone in the live chat, please just ask any question or make a comment, or just chill. So the central topic that's raised, and it's a term that's introduced in this paper, not the word but its usage in this situation, is the entailment problem. And just like the dog and the tail, as they say, the entailment problem is about the relationship, the necessity and the sufficiency relationship: which one is following which, and under what situations they're following each other.
The entailment problem is the confusion in the entailment relation between free energy minimization and life. So is free energy minimization necessary for life? Is it sufficient? Those are the core pieces of the entailment problem, and what happens in this paper is a simple dissolution of that entailment problem through a trivial, toy, simplified example. We're probably going to walk through it again today, but this numerical example, which is very stripped down, it's not full active inference, there's no action in the loop, it's just a perceptual example, shows that free energy minimization does not always lead to the correct choices. It's also conditional upon the priors. In other words, how you got there: it's about what took you there, because the priors don't come from nowhere, they come from the past. So this connects the rigorous logic within the action-perception loop, or in this case just a simplified perception loop, to the often ill, or illogical, way that things happened to get the way they are. So it's going to be that tension: is this a rational analysis, is it locally rational, maybe globally irrational? All those kinds of topics come up. And just to skip to the final words of the paper, the claim that's explored is that free energy minimization is not sufficient for life. So not that the FEP has nothing to say about life, not that ActInf is irrelevant, but that free energy minimization is not sufficient. It turns out that it's necessary to be doing something like free energy minimization to persist in a dissipative world, but it's not sufficient. It can't be all that you have: the wheels are necessary for the car to go round and round, but they're not sufficient. And then that last sentence: the free energy principle is meant to account for all kinds of systems, you, the listener slash reader, the author of the paper, and the system of interest, the organism under study, in a unifying fashion.
So we should also explore: what is the unifying lens, or what is the framing, under which the research scholar, the person who's just sort of asking what is the free energy principle, and the bacterium all sit, and what does the free energy principle have to do with that? Okay, so Dean, where do you think we should jump in, or what would be a fun question or place to start here? Well, so I have an interest, I have a little bit of history with this, so: what took you there? If I'm meeting somebody and they're an expert at something, I don't typically ask why are you an expert, because that's kind of offensive. But to ask what got you here, what got you to the place where you now bear this title, the weight of that title, is an interesting way of going about trying to figure out what logics they have applied in the past to untangle problems and resolve things that maybe somebody with less expertise struggles with or doesn't seem to be able to overcome. So that's where this paper was really helpful to me. Instead of why do I have to do this, or why are you such an expert, asking where did you come from, and then being specific in terms of, so how did you resolve something that I still find difficult to resolve? Like, why do I have to tie my shoes? And then the expert says, well, because you're going to trip over your shoelace and fall down. Well, I haven't tripped over my shoelace. Instead of going down that path, figuring it out in a slightly more sophisticated way. Cool, thank you. I'm not going to read this Blake quote, but of course we have to write it down. It makes me think about how why, which is an important question, often lends itself towards generalization. Like, why are you studying machine learning? Well, it's really important, and it's going to become increasingly important, and I wanted to have a good job, and other people told me it was relevant. It kind of generalizes out.
Whereas what took you there is always going to be a trajectory. It's kind of a path, and that's where you get the answer that's more like, well, I met this mentor, and then I was curious about this specific topic, and then we did this program. So what took you there is about the specifics, and what is broached in why questions is often generalizations, and we know that there's not even a single answer to why. Like we've talked about Aristotle's whys and about Tinbergen's: why is always going to be a plurality of answers, whereas what took you there is going to be a specific path. And so that's the tension in living systems: we want to understand something general, like surely there are some generalizations we can make across single-celled life or across mammals, but also we're explaining specific systems. So which aspects are going to be generalizable, and which ones aren't? And I think, again, the free energy principle, as a theory of everything, is something that does clearly make generalizations across systems, or at least applies to different systems. How does this hyper-specific, trajectory-based way of approaching answering a question relate to more of a mean-field approximation averaged over multiple categories? I think it's that tension that's really interesting to describe. So one is this general versus specific tension. And then, if we can go to the simple point that's made in the paper: the free energy principle is not concerned with the sufficient conditions of existence, but rather with what must have been the case given that you exist. So this is related to necessity and sufficiency, which we talked about in the dot zero. They're similar words, so we should use them differently; they're different words. In other words, it's about necessity, but not necessarily sufficiency. And it's not about figuring out what it takes to be alive; it's about figuring out what took you there.
So the title is restated and turned around many, many different times, because it is such a distillable and simple yet important concept. And I think the piece that I'd like to explore a little more is really the pronoun use. I understand why the paper is written from an "I" perspective, because that's convention with single-author philosophy papers. But it's about figuring out what took you there. So there's a sort of rhetorical element of speaking to a reader, which returns at the end of the paper; the last few lines are about you, me, and the system under study. But then also it's almost like the free energy principle is about systems looking at their own timeline and trajectory, and perhaps not as much looking out. So I'm curious about what that relational element means here, with a person relating to their past, and us as systems relating to other systems. Dean? Yeah, so the necessity might be you show up, you attend, in order for things to sort of continue on. But the sufficiency part is how available you are. So you can show up and turn all your senses off and be present, but not necessarily be taking in anything, or at least actively trying to avoid taking information in. And so that's the part, again, I would agree with you, that's the part I found really interesting. Because to not understand or appreciate that there's a difference between what is necessary, I'm here, I'm part of this livestream now, and then the fact that I could be distracted in all different directions, not really available to what is going on around me, is a whole different matter. And I think that's what Axel has done a really good job of: bringing it down to that personal level. Sufficiency has to be exhaustive. To say what's truly sufficient for the bacterium to live is going to include things like, well, you need the laws of physics, and you need gravity, and you need just everything. It's like really you're giving almost a bit-by-bit, play-by-play.
So sufficiency requires an exhaustive account, because if you're missing something that's necessary, you didn't capture what was sufficient. Necessity allows us to just focus on some aspect that we know is relevant, and then talk about the why and the what and the how of necessity while acknowledging that we're just focusing on one piece. Like that one line in that one Kandinsky painting: it's necessary for my interpretation of it. It's not sufficient, because the sufficiency of the interpretation of art would just send you off into the whole universe. Maybe you need all of human cultural history, including from cultures that you wouldn't even directly acknowledge, for the sufficiency to be garnered in that interpretive setting. But necessity is a lot easier to point your finger to. That relates to what are distinguished as the strong and the weak responses to the entailment problem. So let's look over at this, and again, anyone with any kind of a question, please just ask it, because we're chilling and looking forward to your questions. So this is where in the paper the entailment problem is specified. Arguments in the literature on the free energy principle give the impression that in order to be alive, to count as a living system, one must minimize free energy. And so there are a few ways to think about this relationship between minimizing free energy and life. Sorry if you thought it was going to be a discussion on a different topic, because that's what this paper is about, and again, it's so simple. It's like a shape in your hand, you're turning it over. The shape is simple in some ways, but then the implications of the shape, and the way it gets connected to other ideas, are really critical. And Axel distinguishes two ways to talk about the relationship, the strong and the weak. And as we'll see, they differ not just in their strength, but also essentially in the direction the arrow is pointing.
So it's kind of like, it's not like there's a little mountain and a big mountain where the arrow of elevation is pointing the same way; there's something a little bit different happening. The strong claim in response to the entailment problem is that minimizing free energy is a sufficient condition for life. That's also called, in some earlier work, the overly generous claim, which is a framing that already says it's probably too generous, that it's incorrect in that way. And the strong claim is the sufficiency claim: minimizing free energy, that's all you need. It's sufficient to minimize free energy to be a living system. The implication of the strong claim would be that all we need to do is have that free-energy-ometer, and then we could determine who's minimizing free energy. We have our meter stick on this left side, and then that would be sufficient to talk about which systems were alive or not. In contrast, the weak claim is a necessity claim: if a system is currently alive, it means that it has minimized its free energy. And such a claim does not assume the FEP is designed to set the bar for the sufficient conditions of life, or meant to predict what things may or may not be alive. Rather, it limits the scope of the application of the principle to beings that we think are alive now, and enables us to know the necessary conditions under which those beings can be living, i.e. can actively resist the loss of structural integrity. What took them there? So there are a few interesting pieces. These two responses, or claims, that are generated by the entailment problem differ in strength. I wonder, what is the scope of the FEP? And is it for systems that we have a priori decided are already alive? Are non-living systems under the scope of the FEP? So maybe we can duplicate the slide and do a little diagramming, some Venn diagrams, on what the minimizing-free-energy circle looks like with respect to the life circle.
And then what is life, if we don't have the thermometer over here on the left side? So can I ask a question, Daniel? Would it be going too far off the rails if I wondered, taking this idea: okay, so I'm an academic or I'm a trainer and I want to build out a curriculum. Knowing this, what does that imply? Well, I know what the difference is between something that's surprisal removal or not, because remember, I'm the one that already knows this, and I'm trying to pass it along to somebody who doesn't know it. So I'm trying to guess what surprisal removal or minimization means to them. I'm supposed to be guessing what life means to them. I'm supposed to be knowing what is necessary, what they absolutely must know, and what will be sufficient. So in that context, how would we order this? Let's lay out a few options. Okay. So we have behind the first door... We'll use letters to talk about them. So we'll have... I'll take the cash, Monty. Are you sure? I can reveal that one of the doors has nothing behind it. Okay. So we're going to have scenario A, which is: there are systems that are minimizing free energy, and there are systems outside of that circle, so there may be systems that are not minimizing free energy, but within the set of living things, all of them are minimizing free energy. So let's think about what kind of world that is. Okay. So then we can have scenario B, where there are living systems, and there are non-living systems as well. So we could ask what puts us in one category, living or non-living, but there are living systems that are minimizing free energy and ones that are not. I think we can say that if we go with the weak claim, then B is ruled out, because B would say that there are living systems that are not minimizing free energy. So we can say, basically, the weak claim, uh, disreputes this area.
The area of that crescent moon that's being sort of inveighed against. Uh, is disreputes a word? Disputes. We'll go with that. The weak claim disputes this area of the crescent. It's saying all living systems have to be minimizing free energy. It's a necessity, but it's not sufficiency. So then a third scenario would be that they're the exact same thing. So, well, that would be: minimizing free energy is a sufficient condition for life. So that means that the life circle, well, you can't draw a total overlap, but everything that's minimizing free energy is alive. So here in A, there are free-energy-minimizing systems that aren't alive. Like a ball falling off a cliff might be minimizing its free energy, just like it's minimizing its potential energy, or a burning candle might be minimizing free energy, but it doesn't have to be alive. So these are free-energy-minimizing systems in the crescent of A that are not alive. In C, we have this perfect relationship between the two things. And that is a very strong claim, which is that if it's minimizing free energy, that's sufficient for being alive, and if it's not minimizing free energy, it's not alive. So the weakest claim is on the left side. Doesn't mean it's useless, doesn't mean it's inadequate, but this is the weakest claim; this is the necessity claim. And then we have the strongest claim here, in C: that minimizing free energy is sufficient for life. So anything that, you know, if doing X is sufficient for Y... Greetings, Blue, hello, welcome. We're in the slides on 24, just talking about necessity and sufficiency. C is the strongest case. It's saying it's sufficient. And so you're either going to be minimizing free energy, and therefore it's sufficient to be called alive, or you're doing neither.
And then B is a little bit of a middle case, saying there are some living systems that are not minimizing their free energy, but some are. So now, are there other possible layouts? And then, Dean, let's return to your question about educating, and how do we think about this, how do we distinguish these scenarios, in an education context? Or welcome, Blue, if you would like to say hello, or just say what makes you excited about this paper. I loved this paper. I thought it was great. And I have been listening along in the livestream, so I'm kind of up to speed with where you guys are and what you're talking about. But I, yeah, enjoyed Axel's writing. And I like the fact that I can implement the free energy principle without having to subscribe to panpsychism, and also without having to subscribe to the mind-life continuity hypothesis, all of these things that we've been discussing, like the necessity, sufficiency, it's there. I think that Axel sets out a really logical argument, or set of arguments, and doesn't make any claims that aren't justifiable with math. Yes, using a simple mathematical example. We'll keep this slide around, because we can wonder: is there another relationship that we could speculate about? Something like this: there are systems that are minimizing free energy or not, and there are systems that are alive or not, and sometimes they overlap, but they don't have to overlap. So there are some special living systems that are doing it, and there are some that aren't, and there are some that are neither. We'll just call this one scenario D. D is the most flexible scenario, because you could have life, free energy minimizing, both, or neither. C is the most restrictive case. And then we have two possible nesting relationships, where one is nested inside of the other.
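The four scenarios sketched on the slides can be written down as set relations. Here's a minimal Python sketch of one reading of that diagramming exercise; the example systems (bacterium, human, candle) and the mapping of scenario B to "living systems that don't minimize" are interpretive choices for illustration, not anything from the paper:

```python
# A toy formalization of the four slide scenarios as set relations.
# "alive" = the set of living systems, "fem" = the set of systems
# that are minimizing free energy. The named systems are made up.

def classify(alive: set, fem: set) -> str:
    """Classify a toy world by how the two circles relate.
    A: alive is a proper subset of fem  (necessity / weak claim holds)
    B: fem is a proper subset of alive  (some living systems don't minimize)
    C: alive == fem                     (the strong, sufficiency claim)
    D: neither contains the other       (the most flexible layout)
    """
    if alive == fem:
        return "C"
    if alive < fem:
        return "A"
    if fem < alive:
        return "B"
    return "D"

def weak_claim_holds(alive: set, fem: set) -> bool:
    """Weak (necessity) claim: every living system minimizes free energy."""
    return alive <= fem

# Scenario A: a candle "minimizes" without being alive; all living things minimize.
assert classify({"bacterium", "human"}, {"bacterium", "human", "candle"}) == "A"
# Scenario B is exactly what the weak claim rules out:
assert not weak_claim_holds({"bacterium", "human"}, {"human"})
# Scenario C: perfect coincidence of the two circles.
assert classify({"bacterium"}, {"bacterium"}) == "C"
# Scenario D: overlap plus exclusive regions on both sides.
assert classify({"bacterium", "human"}, {"human", "candle"}) == "D"
```

Note that the weak claim only rules out worlds where `alive` spills outside `fem`; it is agnostic between scenarios A and C, which is the point of calling it a necessity rather than a sufficiency claim.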
And it's kind of interesting to wonder what kind of world each of these exists in. Or where is the mind bubble? Yes, let's do it around the whole slide. Is it the mind that's doing the mapping? Is that why there's that relational or personal framing of the paper, because the mind is the mapping of these different Venn diagrams? Dean? And I think it's really interesting too, Daniel, that in real time we now have a slide 23 and now a new slide 24. And although we now know the order in which they presented themselves to us, could we have started with 24 and worked our way back to 23? That's when it gets really kind of weird and twisted. But it's also what I think the paper is speaking to, if we're actually thinking about this as another example of a potentially postdictive way of viewing things historically. Yes. So 23, which now has a little logo on it for those who don't have the slide deck at hand, 23 was derived heavily from the paper. Right. It was quotes from the paper and then just a visual representation. Now that was not in the paper, but it could have been interesting. And then, that's right, it was the specifics of who joined when, and what we were thinking about, that led to 24. But maybe we'll add another slide before 23, and then they'll both be a different number. But it does bring up an interesting question. I think we could look at, well, first, Dave, thanks for writing in the chat. Do you want to read what you wrote, or would you like me to read what you have in the YouTube live chat? Yeah. Could I expand on it just slightly? This is what I would have posted if I didn't have the character limit. Title: necessary versus sufficient. Two of the original concepts of general systems theory have recently, as of 1975, been resuscitated. There's an admirable discussion by Russell Ackoff.
The first of the two concepts is that whereas causal argument considers necessary-and-sufficient, or double-implication, conditions, systematic theory, or cyber-systems theory, or cybernetic argument, deals primarily with a logic of necessary conditions. See E. A. Singer's discussion of producer-product relations; in Sommerhoff this is called goal relations. And I'd add this deals with intentional action, where there are goals. There's an old, painful discussion going back to way before Darwin that the world exists only as push. Galileo wanted to absolutely banish goals of any kind; any value had to be absolutely driven from science. If you say "for the purpose of," it's get out, you're not a scientist. I continue: this notion underlies the more sophisticated treatment of goals and intentions, conceivably partly specified intentions, fuzzy goals, goals with ill-defined criteria, which you might or might not be aware of, I would say. The other concept, of about the same importance, is the presupposition of a systematic universe. There is a tacit assumption that things, objects, and other elementary entities are interdependent, rather than being isolated units, which is the assumption behind the majority of sciences. Further, as a result of their interdependence, or with the same meaning, on the supposition that things, objects, and so on are not really unitary, these entities form systems, and it is systems which may be observed and manipulated. So the notion that there's one and only one way to approach an argument, one and only one order among real things that are scientifically respectable: utterly wrong, utterly wrong. If you can't work your argument in at least two different directions, you haven't thought about what your argument is. You don't know the boundaries, you don't know what it is, you don't know what a jellyfish is. You think you know what a jellyfish is, but you don't know the difference between the inside and the outside; you may end up on the inside.
So it's almost like, if there's only one way to run through the argument, that's when necessity and sufficiency are easily confused. If somebody just drives in one lane from the beginning to the end, and "I made my case and that's all I have to say," well, then you've implicitly, through your actions, your enacted behavior, acted as if what you had to say was sufficient for making your case. The prosecution rests. And also it was necessary: you think you've included everything necessary and sufficient. So that's actually where you get, and here's maybe where we should duplicate the slide again, an overlap of necessity and sufficiency. But then if you did two parallel arguments, one from start to finish and the other one from start to finish, now you could say, well, both of them, in their own timeline, were necessary, even if they used totally disjoint information. And then what is the relationship of the goals there? And that is kind of like Dean's multiple eyes open, giving us the perspective to look at goals when we have multiple realizations, because it does help us distinguish necessity and sufficiency, because we literally see, well, it could have been done a different way. So it being done in way one can't be the whole story, because we just saw it done in way two. Dean? Sorry, Daniel, you dropped off on me for a second there, but I wanted to ask David a question. Yeah, Dean kind of drops in and out. He was giving some funny descriptors of his internet hardware earlier. It was pretty funny. Sorry. Sorry. So I agree with what David was saying. And there's a book called Origins of Order: Project and System, and what the author basically says is that in projects it's easy to get trapped in goal-directed behavior, whereas if you can also hold up the system, you can see the multitudes of different directions all at once.
And so I was wondering, does that sort of fit in with what David was trying to speak to? Trying to find the author's name here. Well, Stuart Kauffman, who I think has become a little more active recently. I know they're pushing his book The Origins of Order really hard. That's something I read back in the 80s. Who's that? This is a new one. This is Paul Kahn. Oh, okay. Yeah. Okay. K-A-H-N? Yeah, K-A-H-N. And he's basically making the argument that you can look at the Constitution, or you can look at the actual laws, and you have to hold up both. One is goal-directed, the project piece, whereas the other one is trying to work from within and around. So that's sort of a between-and-inside argument, and that's not the same as projecting yourself to a particular outcome. Yeah, I can't answer that right now, but I do want to tell a quick little story. When I was working as a senior programmer-analyst, I was organizing this project. It was going to take a couple of months. And my supervisor is kind of looking over my shoulder, and I have the criteria and I have a big flow chart. And he says, oh, so Dave, this is how you're going to do it. And I says, I haven't decided. He says, well, wait a minute, what's missing? I mean, you've got the whole thing laid out. It looks like it ought to work. And I says, yeah, I haven't decided. I've only done it one way. There's no decisions involved in that. I've just stated the requirements. I didn't decide how to do it. Oh, and he went out and had a smoke. I ended up using a much better technique that saved us a huge amount of rewrite, which nobody would have noticed, because they'd have said, oh, we have the answer, let's go for it. It's almost, it's the zero to one, which we've talked about. And it's also related to act, infer, serve.
Infer: you'll have something to infer about when you've acted, whether that's getting that first draft out, or just doing a private voice memo, or whatever stigmergy you need. You'll have something interesting to infer if you act, I promise. And then maybe you'll be closer to serve than you expect. But infer is still within the realm of, well, what would be sufficient to get the job done? But if you go, well, what would be necessary to do this one way? This is what would be necessary. We could walk from California to New York; that would be one way of getting to New York. Okay, it's going to take too long, I just don't have that kind of time. What would be another way to do it? So there is almost a necessity and a sufficiency in action and inference, where action has a lot more to do with what's needed, and inference or thought maybe moves us more towards thinking about what's sufficient. Not sure about that, but also thanks for sharing this resource. And back to what David was saying: in the work that I was doing, I often had people who were very well versed in setting up all kinds of sophisticated projects asking me, where's the project in this? And me having to jump up and down and wave hands and say, this isn't a project. That's why we're doing it. We don't know how it's going to turn out until we do. And then we can look back on it and say, well, did we minimize free energy? We weren't saying "minimizing free energy," because I wasn't versed in that lexicon yet, but that's essentially what we were doing. We wouldn't know until we'd arrived that we'd arrived. And that's really uncomfortable for people who are constantly saying, show me the milestones. Okay, well, I can show you the stages. And I can show you how identity is now turning into identifying, as people are constantly processing who they are as they're on board with this idea.
But I can't tell you what it's like, because it's not necessarily goal directed, other than the fact that we want to try to keep it open as long as we can and figure out what logic people are using, in order to be able to untangle and pull things out of hats that others don't know how to do. Yeah. One other thing that makes me think about is grant applications. So a grant application might frame things, implicitly or explicitly, in terms of sufficiency. If you fund me, we're going to fix this disease; this is the grant that will be sufficient for fixing this disease. Whereas saying that it's a necessary step, like we do need to understand how protein 123 works: it'll be necessary, but it's not the whole picture. It's not sufficient. So much, when we really turn it over, is related to necessity, because there are so many things that are necessary, and so little, save for totally exhaustive examinations of really contrived systems; so rarely do we find sufficiency. Because if you could define sufficiency, you could make that bacteria in the test tube, you know. And then life kind of puts a little twist on it: what would be necessary and sufficient for having a bacteria in a test tube? Putting a preexisting bacteria in and having it replicate. So life, as this paper explores, sort of threads the needle of necessity and sufficiency, because the past is necessary and sufficient for living systems to be in the state that they're in now. And so that's the generalization as well as the minute particulars. If anyone wants to ask a question; otherwise I think it's good to take a second and look at the numerical example. Yes, Blue first. So just like what got us to this point: the past is necessary, but not sufficient? It just echoes the title, like how did we even get here, right? Yes. So let's look at that bacteria. And then, of course, we can jump around anywhere else.
Okay, so the numerical example is of a sensing, a sentient bacterium; not necessarily experiencing, and action isn't even in the loop here, but it's a sensing bacterium. And so the organism is inferring, from its prior beliefs, the cause of the observations it makes. So it has a prior. Let's just go over the letters again. So there's a class of events for the organism. This is where we need a special color or some metadata tag: how could we tag map, and how can we tag territory? Let's just use one set of colors or one font, like map in Comic Sans and territory in some calligraphy script, for the organism's map. Now it's our map of the organism's map. A and B are events that are part of the class R. And so R is like the receptor state. And then A and B are inferred downstream of a chemical signal, which is S. So S is for signal, and S is whether the actual molecule on the outside is the alpha or the beta. So this is a discrete binary choice. It's briefly discussed in the paper what happens if there are multiple molecules, or one could imagine that what's being inferred is a continuous variable, like blood sugar. But alpha and beta are the values of S; that's the outside stuff. And then A and B are the organism's receptor inference. Blue? Go ahead. And then I'll just look at it. Yeah. So just to look at it in figure one, or in the formalism block that's in the paper: the organism has a prior, and it's the specification of priors that differentiates Bayesian statistics from frequentist statistics, among a few other pieces. But that's really the key piece. And so the prior P(R) is 80-20 in favor of A. So a priori, before any info comes in (how did you get there?), it's 80% likely that it's A and 20% B. Then there's a likelihood mapping. So that's the distribution of the signal conditioned on the receptor state.
So we're talking about the probability of alpha outside, conditioned on A or B. And it's 70 for A and 30 for B, and vice versa. So what's interesting about this is that's a pretty noisy signal. If it were 50-50, it'd be a useless signal, because knowing it one way wouldn't tell you anything; well, it would also depend a little bit on the frequency of alpha and beta. But this is a very noisy signal: 30% of the time, the receptor is giving the wrong answer. So this is a way to show that with multiple uncertainties connected the right way, you can actually tighten your estimate. Because in this example, with the 80% prior on A, and then a 70% likelihood mapping, that A matrix, which is more like classic ActInf jargon: after observing alpha, it turns out that the estimate of the state being A, conditioned on alpha being observed, tightens that prior from 0.8 to 0.9. So there's still a tightening of the prior by a significant amount, even though the mapping between the molecule and the receptor state is somewhat noisy. So that's like a Bayesian bacterium. It's just applying Bayes' rule to this simple binary outcome that it's doing inference on. Blue, any thoughts there? Yeah. So, you know, this goes back to something that Dean brought up in the .0; oh, and it's actually something that Axel wrote in the paper: is the prior subjective or objective? And the dysfunctional priors, and the survivability of organisms with dysfunctional priors. So I just feel like, I mean, I know Axel says it's subjective, but I feel like it's nested. So I feel like the prior must be a subjective prior over an objective frequency, right? Like, if I frequently observe sunshine outside when it's warm, or something like that; like if mostly when I feel warm, I observe sunshine.
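The Bayes' rule update just described can be checked in a few lines. A minimal sketch of the numerical example as recounted above (prior 0.8/0.2 on A/B, likelihood 0.7/0.3 of sensing alpha); the variable names are ours, not the paper's:

```python
# Hedged sketch of the Bayesian bacterium example discussed above.
prior = {"A": 0.8, "B": 0.2}        # P(R): prior over receptor states
lik_alpha = {"A": 0.7, "B": 0.3}    # P(alpha | R): noisy likelihood mapping

# Marginal probability of observing the signal alpha
p_alpha = sum(prior[r] * lik_alpha[r] for r in prior)  # 0.8*0.7 + 0.2*0.3 = 0.62

# Bayes' rule: P(R | alpha) = P(alpha | R) * P(R) / P(alpha)
posterior = {r: prior[r] * lik_alpha[r] / p_alpha for r in prior}

print(round(posterior["A"], 3))  # ~0.903: the 0.8 prior tightens to ~0.9
```

Even with a receptor that is wrong 30% of the time, the posterior on A tightens from 0.8 to about 0.9, which is the point being made in the transcript.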
So when I have these two things linked, based on that frequency, I then make a subjective assumption. And that's how I feel the structure of the prior kind of has to be. Otherwise it doesn't make sense to put the math into the prior. Like in my mind, an objective prior is based on frequencies, but a subjective prior is not; a subjective prior is a guess, but it's a guess over the frequencies. I don't know. I feel like there's an extra layer that needs to be tucked into the subjective prior if you're going to put math there. One way that that's dealt with empirically, and then Dean, is what's called parametric empirical Bayes. And in parametric empirical Bayes, you start with the prior being what you observed in the past. So if you had eight sunny days and two rainy days, that's where that 0.8 and 0.2 prior comes from. So it's not a way around specifying a prior; it just starts your prior in a place that's empirically grounded. Dean? Yeah. So I think in the .0, I was trying to figure out how the likelihood is really the kind of background that we have, and that's what Blue is kind of talking to. And I was wondering about context, because if I see a black dot on a white background, I think a period, whereas if I see a white dot on a black background with a whole bunch of white dots, I think, how many light years away is that concentration of hydrogen, right? So again, we could decontextualize this, but if we do decontextualize, and this is where it gets really hard for me, I get lost in all of the terms; but if we do decontextualize it, does that help us in any way understand both the subjective and objective factors in play? That's all I'm wondering about. And again, I get lost in the number of different ways to describe this. That's part of my problem. I have an upper limit on how much of this stuff I can do at once. If you want me to focus on Bayes, I can do that.
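The parametric empirical Bayes move mentioned here, starting the prior at past observed frequencies, can be sketched minimally. The sunny/rainy counts are the hypothetical example from the discussion, not data from the paper:

```python
from collections import Counter

# Hypothetical past observations: eight sunny days, two rainy days
history = ["sunny"] * 8 + ["rainy"] * 2

# Empirically grounded prior: normalized frequencies of past outcomes
counts = Counter(history)
total = sum(counts.values())
empirical_prior = {state: n / total for state, n in counts.items()}

print(empirical_prior)  # {'sunny': 0.8, 'rainy': 0.2}
```

This doesn't avoid specifying a prior; it just anchors the starting prior in observed frequencies rather than a guess, which is the distinction Daniel draws here.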
That's hard enough for me to concentrate on. But if you actually think about what this implies in terms of entailment, then it gets really, really tricky, because I can't just focus on one thing at once. I've got to think about what all the implications are. And man, can you ever go off in different directions all at the same time. I don't want to be too tangential here, but I'm still not sure whether the subjectivity in isolation can work. I'm not saying it can't. I don't know. Thanks, Dean. Blue, then Dave. So in parametric empirical Bayes, that would be like an objective prior. Do you have subjectivity in parametric empirical Bayes? Because that's what Axel is describing in the paper. You do. You can have an objective prior based on frequencies: it's 80% of the time sunny and 20% of the time rainy. And so you get these frequencies, and that's what goes into constructing an objective prior. But when it comes to an organism and a choice, or even a scientist conducting an experiment: why would you even do the experiment in the first place? You have a guess. You have something that you think will be the outcome. And so a totally subjective prior is like, well, I think 60% of the time it's going to be blah, blah, blah. And that's a completely subjective prior. It's just your best guess. Which, if you're going to try to do really objective Bayesian analysis, you need to have a flat prior, not your best guess; your prior needs to come from previous observations, and not from you guessing. But you could very well not do a very empirical or objective Bayesian analysis and just say, my guess is this. And then you make the observations, observe the frequencies, go through the entire time step. And then at your next time step, there is that tightening between the previous likelihood and the prior. So that is, I mean, as far as I understand it now; I'm definitely not an expert.
But so I think with the subjectivity and objectivity, I don't know, I think that there's some link there. I get it for scientific Bayesian data analysis: you want to have a flat prior, you want to do objective analysis, you want to use the previous frequencies or whatever. But I think that a mental calculation is also over frequencies. Or it's your best guess: what do you think will be the outcome? And it doesn't always line up, right? Thanks, Blue. Dave? Yeah, the term decontextualized; that kind of shocked me when Professor Jiren started using it. Decontextualized? Well, that's just some jerk opening his mouth and decontextualizing his feelings at me. And, you know, if I want to know what's going on, I hyper-contextualize, and particularly attend to my own prejudices and my own viewpoint. Think about, you know, Einstein hated the word relativity. He wanted to come up with something like hyper-contextualized, or the real world, or the tensor description, where you don't have to ask where you are, because you have the structure of the world. What do people mean by decontextualize? Is it a term of praise as well as a term of contempt? Or have I just always misunderstood it? Blue? So I love this term hyper-contextualize. And I don't know that I would use it in the way that you use it, Dave. I would use it like, I would play out all of the potential what-ifs. Let's talk about counterfactuals. Can we put it into, you know, how many different contexts will it fit? And that's what I think about with hyper-contextualization; it's like playing out all the potential scenarios for me. Thanks. Dean? Yeah, building on what Blue just said.
So I saw a YouTube video where a guy figured out a way to make a target move so that no matter where you threw the dart, as long as you were directing it toward the dart board, the algorithm would move the target so that you hit a bull's eye every time. And I was a phys ed teacher, so I was like, oh, this is amazing. And I took it to one of my colleagues, who shall remain nameless, and I said, we should show this video to the kids. And he was like, why? And I said, well, think about it: every time a person moves their hands to catch a ball, they're making the same kind of adjustment. And it was like a light went off. A lot of the things that I was coaching were then decontextualized. And the proof was in the fact that when you saw something out of context that was running the same parallel function, you didn't necessarily recognize it. So this was a really... I love hyper-contextualized. So I wonder if that's like abstraction, or de-irrelevant-contextualizing: you take away the irrelevant part of that setting, which is, oh, it's hands or some other specific system. And then that paradoxically gets you to hyper-contextualize, because it does highlight the parts of the context that matter. Dave? And there's that famous phrase from 1923, consciousness of abstracting. Bringing people to think scientifically, one of the first things you do is get them really aware of how much they're missing: you're abstracting away from at least 90% of whatever you're looking at. And since I've got the mic: there was the question of how a fielder catches a baseball, a fly ball. Here it comes, coming out of the sky at you. And, you know, a good fielder knows how to do that. You ask him how, he says, well, I just go to where the ball is going to be, but he can't tell you how. And there were a bunch of people, mathematicians and programmers and physiologists and philosophers, trying to figure out how in the world they are doing this.
It looks like they can't, because we can't come up with an algorithm that does it. And they finally watched them, and I guess they got one guy who's high on, whatever it is, openness, in the Big Five. And they asked him, what are you doing? How do you know that you're going to catch the ball? Oh, I know I'm going to catch the ball when I look at it and it doesn't move. It doesn't move? Yeah, it just hangs there. And I put up my mitt, and since I know where it's going, it's going in my mitt. And it's not moving. And it's in my mitt, and it didn't move then either. That's how they get there. They figure out what point in the field makes it look like the ball's not going to move as it comes toward them, and that's where you go. And then it just hangs there until it's in your mitt. Classic baseball physics. The other thought was, have you ever been high on openness? But then it's true, there is a simple algorithm: if you're looking and it drifts to the left, then you should go to the left; if it drifts to the right, you go to the right. And then you're cool: active baseball, anticipation, and the path of least action. Blue, you want to talk about high on openness? I don't know; it reminds me of our conversations with Adam Safron, right? Like, I would like to see the psychedelic comparison with the Big Five, the integration of those two kinds of things in terms of active inference, because I do think that openness increases a lot under psychedelic influence. And so it's not, have you been high on openness, but has being high ever made you more open, maybe, is a different question. Well, a few things. There's REBUS, SEBUS, ALBUS: the relaxation of beliefs under psychedelics, the strengthening, or just the altering of beliefs, with Friston, Carhart-Harris, Safron. And what is being opened, which priors are being opened, which priors are being tightened.
And that was sort of at the heart of the REBUS versus SEBUS, and then the sort of Hegelian synthesis in ALBUS. And then also, in terms of how we got here: the first two weeks of January was live stream number 13 with Adam and colleagues on the Cybernetic Big Five. And so that actually started our year with thinking about how frameworks that a priori have nothing to do with ActInf or the FEP; like, Big Five was a principal component analysis based variance decomposition, variance partitioning, on psychometric data; yet we saw how ActInf could help us reimagine that and generalize it, while also bringing action into the loop. So then it's kind of cool to see how the personality traits come back into play. Dave, and then we're going to return to Bayes, and then we're going to move into the variational free energy. Well, stop me if I already mentioned this, but there have been some studies of the way psychedelics are used as a recruiting tool by violent cults. You go to a rally in favor of, I will not say whom, and if somebody seems like they're really grooving and really into it, you give them a little ecstasy, and that locks them into the mentality. And if he's kind of getting fired up, but he's not quite with it, you take him home, give him some LSD, and just take him through the session. And that moves him out of what he was in and moves him into where he needs to be in order to do the right things in life. Maybe another little bit of ecstasy now and then over subsequent months to get him locked back in where he needs to be. Just like Safron was saying: you either get him out of where he is, or you get him right back, deep into where he is. Well, there's a long history of people engaging in experiences, taking substances, to move their psychodynamics one way or the other. I think the ActInf streams verge on it, but we also remind people to be responsible and legal with their behavior, with the one and only brain and mind that they have.
Let's return to the Bayesian maths, and then see how that Bayesian bacteria moves into the variational free energy bacteria. But I pulled out that one quote from an online statistics resource. So, the .0 had the frequentist bacteria. The frequentist is basically looking at the outcome of an experiment. You flip the coin 10 times; it comes up seven heads and three tails. Now, that's not the most likely possible outcome for a fair coin, but it's totally not an unexpected outcome. So in that case, the frequentist can produce a 95% confidence interval, given the variance granted by the sample size, and say, okay, if I get within these two numbers, then I'm going to fail to reject the hypothesis that it's fair. If I'm outside of those two numbers with my empirical results, I will reject at a given p-value, a given alpha. And so for the frequentist, there's no sense in asking about the probability that the coin is fair: it is either fair or not fair. So it's like, accept the hypothesis implicitly because of the failure to reject, or reject the null hypothesis in favor of a specified alternative. The frequentist makes statements about the probability of a sample after making an assumption about the population parameter, which is just the kind of averaged-out value of the probability of tossing heads. The Bayesian, in contrast, starts with information about coins, so specifies some distribution. It could be a bell curve, a Gaussian centered at 0.5; it could be a super tight prior; it could be a very loose prior; and then the extreme case of the Gaussian with infinite variance is the uniform distribution, which is flat across all the possible values. That probability assessment of the heads proportion is called the prior probability. Each person might have their individual assessment based on their personal experience, called a subjective prior.
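The frequentist reasoning sketched here, seven heads in ten flips and failing to reject fairness, can be made concrete with an exact two-sided binomial test. This is our illustration of the quoted statistics example, not code from the paper:

```python
from math import comb

n, k, p = 10, 7, 0.5  # 10 flips, 7 heads, null hypothesis: fair coin

# Exact two-sided binomial p-value: probability of a result at least as
# extreme as 7 heads (>= 7 or, by symmetry at p = 0.5, <= 3) under the null
p_value = sum(comb(n, i) for i in range(k, n + 1)) * p**n * 2

print(p_value)  # 0.34375 > 0.05, so we fail to reject fairness
```

Note the doubling trick only works because the null is symmetric at p = 0.5; and, as the transcript says, the frequentist never assigns a probability to the coin being fair, only to the sample given the assumed parameter.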
Alternatively, a prior probability distribution might be selected for good properties of the resulting estimates, called an objective prior. I don't know if this is the only way to use subjective and objective in the Bayesian context, but this is just an important difference between frequentism and Bayesianism. So here is our Bayesian bacteria. Again, it has some A matrix that's mapping the signal conditioned on the receptor state, and then it has a prior over those receptor states. That Bayesian bacteria, again, is a Bayesian perceiving bacteria; we're not bringing action into the loop yet, but we're just assuming that if you had the right estimate, then it would be easier to know what to do correctly. This is called exact Bayes, because the exact numbers are plugged in and then some exact number comes out, and it's pretty much just granted by Bayes' rule. What's the issue? Well, it's unclear whether living systems have sufficient computational power to accomplish that, especially when there are massive multivariate landscapes. So what is the answer in the FEP? Well, instead of doing exact Bayes, there's what's called variational Bayesian inference. So here we're going to propose this Q function, and the Q function is going to be from a family that is specified a priori, and it is more easily fit than exact Bayes. And then the way that we compare is not by comparing the posterior likelihood at the end, but rather we can compare different options by comparing their free energy. And this is how free energy is being defined here: it's the negative sum over R, those are the receptor states A and B, of some Q distribution times the natural log of the joint distribution of the state R and the signal alpha, divided by Q(R). And then in the dot zero, we explored a little bit how that's like the recognition model.
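The free energy definition just read out, F = -Σ_R Q(R) ln[P(R, α)/Q(R)], can be written directly as code. A minimal sketch, assuming the same 80-20 prior and 70-30 likelihood as the running example; when Q equals the exact posterior, F reduces to the negative log evidence, -ln P(α) ≈ 0.478, matching the number quoted later in the discussion:

```python
from math import log

def free_energy(q, prior, lik_alpha):
    """F = -sum_R Q(R) * ln( P(R, alpha) / Q(R) ): variational free energy
    after observing alpha, for a candidate recognition distribution Q."""
    return -sum(q[r] * log(prior[r] * lik_alpha[r] / q[r]) for r in q)

prior = {"A": 0.8, "B": 0.2}
lik_alpha = {"A": 0.7, "B": 0.3}

# Exact posterior from Bayes' rule (~0.903 on A)
p_alpha = sum(prior[r] * lik_alpha[r] for r in prior)
posterior = {r: prior[r] * lik_alpha[r] / p_alpha for r in prior}

print(round(free_energy(posterior, prior, lik_alpha), 3))  # 0.478 = -ln(0.62)
```

Any Q other than the exact posterior gives a strictly larger F, which is why minimizing free energy over the Q family approximates exact Bayes.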
That's like we're conditioning on alpha, and now we're doing a hyperparameter estimate, like alpha going to A. And what we're not going to be exploring here is the generative model side. And then we talked about how those map onto perception and action in the world. So the numerical example shows that, okay, the Bayesian bacteria did the right thing. It didn't go to that rally that Dave was telling it to go to. But we can also have a free energy minimizing creature, a bacterium, and we're going to see a case in which the free energy minimization does the right thing, and a case where the free energy minimization does the wrong thing. The difference between these two cases is going to be the prior that's set. And so that is the crux of the argument that minimizing free energy is not sufficient to be alive, aka make the right decisions, because both of these examples are going to minimize free energy, and one of them is going to minimize free energy in the right direction and make the right choice because it had appropriate priors. A creature with maladaptive priors can minimize free energy but still arrive at the wrong outcome. So in this case, we have the free energy calculation for this 80-20 prior in favor of A, and contingent on observing alpha, the free energy of representing A is 0.478, which importantly doesn't have an interpretation as a probability; we're not within that zero-to-one bounded space. And then in contrast, the representation of B, setting Q(R) to B, has a free energy of 2.27. So free energy minimization saves the day in that context. However, it's not simply because the free energy was minimized that the right decision was reached. Here is a creature with a prior on B of 0.8. And then we see that, just barely, upon observing alpha, the free energy of representing B after sensing alpha is 0.96, and the F when representing A after sensing alpha is 1.094.
So the ending posterior likelihood is 63% in favor of B and 37% in favor of A. So we did move the needle toward A, because we observed alpha, but our prior was so far off that just one cycle of free energy minimization still results in the wrong outcome. That's the crux of the numerical example provided in this paper. And the only point being made, not that this is how you'd model a bacteria with the FEP; the point it's making is that free energy minimization is not sufficient for life. Dean? So Daniel, I've got a question to ask. Because we're just talking about sensing, we're not talking about, you mentioned doing, but is it doing or is it directing? Because of the sample sense and because of what the math points to, should we think of this as kind of analogous to a compass? We're pointed in the right direction, but we still haven't actually acted or developed a feeling from the sense yet. We're still pre-doing. We're orienting. Is that right, or am I wrong? So, if anyone else has a thought after this: I think there are a few aspects to this compass and orienting approach. This little toy motif is about the recognition model, about having a prior over states in the world and then getting some input; that could be plugged into an action model in a slightly more elaborate case. So for example, as written in the caption for figure one: more biologically realistic descriptions of behavior require discussion of active inference. They require it. It's necessary and sufficient. You just have to do it. Behavior is the result of a different inference process that has an action policy. So pi is not here in this bacteria. This involves more priors, namely about the transition between hidden states, and often about preferred sensory outcomes. So in active inference with a partially observable Markov decision process, we've seen how there's a matrix B of the transition frequencies between the states at one time and the states at the next time.
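The two creatures just contrasted, one with the 80-20 prior on A and one with the maladaptive 80-20 prior on B, can be run through one cycle of inference side by side. A minimal sketch under the numbers recounted above; the 63/37 split for the maladaptive creature matches the figure quoted:

```python
def posterior_after_alpha(prior, lik_alpha):
    """Exact Bayes: P(R | alpha) for the binary receptor state R."""
    p_alpha = sum(prior[r] * lik_alpha[r] for r in prior)
    return {r: prior[r] * lik_alpha[r] / p_alpha for r in prior}

lik_alpha = {"A": 0.7, "B": 0.3}       # same world for both creatures

good_prior = {"A": 0.8, "B": 0.2}      # adaptive prior
bad_prior = {"A": 0.2, "B": 0.8}       # maladaptive prior

good = posterior_after_alpha(good_prior, lik_alpha)
bad = posterior_after_alpha(bad_prior, lik_alpha)

print(round(good["A"], 2))  # 0.9: right answer
print(round(bad["B"], 2))   # 0.63: alpha moved the needle, but B still wins
```

Both creatures update in exactly the same free energy minimizing way; only the prior differs, and that alone decides whether the outcome is adaptive. That is the crux of the paper's numerical example.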
And then policies are inferred based upon which policies minimize free energy, which ones are going to have the best chance through time of bringing states into alignment with preferences. The other thing that it made me think about with the compass is: exact Bayes just returns a probability. And that is like one compass needle, just saying, okay, you're going north by northwest, or whatever it happens to be. This is your one estimate. And you could be wildly off, but given where you were, and then just one blip of information, that's where your compass needle is now pointing. With free energy, it's almost like there are two compass needles in this binary choice, and then we're going to select which one has a lower free energy. Because we're looking at the fact that this one's F was 0.478 and this one's was 2.27; they don't have a natural interpretation, just like with entropies. You can talk about the difference in entropy, but something having a value of, like, four doesn't have an interpretation in the world. And so it's maybe like there are two different compass needles, and we're looking at the difference between them, or we're going to select the one that has a lower free energy. Dean? Yeah, I don't want to say that orienting isn't a form of doing as well. It's just that, and what this shows me is that I cut out again; but when we're trying to determine A and B against the background, we haven't taken an action step, we haven't developed a feeling yet. Do we categorize that differently than if we were to look at this through an active inference lens, where we would then push a policy forward, right? So I'm not sure where the orienting fits. Does it fit into the policy piece, or is it pre-policy? That's what I'm trying to get clear in my mind. Good question. It makes me think about the OODA loop, observe, orient, decide, act, and about our recent paper that connected ActInf to the OODA stages.
Orienting, I think, in this reduced example: the north pole, the North Star, whatever it is, is the prior. That is what is oriented relative to. So that's a purely recognition-model sense of orienting. Whereas in the physically realized world, orienting has much to do with action and wayfinding. But this is in the recognition toy model, and there is still an orientation to the prior. But I'm not 100% sure. What are you thinking? What I find interesting is that when I look at this, and I see a 0.8 and a 0.2, it's very clear that we're somewhere between 0 and 1. But when we're actually getting to that wayfinding space, and we're talking about entailment, one whole over two wholes is different than two over one. So as soon as we move past one into some of the more, as you say, material realities that we have to face down, that's when those actions become material; we can see what the process ontology is. But what Axel did was bring the process ontology down even closer to the zero point, because he was using the free energy principle, and he's talking about the free energy principle and Bayesian stuff. So again, sometimes I have a general sense of this stuff, just enough to get me into trouble and be insurgent and point at people who are really, really smart and go, I think I know what you're on to; but it doesn't get me deep enough to be able to write a parallel paper to Axel's and either build him up or tear him down. So yeah, I don't know. That's why I'm asking. I don't know where orienting fits in this, because I think it fits in the zero-to-one space, and I think it fits into the one-to-three space as the dimensionality blows out. That's back to the subjective and objective crossing back and forth. And again, I come off sounding like a complete idiot here, because there's somebody way smarter than me who can blow my argument up. But at least I'm putting it out there and showing up.
Oh, by being on a live stream and talking about it, you're like 80% of the way to the paper. Dave? Is there something in this context that would correspond to urgency? And I'm particularly thinking in the terms that Freud was concentrating on when he talked about how to get people to be less neurotic. He says the point at which a person is in real trouble as a neurotic is when the dominant factor in his life is reducing his anxiety. And what we want to do is get him to tolerate his anxiety; tolerate, for instance, ambiguity. Not knowing, is my wife mad at me or not? But don't get crazy. Sometimes you just can't do anything about it. Calm down, tolerate the anxiety. And I'm just taking that as a case of urgency; urgency seems like something that would fit into this literature. Blue? So I don't know about urgency. I'm going to kind of derail off of that and hook more onto this toleration of anxiety, or toleration of circumstances that are not preferable, for a later, more preferable outcome. And so this is something that is taught in self-development a lot, right? Like, you have to put yourself out of your comfort zone. But also in the meditative traditions: learning to be with that discomforting feeling, and you don't have to get all upset and do anything about it. So I just wonder if that fits. It's a different thing, different than urgency. But what about, where's the temporal piece? There has to be a loop, right? Like, I'm going to tolerate this bad thing for something that I think will be a more preferable outcome later. How can I Bayes that? That would be an interesting mathematical thing to try to work out. Cool. I added a slide here. Here's the Eisenhower matrix. So the x-axis is urgent to not urgent, and the y-axis is important to not important.
So one of the reasons why this is such a useful and well-known tool is that it separates urgency from importance. Because there are things that are urgent, like a chat message, that urge you to do something, or that have a short time horizon. But then there are things that are important or not important. And then there's an orientation, when you're figuring out what is in what quadrant, kind of like the Rumsfeld matrix as well. And that has implications for action. So let's see, urgency and anxiety. Dave, you mentioned, I don't know the Freudian urgency concept much, but of course with Solms, Friston, and the Project for a Scientific Psychology, there are some clear links there. So it wouldn't be unexpected. And then the idea that the imperative is to reduce our uncertainty through updating our model, learning, and through action. So the reduction of anxiety is an imperative. And then sometimes it crosses a threshold toward being truly urgent. So it's kind of always in the background, but other times it really is urgent; but it's always important. And then, Blue, I agree: delayed gratification is like having the time-horizon depth to take the right policy selection today so that your expected free energy is lower in the future. It's like, would we rather have $1 every day or $1,000 in one year? Depending on how badly you need the money today, and what you think you'll do with it in the meantime, and all these other things, two people, given their priors, might each minimize free energy and come to two different conclusions. And I think that's where we return to this subjective versus objective FEP take, which is that a person could rationally evaluate, given their subjective priors, and choose to take the marshmallow. Maybe that is just the weighting that they have on their priors; or maybe somebody does the opposite, and that is just where their priors and that stimulus lead, and that's the action selection that they take.
And so, I'm not sure how all these pieces tie together, but I think it speaks to the logic of action and inference. And we want a framework of action and inference that is broad enough to accommodate making bad decisions. If our framework only covers tightly constrained, well-behaved scenarios, then it doesn't help us as much with real-world scenarios. But it will be very cool to see how a lot of these daily phenomena connect to the things we're talking about here. So let's see what else we had in the pieces. Also, if Axel is listening, or wants to join for a .2 next week or a future .3, there are probably many things we could ask: what brought you to write this paper, and where do you go after this one? But pretty much the paper delivers the mic drop, the coup de grâce, with the free-energy-minimizing bacterium that has bad priors, or let's say maladaptive priors, and that minimizes free energy while making the wrong decisions. That's where that section concludes.

Then there's a jump back to the more philosophical framing. Although free energy minimization is not sufficient for life, there might be an entailment relation that goes the other way around: if you are alive, it might very well be because you did something like minimizing free energy. That entailment relation corresponds to the weak version of the entailment relation between life and free energy minimization. So remember, way back when, we were looking at the relationship between life and minimizing free energy. This example has decimated the strong claim: blown it out, make a YouTube reaction video. But the weak claim is actually supported, because making adaptive decisions seems consistent with free energy minimization, just as it would be with making adaptive Bayesian decisions; we can simply say that minimizing free energy is not sufficient for life. And then the name of the next section: free energy on a wing and a prior.
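The paper's punchline, that an agent can minimize variational free energy perfectly and still make the wrong decision under a maladaptive prior, can be sketched in a few lines. The numbers below are our own illustration, not Constant's: two hidden states (alpha and beta), a binary observation, and the standard identity that free energy is minimized when the approximate posterior equals the exact Bayesian posterior, at which point F equals the surprisal of the observation.

```python
import math

# Two hidden states s in {alpha, beta}; binary observation o.
# Variational free energy of a candidate posterior q(s):
#   F = sum_s q(s) * [ln q(s) - ln p(o|s) - ln p(s)]
# F is minimized when q equals the exact posterior p(s|o), where F = -ln p(o).
# All numbers are illustrative, not taken from the paper.

LIK = {"alpha": 0.9, "beta": 0.2}  # p(o = 1 | s): alpha usually emits o = 1

def free_energy(q_alpha, prior_alpha, o=1):
    """Variational free energy of the belief q = (q_alpha, 1 - q_alpha)."""
    F = 0.0
    for s, q in (("alpha", q_alpha), ("beta", 1.0 - q_alpha)):
        p_s = prior_alpha if s == "alpha" else 1.0 - prior_alpha
        p_o = LIK[s] if o == 1 else 1.0 - LIK[s]
        if q > 0.0:  # 0 * ln 0 contributes nothing
            F += q * (math.log(q) - math.log(p_o * p_s))
    return F

def exact_posterior(prior_alpha, o=1):
    """Exact Bayesian posterior p(s = alpha | o) under the given prior."""
    ja = (LIK["alpha"] if o == 1 else 1 - LIK["alpha"]) * prior_alpha
    jb = (LIK["beta"] if o == 1 else 1 - LIK["beta"]) * (1 - prior_alpha)
    return ja / (ja + jb)

for label, prior in (("adaptive", 0.8), ("maladaptive", 0.1)):
    q_star = exact_posterior(prior)  # the free-energy-minimizing belief
    print(f"{label} prior p(alpha)={prior}: "
          f"q*(alpha)={q_star:.3f}, minimal F={free_energy(q_star, prior):.3f}")
```

Both agents reach their free energy minimum, but the maladaptive prior leaves q*(alpha) around 0.33: having just seen o = 1 (strong evidence for alpha), that agent still bets on beta. Minimizing free energy was done flawlessly; the priors were simply bad, which is the weak-claim point.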
So, yuck yuck yuck. Anyone can comment on that. But we were reminded of two winged creatures that we talked about only a few weeks ago in 32: the winged snowflake, and the Lepidoptera to a flame, the moth to a flame over here. In the snowflake example (go to 32 to check it out in more detail), the snowflakes with wings that act adaptively are going to persist, and those are the ones that will look as if they've minimized free energy in selecting the correct action policies, whereas the ones that don't act adaptively just aren't going to persist. A snowflake that makes bad decisions is going to fail to exist. So it's an interesting section, and it uses evolutionary logic in a little bit of a different way, because the evolutionary answer for everything is usually just that's what was favored by evolution, or selection. Why does this organ have this shape? Well, that's what was selected on, and that's just why it is that way. Even though, as we've discussed with Tinbergen and others, we know there are multiple whys. But here we can see the adaptivity of priors rather than merely relative fitness (Blue, as we know, more information here soon). The adaptivity of the priors will look as if free energy has been minimized at the population level. So whether we frame natural selection as increasing the fitness of individuals or as increasing the mean fitness of the population (but not necessarily relative fitness), with a Malthusian parameter or some other kind of economic fitness parameter, we can also think about it in terms of free energy minimization. Dean?

Yeah, I keep falling off here, so I'm going to have to get past all the hidden stuff that you guys talked about when I wasn't staying on the broadcast.
But what this points to that I find interesting, going back to the previous slide you talked about, is my first... (RIP Dean; please F in chat for Dean. He'll rejoin in just a few seconds. Sorry guys, continue.) Yeah, is it a wing? My first question is: is it a wing as a wing? And then second, does it still serve a function, like, could I use it to fly? That's where the adaptive piece actually comes in for me. So again, there are two questions, necessity and sufficiency, right? And you can even see it in the representations. Is there enough information in the representation that I identify it as a wing? And then the second part is: well, I can see it can carry a moth, or I can see it can carry a snowflake; that's the function it serves. So that, to me, is the minimum of the two, and making sure that we don't blend those two things together, because it's the fact that we hold both of them as discrete, each serving a different purpose, that helps. I'm going to jump off prematurely because I'm tired of having to catch up and slow you guys down, but I'll watch the video and see you in the point two. Thank you. Peace, Dean.

All right. Okay, cool. So we've walked through much of the paper. It's not a very long paper, and it has just one core numerical example: first a Bayesian walkthrough, and then two variational Bayes treatments, one that basically recapitulates the first Bayesian example with an adaptive prior, and a second counterexample with a maladaptive prior. That is connected to evolution, and to the idea that we're only going to see adaptive priors moving forward, because if a recognition model or an action model is maladaptive, it will be seen less and less as time goes on. Blue?

So, something that Axel didn't mention in the paper, or maybe did mention and I didn't catch it: the adaptive priors always fit a certain niche, right?
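The point that adaptivity is niche-relative can be made quantitative with a toy cross-entropy comparison. This is our own sketch with invented numbers, not anything from the paper: an organism's long-run surprisal in a niche is the cross-entropy between the niche's event frequencies and the organism's fixed prior, and a prior that is excellent in one niche can be terrible in another.

```python
import math

# Toy illustration (invented numbers): whether a prior is adaptive depends
# on the niche. The expected surprisal per observation of a fixed binary
# prior in a binary environment is the cross-entropy
#   H(env, prior) = -[p * ln(prior) + (1 - p) * ln(1 - prior)]
# where p is the environment's true frequency of "hot" events; lower is fitter.

def cross_entropy(p_env, p_prior):
    """Expected surprisal per observation of `p_prior` in environment `p_env`."""
    return -(p_env * math.log(p_prior) + (1.0 - p_env) * math.log(1.0 - p_prior))

niches = {"hot spring": 0.9, "ice patch": 0.1}   # frequency of "hot" events
priors = {"specialist": 0.9, "generalist": 0.5}  # organisms' fixed beliefs

for name, prior in priors.items():
    scores = {n: cross_entropy(p, prior) for n, p in niches.items()}
    avg = sum(scores.values()) / len(scores)
    print(name, {n: round(v, 3) for n, v in scores.items()}, "avg", round(avg, 3))
```

The specialist beats the generalist in its home niche (about 0.33 versus 0.69 nats), but averaged over both niches the flat generalist prior wins, which is one way to read the cockroach intuition in Blue's question below.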
And so you're only adaptive if you're a thermophilic bacterium in a hot spring, right? You're functional in a hot spring, maybe not in an ice patch. If you take the thermophilic bacterium out of the hot environment and put it somewhere cold, then it's maladaptive; its priors are maladaptive, but only because it's not fit to the right niche environment. So there's this importance of the niche environment. And it also makes me wonder a lot, evolutionarily, conceptually: we always think about the best fit to the niche, but what if it's the best generalist? I guess that's the cockroach, right? The evolutionary generalist that can go anywhere, be anywhere, survive in any niche, survive on anything. Is that the best? How does a prior that fits no niche, or that fits all niches, play into this evolutionary paradigm? I mean, I guess the cockroach will prevail.

Good question. Maybe this is related to our discussion about context. We had the prior of 80/20, and then we observed alpha; but if the system were all of a sudden dipped into the beta ocean, then that prior would be as wrong as the maladaptive prior was in this paper. So whether or not a prior is adaptive, as you point out, is absolutely about the statistical regularities of the niche. And that's the fitness of the model, the fitness of the organism to the generative process: is the generative model going to be fit to the generative process? And generative processes change; things change out there in the world. Dave?

Yeah, if the work on memetics, going back to Dawkins and the English lady who did a lot more development of that notion.
If that can be adapted to free energy, that will give a lot of very detailed discussion of generation after generation of the co-evolution of adaptability and that which is adapting: the notion that the brain has been adapted to imitate, to remember, to pass on, and to recast concepts into many different modalities. So, to what Blue was asking: yes, agreed with that.

We can then just look at the last few pieces in the future directions, and see if we have any ideas for things to discuss in point two. Other than that, we invite everybody who's interested to join us for point two, because we've covered much in point one and point zero. It's our last journal discussion of 2021; come join and share how you got here, and what action selections you're going to take in '22. The last section is free energy minimization as a historical scientific principle. That's a very fascinating idea, because the FEP is about anticipatory systems, and yet it's a historical principle. That's where we get into the difference between prediction and postdiction. Postdictive scientific statements are concerned with what must have been the case, instead of what will be the case. A statement such as 'the minimization of free energy may be a necessary, if not sufficient, characteristic of evolutionary systems,' which was from Friston and Stephan 2007, is probably such a postdictive statement. And then Axel claims that that statement should be interpreted according to the weak claim rather than the strong claim. So it's interesting that even back then, necessity and sufficiency were being used, and arguably used properly. So then what happened? How did we get off the rails? Or how did the strong claim come to be?
And the paper, I think, makes a powerful argument for the FEP as a postdictive model. But that doesn't mean it can't be used in an anticipatory way, for anticipatory systems moving forward, doing inference on the consequences of our actions, which are also in the future. So how the past, the present, and the future are linked, that's more of a .2 topic. How are the historical sciences anticipatory sciences? And I don't know, are there 'now' sciences? That's what I'd call science in '21. What else would people like to discuss in .1?

So I would like to get into more about the living versus the non-living, and maybe talk about this idea of maintaining structural integrity. Is that a quality of life, or do things always maintain structural integrity? And does the FEP apply where structural integrity is maintained and there's not life? We talk about free energy minimization in chemistry all the time: you form the product that minimizes the free energy. And I know there's a lot of extrapolation that has to be done to get from thermodynamics to the FEP; or not extrapolation, 'retrofit' is maybe the right word. So I would like to see some of that. Where does the FEP apply in situations that are not animate? Is it computationally relevant, like when we're talking about robotics? That type of thing.

Yeah, and is animate the same as sentient? Animate, sentient; inanimate, not moving, not animated, insentient? Can we have an active inference model that doesn't have any affordances, or where it only has one affordance? So it's still doing a recognition model, but its policy selection is extremely constrained. And I don't even like to use life and non-life, because there are some things that blur that boundary. It gets into: well, what is a virus? Is a virus alive?
And a virus is very exploitative; it has DNA. There are some qualities of viruses that make me feel like they're alive, but they're technically not. So what if we just say DNA-containing? But then, is a test tube full of DNA alive? I don't know; it's hard. And so, instead of even sentient and animate: alive and not alive. I don't know.

Yeah, I think the paper's succinctness and clarity of argument hide a big question, which is: okay, so if it's alive, who told you that? If we can't go from minimizing free energy to inferring what's alive, then how did we get to here?

Right. Or what does alive even mean? I don't think it's fully defined. If we can't fully define death, then how do we fully define life? There's this big, sticky, weird area. I mean, we kind of instinctually know what is alive and what is not alive. Anyway. But are the crystals in bone alive?

Well, they're part of a bigger system that people usually don't have a problem saying is alive. But then why should what people think matter? Especially if we believe in evolution, and we think that our cognition is not necessarily oriented towards ontological realism, then why should it matter what 1%, or 99%, or even 100% of people agree upon? Or, if we zoom way out and get super meta: how does life maintain the structural integrity of life? That's an interesting one to think about, because it's dependent upon the niche environment, which includes many non-alive things, like the crystals that are inside of bone, or inside the inner ear, which help us maintain balance, right? So there are all of those things. Yeah, I think this follows the quote: from the point of view of the FEP, the life cycle is that which corresponds to the thing whose integrity is maintained over evolutionary time.
So then the ant's nest architecture is as much recapitulated as is the casing of the larva, as is the larva itself. So how will we define life? I guess we can add that as something quick to hit at the end of 34.2: what is life? And has Schrödinger's question been answered? I mean, it's been a few years since 2018, and Ram said it all. But has it been answered? What is life, 1944? Okay, I think we will conclude there. I hope, if you're listening, that you'd like to join us in the dot two, where we'll talk a little more about this paper, how we got here, and where we're going to go in '22. Anything else, Dave or Blue? Okay, thanks for the fun convo. Talk to you all later.