Hello everyone, welcome to the ActInf Lab guest stream number 18.1. We're here with Julian Kiverstein and Michael Kirchhoff, as well as Dave, Dean, and Stephen from the ActInf Lab. And we're really appreciative to the authors for joining today to discuss some of their work. We're going to have a presentation by the authors, and then following the presentation we'll have ample time for discussion. So please feel free to write any questions down in the live chat and we'll get to them. So without further ado, Julian and Michael, thanks a lot for joining, and please take it away. Thanks so much for inviting us. We've been looking forward to this particular session, and thanks for staying up late in the US, and for whoever's in Europe, for getting up early. It's only 3:30 in the afternoon here for us, so that's more manageable in Australia. I'll just share my screen, let me see, yeah, is that right? You guys can see that? So that should all be clear in a second, is that right? Looks good. Great. So the paper is called The Literalist Fallacy and the Free Energy Principle. It's a philosophical paper on the topic of scientific realism versus scientific anti-realism, or instrumentalism, in the context of the FEP, the Free Energy Principle. Just as a primer, the paper is at the revise-and-resubmit stage at this particular moment in time, so any comments we might get that could further improve the quality of the paper will of course be welcomed. So let's start. There will be a little bit of conceptual stage-setting at the start, just given the nature of this particular paper, and also because it speaks to a few other issues that go beyond a narrow focus on the Free Energy Principle. So this is the debate as it sits, or at least as we try to sketch it in the paper, between scientific realism and instrumentalism. So let's just say briefly what scientific realism is and briefly what instrumentalism is.
And then I will say a little bit more about how to think about scientific realism. Overall, the point of the paper was to try to marshal a set of arguments for why at least a version of scientific realism can be accommodated by FEP models. And it was motivated in part because it seems like in the last few years there's been a flurry of papers in one way or another arguing for scientific anti-realism with respect to the FEP, or to how to interpret parts of that particular model. So that's the kind of setting we're in. So scientific realism, that's the view that one reasonable goal of our currently best scientific theories and models is to offer literally true, or probably true, or approximately true, or probably approximately true descriptions and explanations of target systems. I have here in brackets, for example, natural phenomena, but it could also be artificial systems, in the context of artificial intelligence or artificial life, for example. Now, by contrast, instrumentalism or scientific anti-realism, that's the view that scientific theories and models are nothing but instruments for prediction of the observable behavior of target systems. So here the claim is something like this: from an instrumentalist perspective, theories slash models have explanatory power, and that gets them some utility, but they're not in any sense of the word truth-producing. So the point is not to think of instrumentalists, or the instrumentalist position, as putting forward statements on the basis of their models about how the world is, statements by which you could say something true or false about how the world is. So the main question for this particular paper we are considering is: is the FEP truth-producing in a way that's consistent with scientific realism? And of course, as I mentioned just a minute or so ago, we think it is indeed consistent with a particular way of conceiving of scientific realism.
Maybe a footnote here that's important: both scientific realism and instrumentalism come in multiple different versions in the philosophy of science. So we're trying to offer a reasonable general characterization of these particular frameworks in order to get some traction on the debate. So as I said, it's a reasonable goal of our best scientific theories and models to offer these true, or probably true, or approximately true descriptions of target systems. Now here we draw inspiration from Peter Godfrey-Smith, his book Theory and Reality, which tells us how perhaps to interpret that particular general claim. He says the following: it's a mistake to express the scientific realist position in a way that depends on the accuracy of our current scientific theories. So if we express scientific realism by asserting the real existence of entities recognized by science at this particular point in time, then if those particular theories turn out to be false, well, then the whole framework of scientific realism turns out to be false. So we shouldn't think that's what you have to do in order to advocate for a kind of scientific realism about scientific models and theories and their relation to target systems. So building on this, we can say the following. This notion here is that an actual and reasonable aim of science is to give us these accurate descriptions. Well, that's not the claim that scientific theories must be true now and make truth-producing claims about target systems now. It would be a nice thing if they do, but it is not a necessary condition for one to adopt scientific realism about scientific theories and models. There's no commitment to the success of the individual sciences. So there can be trouble in biology, but we can still hold scientific realism about, for instance, the FEP applied to the cognitive sciences.
Now importantly, and this is why I'm spending a little bit of time initially on scientific realism, the kind of version of scientific realism we're interested in is going to allow for theories and models to posit idealizations, understood here as distortions, and approximations, understood here as perhaps inexactness of scientific models. They can posit fictional entities, for instance entities that are not actual, and thus not like anything in the here and now. But the important point is that you can keep that view and still endorse scientific realism, because one aspect here is that the long-term aim of working with your scientific models is for them to become truer, so to speak, or to become more and more aligned with the target systems that we're interested in, and we'll finesse this view as we move along. Now, this issue between the scientific realist and the instrumentalist, as we are considering it in the context of the FEP, comes up when we consider what we call the map problem. And this is not a term of art that we have invented. There are other authors, both in the FEP literature and elsewhere, that make use of this; for instance, Mel Andrews in her nice paper from 2021 in Biology & Philosophy speaks of this particular problem. So this is just a quote from the paper that gets us thinking about what our focus is. We're going to ask: what is the relationship between scientific models constructed using the FEP and the realities these models purport to represent? Our focus is especially on the FEP and what, if anything, it tells us about the systems it is used to model. And this is what we call the map problem: how does the map, your theories and your models, relate to the territory, the real-world target systems of which those theories and models are maps? Now, for anybody joining on the live stream, this is just a very brief statement of what the free energy principle says.
Julian and I, and I think everyone else here, are working under the assumption that it doesn't need to take long to introduce that in this particular context. But very briefly, the FEP states that a self-organizing system that tends towards maintaining a non-equilibrium steady state with its environment must minimize free energy. Where minimizing free energy, at least under a certain Bayesian interpretation, is to maximize model evidence. That is, to maximize the likelihood of some data given a model of the system's interactions with its environment. Now, how does the map problem arise in the context of the FEP? Well, it might arise if you're, say, a researcher working in artificial intelligence, and the aim is to build robots that engage in active inference, for example navigating or making decisions, et cetera. So here the question becomes: what's the relationship between our FEP model, understood as an active inference model, and how one engineers, if you like, artificial agents to perform in accordance with what those particular models specify in terms of, say, decision-making, belief updating, and selecting action policies for adaptive behavior? A different way the map problem might arise would be from the computational neuroscientific perspective, where one might be interested in tracking transitions across beliefs, perhaps in deep generative models. And the map problem then is: how does that theoretical specification of free energy minimization under active inference map onto what the brain actually does in order to, say, shift from belief to belief, or to make decisions about what to do in the world? So that's just a few examples of where this particular issue becomes prevalent. Now, just to situate this discussion a little bit, because there's a few twists and turns in the paper, and it's a large and somewhat complex conceptual landscape we're working in, or at least it seems to me.
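To make the link between "minimizing free energy" and "maximizing model evidence" concrete, here is the standard decomposition of variational free energy \(F\) for an approximate posterior \(q(s)\) over hidden states \(s\) and an observation \(o\); this is textbook material added for the reader, not a slide from the talk:

```latex
F(q, o)
  = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge\, 0} \;-\; \ln p(o)
  \;\ge\; -\ln p(o)
```

Since the KL divergence is non-negative, \(F\) is an upper bound on surprisal \(-\ln p(o)\), which is the negative log model evidence; so driving \(F\) down tightens the bound and thereby maximizes (a lower bound on) model evidence.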
We're just going to try to situate this a little bit more so you get a feel for it. We got a nice reviewer comment that introduces the issue here. The comment is something like this: if we take the free energy principle, like many other theories across different sciences, then we can think about it as a kind of model-based approach to science. And if you look at the little image over here, which is a modification of Giere's description of how this works, modified by Godfrey-Smith in a 2009 paper, we initially start by developing a description of a model and saying something about what the model system in question is. And the comment continues: we can study models for their own sake, and as Andrews, who I referred to earlier, has pointed out, any model that can be treated realistically can also be treated instrumentally. Scientists can disagree about which bits of a model are meant to be real. Now, this is important in understanding how the map problem arises. So Andrews in the 2021 paper gives us a whole list of different ways one can see the kind of mathematical structure of the FEP being interpreted as a scientific model. The work of Michael Weisberg is influential here, in the context of philosophy of science and philosophy of biology. Drawing on him, she comes up with minimally three different models that we're interested in: the FEP as a sort of targetless model, the FEP as a generalized model, and the FEP as a target-directed model. Now, you can work purely on the mathematics of the free energy principle, if you like, for the sake of just understanding the model, for getting your model description right of your model system, as per that image.
Now here, the point is that that in and of itself doesn't give rise to anything like the map problem, because you can come at that work from an instrumentalist perspective, or you can come at it from a scientific realist perspective, but doing that kind of work doesn't really allow you to adjudicate between the two positions, or so it seems to us. So that would be an instance of a targetless model: you're not asking questions that relate the model to a target system. You can also treat the FEP as a generalized model. You can perhaps model theories of natural selection from the point of view of the FEP, but without saying anything about the involvement of any particular kind of species. That points to a different discussion as well, which we consider in the paper but won't speak to unless it comes up in the Q&A, about the role of the Markov blanket notation. As we put it in the paper, that gives you a kind of generalized approach for delineating boundaries, but it doesn't give you an approach for delineating the boundaries of any particular individual. In that sense, it's a generalist take on FEP models on the basis of the Markov blanket formalism. Now, the map problem for us gets interesting when you pursue a target-directed model, when you try to implement the mathematical structures posited by the FEP in terms of a process theory, such as active inference. So you're moving from a mathematically general characterization of adaptive behavior to specifying what a particular individual, or a set of individuals, must do in order to engage adaptively with the world. So you're committing yourself to getting the right-hand side of this little image right, if you like, saying something about the ways in which your model now resembles the systems that you're actually aiming to explain or understand something about.
Okay, so once we have that, we should give the argument for the instrumentalists. And once we've done that, we'll see the kind of problem that we raise for the instrumentalists, which we think is a kind of fallacy of inference from certain considerations to certain claims about the FEP. So premise one here is that active inference models under the FEP introduce distortions in representing biological and cognitive systems via idealizations. One way to think about that might be the introduction of variational free energy as a kind of distortion into the framework, in order to solve a computational problem about how surprisal is minimized. As well as inexactness via approximation. One example might be something like this: the idea that free energy minimization can be construed mathematically as akin to a kind of Bayesian inference would be an inexactness introduced into the formalism, primarily because, so it seems, nobody thinks that the brain quite literally encodes something like Bayes' theorem. Okay, premise two then is that scientific realism requires that models provide descriptions of target systems that are literally true. So that sits in tension, if you like, with premise one, were one to hold scientific realism. Premise three then is that active inference models are not true and accurate representations of biological and cognitive systems, precisely because of the issues raised in premise one. Therefore, active inference models are false. They are at best useful fictions. And just to give you the references for where we are drawing work from in terms of defining the terms idealization and approximation: we're using a nice paper by McMullin on Galilean idealization to think about idealization, and we are using a canonical paper by Norton to define approximation.
These definitions are somewhat contentious, primarily because, as one might expect, there are all kinds of other takes on them, but you have to come down on one side, I suppose. So once we have the instrumentalist claim up and running, this is the kind of fallacy that we're arguing a set of papers defending instrumentalism in the context of the FEP is making. And we coined it the literalist fallacy. The crux is premise two here, the claim that scientific realism requires that models provide descriptions of target systems that are literally true. Hence the name, the literalist fallacy, because we're going to show that's not a necessary condition upon which to base one's defense of scientific realism. So what we say is that the literalist fallacy is the mistake of accepting or affirming instrumentalism about the FEP from the fact that active inference models do not literally map onto biological and cognitive systems. A slightly different formulation that we have here is that, in this context of the FEP, some find an apparent discrepancy between the map and the territory a compelling reason to defend instrumentalism about the FEP. And we take that to be problematic given this particular kind of fallacy. We identify this particular fallacy in those defending instrumentalism about the FEP, and we call it the literalist fallacy. Here's the second variation of it, if you like: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world target systems. So one question that might be interesting to discuss in the Q&A is whether it's indeed a problem for an instrumentalist to infer the truth of their own position on the basis of the claim that the properties of FEP models do not literally map onto real-world target systems.
So the pushback here on the second formulation is something like this: instrumentalists are not asserting the truth of their own position; that would be self-contradictory. Now, there's a set of conceptual issues here, but I just wanted to flag the issue and perhaps come back to it in the Q&A. It would take us too long to get into, and they are somewhat philosophical and conceptual. Here's some evidence, if you like: a set of papers across different topics that we argue are all committing this kind of fallacy. Here's a 2020 paper by van Es and Hipólito, who state that it remains disputed whether the FEP's statistical models are scientific tools to describe non-equilibrium steady-state systems, which we call the instrumentalist reading, or are literally implemented and utilized by those systems, the so-called realist reading. And what they're going to conclude is that since FEP models are not true and accurate descriptions of these target systems, instrumentalism turns out to be the only option. So for us, that's the literalist fallacy, because it assumes that the realist position is that your theoretical models literally have to be implemented in target systems for that kind of framework in the philosophy of science to be up and running. Then there's Bruineberg and colleagues' 2021 paper, presumably updated to 2022 at some point this year, the target article in BBS focused on the ontological status of the Markov blanket formalism in the FEP. Now, they argue that much of the literature on the FEP implies that organisms literally instantiate the mathematical structure of Markov blankets. They argue that such a use of the formalism conflates a model with its target system.
Now, if you think that in order to use the Markov blanket formalism productively, even from the scientific realist point of view, the mathematical formulation of that particular formalism has to be implemented in the actual target system, then you're committing the literalist fallacy, because even the scientific realist is going to push back against that. Colombo and Palacios, in a 2021 article in Biology & Philosophy, target the issue of ergodicity in the FEP, namely whether one can realistically model biological systems as having an ergodic density. Now, they deny that ergodicity captures properties of biological systems, and conclude that the FEP is therefore biologically implausible. Our line on that is that it's akin to, or a species of, the literalist fallacy, precisely because it assumes that the theorists actually think it's part and parcel of target systems that they have an ergodic density. Our take on it is: no, that particular approach to modeling state transitions under the FEP is precisely that, an approximation, a scientific convenience, in order to say something about how systems occupy states over time, but you shouldn't conflate that particular perspective with saying that's literally the case for target systems. I'll skip the last quote here, just for time considerations; we can come back to it if there are any issues. Okay, so let's turn to some examples, some cases of this literalist fallacy, and how to think about them in terms of scientific realism. In the paper, we consider a bunch of these sorts of cases or examples. We consider the notion of variational free energy. We consider the Markov blanket formalism. We consider the role of Bayesian inference in these sorts of formulations. And then finally, we turn to consider the topic of ergodicity. Now, for this particular presentation, we're only gonna focus on the first, variational free energy, and then the last, ergodicity.
And perhaps we can come back to the other cases in the Q&A. It strikes us that there are two different ways of interpreting the variational free energy notation, primed here on the consideration that it seems as if organisms are minimizing variational free energy in the long run. So the as-if formulation here, which seems ever pervasive now in the literature, is not, I think, a one-way street. There's a couple of options here that we now want to try to unpack, especially because we're doing the presentation, and especially because we've been prompted to do so, I think, by a set of review comments. But let's get into it. This is the first option, if you like, that's gonna deliver a set of reasons for thinking that instrumentalism is the case for the FEP, and we're gonna push back on those particular reasons. So variational free energy is an information-theoretic construction used to compute how organisms are able to resist decay, by positing variational free energy as a bound on surprisal. Now, under the FEP, for an organism to exist, it must keep its states within certain bounds. Okay, so that's the generic opener. I'll give you some examples now of the as-if formulation, prompted by this quote from Friston: the FEP is a mathematical formulation of how adaptive systems resist a tendency to decay. Ramstead and colleagues, in their 2020 paper, say this means that internal and active states will look as if they're trying to minimize the same quantity, namely the surprisal of states that constitute the thing, a particle or some creature. So here we have the as-if statement: they appear to perform this kind of minimization of variational free energy. Similarly, in a 2019 paper: any system that avoids surprising exchanges with the world will look as if it's predicting, tracking and minimizing a quantity called variational free energy on average and over time.
Now, I've included this because we're gonna speak to this particular paper as a second option for how to think of the as-if formulation, and how perhaps to square as-if formulations with a kind of scientific realism. Here's van Es 2020: the system does not actually predict and/or track or minimize the quantity called variational free energy, but merely looks as if it does; the probabilistic model merely tracks certain real statistical relations in the organism-environment system. So this first reading, the as-if formulation of variational free energy, speaks against scientific realism about active inference models under the FEP. And it does so precisely because we have to treat systems only as if, but not actually, minimizing this particular quantity. Yeah, let me just jump in on this a minute. So the Ramstead quotes there are saying that what it looks as if the system is minimizing is surprisal, whereas van Es shifts to talking about variational free energy. Good point. So yeah. I think we could accept that it looks as if the system is minimizing surprisal, because variational free energy is this bound on surprisal. Yes. So the Ramstead quotes are actually consistent with realism. Yes. They could be read as allowing that the system really does minimize variational free energy, and it looks as if, in doing so, it's minimizing surprisal. So perhaps we can come back to that in the discussion. Yeah. I don't think that Ramstead and colleagues are necessarily best read as endorsing instrumentalism. I think it's quite consistent with realism, actually. Yeah. And I think we'll see that come out reasonably explicitly when we turn to the second option; just step in if I don't actually manage to pull that out explicitly. Very good. Yeah. So basically what we can now assume is that variational free energy, on this first as-if reading, is a mathematical fiction. At least that's what we're told.
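The point raised in this exchange, that variational free energy bounds surprisal and that the bound becomes tight at the exact posterior, can be checked numerically. Here is a minimal sketch with a made-up two-state generative model (our own illustration, not from the paper):

```python
import numpy as np

# Toy discrete generative model (all numbers invented for illustration):
# two hidden states s, and one observed outcome o with likelihood p(o|s).
prior = np.array([0.7, 0.3])        # p(s)
likelihood = np.array([0.9, 0.2])   # p(o | s) for the o actually observed

joint = likelihood * prior          # p(o, s) at the observed o
evidence = joint.sum()              # model evidence p(o)
surprisal = -np.log(evidence)       # what the system "looks as if" it minimizes

def vfe(q):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)]."""
    return float(np.sum(q * (np.log(q) - np.log(joint))))

posterior = joint / evidence        # exact Bayesian posterior p(s | o)

# Any suboptimal recognition density q overestimates surprisal...
assert vfe(np.array([0.5, 0.5])) > surprisal
# ...and the exact posterior makes the bound tight: F equals surprisal.
assert np.isclose(vfe(posterior), surprisal)
```

So a system can really minimize variational free energy while only looking as if it minimizes surprisal, which is the reading on which the Ramstead quotes stay consistent with realism.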
These sorts of constructs are useful fictions, but they do not actually exist. So how to think about that then? Because we tend to agree with that particular point. So you can think of variational free energy, a bit like a Galilean frictionless plane, for example, as a concrete entity, in that it has some mathematical concreteness about it. But it's not one that actually exists in target systems. Here the math is not the territory. So even though we can think of frictionless planes as being concrete in one sort of sense, their mathematical concreteness, one wouldn't want to infer that Julian is a frictionless plane, for example; that would be false, and it would ultimately end up in a bad set of inferences. Now, here again we draw on the work of Godfrey-Smith in order to think more generally about the use of fictional entities in the context of science, and just to make clear, he doesn't address anything to do with the FEP. Here's what he says, generally speaking: fictions do not exist, but at least many of them might have existed. And if they had, they would have been concrete physical things, located in space and time and engaging in causal relations. So there's a kind of counterfactual test you can run on fictional entities in science, from the point of view given here. Now, you can say that fictional entities such as variational free energy, which might lack actual existence in target systems, can nevertheless be useful for understanding target systems. So you might say that active inference models provide insight into the workings of biological systems, for instance by helping to understand what a system has to do in order to maintain homeostasis.
You can also say something like the following, namely that under a kind of Bayesian interpretation of how one minimizes this particular quantity, you can draw some similarities to how surprisal is minimized in actual target systems by understanding something about the minimization of variational free energy. But it's a kind of similarity relation one can draw between the positing of fictional entities in theoretical models and what actual target systems are doing. Just to give you a sense of this, motivated by active inference models under the FEP, Parr and colleagues say the following about the significance of their work on active inference. I'll just give you the quote, and we can see this coming into being. In brief, we used magnetoencephalography in combination with eye tracking to assess the neural correlates of a form of short-term memory during a dot cancellation task, using dynamic causal modeling to quantify changes in effective connectivity. We found evidence that the coupling between the dorsal and ventral attention networks changed during the saccadic interrogation of a simple visual scene. Now, they say here, and I've highlighted this: intuitively, this is consistent with the idea that these neuronal connections may encode beliefs about what I would see if I were to look there, and that this mapping is optimized as new data are obtained with each fixation. So the idea here is that fictional entities such as variational free energy allow for the further understanding of target systems, by making precise how target systems may minimize free energy, whether that be variational free energy or expected free energy, in the sense of minimizing their surprisal, and not directly in the sense of minimizing these two notations. So the summary here that we give in the article is that the FEP comprises what is known as an indirect representation of a target system.
So it says something indirectly about the operations of target systems; even if it involves the use of abstract entities, it's an idealized approach to representing complex or unknown processes in the world. It's standard practice to view scientific models as indirect representations of real-world target systems. Now, furthermore, most theoretical models are composed of a set of mixed claims. This is rather important for us in order to preserve a notion of scientific realism about the FEP. Basically, and we use the work of Spielart here, this means that the model will posit, if true, the presence of what he calls both OK-entities, such as electrons and their ilk, and supposedly non-OK-entities, such as numbers or theoretical ideals. Now, we think the FEP is something like such a model, a model composed of mixed claims. It puts forward all kinds of OK-entities, neurons, reflexes, et cetera, and perhaps what you might think of as non-OK-entities, such as the variational free energy notation, where non-OK-entities have to be understood in terms of not being literally true of their target systems. So this gets us to the second formulation, which Julian alluded to earlier, of the as-if formulation. Now, we take as an example here Dennett's 1996 book, Kinds of Minds, and thanks to an anonymous reviewer for pointing us to this idea. Dennett's so-called Popperian creatures are said to be able to select from possible action policies, although it's not put exactly in those terms, in order to prune away inferior options and avoid fatal consequences. So in this sense, these creatures have some kind of inner environment where they can select amongst possible actions. Now, here's the Tale of Two Densities paper that I co-authored with Maxwell Ramstead and Karl Friston.
I'm not gonna read you the quote here, but the basic idea is going to be something like this: one can prune away unwanted action policies by testing what kind of expected sensory observations one might encounter were one to elicit certain kinds of actions. But whether or not one prunes away action policies, the crucial part here is that that kind of belief transitioning over possible sensory outcomes, given certain kinds of actions, can itself be characterized in a fictional way, in a form of fictionalism, as Julian mentioned earlier, that is entirely consistent, as far as we can tell, with scientific realism. So for the Popperian active inference creature, one can think of the trajectories pursued over possible action policies in terms of fictions, as I just mentioned. And now we come back to our formulations of fictional entities in science from a couple of slides ago. You might say that action policies are concrete belief states, but they're not actual. They do not actually exist, given their counterfactual, hypothetical characteristics. Yet, were they to exist, they would be actual, with some causal powers; there would be implications of inferring one over the other. Arguably, some of the hypotheses come to fruition and therefore exist. So what we have here is an as-if formulation of FEP explanations that has a fictional characteristic to it: part of the entities that compose our FEP explanations are fictional entities, but they're entirely consistent with scientific realism about FEP explanations, given the considerations about the status of fictional entities in science in general. Okay, so let's see, we've been going for 35 minutes. I'll start wrapping this up so we can get to the Q&A. Second case: one slide, really quickly, about ergodicity, just to get us thinking about this at least, and we can come back to it.
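To give a computational flavour of "pruning away" action policies by testing expected sensory outcomes, here is a minimal sketch; the policy names, outcome distributions, and the risk-only scoring rule are our own illustrative assumptions, not the construction in the Tale of Two Densities paper:

```python
import numpy as np

def risk(predicted, preferred, eps=1e-12):
    """KL divergence between predicted and preferred outcome distributions
    (the 'risk' part of expected free energy; ambiguity is omitted here)."""
    return float(np.sum(predicted * (np.log(predicted + eps) - np.log(preferred + eps))))

# Outcomes the creature expects to find itself in (its "preferences").
preferred = np.array([0.8, 0.1, 0.1])

# Hypothetical policies: each is a counterfactual belief about outcomes,
# never executed, only entertained -- the "fictional" trajectories.
policies = {
    "reach-left":  np.array([0.7, 0.2, 0.1]),
    "reach-right": np.array([0.1, 0.2, 0.7]),
    "wait":        np.array([0.3, 0.4, 0.3]),
}

G = {name: risk(p, preferred) for name, p in policies.items()}

# Policy selection: softmax over negative expected free energy, so that
# high-G (dispreferred) policies are effectively pruned away.
scores = np.array([-g for g in G.values()])
probs = np.exp(scores) / np.exp(scores).sum()

best = min(G, key=G.get)
print(best)  # -> reach-left (the lowest expected free energy survives)
```

The point of the sketch is just that the pruned policies do causal work in selection while remaining hypothetical: they are never enacted, only scored.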
Now, Friston defines ergodicity in the following way: one can interpret the average amount of time a state is occupied as the probability of a system being in that state when it's observed at random. Okay. Now, Colombo and Palacios, in this 2021 paper in Biology & Philosophy, say that the state spaces of biological systems are structurally unstable; they're always changing and unpredictable, making them non-ergodic. So there isn't a set of fixed states, if you like, that the system keeps returning to, if such a system is a biological system, on this particular account by Colombo and Palacios. Now, they say that the FEP trades off realism for generality and mathematical precision. And they do so in a sense to put pressure on this idea that the FEP can be thought of in terms that are consistent with scientific realism. Now, it's true, if you take the approach to the FEP as a generalized strategy for modeling, that the FEP trades off realism for generality while at the same time maximizing mathematical precision. But here's the twist, and this would be the first way of responding to this particular claim. If you turn the FEP, the mathematical structure, into a target-directed model, what we get is that we get to work with specific systems. So now you get to increase the level of realism in the relationship, if you like, between your model and your target system. A second way of thinking about ergodicity is this: yeah, okay, it's true that it doesn't map on, that it doesn't have biological plausibility. But you can think of ergodicity as a convenient modeling assumption. The FEP doesn't claim that biological systems are literally ergodic. There's even a loosening in the literature now towards saying locally ergodic, and there's even a further loosening, shifting talk of ergodicity entirely to talk of NESS densities, non-equilibrium steady-state densities, and so on, where you can actually still retain this kind of metastability, this level of change and this degree of unpredictability.
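The sense in which ergodicity works as a convenient modeling assumption, time spent in a state standing in for the probability of observing that state, can be illustrated with a toy two-state Markov chain; the transition matrix here is an arbitrary illustrative choice, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state ergodic Markov chain (numbers invented for illustration).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary density: the left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                      # here pi = [2/3, 1/3]

# Simulate the chain and record how long each state is occupied.
n_steps = 200_000
state = 0
counts = np.zeros(2)
for _ in range(n_steps):
    counts[state] += 1
    state = rng.choice(2, p=P[state])
occupancy = counts / n_steps

# Ergodicity: the time average converges to the stationary density.
assert np.allclose(occupancy, pi, atol=0.01)
```

Colombo and Palacios's worry is that real biological state spaces do not stay fixed the way P does here; the realist reply sketched in the talk is that P is a modeling convenience, not a literal claim about the organism.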
So to conclude: arguments for instrumentalism about the FEP, or at least the ones that we are considering (I'm sure there's one or two we have not considered), trade on the literalist fallacy. They accept or affirm instrumentalism on the grounds that the properties of active inference models under the FEP do not literally map onto biological systems. If we're correct, then there is a version of scientific realism that is a live and tenable option with respect to the FEP. Thank you. Awesome, thank you. Okay, great times. Perhaps for the first word we'll go to Julian. Just feel free to maybe say hi, give your first take, and then we'll go to the ActInf Lab participants. Yeah, so thanks for having us. It's gonna be fun to talk about this paper. I think we should just jump straight into discussion rather than me adding anything onto what Michael already said. He gave an excellent summary of the paper. So let's see what you guys make of it. Okay, so age or beauty? Where do we begin? Dave or Dean or Stephen? Who would like to go first? All right, Stephen first, and then either Dave or Dean. Well, first let me say thanks for the book that you published a few years ago about the third wave. It was actually the key book that kept me going with active inference after I first heard about it, because it was a really, really cool treatment of some of the boundary questions, because I'd been looking at enactive, embodied approaches. So that's the first piece. And in there you talk about the idea of Markov blankets, not just that, but the idea of things being dynamic and changing. And I'm curious, when you're thinking about instrumentalism, whether there's in some ways a danger that with instrumentalism everything gets referred back to the maths, and then the maths isn't tethered to reality, because it's instrumental.
And it can be kind of a challenge then, as we're trying to bring in the 4Es, to think about how this starts to map onto the way that organisms are trying to get a grip on the world in a bit more of a literal sense, outside of the model. So yeah, I'd be interested in your thoughts about this question of the ability for the modeling to be used as almost a way to get at phenomena that might be happening during this minimization process. Michael, do you want to start? I can start. Well, I think one thing that's important there, just stepping back, and thanks for the nice words on the book, that's always a pleasure to hear. For the instrumentalist, I suppose, as I said, making a truth claim is off the table, partly because the definition of instrumentalism is precisely not to put forward any kinds of statements about the world that can be true or false. Now, they can say that you have observations. You know, you take in observations about how systems work, and that's fine. One line that's been pushed a bit in the literature now is to say, well, you can then be agnostic about whether those kinds of observations truthfully reflect, or oppositely, falsely reflect, anything about the actual world. Now, my sense is that, and this is just my sense, scientists, given different tasks and activities, questions and focus points, will shift a bit between these. And that's in a sense why I had this notion, or why we have this notion, of a targetless model. I think precisely because in that domain we don't need to come down on one or the other side; we query the model for the sake of gaining more understanding. You know, a nice example: I was talking to an ecologist on the weekend about these massive climate change models, right, with this hyper-practical goal, and yet they're so wonderfully complex that one spends one's career figuring out how the model works.
And that's okay in a sense, but, shifting now and trying to answer your question: I think some of the work that Julian and I have been doing suggests that shifting boundaries, delineated by appeal to Markov blankets, tell us something concretely about such systems. So we can start thinking a bit about the differences between, say, the spider and the spider web, relative to whether the Australian orb weaver has just eaten its web or whether it's producing its web. So in that sense you might think that, you know, its sensory boundary is shrinking and expanding depending on, you know, what time of day it is. So yeah, I might add to that. I'd like to think that it's the ambition of even theoretical neurobiologists, depending on the domain or issue they're targeting, to say something about mechanisms. If you're working in the context of psychiatry on the basis of FEP models, you want to understand something about actual psychiatric disorders that we as human beings may go through. And then you want to understand the mechanisms of how that works, so you can intervene in the right kind of way. If you want to do that, presumably you have to say something about how those mechanisms work, so you can quite literally, as I said, manipulate them, if that's possible, in one way or another. Anyway, that's my initial take on that. I want to go in a slightly different direction: how does the free energy principle relate to these ideas about embodied, enactive, extended cognition? Well, Thomas van Es and Inês Hipólito both argue for an instrumentalist reading of the free energy principle in order to show how it could be consistent with a non-representational understanding of cognition. So they're keen on avoiding the idea that the brain literally represents the world by encoding a generative model.
And the way that they avoid that interpretation of the free energy principle is by endorsing the claim that the free energy principle is just a useful fiction for modeling the brain. And we think there's a couple of different issues that are getting mixed up here. So one is representational readings of the free energy principle. Carl Friston often says that the brain doesn't have a model but is a model, that the organism as a whole is a model of its environment. And here he's picking up on ideas from cybernetics, like the good regulator theorem and the law of requisite variety from Ashby. So that idea of the organism being a model of its environment, we think, can be read in a realist way, actually, but also as suggesting a non-representationalism. So this came up a little bit when Michael was talking about Popperian creatures, where they're able, because of the deep hierarchical nature of the generative model, to try out possibilities before they act on them. And that's sometimes taken to imply, in the philosophical literature, that if the organism is able to do that kind of counterfactual inference, it must have some kind of model of these possible states of affairs that it's carrying out inference over. So that gets you back into a kind of representational reading of the free energy principle again. But we are offering an interpretation of the free energy principle where we take very seriously this idea that the organism is a model, but doesn't have a model. And that opens up the space for a realist interpretation of the free energy principle that is also non-representationalist, or enactive. And that's, I think, the space that Thomas van Es, for instance, thinks is not available to proponents of the free energy principle. But rather than go into why we think he's wrong, let me just stop there, because I've said quite a lot already. Thank you for the responses. So, Dean? Well, I think I'll ask both gentlemen for your opinion on this.
I think if you set your argument up as a literal one, you're gonna find yourself in trouble, because I think this thing kind of gets resolved, there's no debate, if it's metaphorical, because you have to have a minimum of two. I mean, with these Markov blankets you've got internal and external. You need a minimum of two just so that you have discrimination, just so that you can tell variation in the world. And that's what metaphors allow for. So I'm just wondering why we're not arguing for both: a synthesis of the instrumentalist way to measure and filter what's in and what's out, because we do measure and filter for that, and also the relational piece. I still don't like to bump into edges. So I agree with you: as an organism, I am the model, until I fall off the cliff, right? So if we looked at this as a metaphor instead of literally, wouldn't that at least smooth out some of the differences, so that instead of "versus" it's "and", this equals that in a metaphorical sense? What are your thoughts on that? I don't think you can have both instrumentalism and realism, although Maxwell Ramstead has said things along those lines, that he wants to try to... Well, then I'm with Max, best of both worlds, yeah. But the difficulty is that these positions actually come into conflict, contradict each other, because one says that the free energy principle does not map onto anything in the world. So we can't make any claims based on the free energy principle about how cognitive, biological systems are organized. It's literally just a modeling tool. So it can help us to make predictions about the behavior that we can observe of biological, cognitive systems. But the free energy principle doesn't correspond to, doesn't map onto, anything in the behavior or the organization of biological and cognitive systems. And that to me seems in conflict with the idea that the organism is a generative model.
If you say the organism is a generative model, then that seems to me to commit you to the idea that the free energy principle is describing something about how biological, cognitive systems are organized. And therefore it ought to be construed in realist terms, not in instrumentalist terms. Now, you said something else about metaphor there. Yeah. That science introduces all kinds of metaphors. And I think that introduces a distinct set of issues, really, from the instrumentalism versus realism debate. Because we can still ask: well, how do we interpret those metaphors? So no doubt it's true that science engages in metaphorical talk all the time. But once you get into the question of how to interpret the metaphors, then the realism-instrumentalism issue comes back again. So can I just do one quick follow-up? If I focus on the instrumentalism or the realism, and not on the link between the two arrow ends, the metaphor, I absolutely appreciate what you're saying, Julian. If I pick one camp or the other, then I'm not focused on the metaphor. But what if I'm just exclusively looking at that link, as opposed to the opposing camps? I'm very comfortable being a contradiction. I'm a human being, right? So I'm not really sure why I have to pick a side. You haven't convinced me about that yet. Well, it depends a bit on how you flesh out the term metaphor here, right? If we went back to the little image I had from Giere, you had your model description, model system and some target system. Where, for instance, do your possibly metaphorical entities sit? Is that what you have in mind there by metaphor? I'm talking about the process of being able to link two discrete or discriminable ideas or objects, because I wanna be able to hold up the relationship and the measure of that relationship. I don't wanna say that I have to discard one in order to take up the other.
And maybe it's useful to bring in the Markov blanket here as a metaphor for the boundary. So the blanket seems like a good metaphor for capturing the boundary of a particle or an organism. Right. And so then we can ask: what is the relation between that Markov blanket concept, where the Markov blankets are nested, so you can find them at multiple scales of organization, multiple temporal and spatial scales, and the self-organizing systems that we're using them to model? So I think that's where you get to when you try to unpack the metaphor. The nested Markov blankets are a metaphor, and how do we then interpret that? And for Michael and me, the way to interpret that metaphor is to think of self-organizing systems as literally having this, well, not literally, that's a tricky use of words, isn't it? But to think that there is a nested organization within self-organizing systems that can be modeled and described indirectly using the mathematics of the free energy principle, or using the formalism of Markov blankets, or using Bayesian causal nets. All right. So, on that note, you use these particular constructs, I think, on the basis of what Julian just said, to actually guide your research, in the sense of figuring out something about how your target systems are organized. One way of casting this in the jargon we've been using: you introduce a distortion, in the sense that you needn't take literally, for instance, the mathematics underlying the particular Markov blanket formalism, but you use it in order to guide and direct your search. Now that's important, because that's the key to our way of thinking about scientific realism: we can have all these distortions and all these points of inexactness in our theoretical models. However, the ambition is to say something about the target system.
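For readers less familiar with the formalism being discussed here: what a Markov blanket captures, in Pearl's statistical sense, is that internal states are conditionally independent of external states given the blanket. A toy numerical sketch can make that concrete; all coefficients below are made up for illustration and carry no biological meaning.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy linear-Gaussian chain: external -> blanket -> internal.
eta = rng.normal(size=n)                  # external state
b = 0.9 * eta + 0.3 * rng.normal(size=n)  # blanket (sensory/active) state
mu = 0.8 * b + 0.3 * rng.normal(size=n)   # internal state

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(np.corrcoef(mu, eta)[0, 1])  # strong marginal correlation
print(partial_corr(mu, eta, b))    # near zero: screened off by the blanket
```

The realist question discussed above is then whether anything in a living system answers to this screening-off structure, and at which spatiotemporal scales, rather than whether the formalism itself is useful.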
That's where the conversation between the instrumentalist, or at least a version of that position, and a version of the scientific realist framework comes apart. Now, I should say there's a set of good reasons for thinking that the two positions in philosophy of science are mutually exclusive; they can't be true at the same time. One is the positing of a set of metaphysical claims on behalf of the scientific realist. It's typically the case that one is committed to the following: some kind of mind-independent reality. Now, that can't be the commitment of the instrumentalist; by the canons of that position, you're not producing truths about how the world is. The second metaphysical commitment is that we can somehow get a grip on unobservable causes of our observations. But that's not an option for the instrumentalist. They can say something about their observations, if you're an empiricist, for example, but you can't start making inferences from those to the unobservable causes of those observations. Metaphysically, there's something at stake when you have a discussion that considers these kinds of broader considerations in philosophy of science. Those are metaphysical commitments, and of course we can argue about whether we should accept them or not. So, to make it more concrete, again going back to the nested Markov blankets: the reason that we describe self-organizing systems in those terms is because of how the faster processes are enslaved by slower processes. So you see this circular causality that's described in complex systems theory. So there's a dynamics that you can observe in living, self-organizing systems that can be inexactly and approximately described using the formalism of Markov blankets. And it's because of that similarity or resemblance between the formal apparatus that we're using, when we employ Bayes nets and the modeling formalism of Markov blankets.
It's because there's that kind of similarity or resemblance between the models that we're making as scientists and the dynamics of self-organizing systems that we think it makes sense to interpret that metaphor in realist terms. Does that make sense? Well, yeah. Again, I wouldn't argue that if you place yourself in one camp, you reject what the other side is arguing. What I'm thinking of is that the statistical probability, as a distribution, allows for perforations, meaning the metaphor, I wouldn't say it's a bridge, but it's kind of an escape hatch for boundary-crossing. And quite literally, if I use a metaphor, I get out of the constraint of not having that other person share the idea, because we can arrive at some convergence around something that we both understand from the past, in order to be able to project into that new set of congruencies. So I don't want to take up all of this conversation. I just wanted to see if there's a good way to resolve the image. I think that the issue here is really something to do with what philosophers do, where we like to disagree and argue with each other, right? Right. And that works for some people, depending on their personality, and for other people it doesn't work so well; they prefer to try and find agreement, or to find a way of communicating with each other. For us as philosophers, I think, disagreeing and arguing is not a failure to communicate. It's just what we do as philosophers. What that made me think about is that very few people are in the position of putting their name and a year and a version on an idea, and being like the avatar of the idea, like Godfrey-Smith 2009 and Longino 2013: here's what the idea is. And Dean, or anyone who might be interested in applying these ideas, might have less of a qualm with shifting their regime of attention, even within a sentence, from one usage to another.
And it's in the fine art and science of philosophy where some of these minds become fleshed out, and we can really see how deep the logic goes. And that's kind of like higher overtones in the symphony of thought. And then, you know, Dean's just fiddling away on the subway, trying to get whatever currency they get up there in Canada. Let me go to Dave with a question, because you haven't gone yet, and then Stephen afterwards. Okay. Coming at this whole question, particularly the question of metaphor: my background is that I'm coming at this from General Semantics, through the cybernetic theory of learning and teaching, through George Lakoff's grounded metaphor theory, and through Mark Solms's conscious id. So when you say metaphor dot, dot, dot, I say, well, who's actually driving the analysis? Who is it that has to be appealed to? And I would say, look, the interpretation is done by the metaphoric structure. And why is that a good grounding? Because the fundamental grounding of every living control system, the speed of updating, the question of how existentially imperative it is that you not violate your priors, down through very delicate questions of mathematical formalisms, it's all grounded in the gradient of avoidance or approach and the gradient of pain versus pleasure. So it's all grounded. It's got a basis. Your entire structure of inference, of induction, your whole induction is already grounded. So don't worry about what the metaphor means in terms of theories. Ask: what do the theories mean in terms of metaphor? Don't be shy about telling me I've gone off the deep end. Every last person I respect has been accused of going off the deep end. I might let Julian take first steps here so I can sort of get myself into my own head. All right. So, I like how you describe metaphors and the place that they have within the free energy principle, where they're grounded in these perception, action and emotion processes.
So for the FEP, what everything is organized around is these imperatives to resist disintegration. And so it makes sense to me that the metaphors we use in communicating with each other and in thinking are in the end gonna inherit that, and be grounded in life processes. That's something that we talk about in terms of the life-mind continuity thesis, coming out of enactive ideas in philosophy of biology. So there's an interesting connection there between the embodied cognition ideas that Lakoff and Johnson are using and the 4E ideas. So I think that's what you were just describing. But that comes at what we're concerned with in our paper from a slightly different direction. What we're looking at is, say, a MATLAB model that somebody might make in simulating active vision, like Thomas Parr has done, or Micah Allen when he's looking at how to model the interactions between heart rate and fear perception. He's constructing a simulation, an in silico model, using the mathematics of the free energy principle to describe those kinds of interactions between interoceptive processes, like the monitoring of your heart rate, and exteroceptive perceptions, say of fearful images, of faces making fearful expressions. So that's a kind of in silico model that's been made there. And what can we learn from those sorts of simulations about, say, interactions between heart rate and visual perception? And that's where our sorts of questions come up, because you could think: well, the in silico simulations that are being made are just instruments for making predictions about, say, how heart rate can interact with perception. And the simulation can be useful for predicting the kinds of data that we could measure, but the simulation itself isn't gonna give us knowledge of how the interoceptive system interacts with visual perception. And we just say: no, that's wrong.
The simulation can actually tell you something important, give you a kind of knowledge, maybe in the long term. So we have to look over time, at all of the refinements that are gonna happen in active inference models, before we can actually arrive at something that is a true, accurate description. But in the meantime, the kinds of approximations and idealizations that we're operating with when we make these kinds of in silico simulations are still telling us something important about how the mind is organized, how living systems are organized. So that's the sort of claim that we're making. Can you see that that's a little bit of a tangent from what you were asking about metaphor? Maybe it isn't, and you can explain how you see them coming together. I have probably stated my point in an overly polemical way. I was very excited. I am very excited about what you're doing. Kind of like when the opponent finally lands a really solid blow on Muhammad Ali, and Ali says, okay, enough playing around, let's hit back. No, I'm very sympathetic. I invited Professor Solms to this discussion; I certainly hope he's gonna watch the film. No, I'm very much on your side. I'm just saying, I think the embodied perspective, the pre-Galilean perspective, where valence, where value, where teleology are viewed as a foundation, or at the very minimum a co-equal foundation, of the entire intellectual structure, the entire research structure, I think that is exactly correct. And I'm really pleased, for instance, by the subtle British nuance, perhaps, that Professor Friston uses the term teleology so often, in such an important sense. I think that already, by itself, involves hitting back against it. But he always says he's deflating teleology, doesn't he? He loves this word deflation. Deflation, I guess. That's something that you can see in physics as well.
Michael found something very interesting in a YouTube video from Friston, where he was responding to some presentations that he'd heard, in which he describes how this variational free energy relates to Gibbs free energy. Michael, did you wanna come in on that? Yeah, I mean, this is definitely sort of well beyond my pay grade. Yeah, me too. Probably some people in our group here and in our audience will be able to do better. The basic gist of it is something like this: you can describe transitions across belief states by appeal to variational free energy. And that's nice; that gives you a nice computational modeling language to speak about state transitions across different beliefs. So the claim here, and the claim is interesting because it allows for an even tighter grip between your fictional constructs and actual systems, is that you can give a formulation of those state transitions by mathematically showing an equivalence relation between the variational free energy and the Gibbs free energy, which then by definition allows you to show, in energy spent, in joules or something like that, what it actually costs to change one's beliefs, for example, in the context of the FEP. Now, that's great if that's the case, because it brings your model space even closer to your target system space. But that's a bit like when somebody asks me, okay, now how does the octopus mechanistically select its action policies? Well, I'm not a marine biologist, and I'm not a specialist in octopuses; that's not for me as a philosopher to tell you. And I'm gonna cop out and provide the same sort of reason for not going any further than that. But it's a good example, I think. Yeah, the reason I brought it up was because you were talking about teleology, and what Friston does in deflating that notion is actually bringing it back to something in physics as well, like Gibbs free energy.
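For readers who want the variational free energy talk above in concrete terms, here is a minimal discrete sketch (a hypothetical two-state example, not from the paper or the video): variational free energy upper-bounds surprise, the negative log evidence, and the bound is exact when the belief matches the true posterior.

```python
import numpy as np

# Toy discrete setup: two hidden states, one fixed observation (made-up numbers).
prior = np.array([0.5, 0.5])       # p(s)
likelihood = np.array([0.9, 0.2])  # p(o | s) for the observed o
joint = prior * likelihood         # p(o, s)
evidence = joint.sum()             # p(o)
posterior = joint / evidence       # p(s | o)

def free_energy(q):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)]."""
    return float(np.sum(q * (np.log(q) - np.log(joint))))

print(free_energy(np.array([0.7, 0.3])))  # an arbitrary belief: F above the bound
print(free_energy(posterior))             # exact posterior: F equals -ln p(o)
print(-np.log(evidence))                  # surprise, for comparison
```

Minimizing F over beliefs q is thus a proxy for minimizing surprise; relating those abstract quantities to physically measurable energy costs, as in the Gibbs free energy claim, is the further step being discussed here.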
The free energy principle shows you how you can take something that looks like it goes back to Aristotle, but actually bring it into contact with the mathematics that we now have for understanding self-organizing dynamical systems. I think there's just an important note there. Even if the term teleology is being used in the literature, one ought not to confuse it with the notion of a final cause as per the Aristotelian system, right? It's not that we have a final cause, an ultimate end point, that one necessarily strives towards; rather, self-organization is somehow inherently purposeful. And to me, that strikes me as right, but it also strikes me as a deflated notion of teleology. Yeah, there's lots more we could say on this, but probably it's for another session, isn't it? A little bit of a tangent from what we're currently talking about. True. Stephen, and then I'll have a question, and then we can sort of land on it. Yeah, I think this is a question about what we can model with the FEP, or what the FEP is able to be used to do modeling on, using variational free energy, and what the model is. And I think you've resolved this to some extent by distinguishing modeling from models. So there's instrumental models, there's realist models, and there's this process of modeling. And one of the things about, I think in some ways you don't say non-linear, I mean, non-linear dynamical systems might be the best term, the thing with those systems is that they are, in this case anyway, generative. And we've been looking at, and I was actually talking to Daniel in our tools session about, how these Bayesian graphs can be thought of. And one of the things is that generally you have a model and you try to fit things to your model. Like in psychology, you fit someone to the model of the psychology, right? And that can be the problem, because now this person's being fit to all these models.
Well, in this case, these models generally, not always, generate the data, but they generate it a little bit differently each time, because it's stochastic. So you see the way they evolve, and that's what we're sort of getting at. So building the model can be a useful design-thinking tool to think about how it'd be structured. And then you get this idea of modeling. And, maybe tying this in with what Dave was saying a little bit: my belief is that, at the moment, all the ways that we are able to model are very low dimensional, like extremely low dimensional. However, I do believe, I've been looking at stuff where, phenomenologically, it would be possible to use experience as the raw instrument. So I think there is a potential for high dimensionality. So there's this question of, like, when we're modeling, we're using models which are low dimensional to such an extent, and often they're partially observable Markov decision processes, so they're not even actually saying where the data coming in has been chained through. You know, you either have these psychological models which are very low dimensional and partially observable, so they've gone up and hit the point of beliefs and then they start to look down, or you have these incredibly particular models starting from, like, the first atoms of life or whatever. And I think that your intuition is probably right, actually, because I've been really looking at some of this in terms of whether, phenomenologically, there are ways that experiential information can be modeled in, or put into, these models. So it doesn't only have to be these very low dimensional ones, which may be so low dimensional that maybe that's part of the problem. It's not that the FEP isn't real, it's just that what they're using to model it is still so low dimensional. Yeah, that's a nice question. Let me at least attempt some form of response.
So Julian and I met the other day and we were just sort of chit-chatting. And since you did mention Markov decision processes: one way of conceiving of this low dimensional versus high dimensional, or kind of simplicity versus complexity, or inaccuracy versus accuracy distinction, would be this. Quite often one finds that those sorts of processes are modeled in discrete states. And my assumption is that that's a simplification, because it makes the work somehow more tractable. So it's manageable; you can write the code on the basis of the discrete state transitions, for example, and still learn something, even if you think that maybe the state transitions are really continuous, for example, right? Another thing that's more meta is just the following. We're quite inspired by a philosopher by the name of Michael Weisberg, so his name appears frequently now in the paper. And he has this really nice paper, from 2006 I believe it is, or '07, in Biology and Philosophy, where he gives you two different approaches to model-based science. We follow one, in part because we don't think it's very feasible to do the other. The first of the two is a kind of brute-force approach. Now, I suppose this is gonna give you all the high dimensionality that you can imagine, given the computational power to pull it off. On that approach, the goal is to have the values of all the variables you can identify in your target system reflected in your model. This is your super high degree, if you like, of correspondence between your model system and your target system. But the catch is that that particular ambition is extremely difficult given the sheer complexity of real-world systems. So the alternative, the only other kind of model-based approach in the sciences, since you can't achieve that degree of completeness, if you like, is to use idealizations. So, by necessity almost, you have to simplify.
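The discrete-state simplification Michael describes can be sketched very compactly: belief updating over a handful of hidden states, in the style of the POMDP-based active inference models mentioned above. The A (observation) and B (transition) matrices here are made-up illustrative numbers, not drawn from any published model.

```python
import numpy as np

# Minimal discrete-state belief update over two hidden states.
A = np.array([[0.85, 0.10],   # rows: p(observation o | hidden state s)
              [0.15, 0.90]])
B = np.array([[0.7, 0.2],     # columns: p(next state | current state), one action
              [0.3, 0.8]])

belief = np.array([0.5, 0.5])  # flat prior over the two hidden states
for obs in [0, 0, 1]:          # a short sequence of observations
    belief = B @ belief        # predict: push belief through the transitions
    belief = A[obs] * belief   # correct: weight by the likelihood of obs
    belief = belief / belief.sum()

print(belief)  # posterior over the two hidden states
```

This tractability is exactly what the discretization buys: a handful of matrix operations per time step, at the cost of pretending that what may be a continuous process unfolds in a few discrete states.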
I suppose that would be one way of thinking about why we have low-dimensional models and why one uses low-dimensional models. Can you use experience to improve those models? Yeah, I'm not sure. Perhaps; it seems like you ought to be able to. I would ask a slightly different question, but I see Stephen wants to come back, so I'll hold my comment for a minute. Yeah, cool. Well, I just wonder, I suppose the dimensionality is so low that even if you made it a thousand times higher, it would still be low, but it's still useful, because it shows the dynamics shifting, which is a little bit different to most models. However, yeah, I don't know how many orders of magnitude it would need to be scaled by in terms of multiscale complexity, but anyway. So I think it's a good point you're making, though, yeah. Well, let's take Casper Hesp's deeply felt affect model. I don't know if you know that one. So there's a variable in that model, a matrix, the G matrix, which is trying to capture the minimization of expected free energy and the role that that plays in selecting between action policies. And the idea is that that G matrix is something that is changing over time, such that you can do better or worse than expected at minimizing expected free energy. So there's a variable there which, I guess, is still gonna yield models which are low dimensional in your sense, but it's still being used to model something that is, I think, phenomenological in the end, namely valence. And so the moral I would draw from that is that something being an idealization and approximation, because it's low dimensional, doesn't necessarily mean that the modeling tool you're using there, with this low dimensionality, can't still be used to describe something high-level and phenomenological, namely valence.
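The role that expected free energy and precision play in policy selection, as discussed around the G matrix here, is often written as a softmax over negative expected free energy, sharpened by a precision parameter gamma. The sketch below follows that common convention; the G values and gamma settings are purely illustrative, not taken from Hesp's model.

```python
import numpy as np

def policy_distribution(G, gamma):
    """Softmax over negative expected free energy; gamma is the precision."""
    z = np.exp(-gamma * (G - G.min()))  # subtract min for numerical stability
    return z / z.sum()

# Expected free energy per candidate policy (made-up values; lower is better).
G = np.array([3.0, 2.5, 4.0])

print(policy_distribution(G, gamma=0.5))  # low precision: fairly flat, exploratory
print(policy_distribution(G, gamma=4.0))  # high precision: near-deterministic
```

Letting gamma itself rise or fall depending on how well policy selection is going is one way to read the idea that doing better or worse than expected at minimizing expected free energy could register as something valence-like.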
And that goes back to what Dave was saying before, I think: valence is actually something that's built into free energy minimizing systems, because variational free energy is a way of describing, of indirectly representing, something fundamental to life, namely the tendency to avoid decay or disintegration. So the low dimensionality, I think, doesn't stand in the way of using this to model very high-dimensional, phenomenologically deep, thick concepts like valence. Yeah, that's a really good example, actually. Just one point on that. If we're saying these are generally tractable dynamical processes, which maybe is another type of realism in a way, it does show that affect becomes a plausible way in, which it pretty much hasn't been before; I don't know any other way to think about it previously. It's why we start to get into consciousness as well with Mark Solms: it gives this plausible way in for something that was relegated in our scientific worldview, because basically consciousness was taken over by psychology and feelings were treated as these lower-down ideas, like, well, wait a sec, this is actually the all-encompassing way to know what you think. I actually do a lot of work with metaphors, and one of the questions there is: what's the metaphor of knowing when you get something right? What's it like to know something? If you ask a mathematician what the answer to a problem is, they don't get an answer on a computer screen; they actually feel the answer being right. And what's that like? You can process that as a spatial metaphor in the room, but at root, as you said, it's affective. So yeah, I do agree with what you're saying.
And what I'm also interested in is working backwards from an affective state and trying to plot out people's low dimensionality another way, but that's for another conversation. I do agree with what you're saying, but I don't know whether you would say that the dynamics is different from realism about physical things. So, in the model, the G vector is being used to set precision on action policies: which action policies can you be confident in? That's a kind of metacognitive process, in that you're not just acting on some gut feeling in the way that you described, but using that feedback from the body to assign precision, to assess how reliable a prediction error is in relation to your predictions, when the quantity you're minimizing is this expected free energy quantity. So there's something global going on there, I think, which gets you closer to what you're looking for when you're thinking about consciousness: something in the global dynamics, when you're thinking about the G vector. So I'll just read out one question from the chat, and the answer could be as simple as "read the paper" or a quick thought, and then I'll ask a closing question for the authors. The question is from Matthew McTig, who wrote: how do we resolve the controversy over the conflation between heuristic and ontological Markov blankets? What metaphysical legwork should be done to defend an ontological Markov blanket? Okay, so we don't really take that question on directly. One sneaky move might be to say: go read the Bruineberg et al. paper. But on moving from heuristics to ontology, I think Julian has already touched on that quite a bit when speaking about the multi-scale characteristics of self-organizing systems.
So if you have as part of your model, and this is our preferred approach here, a formalism like the Markov blanket formalism, then of course you're not going to say that the heuristic just maps directly onto the multi-scale structure that comprises the system; that would be the literalist fallacy. Instead you have to interpret, and that's the next step in this modeling move. You have to interpret these sorts of notations under a certain description, so that you can start thinking about what it is about biological organization that is reflected, in that indirect sense, when we work with these sorts of heuristics. That's the work one should do. We do discuss it in one section of the paper, but not at length; the paper moves across four different domains, so it has to be done with some swiftness and at a level of generality that brings the philosophical point home. I think that little question is really a separate paper, actually. So thank you, Matthew, for that. Awesome. Just one closing question as you are reviewing and revising the paper: who is your audience, and how would it impact the next several months or years of active inference and free energy principle research and application if people were to really direct their regime of attention, understand what you're talking about, and update their generative models accordingly? Julian, do you want me or you to go first? I'm happy to go first with initial thoughts. I wanted to bring up the recent paper by Miguel Aguilera, Chris Buckley and others, where the problems they raise are about how to take the free energy principle and use it to model real biological and cognitive systems. They take a very simple system and then show what kinds of assumptions you need to build into the free energy principle in order to use it to model such systems.
And they argue that those assumptions are unrealistic: they pick out a very small class of systems, and those systems are unlikely to be biological cognitive systems. So that looks like bad news for the free energy principle, but not from the perspective of our paper, because what our paper shows is that all of the assumptions they identify are exactly the kinds of idealizations and approximations that don't rule out taking the free energy principle to indirectly represent biological cognitive systems. So the contribution of our paper to the community, I think, is to point out that even though those assumptions don't literally apply to biological and cognitive systems, that doesn't rule out the possibility of still making active inference models and using them to learn important things about those systems. It would only rule that out if you think such systems literally have to instantiate the assumptions built into the FEP before you can apply the FEP. But we've argued no: those assumptions don't need to literally obtain; they can just be idealizations and approximations. So the contribution of our paper is to say, well, there are these real challenges, but they don't stand in the way of using the free energy principle to carry on doing active inference modeling and learning something important from those models. Okay, if we can achieve that, that's certainly a job well done. In terms of the audience, I'd hope it reaches beyond the technical, very specific niche of the free energy principle. A bit like when you had Stephen on a few weeks back talking about "Free energy: a user's guide", which we crafted in part with the ambition of introducing these sorts of conversations into the philosophy of biology, and hopefully they will see the light of day there.
I think similarly here, and I'm sure Julian will agree, these sorts of papers that take up big-picture discussions in the philosophy of science, with an emphasis on the free energy principle, will bring an audience from the philosophy of science into contact with the area of speciality we're working in. I would like to see that as an aim as well. Second, just working through with some precision these kinds of conceptual details, in a highly mathematical and sometimes really abstract domain like the free energy principle, is important precisely so that the scientists working on it have a feel for what the conceptual assumptions behind a given claim are. So you're also, in a particular sense, conceptually engineering this debate, in a way that can be fruitful for non-philosophers working on it, a bit like what Julian was touching on. Awesome. Well, Michael and Julian, thanks so much for the paper, and also Ian for co-authoring on it, and best of luck with it. Dave, Dean and Stephen, and also Matthew, thanks for participating. This was a really interesting discussion, and we hope to continue the conversation however you all would like. Thanks, guys. Bye bye. Good night. See you later, everybody.