Hello and welcome to the Active Inference Lab's first ever Applied Active Inference Symposium. Today is June 21st, 2021, and we're very honored to be here with Professor Karl Friston and many of our lab participants. As a quick introduction, the Active Inference Lab is a nonprofit organization and a participatory open science laboratory. We're working to curate and develop applications related to the Active Inference Framework, something that hopefully we'll go into in a lot more detail today. This is a screenshot of our website. As an overview of this symposium: there are three organizational units in the lab, .edu (education), .comms (communication), and .tools, and each of these units is going to facilitate a 45-minute-or-so session, with a short break between sessions. In our weekly meetings over the past weeks, each organizational unit has been developing questions and getting excited about things we wanted to talk to you about. A few overarching themes run through the whole journey of our lab and across the organizational units. The first theme is applying Active Inference across systems, something that will probably come up in all sections. The second is research debt: the idea that we don't want to develop research frameworks that place a huge burden on those who are learning and applying them, and that especially early in the formalization of a framework, it's extremely valuable to increase accessibility so that we don't end up with major headaches and incompatibilities later on. Then there's collective intelligence and the ways in which it is manifest across different systems. And transdisciplinary teams, projects, and communities, which are nested levels of organization; transdisciplinarity is necessary for the type of work that we're all interested in.
And also just modern challenges and opportunities for research, and all that that means related to online work and everything else. And of course, anything else that you have tumbling around and wanted to bring to the table thematically. So there we are with our lab overview and introduction. Let's go to our first organizational unit, .edu. The goal of .edu is to scaffold and create a participatory and dynamic active inference body of knowledge, which we'll talk more about in a second. Our progress and actions this year have been to release a terms list v1, which benefited greatly from your feedback, and we're now updating the terms list to version two, which includes five complete language translations and many references and citations for the terms. The way that we're approaching the development of the terms is by using approaches that place ontology, and progressively more formalized versions of ontologies, as the backbone of an educational body of knowledge. So we started on the left side here with a terms list in the first quarter of 2021, and the ontology working group is like a train pushing to the right: they're learning ontology by doing, and developing progressively stronger ways of relating the terms and concepts that are essential for understanding active inference. This will help us develop principled educational material that can also be translated rapidly. Alex, do you want to give a quick thought on where knowledge engineering comes into play? Yeah, thanks. On this slide we're showing how this ontology work fits with the systems engineering approach that we're also using in the lab, and considering possible deliverables of creating educational materials. At some point we should have textbooks and educational courses; in fact, this lab maybe started from the idea that a textbook for active inference should be created.
We also see a connection to organizational management, to creating translations so the material is multi-language from the beginning. And we should look for domain-specific use cases that we can understand in terms of the ontology we are going to create. Thanks, Alex. So on to the questions section. We're going to start off pretty broad here in .edu: how do we go about determining the core ideas and terms for active inference? This will be the format of the question slides, Karl, so feel free to jump in. I guess it would be structured around the key ideas and essential ingredients that underwrite the free energy principle and how that translates into active inference. So without thinking about it too deeply, my mind just goes to: what are the basic ingredients that you need to explain to somebody about what active inference is and why it works? It normally starts off with the notion of a generative model, and from that you spin off all the appropriate mathematical ideas and the constructs and descriptions that would attend that. I mean, it may be best to reflect the question back to you. So this is the idea of having an ontology, and it's certainly my experience that people are entertained by the sometimes poetic use of phrases and descriptions, such as epistemic affordance, when trying to grapple with what the fundamental ideas behind active inference are. Some of them are fundamental and some of them are not. So it certainly is an interesting idea to try and tie down the ontology. But let me ask you: this ontology just means what it says, in the sense that you're trying to define the essential concepts and how they relate to each other. Is that the basic idea?
Yep. Going back to this slide here, we want to have a continuum from a list of terms, potentially, that could be developed into coherent and, again, principled course material and competencies, but also to develop a logic. We're developing within the SUMO (Suggested Upper Merged Ontology) framework, which defines not just relational edges but an actual logic. And so we hope to be able to ask: is this a complete active inference model? Have we really checked off all the boxes? And to use the kinds of logical tools that are accessible to well-developed ontological frameworks. Okay, well, that's very compelling and very clear. It strikes me then that it would be useful to link that operational ontology to the underlying maths. Much of the conceptual work, both in understanding and implementing active inference, usually in terms of simulating interesting behavior or using it as an observation model to explain some empirical data from a study, can be developed in terms of a series of moves that usually, in fact almost universally, are framed in terms of either information theory, linear algebra, or differential equations, and you can just build the story from that. So if you're looking for that degree of formal and useful detail, then one principle you might refer to is basically: where does any equality, assertion, description, variable, or object come from, in terms of inheriting from the more basic formalism? What I'm thinking of here is: where does active inference start, and how do you get to the calculus and the Bayesian mechanics that you would associate with active inference? And my guess is, given the way that you have approached the ontology, you've probably actually done that already, or are in the process of doing that.
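To make the idea of an ontology backbone concrete, here is a minimal sketch in Python of what one entry might look like, with terms linked by is-a relations to parent concepts and to the formalism they inherit from. The structure, field names, and example entries are hypothetical illustrations, not the lab's actual SUMO encoding.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyTerm:
    """One entry in a hypothetical active inference ontology: a term linked
    to its parent concepts and to the mathematical formalism it inherits from."""
    name: str
    parents: list = field(default_factory=list)  # is-a relations
    formalism: str = ""                          # inherited math framework
    definition: str = ""                         # concise English gloss

terms = {
    "variational_free_energy": OntologyTerm(
        name="variational free energy",
        parents=["functional", "bound"],
        formalism="information theory",
        definition="An upper bound on surprisal: F = E_q[ln q(s) - ln p(o, s)].",
    ),
    "generative_model": OntologyTerm(
        name="generative model",
        parents=["probabilistic_model"],
        formalism="probability theory",
        definition="A joint density p(o, s) over observations and hidden states.",
    ),
}

def ancestors(term_key, terms_table):
    """Walk is-a links to answer: what does this term inherit from?"""
    seen, stack = [], list(terms_table[term_key].parents)
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.append(p)
            if p in terms_table:
                stack.extend(terms_table[p].parents)
    return seen

print(ancestors("variational_free_energy", terms))
```

Once terms carry machine-readable relations like this, questions such as "is this a complete active inference model?" become graph queries rather than judgment calls.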
Are you going to go through some examples that would highlight the strategy, and the problems, which are usually more illuminating than the solutions, that you've encountered so far? Sure. I'll switch here to this screenshot of the current state of what it looks like. We're starting just in tabular form by compiling up to five references and citable definitions: first looking for exact cases where a term is used, then going from how the term has been used towards synthetic definitions that capture different senses of the term. Then, along with a concise narrative of the field, and with the ontology experts who are here with us, we're going to work to make the actual logical underpinnings explicit in terms of specifiable code rather than just concise English definitions. From that generator of the formal relationships, we'll be able to descend into mathematical formalisms or other natural human languages. We'll keep you posted on this project, for sure. Let's go to this next question and imagine that we had that set of terms in development; it's going to be a work in progress our whole lives. How would we go from core terms and ideas to an interactive and enlivening education that speaks to people from many different backgrounds? So I'm going to answer this question from the point of view of my experience as a supervisor, which is probably a little bit of a narrow remit relative to your more general ambition. I imagine this is related to the notion of research debt, if I remember the term: the notion that you don't want to put too much pressure on people when they're becoming acquainted with the utility and application of either the code or the ideas. So, in my experience in an academic setting, just having toy simulations is usually the best way to give people a feel for what this approach does and how it can be used.
So it's enormously potent in terms of demystifying and also illustrating the functionality that can be accessed. And having a working, or at least a toy, model provides a proof of principle that can strip away the magic as well. I think your ambition to make this accessible to people who are not necessarily fluent in the underlying information theory or dynamical systems is very laudable and perfectly feasible. Again, in my experience, some of the most creative applications of active inference can be by people who don't necessarily wonder too much about what's underneath the hood. It all comes back again to the design of the generative model. If you get the generative model right, and it's apt to describe the thing that you want to understand or to simulate, then usually everything else follows suit. And I mean that in the sense that you can just take off-the-shelf software, which I presume your ultimate ambition is to make available, and make it work in the service of saying: well, what would this agent, or this synthetic creature or person, do in exchange with her environment if this was the generative model and this was the generative process? So a lot of this, in terms of answering your question of how we go from core terms to an interactive and enlivening education, is just establishing a language, a lexicon, that allows you to talk somebody through constructing their own simulations that speak to the issues that engage them, either academically or beyond academia. So clearly the core terms play the role of literally a language, in terms of communication, which brings us back again to the importance of the ontology and having the terms linked in a formal way to the mathematical expressions and also procedures and processes. So I guess that a precondition for using the core terms in an interactive and enlivening, educative sense will rest upon getting that ontology right.
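In the spirit of the toy simulations mentioned above, here is about the smallest possible generative model sketch in NumPy: two hidden states, two possible observations, and a single exact Bayesian update. The numbers are invented for illustration; a full active inference agent would use a variational approximation and add policies, but even this demystifies "perception as inference."

```python
import numpy as np

# Hypothetical toy world: two hidden states, two possible observations.
# The generative model is a prior over states D and a likelihood mapping A.
A = np.array([[0.9, 0.2],   # p(o | s): columns index states, rows index outcomes
              [0.1, 0.8]])
D = np.array([0.5, 0.5])    # flat prior over the two hidden states

def infer_state(observation, A, D):
    """Exact posterior over hidden states for one observation (Bayes' rule).
    Active inference proper uses a variational approximation, but for a toy
    model the exact update is enough to strip away the magic."""
    likelihood = A[observation, :]       # p(o = observation | s) for each state
    unnormalized = likelihood * D
    return unnormalized / unnormalized.sum()

posterior = infer_state(0, A, D)
print(posterior)   # observing outcome 0 shifts belief toward state 0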
In my experience, the best way to get the ontology right, in the sense of it being enabling, is just to talk about the terms until there's some consensus and everybody understands them, both in terms of their teleology and in terms of where they come from, from the point of view of the code and ultimately the maths that underwrites all this. Is that the sort of answer you're looking for here, or are you thinking along the same lines? It sounds great. There are so many dimensions there. Just to jump in at one place: what is active inference, and what does active inference do? Right, so that's perfect, because I was just thinking it would be really useful just to go down the terms that you had highlighted in green on the previous-but-one slide, because I think all the heavy lifting here is really just spelling out the core aspects and claims, the core things that you're trying to communicate, with any one of those terms. So for me, active inference would be a description of a process that arises from the free energy principle. You can either tell that story from the point of view of a physicist and say that active inference is a teleological description of processes that systems that self-organize must possess, or you can tell the story, or define active inference, from the point of view of neurobiology and ethology, from the point of view of, say, predictive processing, and describe what it entails. I've used the phrase Bayesian mechanics before, because from the point of view of the physics definition, it would be a teleological description of a Bayesian mechanics that necessarily arises, with certain assumptions, from any self-organizing system. There's one key thing about active inference which I think would be important to put into the definition in the ontology.
I'm not sure it's already there, but if you're in charge of sculpting the ontology, then you're in a position to make sure it is: active inference is beyond predictive processing, it's beyond sentience, and it emphasizes or reflects the pragmatist turn at the beginning of the century, epitomized by the four E's, you know, embodied, embedded, extended and the like, to make it clear that sentience is active, and that you are talking about the circular causality of engagement of any particle, person or plant with whatever is out there. So that would certainly be one thing to emphasize in terms of what active inference means. The "inference" is interesting in the sense that it does imply a process, and a process with purpose, which is to infer, which is why I keep using the phrase "a teleological description" of something that's actually underneath the hood from the point of view of physics. One final point here: there's an easy confusion, I think, between active inference and passive inference. That's certainly something which probably needs resolving, certainly in the philosophical literature. I often come across philosophers who say, well, there's passive inference, or perceptual inference, which is just basically inferring states of affairs in the world on the basis of some sensory evidence, and then there's the extra bit, the active bit, where now you're in charge of gathering the sensory evidence upon which you are going to prosecute your perceptual inference. That's an interesting dichotomy, which I'm not sure is a correct dichotomy; or if it is not right, it is not right in the sense that it is a useful distinction but is certainly not what active inference was originally termed to mean. By conjoining "active" and "inference", there were a number of motivations.
First, it was a generalization of David MacKay's active learning, but probably more importantly, it was a nod to the notion of active sensing and active perception: that perception is in and of itself an active process, a constructive process, in which you have to put policies, plans and action into the game. So that, I think, would be one important aspect of active inference to define, and I don't know that it has been defined so far, so perhaps it's your job to define it. The other thing which is important, I think, in terms of emphasizing what active inference entails, actually comes from that enactive perspective, which is inference about the consequences of action. And that has an important but really simple concomitant: the consequences of action are in the future. That means you now have to think, if you're thinking about active inference in terms of teleology, or as a normative theory of behavior, of sentient behavior (and I should qualify that when I say normative, I mean it can be operationally defined in terms of an optimization process, which in turn requires you to define the objective function or functional), and that's important practically, because if you're now thinking about sentient behavior, or active inference, and it's inference about things that haven't yet happened because you haven't yet acted, then you're necessarily talking about objective functions or functionals that are about states of affairs in the future. That is an important move, and something that active inference embraces, which goes beyond predictive coding. Much of the literature in the 1990s and subsequently, much of the literature that inspired that enactive perception or active sensing, situated cognition take on sentience, originated in things like predictive coding, but predictive coding is not what is meant by active inference.
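To see what "just minimizing variational free energy", the purely perceptual half of the story, looks like in code, here is a minimal predictive coding sketch under an assumed linear-Gaussian generative model. All numbers and the mapping g are invented for illustration; the point is only that perception reduces to gradient descent on precision-weighted prediction error.

```python
import numpy as np

# Minimal predictive coding sketch (hypothetical linear-Gaussian model):
# observation o = g(mu) + noise, with g(mu) = 2*mu and prior mean eta.
# Free energy (up to constants) is the sum of precision-weighted squared
# prediction errors; perception is gradient descent on F with respect to mu.
g = lambda mu: 2.0 * mu
eta, pi_o, pi_s = 0.5, 1.0, 1.0   # prior mean, sensory precision, prior precision
o = 3.0                            # the observed datum

mu = eta                           # start the belief at the prior
for _ in range(200):
    eps_o = o - g(mu)              # sensory prediction error
    eps_s = mu - eta               # prior prediction error
    dF = -2.0 * pi_o * eps_o + pi_s * eps_s   # dF/dmu for this model
    mu -= 0.05 * dF                # descend the free energy gradient

# For this model the analytic posterior mean is
# (2*pi_o*o + pi_s*eta) / (4*pi_o + pi_s), which the descent converges to.
print(mu)
```

This is only half the game: the loop updates beliefs about hidden states given data it was handed, and says nothing about which data to go and solicit next.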
You can do predictive coding just by, if you're a statistician, minimizing variational free energy, but that's only half the game. Once you move into the world of active inference, from a teleological perspective, you have to do that, you have to form beliefs about hidden states of affairs in the world using the perceptual side of perceptual inference, but that is only in the service of running out into the future and deciding what the best thing is to do next. And that running out into the future and deciding clearly calls for an objective function. In active inference, that would be the expected free energy, which may or may not be unfortunately named, but that's what it is. And therefore active inference implies that you are committed to optimizing an expected free energy, and implicitly it's all about choosing the next thing to do. So for me, those would be two cardinal things that should be embraced by a definition of active inference, and they transcend other normative approaches. So for example, reinforcement learning and behavioral psychology would be all about what the good things are to do, and you commit to a loss function or value function of states, if that was the kind of behavior you were trying to describe. If, on the other hand, you were all about the psychophysics of perception, or just building Bayesian digit or character recognition systems where you weren't in charge of gathering the data, then your objective functions would be very, very different. But what active inference says is: well, you can't carve up the two problem domains, because they're just both sides of the same coin. And thereby you're now facing the problem of defining an objective function that is fit for purpose, that does both the belief updating about latent or hidden states generating the data and also the best way to solicit or cause those data or outcomes under some prior preferences or some goal-directed constraints.
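The standard decomposition of that fit-for-purpose objective, expected free energy, is risk (divergence of predicted from preferred outcomes) plus ambiguity (expected uncertainty of the likelihood mapping). Here is a sketch with invented numbers for a single policy in a discrete model; in a full agent one would compute G per policy and pick the policy with the lowest value.

```python
import numpy as np

# Sketch of expected free energy G for one policy (hypothetical numbers):
#   risk      = KL[ predicted outcomes || preferred outcomes ]
#   ambiguity = expected entropy of the likelihood mapping p(o | s)
A = np.array([[0.9, 0.2],       # p(o | s): columns index states
              [0.1, 0.8]])
C = np.array([0.8, 0.2])        # prior preferences over outcomes
qs = np.array([0.6, 0.4])       # predicted states under this policy

def expected_free_energy(A, C, qs, eps=1e-12):
    qo = A @ qs                                  # predicted outcome distribution
    risk = np.sum(qo * (np.log(qo + eps) - np.log(C + eps)))
    H_A = -np.sum(A * np.log(A + eps), axis=0)   # entropy of p(o|s), per state
    ambiguity = H_A @ qs
    return risk + ambiguity

G = expected_free_energy(A, C, qs)
print(G)   # lower G marks the better policy
```

Because G scores states of affairs that have not happened yet, minimizing it trades off goal-seeking (risk) against information-seeking (ambiguity) in one quantity, which is exactly the "both sides of the same coin" point above.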
So that's what you're trying to do. Thank you for the comprehensive answer. It leads directly to our next questions, which are: what is the free energy principle, and especially, what is the relationship between active inference and the free energy principle? Well, that, I think, is a slightly easier question to answer. The free energy principle is just a variational principle of least action. Why is it special, or not formally identical to all the other variational principles that we use, if you look under the hood, right from quantum through statistical and stochastic to classical mechanics? Well, the only thing that differentiates the variational principle of least action that is the free energy principle is that you're paying careful attention to the separation of the states to which you apply that principle: the separation of states into the states of an agent, or a particle, or a person, and the outside states. So technically, if you were in statistical thermodynamics, for example, you'd normally assume that separation in terms of some idealized gas contained within a container, or a heat reservoir or heat bath, without really worrying about where the heat bath or heat reservoir came from. But the free energy principle says, well, no, you can't really do that. You've really got to attend very carefully to what licenses a separation of the different kinds of states that you can assign to the inside of something and the outside of something, and the states that mediate the exchange between the inside and the outside. And then you get into the Markov blanket and Markov boundary literature.
So just to summarize, the free energy principle is just a principle of least action, by which I mean that there is a description of dynamics, in terms of the most likely paths any system will take, that is the special province of a partitioning or separation of the states of some universe into the states that are owned by an agent or a particle, those that are not, and the states that mediate the exchange between them. So that would be the free energy principle. Active inference, as I say, is a sort of teleological spin-off from the free energy principle, in the sense that you now have at hand a principle of least action that allows you to identify, simulate and define the most likely paths, trajectories or narratives that a system will pursue under certain conditions, where those conditions are just that there is an attracting set of states to which that system will converge, or will look as if it is attracted. So what I was working towards was the notion of an attracting set as a metaphor for equipping that physics with a teleology, and that teleology is nicely illustrated by the notion of attraction. When mathematicians talk about attractors, in the particular case of the free energy principle these are pullback attractors, the kind of attractors that you get in random dynamical systems. There's a perfectly natural tendency to think that the states of the attracting set literally attract, in the sense of gravitational attraction or any other kind of attraction: they pull states towards them.
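The "looks as if it is attracted" idea can be made concrete with the simplest random dynamical system, a one-dimensional Ornstein-Uhlenbeck process. This is a toy with invented parameters, not the measure-theoretic pullback attractor machinery, but it shows a stochastic path being drawn back to, and then hovering around, its attracting set.

```python
import numpy as np

# Toy random dynamical system: dx = -k*(x - m) dt + s dW.
# Starting far away, the path relaxes toward m and then fluctuates around it,
# i.e. it behaves as if attracted to a (noisy) attracting set near m.
rng = np.random.default_rng(0)
k, m, s, dt = 1.0, 2.0, 0.3, 0.01
x = 10.0                      # launch the system far from the attracting set
path = []
for _ in range(20000):
    x += -k * (x - m) * dt + s * np.sqrt(dt) * rng.standard_normal()
    path.append(x)

stationary = np.array(path[5000:])   # discard the transient (burn-in)
print(stationary.mean())             # hovers close to m = 2.0
print(stationary.std())              # close to s / sqrt(2k), about 0.21
```

Committing to a particular m, k and s here plays the same role as committing to a generative model: it is a falsifiable claim about where this system's paths will spend their time.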
So that to me would be a teleological interpretation, which I think is much closer to active inference: you're saying that inference is a process that has a purpose, and the underlying free energy principle allows you to say that it looks as if self-organizing systems show certain properties, they look as if they're attracted to certain states, to certain paths, and we can describe those in terms of a teleological ontology, and that would be active inference. The practical difference between active inference and the free energy principle is that the free energy principle is just a principle; it's neither right nor wrong. As has been noted, it's like Noether's theorem or Hamilton's principle of least action. But as soon as you start to say, well, I think that this principle applies to this population or person or particle, that suddenly commits you, or requires you, to define the attracting set of states, the pullback attractor; in another jargon, the equivalent would be a generative model. And as soon as you commit to a generative model to explain the teleology of this system, this agent or this person, then you've moved from the world of non-falsifiable principles into falsifiable hypotheses, because you could have chosen the wrong generative model, and thereby there will be evidence for choosing this generative model or that generative model. So the relationship between active inference and the free energy principle is operationally quite simple: active inference is the application of the free energy principle to a particular system. But in that application you're bringing a lot of teleology to the table, and more specifically you're having to commit to a particular generative model, and as soon as you do that, that becomes your theory or your hypothesis about what is an apt description for this system. So there are a number of interesting distinctions, in terms of the relationship between active inference and the free energy principle, that I imagine your
ontology has already addressed, or is certainly addressing. Well, we'll get there. Thank you for that excellent answer. For the next question, Lorena, please read it out. Oh, hi. So, still in the spirit of broad questions and broad terms, and I think this follows on from what came before: how and where does the idea of information play a role in the free energy principle, and how does it relate to active inference? In the sense: what is something to keep in mind when thinking about information dynamics in active inference? Right, well, these are great questions. I'm getting the hang of this now: you present me a question and I do a lot of the talking, which I'm very happy to do. Are you sure you want me to do that, or should this be a conversation? Perhaps this will turn into a conversation at some point. Anyway, so, information. It plays a dual role, in the sense that information theoretic formulations underpin most of the derivations behind that principle of least action, and it can be no other way, in the sense that all mechanics in physics is really articulated in terms of probability densities or distributions, and as soon as you have a mechanics, a calculus, of probability distributions, you are effectively in the world of information theory. And you see that at many different levels. One nice example of this is the central quantity that we often use to score the likelihood of being in a particular state. If you're a statistician, that would be the marginal likelihood; if you were fluent with the FEP ontology, it would be surprisal or, more simply, surprise, which is just basically the self-information; if you're a physicist, you'd look at this as a potential: it's a negative log probability. So when thinking about the physics, you start with this central concept of self-information, which equally can be read as a potential function or a surprisal function, and it is the thing that the variational free energy is a bound approximation to at that level,
and then every other move you make mathematically, in terms of the expected self-information being the entropy, and why that is important as a characterization of various probability distributions in the setting of self-organization, would testify to the fact that information theory is absolutely central to all the maths that underlies the physics of the sentience that emerges from having a distinction between the states of the system and the states that are not in the system, namely across the Markov blanket. Having said that, information, to most people's minds, certainly in the folk psychology context, usually means information about something. And the FEP, and active inference, have, I think, something quite special to bring to the table here that goes beyond the information theoretic treatments you get in communication and signal processing and rate distortion theorems. All of that kind of information is really extensions of information theory that inherit from self-information, the implausibility of a particular event or message; or, in more abstract domains such as sentience and consciousness, you would go to something like integrated information theory. But that is all about this Shannon-esque kind of information, as opposed to the other kind of information, information about something. So what I wanted to put on the table is that the very fact that you've got this Markov blanket, this separation of states on the inside and states on the outside, means that you can now equip the states on the inside with the role of encoding posterior or conditional Bayesian beliefs. And that introduces, technically, a different kind of information geometry, a different kind of information theory, where crucially you can now read the internal dynamics as containing, or having information about, what's going on on the outside. And this is a really important move: equipping your neural dynamics, or variational message passing or belief propagation in a computer, with an information geometry that now allows you to read
off the state of the computer, or the state of the neural activity, in terms of what it is encoding, the information it contains about the outside. That dual-aspect information geometry has been celebrated, to a minor extent, in the philosophy literature by Wanja Wiese, asking the question: is this really the maths of sentience, where you now have information about things? And in a sense, that really is the heart of the free energy principle, or active inference anyway, in the sense that it equips you with that dual information geometry. I mean, technically, what you are saying is that any particular internal state of a computer or a person or a brain can now be read as encoding a Bayesian or posterior belief about other states, namely hidden or latent causes outside the Markov blanket. That defines, technically, something called a statistical manifold, and as soon as there is a statistical manifold there is an information geometry, and any movement on that manifold necessarily implies a change in your Bayesian beliefs, namely Bayesian belief updating. Which means there is now an interpretation of neuronal dynamics, of movements on a statistical manifold on the inside, in terms of belief updating. The notion of active inference as the process of belief updating really rests upon this fundamental notion that there is information about stuff going on, that is encoded or parameterized by the internal machinations, the mechanics and the dynamics, of the inside. So I think it might be quite important, if you are trying to describe or educate people in terms of how they should understand information as playing a central role in sentience, to differentiate between the mathematical notions of Shannon information, self-information and the calculus of probability on the one hand, and, on the other, the kind of information that is implicit in an information geometry and the sentience that is afforded by active inference, when now you are understanding
neuronal dynamics, or message passing in a computer on some sort of factor graph, because in this instance each of those messages, or those neuronal dynamics, can now be read as belief updating, namely changing your mind about other things, so that the stuff on the inside has information about stuff on the outside. Thanks for this important answer. We're going to pass over a few questions and go to question 18, with Steven, to continue on this theme about the separation of the inside and the outside. So thank you, Steven; please read off question 18. Thank you. I was going to ask: how can the integrity of the active inference process theory be maintained when blanket states and generative models are being interpreted in novel ways? So we were thinking about: what do you think of the discussions around Markov blankets, Pearl blankets, Friston blankets, etc.? That's an excellent question. I have quite a technical answer, so if it's getting too technical, tell me, and I'll try to get back to what you were really trying to unearth. This is not a fast-moving field, but it has certainly been a delicate and important area of discussion over the past few years. In the original introduction of Markov blankets there was an explicit nod to Pearl's construction of Markov blankets and how Markov blankets are used practically in terms of simplifying message passing in computer science. However, that may have been something of an oversimplification, from the point of view of the free energy principle: the kind of causality that the free energy principle deals with is not the kind of causality that people like Pearl, but also people dealing with things like Granger causality, deal with. The free energy principle starts with a stochastic differential equation, or a random dynamical system written as a random differential equation, OU processes being simple examples; in physics these would be Langevin equations. Common to all of these starting points is time, and evolution, and dynamics. Now, there is
nothing in Pearl's formulations, well, certainly there is nothing in Pearl's book on causality, that deals with time, and I know that because, before the days of PDFs and being able to go and search for particular words, I had to go through and find out that there is one paragraph that mentions dynamics. If you were in statistics or computer science, this would be the world of dynamic Bayesian nets: their take on something which is actually much more universal, which is basically the universe as a Markovian dynamical process. So just stepping back, the challenge now is to articulate the independences that underwrite Markov blankets, in the sense of Pearl, in terms of dynamics. You've now got to link two quite distinct fields: basically the field of dynamics and Langevin processes, things that have paths of least action, to the world of statistics, of Pearl-esque independences and causality cast as interventions that have observable consequences. The problem in doing that linking is that you have to really abandon the notion of causality in the world of Granger causality and Pearl, because causality is baked into, and inherent in, writing down any differential equation, be it stochastic or random or deterministic, in the sense that states cause motion. The causality in this context would be a more control-theoretic causality, which means that you can't then use the causality concept later on. But it does mean that you've now got to derive, from a dynamical Markovian calculus, the necessary conditions that would lead to the conditional independences that are necessary to define Markov boundaries. Just to slip in here: the Markov blanket is composed of minimal blankets, namely boundaries in the sense of Pearl, and on the most recent analyses it looks as if the blanket is actually two Markov boundaries in the sense of Pearl. But to get to the sense of Pearl you've got to think very carefully about what are the constraints that lead to the conditional independences, where those constraints are
specified in terms of equations of motion and things like the amplitude of random fluctuations. Once you've seen that that is the link that needs to be made, that actually simplifies things, in the sense that there's no real latitude for interpretation. Going back to the part of your question about blankets and generative models being interpreted in novel ways: I don't think there's any latitude for any novel interpretation, other than, sorry, if by novel ways you mean the best way or the correct way, and we just haven't gotten there yet, then I would concur entirely with that. If you think that there is some library of insightful reinterpretations and redefinitions, all of which have an equal veracity, then I would suggest that's not the case. There's only one Markov blanket, or there's only one particular partition that can be articulated in terms of Markov blankets, and the only novelty there is really in tying down very precisely and defensibly how you get from a Langevin formulation to a Markov blanket. At the moment the novel way of doing that looks as if the conditional independences arise from sparse dynamical coupling. If you read the causality as the influence that a state has on the motion of itself or any other state, in this sort of minimal Langevin-like description of the universe, then it is the sparsity of influence, the sparsity of coupling, that leads to conditional independences. So if the system has a sufficiently rich sparsity of conditional independences, and implicitly of coupling, then it will have a particular partition, and if it has that particular partition then the free energy principle holds. So I think the discussions around Markov, Pearl, and Friston blankets are essential, they're fascinating; the conclusions of those discussions are probably going to have to refer back to the underlying maths, and that maths is all about connecting Langevin formulations of physics to the kind of calculus that Pearl has
established in a more statistical sense.

Thank you for the educational answer. This brings us almost to the end of the .edu section, so I will pass the final question to Dean, who had several excellent points and questions. So Dean, feel free to ask however you would like.

Good morning. So the question is: what's the difference between a subject matter expert and a prediction matter expert, and how does this relate to your mode of interaction?

You're going to have to unpack what subject and prediction matter experts means for me.

Yeah, so for me, interestingly, you become a subject matter expert by gaining a certain amount of concentration in a particular field or area, and you become a prediction matter expert when you are able to think more distributively, more dispersively. And I think, when I read some of the things and listen to some of the stuff that I've heard you talk about, you've brought these two worlds together. So I'm interested in hearing what you think in terms of introducing some of the ideas and principles that you've brought into a world focused on concentrating, whether it's materializing something from an engineering perspective or deciding what's in and what's out. You've brought in another aspect to look at, and I'm just curious what you think of that.

Okay, that's a fascinating distinction. I'm not sure it's terribly important when I think about it, because clearly you're the expert on this, but it certainly would be fascinating to consider the conditions under which you were able to simulate the emergence of a subject matter versus a prediction matter expert in silico, for example, just as a proof of principle that these are both effectively Bayes-optimal ways of responding to a particular environment. My guess is that you would be able to do that relatively easily by appealing to the ideas that you find in applying some of the active inference notions to structure learning and development, where the basic idea is: if you've
got a very volatile environment, by which I mean there's lots of uncertainty in the contingencies, or possibly there are lots of random fluctuations that are irreducible in terms of your ability to predict the outcome of the trajectory of latent states of the world in which you are becoming an expert, then when you parameterize your uncertainty, usually formally in terms of the precision of various likelihood mappings or probability transition matrices in discrete state-space generative models, when you parameterize your beliefs about that uncertainty, that irreducible uncertainty and volatility, then agents that believe, or have inferred, that they are in a very volatile, changeable, capricious world usually become better at the prediction side of things, in the sense that they rely less upon deep past experience and assign more precision, or more potency, to the more recent evidence. So they have a different style of evidence accumulation, and they also have the right level of uncertainty about what will happen next. So it looks as if, in their predictive engagement and epistemic foraging in that world, they are better at predicting changes, because they're not committed to a particular explanation or understanding of how their world works. On the other hand, if you create a world which is incredibly predictable and learnable, then over time the natural pressure to minimize free energy translates into a pressure to minimize complexity, namely a way of modelling your world, and your exchange with it, in the simplest way possible, and what that leads to is somebody who becomes a subject matter expert. The subject matter is their lived world, which has now become so predictable that they do not entertain other outcomes, because they have precise beliefs about the way that things will unfold, and, using parsimonious degrees of freedom, they can make very wise moves and become very expert in the way that this particular
non-volatile, predictable, i.e. precise, world works. And the link with aging here is that, if you allow for the fact that we create our own environments, which active inference will permit, or rather is a way of framing our econiche construction, the story people tend to tell is that as you get older, you basically make your world more predictable and you become a subject matter expert in your own lived world. I no longer go bungee jumping or go to discos, because my world is very predictable; I am very much an expert because my world is basically my conservatory, my study and my bedroom, so I am a complete subject matter expert. You take me out to a disco and I will not be able to predict what is going to happen next, because I am old. Whereas adolescents and children, and certainly newborn infants or newborn artefacts discovering their world, are not yet subject matter experts, and the epistemic pressures, the motivation to learn about what happens if I do that, what can I control and what can't I control, will make them very quickly into prediction experts, until they become sufficiently fluent that they can now engineer their world to make it non-volatile, and then they presumably will become subject matter experts. So I am sure that would be fairly simple to simulate using all the toy active inference schemes that we currently use, and it would be really interesting if these two different kinds of synthetic agents did develop cognitive styles, and confidence in what they were doing, that looked exactly like the distinction you're talking about. I'm not sure it would work, but if it does, that would be an illuminating proof of principle.

Thanks for this answer, and for this session from the lab and .edu; that last answer really spoke to the importance of intergenerational learning. At this point we will take a 5 minute break, and we will return for .comms, everybody, in 5 minutes, right here.
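[Editor's note: the simulation sketched in the last answer, where an agent's assumed volatility determines whether it behaves like a prediction matter expert (weighting recent evidence) or a subject matter expert (relying on deep past experience), can be illustrated with a toy model. This is a minimal hypothetical sketch, not one of the discrete state-space active inference schemes mentioned above: it uses a simple leaky Bayesian filter in which `assumed_volatility` stands in for the agent's beliefs about irreducible uncertainty, and all names and parameter values are illustrative assumptions.]

```python
import numpy as np

def simulate_agent(assumed_volatility, observations, p_correct=0.8):
    """Track a binary hidden state with a leaky Bayesian filter.

    `assumed_volatility` is the agent's prior probability that the hidden
    state switches between trials. High values leak old evidence away and
    up-weight recent observations (a 'prediction matter expert'); values
    near zero let deep past experience dominate (a 'subject matter expert').
    """
    belief = 0.5  # P(state == 1)
    beliefs = []
    for o in observations:
        # Prediction step: belief relaxes toward 0.5 at the assumed hazard rate.
        belief = (1 - assumed_volatility) * belief + assumed_volatility * 0.5
        # Update step: Bayes' rule under likelihood P(o == s | s) = p_correct.
        l1 = p_correct if o == 1 else 1 - p_correct
        l0 = 1 - p_correct if o == 1 else p_correct
        belief = l1 * belief / (l1 * belief + l0 * (1 - belief))
        beliefs.append(belief)
    return np.array(beliefs)

# A world that is stable for 100 trials, then changes; noiseless
# observations keep the example reproducible.
states = np.r_[np.ones(100), np.zeros(100)].astype(int)

prediction_expert = simulate_agent(0.2, states)   # high assumed volatility
subject_expert = simulate_agent(0.01, states)     # low assumed volatility
```

In this toy setting the high-volatility agent never lets its belief saturate, so a few trials after the change it has already revised its view of the world, while the low-volatility agent holds near-certain beliefs and revises more slowly, a crude analogue of the complexity-minimising subject matter expert who "does not entertain other outcomes".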