Hello, welcome to the Active Inference Lab and Active Inference Livestream 17.2. It's March 16th, 2021. Welcome to the Active Inference Lab, everyone. We're a participatory online lab that is communicating, learning and practicing applied active inference. You can find us at our links on this slide. This is a recorded and archived livestream, so please provide us with feedback so that we can keep improving our work. All backgrounds and perspectives are welcome here, and we'll be following video etiquette for livestreams. At this shortened link, you'll find our calendar for Active Inference Livestreams for 2021. And here we are in the follow-up to 17.1 on the same paper. It's definitely recommended to check out 17.0 and 17.1 if you haven't, but this will hopefully be a cool discussion either way. And you can also see the future livestreams. Cool. Well, today in 17.2, we are going to continue to discuss and learn from this really interesting paper. It will be fun to speak with whoever joins for this stream, and also hopefully to get some questions from the live chat. We're just going to introduce ourselves, go over some general questions, and, seeing as it's a .2 and we had a lot of the background in .0 and then had Chris Fields, the author, come on for .1, let's just think about where we can take this fun discussion. Okay. So here, especially for our first-time participants, we can introduce ourselves and just warm up. I'm Daniel. I'm in Northern California and I'll pass it to Stephen. Hello there. I'm Stephen. I'm in Toronto, Canada. And I'm interested in a number of areas that relate to active inference, particularly those that can inform community-based development and the way that we make meaning in complex contexts. And I'll pass it over to, is that Alex there? Oh, Dean. Dean, sorry. Hi, I'm Dean. I don't know how to pass things yet. I'm pretty new to this. I'm in Calgary.
So hi, Stephen in Toronto. I decided to step into this because of a reach-out that Daniel did on ResearchGate, a follow-up, and then the opportunity to look at last week's 17.1 presentation. For about two hours I was completely absorbed in the conversation and thought, this is really interesting. So I took the next step and decided that I would try to integrate myself more with what's going on here. Cool. Well, let's maybe learn more about your perspective as we walk through some of these warm-up questions, and go through, at whatever pace we want, the topics in the paper as well. So the warm-up questions are: what is something you're excited about today? What's something you liked or remembered about the paper, or about the discussion in 17.1? And what's something you're wondering about or would like to have resolved? So maybe, since we all listened to it and hopefully those listening will have too, what was something about 17.1 that stood out in people's minds, or something that changed how they thought about active inference, or how they thought about something that wasn't active inference? Yep, Stephen. Yeah, one of the things that was quite interesting was that there are other ways of people looking at the math side of it. The math itself is normally quite a challenging area to get your head around, but to realize there are also some other thoughts happening at a foundational level, that was interesting. I didn't know about that particularly, or the extent to which it was happening. So that was one area. And that tied into this idea of quantum contextuality, which I hadn't heard of actually, and which was very interesting. Dean, what do you think?
I guess the biggest thing for me was, and I use terms that may not, you might have to unpack them, but one of the things that I was really interested in listening to was the fact that the perspective piece and how you, every time you're observing something, the perspective that you hold, you may not be aware of how important that is, but the fact that Chris sort of brought that out again and again. Well, that really resonated with me because it's something that I've been working with people for a long time. Cool. So a few thoughts that I had from 17.1 and also what you just said. So this quantum contextuality and then the part that was really novel and exciting to me was thinking about quantum, not as just simply about electrons and protons and, okay, you got strong force, weak force, electrons, protons, Maxwell's, you know, just throw the regular physics, particle physics or something, and then it's just equations that have different efficacy over different spatial scales. That's one presentation of quantum. And so then it's like, well, if you were that size or there's something about the system about that specific system or that size of system that makes it, quote, quantum. And this just expanded the definition of quantum to me to this kind of measurement communication, encryption, encoding, and ultimately also bi-directional meaning and semantic information all the way down. So that was a very different way to think about the relationship of like the molecule to the electron orbital. It's like it's quantum. It's not the way that the electrons would be behaving by themselves, but maybe that's not just because they're electrons. Maybe that's something about structure in general. So that was pretty cool. And then also you mentioned and Chris did, Dean observation is always coming from a perspective. And that was something that we talked a lot about with the projective consciousness model from an earlier discussion. 
And so that was saying, yes, you get all these cybernetic functions and all these functional attributes of consciousness from a projective geometry where the observer is at the center, at this actual special point, and then things are projecting out in a certain way. So that's kind of a physical analog; it's geometric. And then, Stephen, you said there was a different sort of math that they were presenting. They didn't use projective geometry. This was from a different branch of mathematics, with the category theory and these other theories that we were just barely scratching the surface on. So, you know, what does it mean that those different branches can reintegrate in a certain way, and then potentially not just through an equation, but through a process theory? So that was some interesting stuff. Any other thoughts? Yeah. Go ahead. No, that's really interesting. I'm just letting it sit because it takes a couple of seconds to land. But I hear what you're saying. I like this idea of the perspective, and the idea that this quantum idea can be much more across scale. And he did talk about, I think he called it, quantum noise. There was the use of noise, which then gives a rationale, I think, for how these quantum processes could permeate through everything, even if you don't see quantum transitions in the simple sense, like you said, with electrons and stuff. This is permeating. And that then comes to that idea of contextuality, as well as perspective. When we take a different perspective in a very, okay, I can see I've moved from here to here and I've taken a perspective, but perspective at this kind of phenomenological deep level is kind of happening at all scales. Like the context is changing beyond what we can take a perspective on, if that makes sense, except we use the word perspective because that's what we can get a handle on.
But it's changing in so many ways, of which some we just can't know. So I thought that was really interesting. And maybe that's something that our biology is knowing at a deeper level; maybe in these nested levels there are ways in which that sort of knowing is being harvested, for want of a better term. So yeah, thanks. The body knowing, which is something we hear about and something you just said. It's almost like, well, the body knows through action. Like the tendon knows to do this because of its biomechanical properties, but that's how the knowledge is stored. It's not a Turing computer. It's not doing some trigonometry. It involves angles, but because of how it's embodied. It's not knowledge on a hard drive. So that's sort of why some of these information science approaches are probably on the cutting edge of what information means, because we're having to think about how there is information, like in the body of the insect that helps it walk, or in the nervous system of the insect. There are central pattern generators, for example, and that's where the knowledge of walking is. So it can't do every walking pattern, but it is like this attractor for walking. And that's where the knowledge is, but it's not all knowledge. And it's not a text file or script that is implemented on how to walk, or not even really an algorithm. So it's kind of like that. And also, yeah, knowing is on the inference side, and then what is the other side of that, with action? Interesting stuff. I'm just getting started. Yeah. That's a good question. So I sometimes use the frame of prisms and prismatics. You've got energy in, and then you've got a rainbow on the other side. Daniel, are you kind of talking about seeing both at once? Like knowing what both are doing? I mean, it's going to go through this triangle, right? But I mean, just being aware that what we see on one side is not necessarily what we see on the other side.
And that doesn't make one more important than the other. What's important is knowing how things are looking to the person who's got this perspective now. Okay, I'll try to address it. Yeah. I seem to remember Shawna saying the word prismatic or chromatic or something about that. Yeah, I use that all the time. Yeah. Okay. It probably has an interesting technical definition. So it's interesting that it's like a vivid sensory representation of all the opportunities, all the possibilities, all the colors of the rainbow and even beyond, with the electromagnetic spectrum. Greetings, Blue. Welcome. And then there's, yes, the prism focusing different rays of light, or just optics: the rays of light are focusing at the center point. And then that's where the focus is. That's where the regime of photons is. And then that's what enables the function. Greetings, Blue. Do you want to say hi and give any thoughts while I just resize some stuff? Sure. Good morning, everyone. My name is Blue Knight. I am an independent research consultant based out of New Mexico. And yeah, this paper was, I don't know, really difficult to work through, I thought. And also really well written. I'm not one to want to sit and read math, lemma this and theorem that; that's always not my thing. I mean, I always find it difficult to digest. I like to do math as problems but not read it as reading material. But I thought that for the amount of information that was contained in this paper, it was really well written and easy to relate to. I did think that it could have used some additional practical examples of stuff. But then, considering the length of the paper, I was like, man, if they put practical examples for all of these different things, it would have been like a 70-page paper. So those are my thoughts on the paper.
And it was great having Chris here with us last week to answer all of our questions; definitely something I really enjoyed. So cool. Yeah, I wonder if we can keep that question about examples in mind as we look through the keywords or look through the roadmap. Let's just reacquaint ourselves with the paper. So this is "Information Flow in Context-Dependent Hierarchical Bayesian Inference" by Fields and Glazebrook, one of their papers and collaborations together. And they basically lay out that there are two goals, kind of a twin purpose, with this paper. So one piece is related to what Stephen was bringing up earlier: taking this quantum contextuality approach and then generalizing it in a scale-free, a.k.a. multi-scale, way using techniques that are scale free, such as Chu spaces and channel theory. Although, also, a linear regression is scale free: there's no a priori scale for a linear regression. It could be over little things; it could be over big things. So in a way, just saying scale free is almost like saying we made it about how the modeler approaches the system, so that we didn't get tied down in attributes of the system that we don't want to be making scale free. And then the second side of this scale-free approach to contextuality is that they're going to demonstrate a connection between that and a lot of recent work in Bayesian statistics and hierarchical Bayesian models, which is also where the most direct connection to active inference comes into play, because we've also seen hierarchical Bayesian models as implementing active inference. And then that's being tied to something that's at a different or a higher level of mathematical generality, which potentially could allow an active inference model that you'd make for a specific task, if you make it within that framework, to be lifted out and put somewhere totally different.
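To make the hierarchical Bayesian thread a bit more concrete, here is a minimal sketch, a hypothetical toy example rather than code from the Fields and Glazebrook paper: two hidden states, one observation, and two "contexts" that each supply a different likelihood mapping, so that the same observation updates beliefs at both the lower (state) level and the upper (context) level of a two-level model.

```python
# Hypothetical two-level (context + state) Bayesian update.
# Not from the paper; just a toy showing the hierarchical structure.

def normalize(values):
    total = sum(values)
    return [v / total for v in values]

def state_posterior(prior, likelihood_col):
    # Bayes rule for one observation: p(s|o) is proportional to p(o|s) p(s)
    return normalize([l * p for l, p in zip(likelihood_col, prior)])

# likelihood[context][obs][state] = p(obs | state, context)
likelihood = {
    "ctx_A": [[0.9, 0.2], [0.1, 0.8]],  # informative mapping
    "ctx_B": [[0.5, 0.5], [0.5, 0.5]],  # uninformative in this context
}

prior_states = [0.5, 0.5]
prior_ctx = {"ctx_A": 0.5, "ctx_B": 0.5}
obs = 0  # we observe outcome 0

# Lower level: posterior over hidden states within each context
state_post = {c: state_posterior(prior_states, likelihood[c][obs])
              for c in likelihood}

# Upper level: posterior over contexts via marginal likelihood p(obs | context)
evidence = {c: sum(l * p for l, p in zip(likelihood[c][obs], prior_states))
            for c in likelihood}
z = sum(prior_ctx[c] * evidence[c] for c in likelihood)
ctx_post = {c: prior_ctx[c] * evidence[c] / z for c in likelihood}

print(state_post["ctx_A"])  # context A shifts belief strongly toward state 0
print(ctx_post)             # context A becomes slightly more plausible
```

The point of the toy is that the observation does double duty: within a context it sharpens the state belief, and across contexts it provides evidence for which likelihood mapping (which "context") is in play.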
Like you might be able to use it to describe systems and fit parameters or generate data. That's something that not all models can do. But there are probably other modeling maneuvers that will be possible. And so it's kind of like, if you have Gaussian assumptions, certain things in statistics work really well together, like certain relationships between multiplying things and adding variances. So this is kind of a higher-level statistical relationship. And I think that has to do with the commuting of the cones or the cocones. But who knows. Okay. On the roadmap, were there any sections that sprang to mind, or anything somebody wants to say on this? All right. Let's go to why it matters and think about the examples. So, yeah, Stephen. Yeah. Okay. Good point you just made. That example that was just given about the prism, I think it was a good one. I think that might even itself be a topic of discussion. It opens up things, but it also gets at some other stuff, because one thing that's going on here is the Bayesian inference; it's kind of inside the situation that's at play. And if we're going to say why this matters, I think it might relate interestingly to a paper that's actually coming up in the next week, which talks about the way that we understand global workspace theory of the brain, and the idea that we have this phenomenological consciousness which is somehow attuned to the flux of sensory data, and then we have what we're actually often modeling, which is what we're able to consciously be aware of. So this flux, which sort of starts to get into the Markovian monism theories of Friston, this seems to give another way to go from the flux up. Maybe in the way that David Bohm talks about the implicate order of nature.
And I got a message from Tim as well, saying to think that, you know, we've got this idea of quantum noise and classical noise, and these nonlinear dynamical systems are kind of working with what could be seen as noise or changes in noise. So I think that part of the story is sort of important across all of active inference in a way, because at some level there has to be a way to extract meaning from uncertainty. And it seems that this gives some other way into that. Blue or Dean, here's a thought. Okay. Go ahead, Dean. I'm just going to say, back to the perspective thing. One of the hardest things when I was working with people trying to find themselves in new situations was that you can either be inside of the box or you're outside of it. And I like the word that Stephen used there, extraction, because I think what this work does is maybe enable that extraction piece a little more easily, if people know what you're talking about in terms of stepping back. We have a tendency to step into things and analyze things from an inside-out perspective. And I think how this could potentially work is as a way of gaining a sense of removal, which is actually advantageous at times. I mean, if you're looking at something that's complex, it's very difficult sometimes not to get caught up in the moment. I was speaking just before the livestream started about whitewater rafting. When you're going down the river and you're in a class three rapid, you're thinking about that. You're not thinking about the perspective of being up on the cliff, watching the raft go down the river. I think this allows people to get outside of that moment. That's one of my, hopefully, why-this-matters contributions. Stephen.
Well, just with that, I think in some ways I agree with what you're saying, but I would also say what's interesting is that maybe it allows you to remove yourself more than we normally can, but also shows that when you think you're immersed in it, you aren't as immersed as you think you are. Right? Because there are these other levels of immersion which are beyond your conscious awareness, and there are probably even other levels which are beyond your phenomenological and biological systems' ability to tap into. So maybe it's both ends, if that makes sense: you can go further in, if the word in is meant that way, and further out, if the word out is meant that way. So on this why it matters, here's one thought, and then anyone else can go. So Stephen, when you said extraction, and I know you have a background in chemistry, that's sort of like separating into the different fractions. So it's not just extracting, say, the metal from the ore. It's actually like, we can have a discussion: what are the affordances? What is on the table? What are we going to say is our option space for policy, and then what are our preferences, and what kind of uncertainties are we each bringing to the table from our perspective? So it is not like we're extracting the essence of the conflict and then we're going to fix the essence, or the reduction of the conflict, or one perceived dimension of the conflict, and then that will propagate through the other aspects of the situation. It's kind of like we're going to come with uncertainty to a minimal model, and then if one person has one narrative for the policy and somebody else has a different narrative, that's off the table. So that is a way where people can bring their different ways of seeing a common model differently.
Like in the quantum measurement case, if both people are measuring the quantum particle and they're both fine with it, that's one type of experiment. Stephen? Yeah, I like the point you picked up there on extraction. It was interesting because that came up so strongly. And I think the other thing that's interesting is the idea that, even in chemistry, the idea is still that you extract some thing, right? It's quite thing-like. But the idea that you extract something from noise, because in chemistry it's normally like, basically, you calculate everything down and then say, what is negligible? What can you say is beyond concern? So you just say it's within tolerance, so to speak, and then you work within that. Whereas in this case, it's like all that noise, so to speak, is what we're extracting the information from. It feels to us like the stuff's out there and we're getting thingness, so to speak, but we're actually getting signals and, I suppose, extracting enough variational free energy changes in the noise to then perceive it as, you know, knowledge. But it actually has to start out as fluctuations in, I don't know if the word noise is maybe the wrong word, but certainly fluctuations in nonlinear dynamics. So that changes a lot of things about how we understand knowing the world. Interesting. Here's kind of another point on the signal to noise. So we're thinking in the Shannon information framework, a.k.a. Shannon's information theory, nothing against it. It's about the telegraph and the wire, and getting the exact recapitulation of the signal on one end of the wire as on the other end of the wire. That was the second paragraph of the 1948 paper by Shannon. We talked about it in the dot-zero video. He said the question of communication is about perfectly reconstituting a signal. And so in that framework, if you're doing a chemical synthesis and there's one product you want, then you can calculate the yield of that product.
And then similarly, if there's one thing that you know that you want, you can calculate really exact statistics, like information. And then once you open it up to the semantics, the noise takes on a different perspective, just like you were highlighting there. There's a lot of richness in what you already said, but yes, noise is richer than just decay in the copper wire, because we're not trying to recapitulate anything in the person's mind. It's not a bitwise transfer through speech. It's actually like the differences in the way that we speak are conveying meaning. So it's not just error from some perceived pattern of speech; it's the meaning that the person is conveying through their action. So that, I think, is also something that really matters when we think about differences in how people are going to be learning or communicating or something like that. Stephen? Yeah. And the interesting thing is that while, to some extent, you heat up a liquid or something, it expands, and the energy put in shows itself as the work, you particularly see, say, when water freezes, that there's this whole question of entropy and how it's somehow tied to some rearrangement which is not shown as work in the system. You know, you try and heat up ice and it hits a point. There's something going on there and we can't quite know what it is, and it's energy hungry in some instances or energy yielding. But that process is beyond our direct perception. So you just have to build it in with Gibbs free energy and the entropy and enthalpy terms to see what will happen with a reaction. So that actually ties into Friston's work and is sort of the foundation of his work in a way. I'm not sure how entropy was named in this paper; I don't think that's directly part of this math. This is another way to come in without maybe coming in with the word entropy. But I think it's interesting. Thanks. Dean?
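The telegraph-wire picture of communication discussed above can be made concrete with two standard textbook quantities; this is a generic illustration, not a computation from the paper: the entropy of a binary source, and how much of that information survives a noisy binary symmetric channel.

```python
# Shannon entropy of a binary source, and mutual information across a
# binary symmetric channel (generic textbook formulas, for illustration).
import math

def h2(p):
    # Binary entropy in bits: H(p) = -p log2(p) - (1-p) log2(1-p)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_info(p_source, p_flip):
    # Mutual information I(X;Y) = H(Y) - H(Y|X) for a binary symmetric
    # channel, where H(Y|X) = H(p_flip) is the noise entropy.
    p_y1 = p_source * (1 - p_flip) + (1 - p_source) * p_flip
    return h2(p_y1) - h2(p_flip)

print(h2(0.5))                    # 1.0 bit: a fair coin is maximally uncertain
print(bsc_mutual_info(0.5, 0.0))  # 1.0: noiseless wire, perfect reconstruction
print(bsc_mutual_info(0.5, 0.1))  # ~0.531: noise eats part of the signal
```

This is the sense in which Shannon's framework gives "really exact statistics": the noise term only ever subtracts from the recoverable signal, whereas the discussion above is pointing at cases where variation carries meaning rather than just degrading it.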
I just, again, try to draw from examples from when I would talk about this with people. And there's a privilege always given over to gap closing. We want to close the gap. We want to understand. We want to find the signal within the noise. And I think what this does is give over a certain amount of credibility to gap respecting: just leaving the thing that's out of it for a bit, so that you can actually take in more information, back to what Stephen was saying and what you're both alluding to. How do we give ourselves a little bit of slack by saying gap respecting is just as important as gap closing? And if we give ourselves that, it's kind of a privilege to do that, because it's twice as much effort as just gap closing. I think the benefits can be really quite profound. Nice point. Thank you. So, Sarah, oh, Stephen, do you have anything to add here? Well, just one last point on that. Where you could then start to get into abductive, growth-based knowing is: maybe we're used to it because humans have got this capacity for deduction and rapid exclusion of what's out there, and induction, trying to narrow the gap between us and a goal. But when we're growing, when we think about it, sometimes I need to just sit back and heal. There are certain situations in the world which I can never process psychologically; I can't close the gap on certain things. I just have to sit back and heal. Right? Because they're hard. And so that same piece you're talking about there then starts to tap into, well, that's the main game in town for organisms. Yeah, the game of closing gaps is a nice add-on that we have. But it isn't the main game. Blue? So, that just brings to mind time as a gap closer, right?
Like when you just sit back and heal, ultimately time is a gap closer. If you think about the time value of money, time is ultimately the thing that will close the gap on all prediction, and we don't really see that in the models or in the math at all. It'd be interesting to see the temporal dimension and how that plays in, or if people have thought about that. I mean, time, like, can it flow forward and backward? And I think that was something that was brought up maybe in the last livestream. That's just interesting. Yeah. Nice, Dean. Just one last thing. I completely agree with you, because again the privilege goes to time on task, or task on time, instead of just seeing time in and of itself. I have this crazy theory that time is learning, not just a way of measuring learning, and you can't get there unless you can get off that treadmill. So thank you. That's a great point. And Dean, what you said about learning as time, it's like the difference between kairos and chronos. So chronos, the chronometer, decimal time, is about the wall time, the clock time, and then action is happening in a space of meaning of action. It's time for dinner, and that's an action, an affordance, and so that is something where we can bring in the encultured component. Whereas the other approach says the wall time is just simply how it is, and we're going to, Procrustes-style, shrink, expand, use heuristics, use coarse graining to get around this sort of fundamental linearizing from our perspective, just at the scale we're at, which makes it not scale free, which is perhaps why quantum only applies to certain so-called spatial levels of what we measure. Stephen? Yeah, sorry. Give me a sec. I had it and then I just lost it for a sec there. Let me ask Sarah's question, because it's a really nice point in the chat. So Sarah wrote: could we look at the Chu space slide and get a broader perspective view, to maximize signal to noise, so to speak.
So maybe I didn't perfectly read it, but I think it was Chu spaces: can we enter into that here a little bit and connect it to some of the bigger discussion points, and maybe even the other useful points? So Stephen, do you want to give a thought, or is there anyone who wants to give a thought on the Chu space piece here? Yeah, go ahead. I'm not sure exactly how, but this gives a way that things can transform from one space to another. So it gives you that capacity. I mean, to sort of keep the idea of time, and I know this paper is not specifically active inference: without time, you're not able to make inferences from noise, so to speak, information content in random fluctuations. You only get that because you've got time, because then you can tell if there's been a fluctuation; one snapshot can't do that. Now this seems to maybe give a way to, okay, you've now got a shape of the information space, what happens once it has a shape, but beyond that I'm not entirely sure. Yeah, Blue, anything you'd say on this topic? Otherwise I think walking through the way they laid it out, and the definitions, could be helpful, and some of the examples they gave. Because, like, Stephen, you mentioned this morphism, right?
And it seemed like there's the Chu space, and that's like a matrix, but one that can be various kinds of things. It's a matrix, but just like we can represent a network with a matrix, so we can think about the space of all the networks, all the social networks of five people: it's like a five-by-five matrix of ones and zeros, or weighted numbers. So that's the matrix representation of the network, and that is one type of meaning of the matrix. But this can also describe probability, and it can describe programming languages, Bayesian networks, like if each cell had information about a conditional dependence or not. So a Chu space is like the space of the possible of these things. And then this is the part where maybe we'll just say it's beyond the experience on this conversation, but the Wikipedia article has: understood statically, it's a relation, and then understood dynamically, it has another definition. So it's interesting that it has a static or a dynamic interpretation. So unless anyone in the chat wants to jump in and help, or somebody with expertise wants to come on later, how can we connect that to some of these use cases? Blue, on that or anything else, go ahead. So I think the dynamic aspect of the Chu space and the Chu morphisms is like: if you have a set, a matrix, it's any series of transformations that can be done to form a new matrix. So that, in my mind, is the dynamism of it all. Like if I take set A and I do this, that, this, that, in this specific order, so again here's the temporal; we're alluding to time as a variable or time as a gap. But the temporal sequence is important, because if you multiply before you subtract, or if you get your order of operations wrong, it's going to be different, right? So the temporal order of things, that is the morphism to make the next set. Thanks for that. Another interesting sentence is that it's like the Chu transform or the
morphism is a pair: it kind of comes into existence as two spaces and a mapping between them, or a map between them. And so that is what ties it really broadly to functions and computation, and these sorts of transformation, transmutation, or input-output-seeming, communication-seeming processes, measurement-like processes, because it's about some space and some other space. And so when we think about the active inference model, we have hidden states that inference is being done on, and then we have observations, and it's almost like there's a matrix that connects the two of them. I'm not saying that that's going to be all at this level of Chu spaces, but it's kind of like there's a mapping where you can run the model from the sense data to the state and do inference, or you can generate, from the state, likely sensory data. And that is bi-directional; it's like a two-way Bayesian machine, because it can take in observations and update expectations of hidden state variables, or it can generate. So that generative and receptive element of Bayesian computation, it's like whatever is the topic that includes that, and that network matrix, and a few other things, that's kind of where this topic can go, which is why I think it's a really rich area, and that's why the papers are being written in 2020. Yeah, Stephen, and then Dean. I think this idea of moving between spaces is also really valuable, because as you know, I'm very interested in understanding how we can look at awareness, and sort of indigeneity is very much about awareness, and a lot of the active inference work tends to be about whatness: what's this, what's that, how could we understand what this is understood to do and think about. But when you start to get to this broader understanding of how we can make meaning, and taking that idea at a very broad level, there is the ability to transform between spaces and engage with spaces, and that might extend up to our conscious awareness space, as well as below the level of
consciousness, you know, down to the level of cells. But it seems to play out the importance of spatial transformation over whatness. Thank you, Stephen. Dean? Yeah, what I took away from this was that when we discriminate, when we separate, this sort of gives you a sense of why we might do that. I think of some of the stuff that we looked at in some of the work that I did. We would ask people to watch something, maybe a YouTube clip, and instead of trying to take all the information in as it's classically recorded through notes, we'd ask them to organize it into columns and rows under three headings: tools and rules and pools. Now, that was the mnemonic, but pools is basically where the gradient is, what's aggregating. And so we'd ask people to classify something, and what people found was that something could fit into two categories. Something could be a tool and a rule; sometimes it could go across all three, it could be a tool and a rule and a pool. Like, there was a way of being able to see relationship. And I think that's what this Chu space construction does. It says we can discriminate, we can separate, but sometimes things fit into multiple categories, and that's something we need to be aware of. Our minds are able to sort of pull apart and then reorganize. That's what I took away from that. Nice, very nice, Dean. And I think that's sort of reflected by this cone, and potentially even your prism topic, because it's like, yes, there's a unity of C in this case, but it is going to be manifest in these different spaces. And this is a really informative slide. It's talking again about this pair of Chu spaces that are related through a transform. The transform A to B, this is from a section or example in the paper, "can be viewed as transporting the information encoded in valid flow formulas from A to B, and can thus be thought of informally as a channel from A to B, implicitly providing a sense of spatial and/or temporal separation between A and B." So I thought, like, there's the
maps of meaning and so there's my internal map just so to speak just using it instrumentally and then there's person B and then if we're playing 20 questions and it's like okay is the house bigger or smaller than a bread box there's some semantic mapping internal to me and then there's a channel with the sense data and then there's a semantic flow related to whether they're going to give me a yes no which is like Chris said that's quanta because we're quantizing the response is it bigger or not we're making a measurement of the response or they could be wrong or they could you know we couldn't hear them or something but it's a binary question and then there's a semantic flow and then there's a back and forth potentially otherwise you're talking about different things and so if somebody is thinking maybe I was thinking of a little toy house smaller than a bread box so if you have a different frame of reference then even the communication is not going to work because the holograph is going to be read too differently by the people even with the same information so that's like the separation of cognition across participants in a conversation and then that's happening implicitly through time but then it seems like also things can be just more abstractly described as being separated through time and so something that's separated in time but not in space is like memory which is where this whole discussion of memory comes into play because it's a communication through time at that spot like a time capsule it's like 50 year message now debate whether you're in the absolute position of the time capsule is the same but it's like relative to it it's a message through time live so it's funny that you bring up a time capsule like I've been really thinking about you know scale carrying information forward and backward from cells to person to a group of people and it's like what do we carry forward and when we used to make time capsules every 10 years I don't know if we still do 
but then we would dig them up and dig them up every 20 or 30 years and what do we put into a time capsule and does that effectively capture the culture of the society at that time like a CD or like you know these different things that go into the time capsules like this is actually like what we're carrying forward and sometimes it's like dumb stuff like I don't know it's just funny to think about what goes forward right in time and why that and like how the selection process works and if it's like a systemic or generalized thing or I don't know it's just something that I've been thinking about how that is transmitted across time great question and very prescient of course with so much digital information and the potentially fluctuating cost of storage wildly cheap and then wildly expensive and that could lead to some very unfortunate information flows evolution has sort of an answer to that which is like what's passed forward is what is fit to its niche not biggest bench press, strongest, fittest, but fit to the niche, adapted, skilled to the niche so we don't need to go too semantic on that but I think the term actually does fit and where evolution by natural selection as well as active inference come into play is by formalizing a process theory you kind of step away or at least you can maybe get some distance or some modelability of these complex processes so in evolution the process theory is basically you have heritable variation in phenotype that's found in a population where there's heritable variation of fitness and if those are all linked up in the right way at least locally that's the way that the phenotype and the population moves and so active inference it's similar but kind of like with information just saying okay sense data comes in action goes out from the generative model into the niche back into sensing something happens with the generative model that's how sense information engages in policy so it's like if somebody listens to a lecture and they only
write the notes of five sentences and then they pass that off the process theory explained why there was an information bottleneck there because we just modeled the situation and we don't need to go into whether it's tragic there were only five sentences or you know whether they were five deep sentences or something like that we just say what it was that was the actual transmission and then that is a starting point for the kinds of value oriented analyses that you might want to do afterwards Dean yeah I'm just curious what other people's thinking is on this because typically between the inference and the model we not necessarily the people here but oftentimes what's parked in between those two things is a plan and what you just described is not a plan so I'm curious what other people think is between the inference and the model I have my ideas but because I'm trying to get to know you folks I'd like to know what you guys think is between the inference and the model Stephen yeah well this is that's a six million dollar question I think that the introduction of the quantum approach at certain scales does yield something very plausible and Chris's work with Mike Levin could tie into that because he does a lot of work with bioelectrics on cell membranes so I could see that this question where it's more directly attributed to quantum fluctuations and what that means you know at the level of the cell and then the bioelectrical or the chemical electrics or the bioelectrics of the membrane and then the next level that I can kind of see more plausible is brain waves and you've got this overlap maybe there's some quantum effect but then in between that it's probably more action is more classical in terms of how you pick things up it's literally like I just keep doing it and in a very rough and ready way I extract some sense of how it is to pick up an apple or something inferring things in a more classical way so I think there's a mixture of a classical and not only
partially knowable set of information transfer and maybe some more quantifiable stuff which maybe is I don't know if it gives you more accuracy at that level but might be more quantum like but that's my thoughts OK just to Dean the question was what's in between the model and the inference? OK is that what you asked? Yeah so let's just kind of go into that so what by model you mean the actual code as you wrote it on the computer or the actual systems diagram like the model you mean what's on the computer or what's on the paper is that what you mean that's the product of the model and then the inference is the computation that the model does or is the inference how we then look at that model and then decide how we should do policy because it's literally it's moment by moment it's the smallest of time frames relative to the end game or the end product how do people typically they put a plan in between those two bookends that's how they parameterize and so what I'm curious is what other people think actually is in there because I know we see a plan where we're given a plan but I'm not sure that that's exactly what's going on so I was just curious what other people thought is going on Blue so like I'm not sure that I'm also like understanding the questions so there's like the generative model and the predictive model so there's these two models and so the generative model is like the model that you have Daniel probably has a better way to say that and then the predictive model is in my mind that's the plan like I predict that if I invest in crypto I will have 10% more money at the end of the year or whatever right like so that's my prediction that's my predictive model that's my plan so in my mind those two things are synonymous but maybe Daniel can clarify or speak to that in a better way I think I'll try to just take what you wrote and annotate it a little bit the internal states are a generative model of the niche and that includes the agent's action in the niche like I
can see myself going on a walk every morning that's like a combination of a trajectory of action and affordances and a niche and other regularities and other availabilities shoes or something like that and then there's the actual generative process and the generative process is like the part that is the play and also where cybernetics theories like the good regulator theorem come into play because the cybernetics is like well you have to be effective you have to have all the variables in your environment but then we can look at for example alternate understandings of astronomy and there might be evolutionarily adequate different understandings of astronomy that allow individuals to deal with the regularities in their environment for the niche that they're in given their affordances but the sort of realism take would be that it's the same moon and sun at the chart you know the Fibonacci spiral all these things you can do but it's about the narrative of policy under that generative model the niche but it's the same market that it gives me an ability to act in a non-surprised way as opposed to using surprisal which is this kind of metric which I can kind of use both in my retina and my nerves and everything is this kind of general awareness of how much of a variation is applied but surprise and you know something can happen to me once and I make big decisions on that and that is sort of basically sitting on top of all this activity sort of need to know as an animal because and it's and there's a great presentation on this and I can't where at one point in history basically we have to add in who into our environment so up to then it was just like how part of sometimes things fit multiple categories so how do we explain that with a plan no because that's fluid that's just dependent upon how you're looking at it and sometimes you'll talk to the person beside you and say do you see what I see which then you've already pointed that out a couple of
times in this conversation so I'm just trying to figure out what's in there is I think it's more than a plan to default just to that or reduce just to that because I think that's part of the problem when we're trying to figure this stuff out thanks Dean and Blue so definitely there is a who and I think that that really like loops us back around to the point of this Fields and Glazebrook paper that there's this intrinsic contextuality and so you can't really effectively model contextuality unless you can remove the intrinsic component right like when you can remove the subjective aspect like I mean we had a conversation last week or two weeks ago about like at a dinner party like how was the food did you have a good time like or not like how was the food was it fun and so all of this is like there's no objective answer to these questions it's going to be different for all the 15 people that were at the dinner party right like some person liked the food some person had fun one person didn't one person you know thought the food was cold or didn't like the dessert or whatever right there's the subjective experience so it's really difficult to separate the who from contextuality nice questions and answer Stephen yeah this is very important actually what you're saying I mean I think this also ties into this idea of what it means when we do planning and also how we think of planning and start to extrapolate that so generally speaking in the everyday sense of planning and we have like some sort of shared something in the niche like we have a plan we often make plans if not it's somehow there in some verbal semantic thing which isn't just in our head it is in the niche it's sort of in this semantic niche and but the idea for instance of how does my body know the plan for getting on a streetcar right when the doors open right I know how to do that and it's not but that kind of plan has to come from somewhere and it's much more there you
know I mean so they get all merged together but the and that maybe is partly in our niche in the way that we have the right sort of shoes on we we set up the design you know because we have some control but then even when you get into other areas of the niche okay I come across a particular cliff in a mountain and I you know I'm not from the modern world so I haven't got all the technology but you can reinterpret the past you can do it yeah that's not within our affordances it's we can only infer on certain things but we have causal agency and actually in the SPM textbook there's some integrals that are like from all of the time series together just like sort of mash all of time together and there's other ones that are calculated like up to that moment in a sliding window and then it says that's what makes it causal is we've restricted the analysis to only the previous states that's kind of how mathematically causality is defined as is like well if it's lagged in time or if you slide a window as the arrow of time you know changes but then that's not the same thing as the arrow of time or the experience of time or anything like that so yeah but then this local logic slide was just to say that and to also continue this discussion on contextuality it's the local settings so how can I where does this matter and how does this expand the discussion hopefully lead to more powerful models is um if you're not playing with the local rules then you're not playing the game and so you're going to semantically lose against nature if there's organization in a way that attempts to violate local logic like if there's some equivalence between heat and computation in our current understanding and you design a machine that just violates those rules like you don't plan to dissipate a certain amount of heat given how much computation it's like you're not designing within the rules of the system you're just making a poor computer and so we're talking about communication though and 
information transfer that's meaningful so what are the local logics for communication what are the local logics that are shared by people who speak the same language versus all languages versus all ages and things like that what's the shared context to communicate what we need or how we should act together what else would be a fun slide to go to or just topic to go to it mentions ergodicity there that might just be interesting to see its take on that you know Blue do you remember but um I feel like ergodicity wasn't we can look actually in the paper itself let's just see if it's used not used in the paper itself but we put it in there for some reason partially related to these kind of what we're talking about now in a way like as you bring up it's a relevant term but yeah it might be there's a question about it raises that question because it's such a big part of active inference how those two things relate because it seems to be able to get at some of the questions without the need necessarily for ergodicity in the same way but maybe I'm not reading into it but I got a feeling that it may not require it because you have these spaces that maybe have inherent ways of obtaining knowledge or information between these different types of morphological transformations okay I think here is where we asked it so this is the quote in the paper they're talking about contextuality by default and they say the criterion effectively generalizes the intrinsic contextuality of quantum theory so the one that we know of with the wave and the particle and the entanglement, spooky action stuff to the case in which other properties of a context e.g.
the order in which questions are asked also affect the distribution of a variable of interest so it's like 20 questions if you go down one branch first it's going to be like a bifurcation with a question and communication is like a bifurcation whereas in the telegraph in the Shannon info theory it's like if a signal is garbled and it doesn't come through it's like it's a lost transmission and semantically it might ruin the file but at the level of the message it just wasn't transmitted accurately so it bifurcates the meaning of the file but from a signal to noise perspective it didn't have a categorical bifurcation maybe only one bit in a million was actually flipped and that was just a stochastic thing but when we think about semantic information flow the order in which the information flows especially in a dialectical or in an ecological context is like the whole topic information flow is improvised or at the very least it's engaging so it's not just letters on the page recapitulated but it's something informational Stephen?
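A toy sketch of that order effect, with entirely made-up names and numbers (nothing here is from the paper): if answering a question is treated as a measurement that shifts the respondent's state, then the distribution of a later answer depends on which question was asked first.

```python
# Toy model of "contextuality by default": answering a question is a
# measurement with back-action, so a later answer's distribution
# depends on question order. All names and numbers are illustrative.

def ask(state, question, coupling):
    """Return P(yes) for `question` under `state`, plus the post-answer state."""
    p_yes = state[question]
    answer_yes = p_yes >= 0.5  # quantize the response to yes/no
    new_state = dict(state)
    # back-action: the answer given reshapes beliefs about related questions
    for other, weight in coupling.get(question, {}).items():
        shift = weight if answer_yes else -weight
        new_state[other] = min(1.0, max(0.0, state[other] + shift))
    return p_yes, new_state

state = {"bigger_than_breadbox": 0.7, "is_a_toy_house": 0.4}
coupling = {"bigger_than_breadbox": {"is_a_toy_house": -0.3}}

# Order 1: ask about size first, then about toy-ness
_, after_size = ask(state, "bigger_than_breadbox", coupling)
p_toy_after_size, _ = ask(after_size, "is_a_toy_house", coupling)

# Order 2: ask about toy-ness on its own
p_toy_alone, _ = ask(state, "is_a_toy_house", coupling)

print(p_toy_after_size, p_toy_alone)  # the two distributions differ
```

In a classical, non-contextual survey these two probabilities would be equal; the coupling term is what makes the ordering of the 20-questions branch matter.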
Yeah that's important and that ties into the idea of how much there's like with Shannon entropy or there's the idea of a signal and you're losing some of it now with Friston's work it's the idea it's all non-linear and you need the action so that you can compare the way that the signal to meaning is happening to infer something from the noise you need the action but in this case it's like there's another way to sort of say there's an inherent something out there so to speak there's sort of something in the signal it's not just only the action needed which is probably why then this can be applied to Bayesian inference and not only active inference because they're saying there's something inherent you don't only need the action and you're getting it because of the order in which the questions are asked and that order actually that's interesting then that often how I've thought of contextuality I just think about contextuality is where am I you know what's happening but what order did what happen give me something new when are we and when were we there all those questions Blue so I have a comment on can you back up to the cone-cocone diagram that this is why we brought it up so here in this cone-cocone diagram the one on the top right like so you see this information channel C that relates all these classifiers A1, A2 all the classifiers are linked to this core information channel and that kind of doesn't really imply like a distributed system it implies like a centralized control but we brought up ergodicity because if you look down in the second picture here from C to D there's this commutative property that is or I guess it's even in the top picture so it's the G's right like the commutative aspect of the cone-cocone diagram and so I think that that's where the ergodicity came into this paper right like the fact that all properties are available to all points in the cone-cocone cool and also you know these topics can go expand
beyond the narrow and the technical but let's think about a technical definition of sampling ergodicity so not taking on all the experiential elements per se but A remember is a classifier so it's a two space script letter A that's what's being represented and it itself is a classifier because when we were talking about the two spaces there was like it could be the types of a computer language and then what their descriptions are so this is like cats are small you know horses are big this is my classifier you know big and small and then horse and cat and then just which one's big which one is small so it's a two space informally and then here is a different two space A2 at a different time maybe it's time point two maybe it's person two maybe it's all the people so here's me A sub K all the K people or all the K moments and whether it's one person through time if you ask the question you get the same quantized response yes the cat is smaller than the horse because cat small horse big if you always get that response back semantically then you're sampling from ergodicity with respect to that question so there's it's a big topic with like what it means in physics and everything like that but just from a sampling perspective if there's K people in the room and you sample and all 100 of them or every time you sample 70 out of 100 give you the same answer that's the sampling ergodicity and then here are C and D which are like two observers and they're kind of co-sampling but they can only see the raw data so they're able to see all just what we were getting at with the top part just like this is like sampling across informally population or through time and then it's ergodic if there is a stationarity there here that is the same scenario but now there's two observers and then if they are related in this very specific way then it opens up a lot of connections semantically between C and D also Sarah added in the chat that salience is a time dependent phenomenon and
that's extremely true so everything that we talk about with attention and salience and surprise those are like almost explicitly time bound processes and even in machine learning when people talk about neural networks with attention or with recurrence they're kind of talking about a time element but anything that's where it's not just a snapshot that moves state to state but there's something that's captured across time steps and that's called attention sometimes so it's kind of like it's actually a similar way to how it's used in a modeling framework here this was one question that we had in dot zero and beyond hopefully got to ask Chris a little bit but like what are examples or how can we what's the next question to ask or what's the next system to explore this way that was one set of questions also if anyone in live chat has any questions we can definitely look at those um what else would be interesting I mean there's a few more slides that we haven't looked at that we could kind of explore so maybe one element here oh yeah go ahead Stephen sorry just one he talked about maybe this is something he brought up as a slightly new advancement he talked about separability separability um as a sort of he positioned that sort of center stage during his slides when he did his introduction um as maybe a way to I don't know think about contextuality um in a more tractable way um yeah I wondered if that was you know was that how did that come up was that something oh that's interesting was that a little bit surprising that he when he did that um yeah great question and point so I just searched in the document there's two spots where it's mentioned they write we have previously shown how inference can be represented using the formalism of this paper employing only the quantum theory of separable systems and the thermodynamics of measurement interactions so there's the citation to Fields and Glazebrook to explore that more but I think from what we're talking about here's an
interesting part that is related to two spaces it is often the case that two spaces may be separable or other types but let's just you know focus on the separable element um i.e. both separable so it could be separable it's almost like you could have another two space you know which two spaces A through Z are separable or not so separable is a quality and then it says separable means all rows are distinct extensional means all columns are distinct so actually the transform that goes between them is probably the transpose the T in R then if it's separable on columns and you transpose it it's separable on rows so that's like a relationship that's like kind of a high level but it's maintained when you transform it back and forth like Blue is saying add subtract add subtract and transpose the matrix back and forth um so separable meaning the rows are distinct so let's just think about the case of the two space as a descriptor just which was one of the ways that they used it so like the types of processes kind of so if it's separable it's like they're distinguishable types sometimes that's also called a full rank matrix where it's like every row and column matters uniquely and informationally there's no doubled rows where you could condense the matrix by saying actually there's five of this row and so this or this row is a linear combination of this other row Stephen?
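A minimal sketch of that rows/columns duality, treating a two space concretely as a 0/1 matrix of objects by attributes (the function names here are just illustrative, not from the paper):

```python
# Treat a two space concretely as a 0/1 matrix: rows are objects,
# columns are attributes, and entry m[i][j] says whether object i
# has attribute j. Function names are illustrative only.

def separable(matrix):
    """Separable: all rows distinct (no two objects are indistinguishable)."""
    rows = [tuple(row) for row in matrix]
    return len(set(rows)) == len(rows)

def extensional(matrix):
    """Extensional: all columns distinct (no two attributes are redundant)."""
    cols = list(zip(*matrix))
    return len(set(cols)) == len(cols)

def transpose(matrix):
    return [list(col) for col in zip(*matrix)]

# the cat/horse, small/big classifier from the discussion:
m = [[1, 0],   # cat:   small, not big
     [0, 1]]   # horse: big, not small
print(separable(m), extensional(m))  # both properties hold here

# transposing the matrix swaps the two properties
assert separable(m) == extensional(transpose(m))
assert extensional(m) == separable(transpose(m))
```

A matrix with a repeated row (two objects with identical attribute profiles) fails `separable`, which is the informal sense in which such a matrix could be condensed.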
Yeah that um this is good to bring it back here because it's uh this is sort of the applications of it and um the separability um I think there's more that can be looked at with this over time I think this is obviously something that's going to be quite applicable and it also relates to this idea of self-organization and planned organization so you know how much are things able to infer things because of the separability that in itself maybe changes what's possible um yeah actually here's a quote we haven't looked at this is really interesting though in working set theoretically so with this sort of set approach to Chu spaces there's considerable scope for the choice of arguments in the Chu space as well as the choice of the corresponding relation R because we get to choose the spaces and the map so we can you know map from various things to various things different ways these and so what kinds of things can we do probabilistic in particular conditional relationships so that's like all of computation in stats Bayesian fuzzy type relationships fuzzy logic fuzzy math um spatial observations object identification and mereological reasoning that's a term to look into that means parts and wholes how they're related to each other and dual process theories of cognition so that's bringing it to the cognitive neuro and to even the neuro scientific numerous examples are discussed in the paper and they are it's um it's a double paper but it's very informative and so all these kinds of things are kind of what are adjacencies for the Chu space Dean and then anyone else Dave if you have any thoughts after Dean otherwise no worries I just could you go back to that cone-cocone thing again there yeah that one there the bottom one it's interesting because when I looked at that in the paper it took me back to some stuff I did about um that diamond shape and how that symbolically relates to opening up which you mentioned Daniel and self reorganization which Stephen
mentioned um we see that when we see a diamond uh designating a high occupancy vehicle lane and the sort of way that that frees you up as long as you meet a minimum in terms of the number of people in your vehicle and we used to talk about that idea of how these spaces are there even though you may not be necessarily noticing that um yeah that's even if we don't formally look at it that way there's all sorts of examples around us of when that's actually occurring so I just thought I'd point that out you may not necessarily make those kind of associations but again if you're trying to bring this kind of information to people and go well here's an example of where we're actually doing this stuff we may not be aware of it but it's going on and that's what I saw as soon as I looked at that. Thanks for that association there Dave this is not addressing any specific point but Gregory Bateson a number of times observed the cyberneticians getting stuck by talking about random inputs he says look stop thinking about random I want you to think about noise we're now going to talk about organization from noise and I think part of the point is noise is more clearly relative to your values and your plans.
Yes very nice and also affordances which are semantic like even years ago when I was learning about Shannon information theory I thought I can be symbolically surprised by the characters on a page but if I don't speak the language who's to say whether the information is really there so there has to be something else here that's helping address the situations that are meaningful or not because it's not just about how rare the character is in a world where just one word can mean a lot at the right time in the right context Stephen Yeah the idea that you need to constrain certain times or separate I mean I suppose in a funny way the car lane example is quite interesting is you know you've got the movement of traffic and at some point the high occupancy vehicle lane becomes different in terms of what's on it as long as you can enforce a rule and you know and communicate it and enforce it and that kind of idea that you can create separability otherwise and maybe that sort of principle of there's some constraints implicitly out there I suppose on what things are going to do with each other I suppose just like the way water behaves in certain contexts at certain scales within our gravitational field but there's also and I'm not sure if I'm over extrapolating but how biologically you know different ways we have to there has to be some point where you have to impose a separability you know you might need to form an organ from cells to be able to do the next thing you know to create insulin or something it's not just enough to have some insulin cells dotted all around the body you need to have a pancreas you know so there needs to be that constraint you know so that you're in the high occupancy lane and that's what's happening so it's um you're in the high insulin producing cell type and then at a bigger level you're in the high insulin producing organ sub component in the organ and so it's this nesting and there's um maybe an extraction of function using what
you were talking about earlier so then Sarah wrote in the chat also about the definition of the two spaces here saying that this is another way in which the intensionality is going to come into play because uh Sarah writes separable is saying that the event space for a given time frame cannot be compressed I'm not sure if that implies the aggregate matrix itself cannot be compressed reduced in some way and then here this bi-extensional collapse so if you can't collapse the rows and you can't collapse the columns and you can transform it and that still is going to be like true it's just like it's informationally rich then any repetitions in the rows of objects and columns of attributes are factored out and also that's the reason why time is an attribute if you think about the object as existing through time just like you could measure the color every inch of the ruler you could measure the length of the ruler at every measurement in 10 time points in practice this removes unnecessary repetitions in the content of information hence minimizes the amount of processing required by a given algorithm 100 measurements of exactly one meter because the time was unique that's still a hundred measurements so the classical approach would be like well sounds like we got an answer I mean look at the p-value it's one meter but then here when we retain this rich contextuality we actually don't pool even though we were sampling from ergodicity of length we don't need to pool all of those measurements together in the same way perhaps and so that is definitely related to Sarah's point that this matrix the rows or the columns can be thought of as being temporal so when we think about the matrix having structure on like a top to bottom or left to right way that can represent structure in time like just like you have an eigenvector you could have an eigenvalue through time Stephen so if I'm understanding you could have a relatively say a small number of observation types but
build out a space because over time you can still the time variable gives you the ability to make that into some sort of space some sort of topological space it's not just a space of different measurements of different things it's sort of different things over time is that right yep here's an example that kind of came to mind it relates to insect vision so a lot of times people think about insect vision like oh they must see like a bunch of versions of the world but then we think okay well we have two eyes and we see one version of the world so it's probably a little more integrated than buggy putting that aside what's the resolution of the bug's sight and it turns out that there's some insects that only have like one ommatidium so it's like kind of like a one pixel camera it's like almost light dark so you would think and if it were on the wall all it could do is infer light dark in the room but because the insect has a body is a body it can move so even if it's just light dark or gradients of it it's getting like a bit string we can think about continuous variables or just zero one and so can a bit string encode information that's rich like yes like sportscasting of a sports ball game you can say now they're running from the 30 and they're at the 20 there's a person and they're at the 10 and so that's a bit string but it conveys action and context so they don't need to take the 4k video in order to know whether to run or walk and that's actually really related to Carl Friston's woodlice example which he claims is the origins of some of the free energy principle kind of any other slides we want to go to or we can just look at the last ones otherwise I think it's been a great conversation it was definitely an educational paper and several weeks like for the lab we're always happy to have new participants such as yourself Dean so thanks a lot for joining and yeah this was an interesting set of discussions also thanks a lot Sarah for helping on and Blue for dot
zero — for 17.0 — because that was kind of how we started the conversation, in a way that helped us structure it, because there's a lot to get lost in here. Blue, what do you think? And then Dean, you're on the Markov blanket slide — can you go back to it? Yep. So I was really thinking about — I have this note in the paper, so hold on, let me find it. So, you know, when we have this Markov blanket, it's like the things that we're not privy to — the information that we don't necessarily know about, right? And so when we make a decision, we have all of the information that we need to make it. But I was thinking: in a system where the Markov blanket is too big — if there are too many things beyond the Markov blanket that we don't have access to — how does that impact our ability to act within the constraints of the system? So here they talk about the frame problem, right? You kind of can't solve a problem until it's been solved already; until you know how to solve a problem, you can't determine the answer. I think that's the gist of the frame problem — feel free to elaborate or correct me on that. And I think that's one of the things that is so difficult about a problem like climate change. We can't solve a problem like climate change because we don't know how many factors are potentially influencing it; we don't necessarily have the entire context for the climate change situation. I know what I'm doing, I know what my carbon output is, and I can research the companies that I buy products from, etc., but I don't know what all of the other billions of agents in the world are doing. And so we can't make predictions based on our actions, right? So is the Markov blanket too big in that situation to be able to solve the problem — because we're not able to see directly the outcome of our actions, or even the actions of
the people that are in our network? I was thinking about that. Very interesting. Dean, and then Dave. Just a couple of things. First of all, thank you for being so welcoming — I really appreciate that. And the second thing: when I watched last week's broadcast — because I watched it on YouTube — I felt like I was a spectator, and now I've been a participant. But I've been a participant in a debriefing exercise, so I've actually been a participant in something that's been separated out from what originally happened. And I think this, right now, is in real time what this paper was discussing. I don't know if you could find a better example of that idea of what perspective is and what, potentially, the prismatic is — even if we're not talking about them, we're still playing by their rules. Again, thank you for allowing me to be one of the people in a tile today; I just really, really appreciate it. Cool — and it's like the paper is the tent pole in some ways, because it's actually the north star, the reference point. We're giving our perspectives on it, but we shouldn't be disagreeing about what the paper is or says; we're looking at the same slides. The richness is everyone's understanding and contribution, wherever and whenever it's coming from — because what else could that context provide? Dave? Yeah, I don't want to go too far off track, but I noticed the paper references Donald Hoffman, and I don't want to do an injustice to the good doctor. It may be that I saw him stepping into something, when he didn't really, with his one video about how the interface is anywhere you want it to be, it doesn't make any difference, the interface is totally arbitrary — maybe that's just his "billions and billions" moment. But did Donald Hoffman go down the rabbit hole a couple of years ago, or is he still doing good, solid work that I can read without getting my inexpert self thrown down a rabbit hole? Can't speak to the body of work — I don't know it that much — but
I just pulled up the citation from the Fields and Glazebrook paper: it's a 2015 paper, "The interface theory of perception." The way that Fields and Glazebrook use it in the paper is basically in a way that's related to the interfaces — it says "equivalently, an interface," and that's the citation. So the Markov blanket here is being connected to their approach. So it's probably a yes-and: there probably are great insights about the interface, and then it probably does matter where it is, or what it is, or how it's modeled. Nice point, though. Stephen? Yeah, good observation there. I mean, there's probably a reason why Chris Fields has got involved in active inference, because, let's be honest — from my understanding of my observations over the last 30, 40 years — physicists have kind of run the show around consciousness, and there have been more and more big claims about consciousness, which is great. And there's Mindell — the Mindells do process-oriented psychology, and some good work — but I always used to have a bit of a sigh when "quantum" got mentioned, because I knew what was coming: even though they knew physics, I just felt — and maybe I was wrong — that it was being used in a very transpersonal, slightly hand-wavy way. And now we actually bring it in such that it's not all about the physics, it's about the biology, which is kind of taking the physics down a peg or two — although now Chris could say it takes it up a peg or two, because it's all based on physics. It depends which way you want to look at it. Interesting world, let's put it like that. Yep. In this quote — this is the authors — they're saying that this Markov blanket concept, which has been mapped to the interface and all these other ideas like communication channels, is doing a double role in these models: it's where free energy is minimized.
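Since "where free energy is minimized" is doing a lot of work in this discussion, a minimal numerical sketch may help. This is illustrative only — a one-step discrete generative model, not the model in the Fields and Glazebrook paper — and the names (`A`, `prior`, `free_energy`) are my own labels, though they echo common active inference notation:

```python
import numpy as np

# Sketch: variational free energy F = E_q[ln q(s) - ln p(o, s)] for a
# one-step discrete generative model p(o, s) = p(o | s) p(s).
# Minimising F over the approximate posterior q drives q toward the
# exact posterior p(s | o).

def free_energy(q, A, prior, obs):
    """F(q) = sum_s q(s) * [ln q(s) - ln p(obs | s) - ln p(s)]."""
    return float(np.sum(q * (np.log(q) - np.log(A[obs, :]) - np.log(prior))))

A = np.array([[0.9, 0.2],     # p(o | s): rows are outcomes, columns states
              [0.1, 0.8]])
prior = np.array([0.5, 0.5])  # p(s)
obs = 1                       # the outcome actually observed

# The exact posterior minimises F, at which point F equals surprise,
# i.e. -ln p(obs) (the negative log evidence).
posterior = A[obs, :] * prior
posterior /= posterior.sum()

F_min = free_energy(posterior, A, prior, obs)
surprise = -np.log(np.sum(A[obs, :] * prior))

# Any other q pays an extra KL(q || posterior) penalty on top of surprise.
F_uniform = free_energy(np.array([0.5, 0.5]), A, prior, obs)
```

At the exact posterior, F collapses to surprise — the sense in which minimizing free energy bounds surprise — while any other choice of q, such as the uniform one, comes out strictly larger. In the full active inference picture, it is the blanket's sensory states that supply `obs`.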
It's sort of what goes into the function that's getting minimized. You might be reducing your water usage, but what goes into the actual function that's deciding your policy — that's what's getting minimized, and something else might also be affected; what gets minimized are the blanket states. And it's that epistemic barrier that's keeping context: what isn't directly measured — that information is hidden from the observer. So there are probably a bunch of questions about this, and a lot to say about it, but one big question that I have: we know from previous papers that one of Friston's innovations was to separate the blanket states into outgoing action states and incoming sense states. Now, it's possible that there are certain kinds of models that play nice in the undirected, Bayesian-conditional, relationship-neutral way, because it's sort of a built-in bidirectional road — like cars going both ways in the same lane. But when we really make that separation clear, and we really have some states with an asymmetrical relationship one way and other states with an asymmetrical relationship the other way, it could be simpler, or it could be more difficult. There might be new relationships that are enabled by the separation; there might be relationships that are put off for now, because it complicates things in a different way. I don't know what those would be — those are just things that could happen when you split the states. Or maybe nothing happens and it's already been dealt with. And then Sarah's perpetual slide on reservoir computing — we had a little... Oh, the next steps. These were just their words: they had three areas of next steps. We'll go through them, and then anyone can give a last thought. Here are the areas where they're taking it next, or where they want to signal for everyone to take it next. One is exploring intrinsic contextuality in humans — it's short and sweet: "humans." So
we're asking about applied active inference, about the applications of this paper — and they are asking that too; they didn't need to go into any more detail. The second question is about how context switching is implemented neurocognitively. Say you're bilingual and talking to two people, and one says something in one language and the other says something in a different language; semantically, you might not even be aware of that difference — it might just be associated with that person's voice or with the situation. So how is that happening in the brain, or in the body, or in different animals? And then the third question is their broader question — this is the speculative one: whether intrinsic contextuality can be detected and characterized at multiple scales in complex systems. This is where it gets to collective intelligence, or distributed computational systems. What if there were a signal that we thought was uninformative, but it was actually reflective of immense wisdom, or processing power — a cognitive dimension? We might think that we're communicating with all intelligent life when we send out the golden phonograph with the etching of the Fibonacci sequence, or a tetrahedron, or our bodies, or something like that — but there could be a whole dimension of information that's just at a resolution we're not even perceiving. That would be big, of course. How would we detect it a priori, with minimal scale- and system-dependent assumptions — assumptions that would be equivalent to throwing out the signal together with the noise, in this way of thinking? Well, it would be great to learn more about these topics. I hope that we can continue to have experts on, and all the learners and participants, because it's fun, especially when we're talking about stuff like this. Any last — yeah, Stephen? You're muted, Stephen. Just mention, you've got a couple of other — you know, the next couple of events that are
coming up, just so that people know, if they're interested, what you're going to be doing on the other livestreams. Ah, yes — let's look at the calendar. Yep, so we'll have 18 in the coming two weeks, on the predictive global neuronal workspace, with Ryan Smith and Christopher Whyte hopefully joining, for March 23rd and 30th. But then also on March 18th — in two days — we'll have Dimitris Bolis. So yeah, that will be pretty cool. Thanks, everyone, for participating — great discussion. So, fun times!