Hello and welcome everyone to the Active Inference Lab and the Active Inference Livestream. Today is March 9th, 2021, and we are in Livestream 17.1. The Active Inference Lab is an experiment in online team communication, learning, and practice related to Active Inference. You can find us at the links and contact mechanisms here. This is a recorded and archived Livestream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here, and we'll be using good video etiquette for Livestreams: muting if we're not talking, raising our hands so that we can hear from everyone on the stack, and using respectful speech behavior. Here we are today in 17.1, coming fresh off of our fun quarterly roundtable number one, and the first of two weeks discussing this paper with the author, Chris Fields, and we're gonna have a ton of time to get into it today and a follow-up discussion next week. Today, our goal is to discuss and learn from this awesome paper and area, some areas that are probably new to everybody. And today in 17.1, we're just gonna introduce ourselves, say hello, then Chris is gonna give a presentation, and then we'll be able to ask questions and continue the discussion from there. So without further ado, we can get into the introductions and warm-ups. Everybody can just go around, we'll introduce ourselves, say hello, and then we can also consider these warm-up questions. After introductions, and anyone wanting to say something on a warm-up question, we'll go to Chris's presentation. So I'm Daniel, I'm a postdoc in California, and I'm excited today just to hear all of the things that we missed in 17.0 and learn and recalibrate and correct. And I'll pass it to Alex. Thanks. Hi, everyone. My name is Alex Velskin. I'm in Moscow, Russia. I'm a researcher in a systems management school and also a co-organizer of the Active Inference Lab. And I pass it to Scott. Thank you, Alex. My name is Scott David.
I'm the director of the Information Risk Research Initiative at the University of Washington Applied Physics Lab. I'm super excited about the discussion today about information flows. We deal, as the name sounds, in risk flows, and information and risk really go hand-in-hand. So this is gonna be super exciting today. And I'll pass it to Sarah. Hi, my name is Sarah Davis. I'm a science artist and actually a current grad student in philosophy of science at the University of Leipzig. And I'm really looking forward to this talk today. Who else do we have? Chana and Chris. So maybe Chana first, and then Chris, you can introduce yourself and take it away. Hi, my name is Chana, and I'm a mathematician and will be a grad student in the fall, but I've been doing all this stuff. And Chris and I are working on a couple of cool things and I support him. Anyway, hello. Cool. So Chris, welcome. Thanks for joining us. And we are looking at, hello, Stephen. So Chris, feel free to just introduce yourself and take it away with the presentation. Okay, thanks, Daniel. Thanks for inviting me to suggest a paper and join you for this discussion session today. I wanted to start with just a quick presentation to provide some background context on this paper, which is about context. It was written by myself and Jim Glazebrook, who is a mathematician. We've been working together for several years. We actually connected and started working together because we were both interested in the perceptual and attentional phenotypes that one finds on the autism spectrum. And that's where we encountered Karl Friston, actually, and the whole active inference framework and some of his papers on autism. So let me just start this little presentation and that'll give some background for discussion. Here we go. Okay, as I said, just some context for this paper about context. And let's see. Okay. So am I good on vision and hearing and all that? Looks perfect, thank you. Okay, cool.
So here's a standard picture from Karl Friston on the theory of active inference. It describes the interaction between some system that has internal states and some system that has external states, and the interaction is mediated by sensory states and active states that form what's called a Markov blanket. And when I first encountered this, it seemed like a very nice modern statement of ideas that I would trace back to robotics initially, in terms of the learn/explore distinction that one finds in developmental robotics especially. And even from there, back to the basic theory in ethology where you have the approach/avoid distinction, approach being very similar to explore, to acting on the world in order to learn something, and avoid being a state in which the organism backs off and tries to figure out what it did wrong. So the theory has a very rich history, and I think what its current formulation due to Friston and his colleagues brings to it is a large body of applicable formalism. And in fact, I would characterize the theory of active inference as it's been presented as perception and cognition from a physics perspective. And the presentation of the theory thus far has been primarily in the language of classical physics. And I think that the language of classical physics is actually seriously unsuitable for talking about active inference. It's complicated and messy. And so what I'm gonna do today is provide a little bit of background from a more quantum theory perspective. I considered starting today with just an introductory talk about quantum theory, but that wouldn't give us quite the background that we need for this. So I won't be quite so straightforward about quantum theory, but I am happy to answer any questions about it and can go on and on about it, as Chana knows. So what I really want to do is give you another way of looking at active inference, which I would characterize as physics from the perspective of perception and cognition.
So taking the idea of observation and reformulating physics using that idea — and you'll recognize from the paper there's a cone-cocone diagram that's sitting next to something that looks a lot like a Markov blanket. So the cone-cocone diagram represents the internal state of some system, and the external state is completely unspecified. All that's happening out there is some dynamics, and the job of system A is to learn as much about system B as it can in order to make good predictions. So this is still active inference, just from a different point of view. And looking at it this way gets us into a cloud of issues that are all related, and I hope by this little introduction to show you how they're related, at least somewhat, to give you a sense of how they're related. And I'll start this cycle with system identification, which is the old problem from engineering and cybernetics of figuring out what system you're working with. In physics, this is typically just taken for granted, and in biology it's often just taken for granted. But in computing theory, for example, and in what are called device-independent protocols in physics now, it's not taken for granted. System identification is up for grabs. Intrinsic contextuality is the topic of the paper, and it has to do with the unavoidable, uncorrectable effects of unknown contextual variables on measurements being made. Now that's clearly related to the question of reference frames. How am I making my measurements? And in fact, the first formulation of contextuality in physics, by Kochen and Specker, was about making measurements with different kinds of apparatus and showing that experimental results could vary, even for the same observable, based on what other things were measured at the same time. This connects to the frame problem from AI, which is the problem of predicting the side effects of your actions. So side effects of making a measurement in a particular way would be an example. This gets to the problem of memory.
If you're going to predict side effects, you have to remember what you've done. And so you have to ask about what are the side effects of recording your memory someplace, because that's a physical action. And finally, we get to the question of ensemble sampling and the question of what probability actually means. Prior probability is fundamental to Bayesian decision theory, but it becomes problematic to say what prior probability actually means. And all of these issues swirl around, at least in my view, one issue, which is the issue of separability. What does it mean to separate the world into a set of systems that interact? And separability has a very precise meaning in quantum theory. So this is one reason for diving immediately into quantum theory. In quantum theory, the notion of separability is defined in terms of whether a quantum state factors into two quantum states. So if I have a system and I can draw a boundary somewhere in it that divides it into two compartments A and B, then if the joint state, the state of the entire system AB, factors into a state of A and a state of B, the system is separable. If it doesn't factor, it's entangled. And entanglement is kind of the natural state of systems in quantum theory, whereas separability is the only state of systems in classical physics. So A has a state and B has a state, and we can specify those two states independently of each other if this joint system is separable. That means A and B have identities, and we can talk about their interaction. So I wanna go back in history to the 18th century and just mention how strange an idea this is. If you think about physics at the time of Laplace, he was mixing the kind of atomism of Democritus, the kind of hard-sphere, irreducible-particle atomism, with the dynamics of Newton. And so Laplace's world was a bunch of atoms, each of which was a hard sphere, a monad, and they were moving around according to Newton's laws. And Newton's laws act instantaneously.
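To make this factorization criterion concrete, here is a minimal numerical sketch — my own illustration, not from the paper, with function and variable names chosen for demonstration. For a pure state of two qubits, reshape the four joint amplitudes into a 2x2 matrix; the state factors as a product exactly when that matrix has Schmidt rank 1, that is, a single nonzero singular value:

```python
import numpy as np

def is_separable(joint_state, tol=1e-10):
    """Check whether a two-qubit pure state |AB> factors as |A> (x) |B>.

    A pure bipartite state is separable exactly when its Schmidt rank
    (the number of nonzero singular values of the 2x2 coefficient
    matrix) is 1; otherwise A and B are entangled.
    """
    m = np.asarray(joint_state, dtype=complex).reshape(2, 2)
    singular_values = np.linalg.svd(m, compute_uv=False)
    schmidt_rank = int(np.sum(singular_values > tol))
    return schmidt_rank == 1

# Product state |0>|+>: factors into a state of A and a state of B.
plus = np.array([1, 1]) / np.sqrt(2)
product = np.kron(np.array([1, 0]), plus)

# Bell state (|00> + |11>)/sqrt(2): does not factor, so it is entangled.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
```

Here `is_separable(product)` returns True and `is_separable(bell)` returns False, matching the definition above: the Bell state cannot be written as independent states of A and B.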
So there's no sense in Newton of forces being communicated. The forces are just there. And this is something that Newton was aware of. And here's a quote from Nicolas Gisin, who's been very concerned about this issue of locality in physics. And he in fact wrote a beautiful paper pointing out that the passion for locality in physics is very recent. It comes from Einstein. Einstein invented this idea that information travels at a finite speed. Before that, everything happened instantaneously. So Newton understood that if you go up to the moon and move a rock, then everybody's weight on earth changes instantly. Now that's contextuality. That's a change here that happens unpreventably due to completely unknown events happening somewhere else. So intrinsic contextuality is closely linked to non-locality. Now, in Laplace's world of all these atoms obeying Newton's laws, there aren't any boundaries. There are no boundaries around systems like the earth or you or me. Or a cannonball. So where do those boundaries come from? In a Laplacian world, they have to be imposed from the outside. So we impose a boundary. We slice this picture in two. And we say those are two different systems. Now, if you think about that in classical physics, that means you've created a boundary — that you've created an infinite potential that separates the particles from each other and prevents the Newtonian dynamics from mixing them up. And Frank Tipler several years ago wrote a beautiful paper, published in PNAS, showing that if you start with Laplacian physics and you remove these singularities, the boundaries around objects, you get quantum theory automatically. So Laplace, in a sense, had a theory very much like quantum theory, and he had a notion of the universe very much like entanglement. It was full of what Einstein would later call spooky action at a distance.
Okay, so in this picture where we start with a system and we slice it down the middle and call the two parts A and B, we're assuming that there's a boundary there. It's not given to us by the physics. And we're assuming that it stays constant over time. And in quantum theory, unlike classical physics, the dynamics will automatically erase this boundary. Quantum dynamics of an isolated system is unitary, so it mixes everything together into a superposition. So if we say that A and B can be separated, what we're effectively assuming is a weak local interaction that's only well defined for a short period of time, because eventually the dynamics of the joint system will erase the boundary that we put there. Okay, but if we have a separable system, and if it's finite dimensional, we have a perfectly general way of talking about the interaction. We can always write the interaction in this simple summation form. The whole-system Hamiltonian, which I'll call H(A,B), is the sum of what A is doing and what B is doing and then how they're interacting. And this interaction can always be written in the form that I've given below, which is basically a bunch of thermodynamics — numbers of degrees of freedom and k_B T and some measure of efficiency — times a sum of operators, and we can choose a basis where those operators have binary outcomes. So they correspond to asking yes/no questions, which is what John Wheeler postulated the very foundation of physics was: just answers to yes/no questions. So we can always do this in quantum theory, and that's why quantum theory is nice. It allows us to write down a formalism that explicitly describes the interaction between two systems in terms of exchanging information. And this depends on an insight that Boltzmann had in the 1870s, and it's really this insight that gives us quantum theory.
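A hedged reconstruction of the summation form being described here (the exact notation in Fields and Glazebrook's papers may differ slightly; treat this as a sketch rather than a verbatim quote of the paper's equation):

```latex
H_{AB} \;=\; H_A \,+\, H_B \,+\, H^{\mathrm{int}}_{AB},
\qquad
H^{\mathrm{int}}_{AB} \;=\; \beta^{k}\, k_B\, T^{k} \sum_i \alpha^{k}_i\, M^{k}_i ,
```

where \(k_B\) is Boltzmann's constant, \(T^{k}\) is an effective temperature, the coefficients \(\alpha^{k}_i\) measure efficiency, and each operator \(M^{k}_i\) can be chosen to have binary (\(\pm 1\)) eigenvalues, so that acting with \(M^{k}_i\) amounts to asking one of Wheeler's yes/no questions across the boundary.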
Boltzmann realized that information always costs energy, and what this did was rule out the way of thinking about measurement that one finds in classical physics, which is that the observer is completely detached from the system, and information flows from the system to the observer, but the process of observation doesn't affect the system. Boltzmann realized that this simply wasn't true. Observers aren't gods. If I wanna get information, I need to spend some energy, and that energy flows back into the world. So by getting some information, I'm always affecting what's going on. So from a formal point of view, these operators, the M_k, implement both perception and action, and that becomes key, of course, to understanding active inference, because active inference is about getting information and giving information by acting. Okay, so to sum this up, whenever I have a bipartite system that's finite and it's separable, I can write the interaction as thermodynamics times questions, and that looks just like classical communication. I can think of my two systems as agents — Alice and Bob, as they're always called — and they exchange finite bit strings. So now let's think of a specific model. Let's suppose that Alice and Bob are each alternately preparing and then measuring the states of a bunch of independent qubits, quantum bits. And we can think of those as electron spins or something like that. So Alice at some time prepares the electron spins and she hands those to Bob, and Bob measures them. And then Bob writes a message by preparing the electron spins in some way and hands them to Alice, and Alice measures them. So they've now exchanged some bits. But there's no requirement that they use the same z-axis to measure the spins. If they use exactly the same z-axis to measure their spins, then if Alice writes 1001, Bob is gonna measure 1001.
But if Alice and Bob are using different z-axes to measure their spins, Alice will write 1001 using her z-axis, and Bob will get some probability distribution over one and zero for each qubit, measuring it with his axis. And the way those probabilities will look will depend on how tilted his z-axis is with respect to Alice's. So this is now an interesting situation. And this gets to Daniel's question about Shannon information versus other kinds — information with semantics. This is a purely quantum system. There's nothing to introduce any classical noise. So the information exchange is perfectly symmetrical and completely free of classical noise. But by using different reference frames, they introduce quantum noise. And specifically what they lose is the message. They lose the semantics of the message. So Alice encodes four bits, Bob gets four bits, but the code may have been lost. And the code resides in the z-axis. If they're using the same z-axis, they share a code; if they're not, they don't. So semantics depends on shared reference frames. And that's what Shannon information theory is missing. It's missing the idea of the reference frame that's used to decode the message. It's just counting bits. So Shannon theory is just a quantitative theory of bits, a way of counting bits. And what we're really interested in is decoding — encoding and decoding. So we're really interested in reference frames. And that will be a central theme. It is a central theme in the paper. Okay, so in sum, every interaction between finite systems in a separable joint state can be represented in this way: two agents, and they're separated by a set of bits, a set of qubits. That set of qubits can be thought of as a holographic screen. It functions as a Markov blanket. And the information that it encodes is completely specified. It's the eigenvalues of the interaction written in its Hamiltonian form.
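A small sketch of this axis-tilt effect (illustrative only; the function names are mine). For a spin prepared along Alice's z-axis, a measurement along an axis tilted by an angle theta agrees with Alice's bit with probability cos²(theta/2) — so at theta = 0 Bob reads 1001 exactly, and at theta = π/2 every bit is 50/50 and the code is lost:

```python
import numpy as np

def bob_outcome_probabilities(alice_bit, theta):
    """Return (P(Bob reads 1), P(Bob reads 0)) when Alice encodes
    `alice_bit` along her z-axis and Bob measures along an axis
    tilted by `theta` radians relative to hers.

    A spin prepared 'up' along one axis, measured along an axis
    tilted by theta, comes out 'up' with probability cos^2(theta/2).
    """
    p_same = np.cos(theta / 2) ** 2  # probability Bob's reading matches Alice's bit
    if alice_bit == 1:
        return p_same, 1 - p_same
    return 1 - p_same, p_same

def expected_message(bits, theta):
    """Per-qubit probability that each of Alice's bits is read as 1 by Bob."""
    return [bob_outcome_probabilities(b, theta)[0] for b in bits]
```

With a shared axis, `expected_message([1, 0, 0, 1], 0.0)` is `[1.0, 0.0, 0.0, 1.0]`: the four bits come through perfectly. With a quarter-turn tilt, every entry is 0.5 — Bob still receives four bits, but they no longer carry Alice's message.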
So we know everything there is to know about an interaction in terms of a picture like this. And these pictures are everywhere. In biology, everyone is now using the Markov blanket concept. It's very common. In computer science, exactly the same concept is called an application programming interface. And it can be plunked down between two virtual machines and regulate their communication. In cosmology, the holographic principle was originally invented to deal with stretched horizons of black holes. So it separates the exterior from the interior. And it specifies exactly how much information can be obtained, if you're an observer on the exterior, about the interior. And it's symmetrical, so it also specifies exactly what the interior can learn about the exterior. But you also see this concept in particle physics. If you're thinking of a scattering interaction, for example, that's mediated by a vector boson, a photon or something like that — say it's electromagnetic scattering — then all of the information is on that vector boson. There's no information about, say, entanglement on one side that's communicated to the other side. It's just a single value, a momentum transfer, for example, that's encoded by this holographic screen. Okay, so where do we take this? If we have an observer, and she's got a holographic screen, and she's interacting with the rest of the universe, Bob, then by looking at her bit string, Alice can learn very little. She doesn't know what z-axis Bob is using, so she gets a message, but she doesn't know what code Bob used. She doesn't know how big Bob's state space is, so Bob could be a small system or could be an enormous system. Since she doesn't know the state space, she can't possibly know Bob's state. She doesn't know whether Bob's state is separable or entangled, so whether there are any divisions in Bob that she has to worry about. She doesn't know the dynamics on the other side, so she can't predict what Bob will do next.
And all of this is in principle. Looking out at the world, I actually don't know that there are any separate objects in it. I can't know, based on the information that I can get at my boundary, which is just a set of interaction eigenvalues. And so these questions have to be answered some other way. They can't be answered by observation; they have to be answered by a generative model. So hence, active inference is based on what models predict. And what quantum theory tells us about generative models is that they're heuristic by definition. They can't be constructed based on the data. They're only heuristics. So where do they come from? And here, when we look back at our picture, our original picture of a joint system being divided, recall that this boundary is arbitrary and it doesn't last very long. So this joint state isn't separable for very long. And the only asymptotic dynamics, the only dynamics that's long-lasting, is the joint dynamics. So Alice's generative model, which is implemented by her dynamics, is effectively a sample of the joint dynamics. It's just a temporary sample of what would be going on anyway if there were no division. So it makes sense that Alice's generative model contains information about the world, but that information did not come from observation. It came from the time before the boundary was imposed, and it will still be there after the boundary goes away. Okay, another thing that quantum theory adds to the active inference picture is that it recognizes that all of the energy that's required to run the calculations also has to come through the Markov blanket. So if you think about the states on a holographic screen, a Markov blanket, any information encoding — if that interface is the only interface between the two systems, which is how we've defined it here, then some fraction of those bits can't be informative, because they have to be burned as fuel. So by definition, they're meaningless.
Now this tells us again something very deep, which is that the context of a measurement can never be specified completely, and that again is in principle, because some of the information on the blanket has to be burned as fuel. And we see this on the output side too, of course. Some of my output on the world, some of my action on the world, is just dissipating what for me is waste heat. And I have no control over that. That's part of my metabolism. But I'm side-effecting the world all the time by dissipating this waste heat, which I don't think of as an action, but which is an action from the point of view of the physics. So here again we have an addition to the theory that we get from thinking about the dynamics from a quantum physics point of view instead of a classical physics point of view. Okay, so this is the picture I showed before: the set of qubits is the Markov blanket. The generative model is represented by this cone-cocone diagram. And it becomes clear what the generative model is. It's a combination of a reference frame and some computation. And it's clear what it does. It specifies the option space for both perception and action. It encodes everything that's detectable and everything that's actionable about B for A. So here are just a couple of cartoons. For example, if Alice looks out on the world and what she sees is a red rectangle interacting with a green diamond by some interaction, then she must have reference frames for seeing rectangles and diamonds, and she must have state observables for red and green, and she must have a memory so that she can keep track of what's going on with those state observables long enough to construct this picture of an interaction. So all of the semantics is inside the observer. It has to be provided by these structures inside the observer. So the semantics that's imposed on the world comes from the generative model. It's not out there in any sense.
Part of what the observer has to do is write memories, if the observer is going to see change, and since memories are classical data, those memories are effectively written back on the blanket, because the blanket is the only place where any classical data live. So we can infer from this that if a memory looks internal to a system, that system has to have intercompartmental boundaries inside it to write those memories on. So this actually tells us something, gives us an expectation about physiology in organisms, or in any sort of artificial structure: memories have to be written on boundaries of some sort. So again, here's this swirl of issues that all surround this question of separability, of the world being separated into distinct pieces. And I've mentioned all of them now except the frame problem, which is the problem of predicting side effects. And clearly, if you don't have access to the dynamics outside you, and you don't have access to all of your actions because some of them are just heat dissipation, you can't possibly predict all side effects. So the frame problem is unsolvable in principle in this setting. So in this paper, what Jim and I set out to do was to construct a general representation of these reference frames. And because they're physically implemented, they're quantum reference frames; they're not just abstract. And these things specify both what the agent can expect and what the agent can do, so they completely define the option space for active inference for the agent. So by having a general representation of the agent's reference frames, we know, in a sense, everything interesting to know about the agent. Then we wanted to distinguish the reference frames that can be used simultaneously — these are the co-deployable reference frames — from those that can't. And the distinction again is given to us by quantum theory: it's just operator commutativity.
So for example, I can't deploy position and momentum at the same time, because those operators don't commute. They side-effect each other; that's another way to think of it. And physics is full of operators that don't commute. So my action space is not unified; it's segmented into sets of things that I can't do at the same time without introducing contextuality. So we showed that this non-co-deployability of reference frames actually induces intrinsic contextuality as it's defined in the literature. What that means is that it creates undefined joint probability distributions. Not just probability distributions that are difficult to make sense of, or that need some information added to them, but probability distributions that actually can't be consistently defined. Why not? Because the information that would have to be specified to make them well behaved is information that you can't have, that you can't get even in principle. And then our final objective was to tie this explicitly to the frame problem, which is a canonical problem in artificial intelligence, discovered by McCarthy and Hayes back in 1969: the unpredictability of side effects. So that's it for my introduction, kind of context setting. And I'm happy to, as soon as I can get to this button again, switch to discussion. Chris, thank you so much for that. There it is. Yep, we got it now. Thanks so much for that really amazing presentation. I'm sure everybody was having some pretty interesting thoughts from what you were communicating. Really though, thanks again for coming on the stream. So for those who are here, please write down your questions, and then when you ask your question, let's stop after the question so that we can get a response to that question and not continue on with the question. And then we'll raise our hands to hear from everyone. So I'm gonna first ask a question from the chat, and then Stephen, I saw you raise your hand. So anyone else, raise your hand.
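Operator commutativity is easy to check numerically. As a stand-in for position and momentum (which require infinite-dimensional operators), here is the same non-commutativity exhibited with two Pauli spin observables — a minimal sketch, with names of my own choosing:

```python
import numpy as np

# Pauli operators as a minimal example of non-co-deployable observables.
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # spin along z
X = np.array([[0, 1], [1, 0]], dtype=complex)    # spin along x

def commutator(a, b):
    """[A, B] = AB - BA; this is zero exactly when A and B commute."""
    return a @ b - b @ a

def co_deployable(a, b, tol=1e-12):
    """Two observables can be measured jointly iff they commute."""
    return np.allclose(commutator(a, b), 0, atol=tol)
```

Here `co_deployable(Z, X)` is False: measuring spin along x between two z measurements side-effects the z outcome, and there is no consistently defined joint distribution over the two outcomes — the toy version of the intrinsic contextuality discussed above.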
Otherwise, there's a couple of questions from the chat. So here's the first question from the chat. Can the context-based paradigm, especially quantum contextuality, help to answer how spontaneously generated thought arises in the brain? If yes, how? How do you see this relating to the brain? Good question. I actually suspect that all cells are quantum computers and that to really get a handle on information processing in cells, we will eventually have to use quantum information theory. And the reason for that is that cellular energy budgets are really small and place a pretty strict upper limit on how much classical information cells can encode. And that upper limit is much, much smaller than the information required to encode things like protein conformation. So I think that cells are in fact devices where contextuality and entanglement play a role. Now, what spontaneous thought means, I'm not sure. I'm very sympathetic to the idea that thoughts and emotions and everything else that we experience are just the experienced tip of a very large iceberg that we don't experience. So I think our experiential access to what our minds are doing is extremely limited. And so how what we actually experience arises to the surface is a good question. I tend to think that global workspace type theories are a good classical answer to that. But all of those ideas, I think, will eventually need to be put in a much better mathematical framework. Thank you for the answer. That's a long way of saying I don't know. But it's what spontaneously came to mind. So how could we ask for anything else? Stephen, and then anyone else who wants to raise their hand. Yeah, thank you. I was just curious. You mentioned sort of the quantum noise and the way that if you have the z-axis, or if you don't have the z-axis, you can't decode back the information.
And often with quantum effects, it's when you have very low degrees of freedom for a molecule or something that you particularly can see it, compared to when it's in the milieu of many degrees of freedom. So I'm wondering if there are certain scales where the interactions in the cell or molecules are very quantum-like, and then it becomes more contextual and a bit more blurry. And then at another scale it becomes a bit more quantum-like again, maybe once you start to get to brain waves interacting. And is the contextuality piece, in a way, another way of moving it in and out of a more classical kind of noisy space, where it's got more noise and it's less easy to see more distinct quantum effects? So I was wondering, are there particular scales at which you see the quantum effect being more like traditional quantum flipping between states, and other scales where it's much more like what Friston talks about, this kind of non-linear malerve? Great. Yeah, that's a very good question. It's a question about whether there is a quantum-to-classical transition and, if so, where. And that question is typically posed in terms of decoherence, and in biological systems estimates of decoherence times are anywhere from femtoseconds down. There's the famous paper by Tegmark from 2000 criticizing the Hameroff-Penrose theory, where his argument against Hameroff and Penrose is basically that decoherence happens many orders of magnitude faster than any cognitive processes or any neuronal processes. So based on that type of reasoning, biochemistry is thought of as being classical somewhere above the, say, femtosecond type of time scale. So molecular dynamics calculations are basically classical, and they use femtosecond types of time steps. And that introduces the question of what is noise? Right, what does it mean to say that there's thermal noise in a system?
When we think about thermal noise classically, for example, we might think about a protein that's sitting in a bath of water, and the water molecules are thermally excited, so they're banging into the protein like little billiard balls, and that's in fact one way of thinking about a decoherence model. So what we've introduced here is this idea of a temporal sample: we're looking at the protein molecule for a long enough period that the water molecules have had a chance to bang into it from many different directions, and we're somehow averaging over that temporal sample. So that gets to this question of ergodicity that came up in the earlier discussion. And it makes us ask, how am I implementing this temporal sample? And the obvious answer is that I'm implementing it by measuring the state of the protein in a particular way. So I'm using an enzyme, say, to measure the state of the protein, and the enzymatic reaction has some particular time constant. So it takes a certain amount of time for the enzymatic reaction to take place, and during that time the protein may be wiggling around. So I think that all of these questions about classicality in the end come down to questions about measurement and the temporal resolution of measurements implemented by something — an instrument, or another part of the biological system, or an observer of some kind. So this is kind of a long way of saying that I suspect that this question of classicality is always a question of observation. And the sort of quantum picture that I outlined implies that classical information only exists on the Markov blanket, that there's no classical information anywhere else in the system. So classical information may not, in a sense, be ontological at all. There may be no ontological classicality, if this kind of communication picture is correct. Now, if that's the case, noise is epistemological. Noise is an artifact of making the observations in a certain way, with a certain reference frame, right?
An instrument is just a reference frame. Thank you for the deep answer, Chris. We're gonna go to Blue, then Scott, and then a question from the chat. Hi, Chris, thanks for the contextualization talk. It was really nice. So I was wondering, you mentioned earlier the prospect of each cell being a quantum computer. And I have a similar kind of thought or theory, and I was wondering if you've heard of quantum cognition and the Posner molecule, with the idea of these entangled pyrophosphates as holding quantum cognition cellularly. Have you heard this theory? And do you think it's credible, or do you have other suppositions as to where this quantum cognition could occur intracellularly? I've certainly heard and seen various studies of localized quantum effects in particular molecular structures. So π-stacked domains and so on. And there's the whole story about whether there's quantum coherence in, goodness, I've just dropped the word, light-harvesting molecules or electron transport or something like that. And there's the Hameroff theory that there's quantum processing in microtubules. And in all of these approaches, one is thinking about a quantum process happening in a background of a classical cell. One's thinking of the rest of the biochemical situation as being classical, a classical environment of some kind, in which this single quantum process is happening. And as I sort of expressed in answering the last question, I suspect that that entire way of thinking is probably wrong. That we can't actually think of bulk biochemistry as being classical. So trying to think of a particular event as being a quantum event against a classical background may just be misleading us about how information processing is actually occurring.
And if we think of the interior of a cell, say the entire interior of something like a bacterial cell, as being in one coherent state, then it's not that there are particular quantum molecular states, it's that the entire interior is one big entangled mess. That demands a very different way of thinking about biochemistry. And I don't even know how to formulate that way of thinking about biochemistry. But I suspect that approaching it as a quantum information problem, where we're extremely explicit about where measurements are made, would be what would have to be done. Thanks, Chris, really amazing points about relational and semantic biochemistry. That's not how we got it in 103. So Scott, and then a question from the chat. Chris, thank you. That's an absolutely fascinating presentation. There are a number of questions, but I'll just ask one or two. One of the challenges to quantum physics being applied at the macro level is that people have said don't take the concepts and apply them to social constructions and things like that. But here, the origin of the active inference notions, or the relationship to the active inference notions, suggests to me the opportunity to look at phenomena of much larger systems, larger scale, social scale. And so I wonder about your acceptance of, or enthusiasm for, that notion of applying these ideas to social systems, political systems, economic systems, other systems that have interactions that can be described with active inference. And this feels to me like a nice way to say, hey, we're talking about quantum politics, quantum markets. We're talking about prionic governance for the conformational changes of narratives. And a lot of the work that we've been doing on the information side, you've explained to me what I've been trying to say for years. I've been saying, without knowing why, that data plus meaning equals information.
And the meaning part is what you just explained to me, what I meant by meaning. And I always thought of it, I was a lawyer for 30 years, so I always thought of the meaning like a contract with an enforceable narrative, which is meaning. But you've just given me a new way of looking at that meaning from a quantum perspective, which is extremely robust and scale-independent. I just wonder about your comments on that, and your enthusiasm for that kind of application. Thanks. Thank you, that's a very interesting observation. I think the experiments are going to be very difficult, but I think if we can figure out how to do the experiments, we're going to find contextuality and entanglement kinds of effects all over the place. And I would point you to the work of Ehtibar Dzhafarov and his colleagues, some of which is referenced in the paper, in their formulation called contextuality-by-default. What they've done is translate the most standard model of quantum contextuality into classical probability theory, with this criterion: if a consistent classical probability distribution cannot be defined over a set of data, then the data display contextuality. And they've published a series of papers in the past four or five years on this approach. One of them deconstructs a large number of claims of quantum effects in psychological data, which have been reported now by numerous groups around the world with various kinds of experimental protocols. And there's active controversy between members of these different groups about what is and is not evidence of quantum behavior. It's an enormous literature now. But using the contextuality-by-default approach, Dzhafarov and colleagues, and his students, Cervantes is one of the lead people who's done this work, have constructed extremely careful experiments where, under their best criteria, there are robust intrinsic contextuality effects. And interestingly, they have to do with meaning.
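The criterion described here, that data are contextual when no single consistent classical probability distribution covers them, can be illustrated with the textbook Bell/CHSH case. This is a minimal sketch, not the contextuality-by-default machinery itself; the correlation values used are the standard idealized ones.

```python
from itertools import product

def chsh(E11, E12, E21, E22):
    """CHSH combination of the four setting-pair correlations."""
    return abs(E11 + E12 + E21 - E22)

# Classical (non-contextual) models: enumerate all 16 deterministic
# assignments of +/-1 outcomes to Alice's two settings and Bob's two.
# Each assignment yields correlations E_ij = a_i * b_j, so a single
# classical joint distribution can never push CHSH above 2.
classical_max = max(
    chsh(a1 * b1, a1 * b2, a2 * b1, a2 * b2)
    for a1, a2, b1, b2 in product([1, -1], repeat=4)
)
print(classical_max)  # 2 -- the classical (Bell/CHSH) bound

# Idealized quantum correlations at the optimal settings are +/-1/sqrt(2);
# their CHSH value reaches 2*sqrt(2) ~ 2.83, so no consistent classical
# joint distribution reproduces them: the data are contextual in exactly
# the sense of the criterion above.
e = 1 / 2 ** 0.5
quantum_value = chsh(e, e, e, -e)
print(round(quantum_value, 3))  # 2.828
```

The communication loophole discussed next is precisely about whether the classical bound can be enforced experimentally when the two "observers" cannot be spacelike-separated, as in biological or psychological settings.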
So one that we refer to in the paper is called the Snow Queen experiment. And it's based on interpretations of the Hans Christian Andersen story. So I think that as the tools and techniques for developing these kinds of experimental protocols are improved, we're gonna see more and more effects that are very, very difficult to challenge from a statistical point of view. And the problem, as Dzhafarov et al. point out, is that in physics, if I'm doing something like a Bell experiment, I can put my two observers far enough apart that they can't communicate, just due to the speed of light, in the time it takes to do the experiment. And you can't do that in biological systems. So you always have this communication loophole to deal with. And what Dzhafarov and colleagues are really trying to do is develop the statistical analysis techniques to get around the communication loophole in an arbitrary experiment carried out with college undergraduates as the subjects. And I think they've been successful. Thanks for the response. So we're gonna go to a question in the chat, and then if anyone here who hasn't raised their hand yet raises their hand, otherwise we'll go back to someone who has. So this is from Sudakar in the chat. How will an agent acquire information for their generative model? You've said that there has to be some heuristic to infer the information even from the joint world. And you said that the object semantics have to be inside the observer, but how does the observer learn that in the first place for their generative model? That's an excellent question. And again, the only answer I can give is: I don't know. But I think that biology gives us hints. If you look at the process of cell division, which is one of the best examples of separation that we have to deal with, that we can really manipulate in the laboratory, and you ask, how much information is shared between the daughter cells?
Well, clearly their genomes are shared, but the genome is only a very tiny part of the shared information. They also share information that's encoded in their cytoskeletons. They share information that's encoded in the cell membrane. They share information that's encoded in the cytoplasm itself. They may share aspects of their bioelectric states. So there's a lot of information that is inherited, if you will, across cell division. And most of it has not been quantitated in any reasonable way. I mean, it's easy to quantitate the information in DNA or protein, but it's extremely difficult to quantitate this other information. And I'll just give you one example from my colleague Mike Levin's lab at Tufts. They work on regeneration in planaria. It's been known for quite some time that if you alter the ability of cells to communicate bioelectrically, then you can get planaria to regenerate with two heads instead of one. But what Mike's lab was able to show experimentally is that this is epigenetically heritable. So planaria that look perfectly normal may have inherited the tendency to regenerate with two heads after injury. And this happens without changing DNA, without changing RNA, without changing protein content, without changing any of the obviously heritable factors. And they've been able to show experimentally that the bioelectric effect happens first, before any observable changes in gene expression. So here's a case where a completely unlooked-for kind of information is epigenetically heritable. It makes a huge difference to what the animal does with its physiology, with its anatomy. I mean, the two-headed planaria have two perfectly good operational brains that are independently able to direct behavior. So they're pretty weird creatures. Thanks for the response.
And just briefly, that's one reason why the "information is in the genome", kind of Zoolander, take doesn't work well, because that's assuming there's this map between the source code and the program. But then there are these alternate stable states that exist because of the semantic or the epigenetic or the environmental frame. So it's not quite as simple as just this one mapping. So that's a really nice point. All right, we're gonna have Sarah and then Chana and then a question from chat. Thank you, go ahead, Sarah. Yeah, I have almost two different questions or directions, and I'm not sure how they're gonna match. So I'll just start with one and see where it goes. One, it seems to me that the way you frame the quantum setup, you know, you have A, you have B, you could actually kind of do this with any probabilistic type of notation, but you have A, you have B, and the joint distribution of the two. And I guess the question that comes to my mind is, what happens if we were hypothetically, because I know our world is constructed around reductionism, but if we were hypothetically able to separate out or to not think of things reductionistically, you know? So, for example, how could our math possibly be different if we were looking at things as only spanning sets rather than eigenvalues and eigenvectors? Yeah, so this is kind of where my mind goes: is there even a way to approach a problem purely through contextuality rather than, you know, reductionism of any kind? I don't know, on the math front, I may actually defer to Chana here, because she does know much more category theory than I do, and I suspect the answer is a category-theoretic type of answer.
But I'll make one comment, which is that reductionism is one of these complicated words that has many different meanings for many different people. But when we think about what we're doing in quantum theory, it is merely labeling, and we're labeling things for convenience, with the full understanding that our labels in a sense don't have any physical meaning. They only have meaning in the context of particular observational outcomes that we've obtained by measuring things in a certain way. But that in itself is a problematic statement, right? And this is in a sense the core of the Copenhagen interpretation. It's what Bohr talked about in his paper in Nature in 1928. He basically said, at the end of the day, we have to talk about apparatus and we have to talk about colleagues and we have to talk about classical communication. And in a sense quantum theory tells us that all of that is a poor approximation, but it's the only language that we've got. So I think this question of how far we can go without reifying separation is a very good question. And as usual, my answer is: I don't know. Awesome, thanks for the response. Do you have any thoughts, Chana? Yeah, let's go to Chana. Oh yeah, tons. I wonder if I should ask my two, yeah, category theory is really powerful, and when Chris says there's only one logic, no, no, no, he and I are trying to find another one, something like chromatic types. And I was actually gonna ask you, Chris, if you think the phenomenon of regeneration could actually be modeled when we're trying to do that chromatic filtration, right? It seems like regeneration is some kind of morphism equivalent of whatever the stop mechanism is. What does it mean to stop? Maybe stop means not making more morphisms, and the two-headed planaria actually has a two-category structure, right? That has a one-morphism and a two-morphism, and I think that's what's controlling the central nervous system, right?
That if you can have two-morphisms, then you could have this double-head kind of thing. And so whatever the stop mechanism is, I think it's almost like, remember when I was showing you those filtrations, how you could actually make more and more equivalences using that Bousfield localization. So I think there's another one called topological localization, where the objects in the categories are reflections, and that might be what helps here a little bit: there are different categorical models for interaction. Anything interaction-based should immediately be category theory. I think the tensor network stuff fails because a node is just a point, but in category theory, you can add space to that point by making the point something else, and then you're working in this moduli space. So that was my thought about when you were talking about spontaneous thoughts, or whoever had a question about that. And then I also think that our work on the profinite condition, Chris, could also account for thought, whatever that is, because the thought doesn't really last. Speech doesn't last. The speech fades based on the composition of the molecules in the air, but something remains. I mean, we all continue. It seems to be that holographic copy of the profinitely many in the diamonds and stuff. So, for Sarah, I think a bunch of this category theory can really help you. You know, you have this object in topology called a sheaf, which is this very, very complex structure, but it attaches to a topological space, and it's like a little bag, and it keeps track of all the data. And so if each space has these structures attached to it that keep track of what happens locally, you can look at the moduli space and the isomorphism classes of those little bags. So what's happening in the isomorphism space with the little bags probably seems to be what this global consciousness thing is.
So you can upgrade that one dimension using category theory. You can work in a stack, and a stack is gonna be a sheaf that doesn't take values in sets anymore. All of quantum theory is taking place in sets, and I think it needs to be a little bulldozed. Like it needs to go up. If you can take values in categories, then at each point you have so much more space to move around. Right, so then if you actually have this object that takes values in categories and not in sets, then you've allowed yourself more space, so that you could have these phenomena like the planaria and stuff like that. But if you're just working in points and dots and nodes connecting, I think it's just not enough. But you can do all these other things. I'm not sure if I answered anything. But yeah, Chris, about regeneration, I was gonna ask if you thought our chromatic type over a temporal logic, which we need to pick back up, could actually work with this. You know, I'm fascinated by the regeneration and the stop mechanism, which is weird, you know? Yeah, I'll just quote Mike Levin telling me over and over again, and everyone else who will listen, that the most important question of biology is when to stop. Yes. How does the embryo know that it's done? Now I can stop, I'm an adult. No one knows. And the question isn't even asked that way. But when you think about it, it is a key question. Because it's related to when you said our experiential, phenomenal capacity is just the tip of the non-experiential. So it's almost like, if the stop is the tip of the iceberg of everything else, I'd like to see what's the everything else, you know? Thanks for these points. It's really an amazing mapping that is rigorous, and something that we should be learning more about: how the point can be just the tip of the iceberg, kind of where it's piercing something that's of a lower dimensionality. And so what do we do with that?
So I'm going to go to a question from chat, and then I'm going to... Can I ask the second part of the question I was working with? Go for it. This is, feel free to just be like, eh, it's not great, just move on, you know, not answer it. But I can't help but analogize something that I just cannot ever let go of, which is in electrical engineering, or in electronics, essentially. You know, you have this, well, in anything, but I really like electronics because it's very tangible. You have the real component and the imaginary component of a signal, you know, that goes across an interface. And so I'm always really fascinated by that, because it always seems like a really good analogy to this thing that we can look at through math or quantum mechanics, but with electronics you actually have, you know, a thing that you can put on your desk. Because you could think of this phase change that happens at interfaces as the context. And so I've always been interested in trying to look at that thing that gets lost, without looking at the current or the voltage, like measuring at a point, but actually just keeping track of the context as things move across interfaces in a circuit. It's not even a question, I apologize for that. But if it brings up any thoughts, great. Either of you two, if you'd like to give a thought; otherwise it stands, and I love how many ideas people are bringing up. This will definitely be an episode for people to re-listen to and make some knowledge networks from. Yeah, all I'll say is phase is extremely important. And it's always a question of phase according to whom. So Chris and I have really torn apart this reference frame idea, like according to whom, at what point. If phase were non-local, then everything would be the same thing. But at what point do you actually have, you know, in electronics you assume that you're in three dimensions, that you're local and so on, but electronics are fascinating.
It's still kind of tangible, but if you work in that space that's pre-actualization, I think it's, just like Chris presented, very, very difficult to say who you are, what the boundary actually is. And a boundary in mathematics is like an asymptote or something, right? So it's like a horizon, or a Cauchy horizon, or something like that. So inside the very notion of a boundary, you already have causality. And then there's this, you know, tensor product, and there are all kinds of other tensor products, there are wedge products. And I think it's really hard to say what a phase is without bringing into play all these other reference frames that are so fragile. Oh, cool. I'm gonna do a question from the chat and then I have a simple question. So the question from the chat is: do you think a reformulation of quantum mechanics is necessary in order to tackle the problem of consciousness or life in full mathematical generality? I'm mainly thinking of the notion of intrinsic information from Tishby's paper, Information Theory of Decisions and Actions, but also maybe other approaches like self-organized criticality. More generally, I'm curious, how necessary is it to bridge our understanding of thermodynamics, information theory, and biology or control theory to have a good understanding, how far along are we, and are there any concrete problems for a mathematician? I'm just reading what they wrote, you know. That's an omnibus question. I guess my first answer would be yes, we do need to bridge all those disciplines in order to make progress. Whether it will tell us something deep about what experience is in some ontological sense, I have my doubts about. Whether it will tell us something interesting about what experience, in a sense, does, how it works: I think it may tell us something very important about that. But yes, I do think that these disciplines need to be bridged.
I think we're at a bit of an impasse that's been imposed on us by the disciplinary structure of science. Thanks for the answer. I'm gonna ask my simple question, and then we're gonna go to Steven and Blue, and also Sarah, I know you had some other kinds of questions that were a little bit less about the math. But here's my simple question. Where does active inference come into play? Or, for those who are just learning about active inference and curious as to how it might be the bridge for all these different areas, what do you think active inference is doing here? Well, as I said at the very beginning, I think of active inference in terms of this fundamental decision that every system has to make all of the time: do I explore, do I act, or do I retreat and try to do a kind of post-mortem on what just happened and learn something? So do I modify my priors, or do I try to modify my experience? And I think thinking of active inference in terms of what reference frames are being deployed just allows us to pose that question a little bit differently. It allows us to think of the question in the form of: do I make another measurement? Do I act on the world? Or do I modify my reference frame? And you can think of a reference frame like a category, like a cognitive category. So we have cognitive categories like this is a cat, and that's a dog, and this is a table, and this is another person. So modifying a reference frame can be thought of as going into your apparatus and changing the way it works. You can also think of it as modifying my categories, modifying the things I'm capable of seeing. And I think you asked in the previous discussion last week, what does the theory say about the capabilities of organisms, and does it put a limit on what organisms can do? And I think in a sense that's the question we're trying to answer. To what extent do organisms have flexible capabilities? To what extent can they retool their cognitive systems? Even if it's just E. coli, right?
To what extent can something like a microbial population retool what it's capable of? And again, biology gives us these beautiful examples in terms of things like lateral gene transfer, where microbes in a community can pass around genes and so can change very radically what they're capable of doing. So, I mean, from an active inference point of view, that's a question: do I keep going with the capabilities that I've got, or do I go shopping for some new DNA? It's like the active turn. We don't know how far, or what other stones are gonna be turned, when we actually start thinking and doing instead of just thinking about doing. So we have Steven, then Blue, then Scott. Very excited to hear you talking about the work of Mike Levin, because when I saw the video of his work recently, like last year, it was like, wow. It started to bring in the whole idea of swarming, and how swarming is actually underpinning a lot of what happens, and the work with bioelectrics is really interesting. And I've been interested in how that bioelectric aspect seems maybe more closely tied to the sort of quantum transitions, and just maybe a little clarity on a couple of things, because often tradition- Just one question. Let's just ask the question and clarify it. Okay, so the question is: when you have these quantum effects, normally you think of things flipping between one state and another state in more basic chemistry, thinking about energy levels or something like that, but you have many levels potentially.
So is it the case that you're able to still infer from the noise what's going on with quantum states, and therefore this gives us a way to not have to just go for the attractor theory in complexity, or catastrophe theory, or phase theory that we use at the moment, but you're able to still extract quantum state information in ways that are not just classical, like one energy level to another energy level, but you get this higher information context? Yeah, good question. I think what you're really asking is: where are the experimental tools that let us probe this domain? And I wish I knew. And this is something that Mike and I talk about. For example, how could we go about looking for contextuality effects in things like bioelectric effects in worm regeneration? And I don't know. I think there's a huge opportunity space here for people to think about ways of doing experiments that we can't do now, or have never thought about being able to do, that probe a domain of correlation that is non-classical, or super-determined, or however you want to think about it. And I think that the more we can think of biological systems in communication-theoretic terms, the better prepared we'll be to design experiments to look for what are really communication-theoretic effects. And quantum theory is really about communication. Thank you. Cool. Blue and then Scott. So can I ask first, because Chana dropped off, I don't know if she's gone or if she's gonna rejoin, and it's kind of a follow-up to some of the stuff that she was saying, but I can still ask it. But let me pass to Scott and then I'll go. We'll go to Scott, and hopefully we can have some future category theory streams to really learn more about this area. So yeah, Scott, and then back to Blue. Great. Thank you, this is, my smile is hurting my face, it's such a great discussion. Two things I wanna talk about: embodiment, and, this is related, social-level experiments. So the first one is embodiment.
It seems like we have these internal states or embodiments of prior external states, or their learned states. And so it's kind of interesting, because the external states and the internal states bear a significant relationship through time, not just through that individual organism's Markov blanket, but via the inherited and other factors that become the priors, right? And so it's kind of interesting, because you have embodiment externally and internally, and maybe that's extremely fluid, right? Because if you're talking about systems at different levels, they're different types of systems: cell systems, social systems, organisms, organizations. They all have different affordances, and their embodiments are not necessarily inconsistent when they're different, right? You can have an embodiment of something. If I eat a lot of fatty food, I become fat, but that doesn't, you know, anyway. That's the one thing, the embodiment. From my perspective, looking at risk and change, with information being shifted around, as a lawyer, an ex-lawyer, I look at risk and risk being moved around. And so one of the things that seems to me, when you have the embodiment question, is that among the relationships, you get the question: since entropy or disorder is gonna increase universally, what we're doing is managing local negentropy in the system, right? We're sharing local negentropy. And so the question is, is that dynamic something that tends to dissipate the Markov blanket as a standalone thing? Maybe Markov blankets are a general thing. That's one question. And then the- Okay, let's get back. Let's try to do one question, because it's really, you know, an essential domain. Let's just try to, yeah. Thanks for the really great question. Go ahead. Yes. Okay. This is a wonderful question. I'll make two comments.
One is in terms of reference frames, which I always come back to. I think what quantum theory contributed to the notion of a reference frame is that it insisted that we regard reference frames as embodied, as physically implemented, and stop thinking of them as merely abstractions. And we can go back, for example, to the idea of cognitive categories. You know, my idea of a table is not an abstraction. It's something that's implemented in my brain. And it can be changed by operating on my brain. And I can change it that way. We don't know how that works, right? But what the embodied movement, or the 4E framework, whatever, forces cognitive science to think about is that these are not abstractions. We're talking about things that have been implemented. So I think that's an incredibly important point. The second point I want to make is about boundaries. And I showed that picture of, right, Alice looking at her holographic screen, and listing the things that she doesn't know. But I could have added that Alice doesn't know where her boundary is. If I'm a system and I'm looking out at the world through my Markov blanket, or through my holographic screen, then what I'm seeing is this array of information that's painted on my screen, right? This array of bits. And Friston's point, and the point of the holographic principle, is I don't see what's on the other side. But you can flip that around. It's always symmetrical. I don't see what's on my side either. And that's why I tried to make this point very briefly about memory. If we think of memory as classical, then it's got to be written on the screen, because that's the only locus of any classical information. So if I'm looking at my screen, the thing that I don't see is myself. And since I don't see myself, and I don't see the other side, I just see the screen, I have no idea where it is.
So when you raise this question of a social system or any other kind of complex system, we as observers of that system are using our reference frames, or our cognitive categories, or however you want to describe them, to draw a boundary around that thing. That boundary is our imposition. It's not the system's imposition. The system doesn't know where its boundary is. Could I just follow up? Yeah. Is there a tautology in there, though? Because that's interesting, right? Because we're using our system, we're drawing a system within our boundary. It seems like that's an intrinsic quality of the thing. Can a system know itself, and can a system know others? And you can only know it through the boundary. So you can't know yourself, because you're not written on the boundary. Well, let me get to the next question, because it relates to that. But the quick answer to that question is yes. Yeah. I like that, that's a good one, there's a binary answer. So the next question was about experimentation. If, and to the extent that, there's scale independence in active inference, so that it applies at cell levels and social levels, et cetera, why can't we look at the social level to run experiments, and then apply what we come out with at other levels? It's similar to how they do qubits that are manifested in different ways in different kinds of systems, ion qubits and things like that. So you can measure them in different ways. So we could be observers of the system at the social level to identify some qualities. And the reason I'm asserting that as a lawyer, even though the social has a lot of variables, is that what we do as lawyers is create fewer variables, right? We take a meaning thing and we say, hey, everyone in this area of the statute has to behave this way.
And so then internally, everyone can absorb and embody the idea that everyone's gonna stop at a red light; I know I'm gonna stop at a red light. That's the story of the red light: red means stop, you're taught it as a kid, and so it becomes instantiated and kind of axiomatic. So one of the things about the law is that I'm coming at it from a slightly different direction, effectively going in and trying to create shared non-variables in the Markov blanket, right? And I wonder how the embodiment question relates to the experimentation question: can we do, let's call them, artificial embodiments into the internal states through things like law or folkways or shared meanings or norms, that we can then experiment against, cross-cultural experiments, things like that, that may then have application at other levels? Thanks. I think that's a fascinating thing to try to follow up on. And I would say one of the things that seems most mysterious to me is language, the very possibility of classical communication. What you're talking about, a physicist would call superdeterminism. And superdeterminism is essentially the same thing as entanglement. If I don't have a choice about what reference frames to deploy, if there's a correlation in advance, superdetermined by the past, between the reference frames I use and the reference frames you use, then in an important sense we're not independent systems anymore, right? We're entangled. But that's what language is, in some sense. So I agree with you: we should be able to re-describe communication generally in these terms of reference frame sharing, reference frame choice, and superdeterminism, and apply some of this math that's been worked out in other places to reinterpret communication. And I'll mention a paper by a guy named Alexei Grinbaum, a philosopher of science at Saclay.
And it's a paper on device-independent methods in quantum communication. I see Sarah, you're nodding your head; you've probably read this paper. One of the very last sentences in the paper is, he says, this redefines what physics is about: physics is about languages. And I think that's a phenomenally profound statement. I think he's right. Nice. Let's continue the order, so we've got Blue and then Stephen and then Sarah. As Shannon's not here (I think she said in the chat that she had to run), I wanna touch back on what she said, that when a thought passes there's still something there, and also, Chris, what you were saying earlier about memory, and what Sarah was saying about keeping track of what gets lost. Something that I gleaned from this paper (now, I have no background in category theory at all, as you may have been able to tell from the introduction video) is that there's this hierarchical kind of mathematical summation that is able to occur via these methods. And I was wondering if you're familiar with Erik Hoel's work. He put out a paper last year on causal geometry, and he's done some coarse-graining stuff. So is there some relationship between that kind of coarse-graining, or dimension reduction, and this category theory work? Or is the category theory somehow able to encompass the entirety of the mathematical problem in a better or different way than, say, coarse-graining? Yeah, good question. Yes, I do know Erik's work, and I talk to him almost every week because he's also part of Mike Levin's lab. So we have discussed this question of coarse-graining and causal emergence and effective information, all of these concepts that Erik has been working with. I think they are very closely related to the concepts that Jim and I have been working with.
And we're coming at them from such different starting points that it's not clear, at least to me, exactly how they fit together. But what a cocone diagram does is coarse-graining. It represents an abstraction of a set of logical relationships, or an abstraction of a set of criteria, into a coarse-grained, larger-scale criterion that covers all of the cases below it, but in which information is both lost and gained. The best example, at least for me, for thinking about this is again cognitive categories. If you think about your category for a table, or for another person, or whatever, you lose information in that abstraction process, but you also gain information by tying all of those exemplars together. Each exemplar is then, in some sense, encoding, through the rest of the network, its consistency with other exemplars that may have different detailed characteristics. So at the token level they look inconsistent, but at the type level they're consistent. And the cocone diagram represents that very complex relationship. So when one is doing a coarse-graining and it works, the information in the coarse-grained state is capturing just this kind of abstract information. It's the abstract information that's actually useful, and that captures what is consistent between the fine-grained exemplars. And by organizing them that way, it allows each of them to represent the fact of its consistency with this larger set. Thanks. I'm also curious to see how that kind of dynamic interplay unfolds, because I do think that your approaches will lock together eventually at some point, maybe in ten years. Yeah. Well, also, Mike Levin will be on the Active Inference Livestream on July 13th, so we can talk before then or after then. All right, awesome. So we have a little bit less than 20 minutes left, so we'll just do questions and try to get a few more questions in from the chat as well.
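Chris's point that coarse-graining both loses and gains information can be sketched in a few lines of code. This is only a toy illustration, not anything from the paper: the exemplars and attributes are invented, and "coarse-graining" here is just keeping the attributes that all tokens share.

```python
# Toy illustration (exemplars and attributes invented): coarse-grain a set of
# tokens into a "type" by keeping only the attribute/value pairs they all share.
# Token-level detail is lost; type-level consistency is gained.

exemplars = {
    "kitchen_table": {"has_flat_top": True, "legs": 4, "material": "oak"},
    "picnic_table":  {"has_flat_top": True, "legs": 4, "material": "pine"},
    "side_table":    {"has_flat_top": True, "legs": 3, "material": "glass"},
}

def coarse_grain(tokens):
    """Keep only the attribute/value pairs common to every token."""
    items = [set(t.items()) for t in tokens.values()]
    return dict(set.intersection(*items))

table_type = coarse_grain(exemplars)
print(table_type)  # {'has_flat_top': True} -- the fine-grained detail is gone...
# ...but each token now carries the fact of its consistency with the type:
print(all(table_type.items() <= t.items() for t in exemplars.values()))  # True
```

The "gain" is the second print: every exemplar, however different in detail, is now marked as consistent with the shared type, which is the abstract information Chris describes as actually useful.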
Stephen and then Sarah. Thank you. Deep breath, there's a lot to go through here. I was gonna build on that coarse-graining question and maybe step back to some of the bigger implications of contextuality, the principles or the paradigms that you're talking about. So I'm curious about how you see this maybe impacting the way people think about contextuality. Let's address that and then continue, Stephen. But let's just get one question and one answer. Okay, go. Okay, I guess I didn't actually hear that as a question. Sorry. Okay, thank you. Sorry, Stephen. Yeah, I'll just finish that thought: basically, in terms of contextuality and coarse-graining and how much granularity is considered, I'd be interested in your thoughts on how that general idea of how to think about contextuality has a bigger set of legs. One of the reasons I say that is because when people talk about quantum theory out in the broader world, they often think of it as transpersonal: it's all the same, it's unifying, and it's actually de-contextualizing in some ways. So in some ways this is quite revolutionary to me, to see quantum effects as being contextual in that way. Maybe that ties in with David Bohm's work on the implicate order, and with the where-ness of things: rather than always being about the what-ness and going up, indigenous approaches are very much about where you are and how you are, what your embeddedness in your niche is. So when I first heard about your paper, I thought: you hear quantum, you think abstract. But actually it's doing something very different. So I just wondered how that might have impact in other contexts. Okay, so I heard a number of different ideas here that are parts of questions.
One has to do with coarse-graining and contextuality, and you put your finger very precisely on the danger of coarse-graining. You can also say this in terms of the frame problem: it's often the little details that come back to bite you, and when you coarse-grain, you're specifically getting rid of those little details. It's like the metaphor of the hammer and the nail. Any choice of reference frame is effectively a choice of coarse-graining, and choosing a reference frame, choosing a coarse-graining, determines what you can see. But it also determines what you can't see. And it's what you can't see that is responsible for the contextual effects, effects that you can't avoid by learning more about the context: if you've arranged your measurement procedures in a way that prevents you from seeing certain things, then you can never take them into account when you're building your probability distribution. So coarse-graining is something that we're stuck with, but it's very much a double-edged sword. It allows us to get around in the world, to see similarities, to learn things, but it forces us to ignore things that may turn out to be important, or may turn out to have been important. Thanks for the really interesting answer there. With respect to your other question about, if you will, embeddedness: yes, I think that's crucially important. Let's go to Sarah. Continue, Stephen, sorry. I was just gonna say, yeah, that whole embeddedness is huge for the enactive, ecological cognition work and the pragmatic turn in psychology. So there are a lot of implications of that, and I just think that's very exciting, so thanks. Yeah, I'm very attracted to the notion that ecosystems are, in a sense, extended organisms, et cetera. That whole way of thinking in biology is, I think, very important.
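The point that a coarse-graining hides the variables responsible for context effects can be made concrete with a small simulation. This is my own toy sketch, not from the paper: the distributions are invented, and an unobserved context variable is simply marginalized out of the observer's view.

```python
# Toy sketch (distributions invented): choosing a coarse-graining fixes what you
# can and cannot see. A hidden context c is marginalized out, so the observer's
# statistics over x silently mix two very different regimes, and no amount of
# sampling x alone can recover c.

import random

def world(c):
    """Outcome statistics depend on an unobserved context c."""
    p_heads = 0.9 if c == "A" else 0.1
    return 1 if random.random() < p_heads else 0

random.seed(0)
# The coarse-grained observer records only outcomes, never contexts:
samples = [world(random.choice("AB")) for _ in range(10_000)]
print(sum(samples) / len(samples))  # ~0.5: looks like a fair coin,
# yet conditioned on the invisible c, the process is strongly biased either way.
```

The observer's probability distribution over outcomes is perfectly well defined, but it can never account for the context dependence, because the measurement procedure was arranged so that c is invisible.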
And also to connect that: there's this embedded practice, and then we're talking about the physical electronic circuits. So it's almost like, across domains, we're taking an action-oriented, realist, manifested focus, but then seeing that what is actually manifest is the tip of the iceberg in another sense. It's totally real for what it really is, which is our measurement, and it's also the tip of an iceberg, because we're also in the system, and those kinds of systems don't just arise out of nowhere. So Sarah and then Scott in our last couple of minutes, but thanks everyone for this really awesome discussion. Maybe Sarah, yeah, go ahead. Yeah, so am I to understand that we have another one of these? Because I would love to ask Chris these other questions, but obviously not now; there's just not enough time. We will be discussing the paper next week, but we did not plan it. Without Chris. No pressure, but we're doing the same time next week, but no expectation, no prior. Okay, yeah, in some ways this is me just rephrasing what's already been said, but it's actually right up your alley, Stephen. I was actually just writing in my paper about, well, what methods you choose and what questions you ask; it related to reductionism, related to this whole theme. But I wonder, and this is super abstract, about the role of play. Because there are so many situations in science where, when you look at the paper, you're like, oh yeah, they're doing science, but in reality they're just poking at shit, right? It's this way of being with the world where you're like, oh, I wonder what happens if I..., but they're not really reducing. It's like play. And so I've really wondered about how to bring to the foreground what that is in science, you know?
And so I, again, I apologize, but you always have a way of coming up with a comment, so hopefully you'll have a thought on this. Oh, yeah. If it's not fun, you're probably doing it wrong. I was just thinking of play as an antidote to reductionism: it seems like a way that you're not taking things apart, you're just with it in some way. Well, to take this back to active inference, and to think of it almost in robotics terms: what the people in developmental robotics are trying to do is understand what motivates a system to engage with the world with the kind of innocence that I think you're referring to, to just explore and see what's learnable, right? One of the key questions, and Pierre-Yves Oudeyer is the one who, I think, has emphasized this in the literature more than anyone else, is: how do you learn what parts of the world are learnable with the resources that you have, so that you can avoid spending all of your resources trying to learn things that you're not capable of learning? How do you focus your exploratory behavior on the parts of the world where it can actually be productive? This is sort of a meta question on top of active inference, and it has to do with Bayesian precision and things like that; it's your expectations about learnability. And then there's a flip side of this in evolutionary theory: the whole idea that what organisms are doing in evolution is trying to improve their evolvability. They're trying to increase, if you will, the flexibility of their lineage to cope with whatever the environment can throw at them. And this is very much, I think, what's happening in play, as you put it: you're trying to engage with the world in a way that wants to know what the possibilities for further engagement are. Thanks for the answer and for speaking to play. It's almost like you start with what you have, the rubber band and the tinfoil.
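The learnability idea Chris attributes to developmental robotics is often implemented as learning-progress-based exploration. Here is a minimal sketch under my own assumptions (the regions, learning rates, and epsilon value are all invented for illustration): the agent prefers the region where its prediction error is dropping fastest, so an unlearnable, pure-noise region is eventually neglected.

```python
# Toy sketch of learning-progress exploration (all numbers invented): sample
# from the region whose prediction error has been dropping fastest, so effort
# flows toward what is learnable and away from irreducible noise.

import random
random.seed(1)

def sample(region):
    # "learnable" has a stable target; "noise" is unlearnable by construction
    return 1.0 if region == "learnable" else random.random()

pred = {"learnable": 0.0, "noise": 0.0}      # running predictions per region
err = {"learnable": 1.0, "noise": 1.0}       # smoothed prediction errors
progress = {"learnable": 0.0, "noise": 0.0}  # recent drop in error

for _ in range(500):
    # epsilon-greedy choice of where to explore next
    if random.random() > 0.1:
        region = max(progress, key=progress.get)
    else:
        region = random.choice(list(progress))
    x = sample(region)
    e = abs(x - pred[region])
    pred[region] += 0.1 * (x - pred[region])  # learn the region's target
    progress[region] = err[region] - e        # learning progress = error drop
    err[region] = 0.9 * err[region] + 0.1 * e # smooth the error estimate

print(err["learnable"] < err["noise"])  # True: effort went where learning paid off
```

The "meta" flavor Chris mentions shows up in the `progress` signal: the agent is not rewarded for low error itself, but for its expectation that error can still be reduced, which is roughly an expectation about learnability.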
So you start with the actual, but then you think about what-if, and you think about adjacencies, even categorical adjacencies. So that was pretty interesting. And then also a comment on what you said about expectations of learnability: in the Active Inference Lab, we hope to learn by doing and applying, and to update people's expectations so that they understand they can learn active inference, basically. We'll figure out other ways to say it more directly, but we'll manifest, through observations on people's screens, that they can learn active inference. And then it's almost like, in a cognitive dimension, our collective and individual cognitive evolution will undergo another inflection point when we believe that we can learn. So that will be really powerful. So, Scott. Yeah, along those lines, just in terms of further leverage, I wanna talk a little bit about the buy-or-build decision that every organism makes. The idea is: do you develop it internally, or do you get it from outside, right? One of the things that struck me is that we've talked about the brain being tied to language. One of the things I've been fussing with lately is the idea that the mind resides in language, and the brain is just an antenna tuned to the mind. Now, the antenna would be the Markov blanket, right? Or the action of the Markov blanket is a continuous dynamic tuning. So you're born into a cultural context: my sister adopted a five-month-old Chinese baby who was raised in central Pennsylvania. She looks Chinese, but she doesn't speak Chinese, doesn't like Chinese food; she's not Chinese, right? Her mind is a creature of her environment, embodied. So one of the things in terms of affordances: we talk about internal resources to develop the Markov blanket.
You can really shortcut that if you say, hey, I know that my internal resource should be to tie myself to Daniel, because he's awesome and he knows all the stuff. I don't have to know it; I just have to know that if I have a question, I'll go ask him, right? So you start to have something like the eukaryotic revolution of multicellularity: this de-risking and leverage at larger scales, made possible by the synthesis of multiple Markov blankets. Now, they can be synthesized around common ideas or common problems. So the question, in terms of that separateness: we see ourselves and our Markov blankets and our minds as separate, but it seems like maybe they're not. Maybe the mind is all one thing, and we have these instantiations of it that we call different, different iterations of that Markov blanket. Could you comment on that kind of macro- and micro-level notion briefly? I'll try. I think this is all good thinking. I view this in the language of, in a sense, extended-organism thinking in evolutionary theory, which is very closely connected to multicellularity in evolutionary theory, which is closely connected to the sorts of facultative multicellularity you see in microbial communities. I think we see examples of this throughout biology. So the question becomes: how do we take this into the human sphere? Or, as I think you've posed it very clearly, how do we realize that it's already been taken into the human sphere? We just don't quite get it yet. And I suspect that you're right, that we just don't quite get it yet. I remember in graduate school, someone said: what you're learning is how to read the literature; you don't need to learn any facts. That has to do with security, right? New forms of threat and integrity and privacy, all these new problems, are not necessarily new problems.
And that's why I wondered about the biology and the biomimicry you see: how can we actively pursue biomimicry in the human sphere? Thank you for that. Cool. Sorry, you were a little quiet at the end there, Scott. But with this, we'll close the hour. Chris, really, thanks so much, and also to Chana for joining with your expertise. This was a special event for us, and we would always welcome the opportunity to speak with you or your colleagues to learn more. Any other final thoughts that you'd like to give in closing, Chris? Yeah, feel free to send me email. My address is on my website, chrisfieldsresearch.com, and all the papers I've published recently are there. But yeah, I'd love to continue discussing. Awesome, well, thanks a lot. And next week we're gonna be having a follow-up discussion to really digest this, definitely re-listen to it, and go from there. So thanks everyone for joining, and we'll see you another time.