Hello, everyone. This is ActInf Livestream number 40.2 on March 24, 2022. Welcome to the ActInf Lab. We're a participatory online lab that is communicating, learning, and practicing applied active inference. This is a recorded and archived livestream, so please provide us with feedback so we can improve our work. All backgrounds and perspectives are welcome. We may actually have to change this to "all QRFs are welcome." And we'll follow video etiquette for livestreams. Check out activeinference.org to learn more about the activities in the lab. Today in 40.2 we are continuing to learn and discuss this awesome paper, "A free energy principle for generic quantum systems," by Fields, Friston, Glazebrook, and Levin. And we're really appreciative to have several of the authors joining us today. In this .2, we're just going to have introductions and then jump right in to several of the notes that we left for ourselves last time and see where else it goes. Anyone can write a comment in the live chat. We'll just say hello again, pick up some threads that were left unspooled in the .1, and see where we get in the .2 and how we continue going forward. So I'm Daniel. I'm a researcher in California, and I will pass it to Dean. Morning, my name's Dean. I'm in Calgary in Canada. My emphasis, I think, is on the practice part of active inference and seeing how it gets applied in different contexts and situations. I'll pass it to Blu. I'm Blu Knight. I'm an independent researcher in New Mexico, and I will be facilitating the discussion today. But let me pass it off to Stephen. He's still here. Stephen, are you here? You're muted, Stephen. We still don't hear you. Okay. Thanks, Stephen. We'll continue on to Dave. Dave, I don't see; he also left. Okay. Let's continue and pass it on to the authors. Let's pass it to Mike first, since you have audio this time. Yes, hi. Yep. This is Mike Levin.
I do biology at Tufts, trying to understand what living systems are doing at all scales, how intelligence manifests in various unconventional embodiments, and how those embodiments compete and cooperate across scales to give us the amazing phenomena we see in biology. Awesome. Thank you. Dave, would you like to say hello and introduce yourself? Maybe no audio also. Okay. How about Carl? Would you like to say hello? Yes, I would. Hello. I'm Carl. I'm one of the co-authors on the paper. I'm a neuroscientist from London. I have to confess I'm a bit of a passenger and admirer of this paper, so I'm here to learn as much as to answer questions. So I'll pass it on to Chris. Hi, I'm Chris Fields. I'm also an independent researcher, working sort of on the boundaries between physics and biology and information science, and sort of convinced that those are all three the same actual discipline. So this is a paper that expands on that theme a little bit. Great. Thank you. So I think where we left off last time was talking about how the FEP is asymptotically the principle of unitarity. Maybe, Chris, would you mind leading us off here? And if you want to point out a specific formalism or place where we should start (I think it was the very last formalism in the paper, actually), maybe if you would talk us through that, that would be super appreciated. Okay. Well, let's talk first informally, just about the motivation for this idea and how it might make sense intuitively. And again, I'll go back to Carl's paper from 2019, "A free energy principle for a particular physics," which the current paper is obviously modeled after in a certain way. As we discussed last time, what Carl was able to show in particular physics concerns the very intuitive idea of a thing: of something that exists and, importantly, maintains its identity as a single system over time. And what that means is that it can be measured by some other system.
And it can be measured repeatedly and reliably over time by some other system. So thingness is associated with repeated measurability. And what Carl was able to show in particular physics was that if we assume that something is a thing, then we're effectively assuming that it has a stable non-equilibrium steady-state (NESS) density. And therefore, we're assuming that it has a Markov blanket. And therefore, we can treat it as implementing active inference: as constantly predicting its own future existence and behaving so as to make that prediction come true, to the extent that it can. Such behavior may be prevented or overcome by its environment. So the environment of something may cause it to cease to exist. You know, the thing could be a marble sitting on a table, and its environment could include a sledgehammer that's about to come down on top of it and destroy its thingness. But as long as the environment doesn't do that, the object can continue to execute active inference and maintain its Markov blanket by so doing. And of course, its Markov blanket is what defines it as a separate entity, as a thing that's distinct from its environment. So let's translate all of that into quantum theory. In quantum theory, we already have this notion of interaction as measurement. And we already have a notion of interaction as information exchange. So it's very natural to take this whole picture and make the notion of measurement over time precise using this quantum-theoretic formalism. Quantum theory also gives us a very natural implementation of a Markov blanket, which in physics is called a holographic screen. Holographic screens were introduced back in the 70s to talk about black holes. But a holographic screen is just a reformulation of the concept of a Markov blanket. It's a boundary that encodes information. And specifically, it's a boundary between two systems that encodes all of the information that one system can have about the other.
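The blanket property being invoked here, that internal and external states are conditionally independent given the blanket, can be sketched numerically. Below is a minimal Gaussian toy of my own construction, not anything from the paper: in a Gaussian, two variables are conditionally independent given the rest exactly when their direct coupling in the precision matrix is zero.

```python
import numpy as np

# Toy 3-variable system: internal state (mu), blanket (b), external state (eta).
# Markov blanket structure: mu and eta interact only via b,
# i.e. the (mu, eta) entry of the precision matrix is zero.
K = np.array([
    [2.0, -0.8,  0.0],   # mu  couples to b, not to eta
    [-0.8, 3.0, -0.6],   # b   couples to both
    [0.0, -0.6,  2.0],   # eta couples to b, not to mu
])
cov = np.linalg.inv(K)

# Marginally, mu and eta ARE correlated (through the blanket)...
marginal_corr = cov[0, 2] / np.sqrt(cov[0, 0] * cov[2, 2])

# ...but conditioned on the blanket they are independent: the conditional
# covariance of (mu, eta) given b is the inverse of the corresponding
# sub-block of the precision matrix, which is diagonal here.
K_sub = K[np.ix_([0, 2], [0, 2])]
cond_cov = np.linalg.inv(K_sub)

print(f"marginal corr(mu, eta)   = {marginal_corr:.3f}")   # nonzero
print(f"conditional cov(mu, eta) = {cond_cov[0, 1]:.3f}")  # exactly zero
```

The thing-environment separation discussed above is exactly this pattern: everything the internal states "know" about the external states is carried by the blanket.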
So a holographic screen does just what a Markov blanket does. It provides an information-encoding boundary that gives the systems on the two sides of it identities for each other by distinguishing them. And this notion of identity in quantum theory corresponds to separability. Separability just means lack of entanglement. So if you have two things, or a thing and its environment, then if they're not entangled, you can talk about their states independently. If they are entangled, then you can't talk about their states independently. And so the idea of thingness goes away, since thingness implies having some sort of conditionally independent state, right? A NESS density that can actually be written down. So all of this fits together rather nicely. And so if we go back to particular physics, the idea of active inference emerges from that paper as a fundamental underlying principle of physics that tells us what we mean by a thing, or a system that's identifiable over time. So this becomes a very deep principle, deeper than Newton's laws, because Newton's laws talk about objects that are already identifiable over time, while the free energy principle and the idea of active inference tell us what we mean when we talk about something being identifiable over time. So if this is a fundamental principle of physics, then one would expect it to have some relationship to the principle of unitarity, which is the most fundamental principle of quantum theory. It's typically axiom one when one writes down the axioms of quantum theory; it was axiom one in von Neumann's formulation back in the 1930s. So what's the principle of unitarity? The principle of unitarity is just the principle that in a closed system, information is conserved. Information is neither created nor destroyed in a closed system. So it's a fundamental conservation law, and it's exactly analogous to the principle of conservation of energy, right?
The conservation of energy says that in a closed system, energy is neither created nor destroyed; it's conserved. So the principle of unitarity just says this for information as well as energy. And we know that information and energy are very closely linked, by Landauer's principle, or by Boltzmann's definition of entropy, or all of these other connections that make physics informational. So how does the principle of active inference relate to the principle of unitarity? What does it say about the conservation of information? Or what does the conservation of information tell us about the principle of active inference or the free energy principle? Well, the first thing to note is that unitarity applies to closed systems, so to thing-environment pairs. And active inference characterizes what the thing is doing in response to its environment. Of course, it also characterizes what the environment is doing in response to the thing. Another benefit of quantum theory is that it makes this symmetry between any system and its environment manifest; it makes it very explicit, though it's already there in the free energy principle. I mean, the Markov blanket has two sides. The system acting on its environment is the same as the environment sensing the system, in the same way that the environment acting on the system is the same as the system sensing its environment. So interaction, physical interaction, is information exchange in both frameworks. And it's perfectly symmetrical. If the system remains identifiable over time, so does its environment; if the system ceases to exist, its environment ceases to exist, because its environment is defined with respect to it. That's why we call it its environment. So all of these symmetries are built into both frameworks. So the free energy principle is fundamentally about prediction. VFE, variational free energy, can be thought of as prediction error, or potential prediction error.
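The sense in which VFE is (potential) prediction error can be made concrete with the standard decomposition from the FEP literature (notation is mine, a sketch rather than a quote from the paper):

```latex
% Variational free energy for an approximate posterior q(s) over hidden
% (external) states s, given blanket observations o:
F[q, o] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
        \;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\Vert\,p(s \mid o)\right]}_{\ge\, 0}
        \;-\; \ln p(o).

% F therefore upper-bounds the surprisal -ln p(o); the bound is attained,
% and the "prediction error" term vanishes, exactly when q(s) = p(s | o):
% the asymptotic regime of perfect prediction discussed below.
```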
So the free energy principle is about the system being able to predict the next state of its own Markov blanket. And if it predicts the next state of its own Markov blanket perfectly, then VFE is minimized to zero. That's the asymptotic case: perfect prediction. There was a question last time about what's asymptotic in the claim that the free energy principle is asymptotically the principle of unitarity. And the asymptotic state, as I think Blu pointed out last time, is perfect prediction. So now let's translate all this to quantum theory. And this is where the figure in the paper that shows the disk with the various triangles comes into play. I think that's figure six, but I've actually forgotten. Yeah, it's that one. So let's now introduce this idea of a QRF, which is just a little package of predictive capability. QRF means quantum reference frame. As we discussed last time, a reference frame is a physically implemented computation that makes a measurement comparable across time. A meter stick, for example, implements a computation of length. And it allows us to compare measurements of length across time, because we assume that it's fixed. And of course, we can use a meter stick because each of us has embedded in our brains a representation of length. In fact, we have a representation of a 3D coordinate system, without which our measurements would be useless. So we can use external reference frames like meter sticks because we have internal reference frames, like our representation of 3D space and our intuitive understanding of length, that we assume stay fixed over time. And that allows our measurements of length to be comparable across time for us. So now let's think about a QRF as something that implements active inference, or that enables a system to implement active inference, by allowing it to compare its measurements over time, because prediction is meaningless if you can't compare your measurements over time.
And let's ask: what would it be like for a system to be able to perfectly predict the next state of its Markov blanket? Well, the next state of its Markov blanket is a state that its environment has written by acting on it. So our question of perfect prediction is: what would it be like for a system to be able to perfectly predict the next actions of its environment? Notice that the system isn't predicting the next state of its environment. It doesn't know the state of its environment; the state of its environment is on the other side of the Markov blanket. What it's trying to predict is the next action of its environment on it, which is the only action of its environment that's relevant, and the only action of its environment that it detects. Its environment, like it, is characterized by QRFs, right? The situation is perfectly symmetrical between the system and its environment. So the answer is that the system can predict its environment's actions perfectly only if it and the environment exactly share their QRFs, because the QRF not only interprets sensation, it gives meaning to action. It specifies what your action is going to be. So if I use my meter stick to cut a 2x4, my QRF, the meter stick, is guiding my action. It's giving it meaning. It's making it repeatable, in the same way that it makes my perceptions repeatable. And again, that's why we use these things. We want to be able to act in repeatable ways as well as perceive in repeatable ways. And if we can't act in repeatable ways, then again the notion of prediction becomes meaningless. So perfect prediction corresponds to perfect QRF alignment. The question now becomes: what does perfect QRF alignment correspond to? And Jim Glazebrook and another colleague, Antonino Marcianò, who's a physicist in Shanghai, showed in a previous paper that perfect QRF alignment between a system and its environment is only possible if the system and environment are entangled.
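A purely classical cartoon of that last point, my own toy sketch rather than the quantum construction in the Glazebrook and Marcianò work, treats each reference frame as a rotation of the plane: the environment "acts" along its frame, the system predicts that action using its own frame, and the prediction error vanishes exactly when the frames coincide.

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix: a toy 'reference frame' as an implemented computation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def prediction_error(frame_sys, frame_env, action=np.array([1.0, 0.0])):
    """The environment acts in its own frame; the system predicts that action
    using its own frame. The error is zero iff the two frames coincide."""
    observed  = rot(frame_env) @ action   # what the environment writes on the blanket
    predicted = rot(frame_sys) @ action   # what the system expects to see
    return np.linalg.norm(observed - predicted)

print(prediction_error(0.0, 0.0))        # aligned frames  -> 0.0
print(prediction_error(0.0, np.pi / 4))  # misaligned      -> > 0
```

This is only an analogy for frame alignment; the quantum result adds the crucial twist that exact, manipulation-robust alignment requires entanglement.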
So perfect QRF alignment actually corresponds to the collapse of the Markov blanket, because it corresponds to the collapse of separability, which means that the NESS density is no longer well-defined, because it's no longer a conditionally independent state. And you can see that if you think of what perfect QRF alignment means: it means that your actions correspond exactly to my predictions. As a conditional statement, it's an exact statement. If I predict that you're going to do X, then you're going to do X, period, which means that your state is no longer independent of my state. Your state actually depends on my state, and my state correspondingly actually depends on your state: if you predict that I'm going to do X, then I will do X. That's what perfect prediction means when we translate it into this QRF-based concept. So perfect prediction and the collapse of separability are the same thing. So the asymptotic limit of the FEP, which is perfect prediction, corresponds to the collapse of separability. Well, the collapse of separability is just entanglement. And the principle of unitarity is the claim that any isolated system left to its own devices will evolve so that the joint state of any two of its components evolves toward complete entanglement. It will evolve toward a pure state, in quantum-theoretic jargon. So it's in this sense that the FEP is asymptotically the principle of unitarity: its asymptotic condition is the same as the condition that the principle of unitarity imposes on closed systems, i.e., on system-environment combinations, where whenever I say environment, I just mean everything that exists that's not the system of interest. So a system-environment combination is always a closed system by definition. So that's my introduction, and we can start on discussion. That's great. Thank you. Stephen, did you have a question? Yes. Can you hear me okay? Excellent. I was just going to ask; that's really helpful.
There's a lot in there, obviously. I was going to ask, in terms of pure unitarity, would that be what happens, say, at the electron scale, at these normal quantum-particle scales? And as you go to scales above that, it becomes more and more approximate; you have less of this isotropy, this quality of knowing and entanglement. Well, that's a very deep question, and it's a very good question. Let's think about the canonical entanglement experiment, the sort of Bell/EPR experiment. I'll draw it with my hands here. So in the canonical experiment, you've got a source, and it's located here in the middle of my face, and it produces an entangled pair of something: photons or electrons or buckyballs or whatever you want. And they travel apart at the speed of light, or something very close to it. And at some point they encounter two symmetrically placed detectors, and the detectors are operated by observers who are always called Alice and Bob. These detectors measure something; in the canonical experiment they measure spin. And what Alice and Bob each observe independently is a random distribution of spins. If we think of this as a two-dimensional experiment, then they either measure spin up or spin down in some coordinate system. So Alice and Bob each pick a z-axis that defines up, and with respect to their local z-axis they see a random distribution of up and down. Now things get interesting when Alice and Bob later get together and compare their results. What they find is that whenever Alice observes up, Bob observes down, and vice versa. And so they conclude that their results are classically correlated. But then it gets more interesting, because what we allow Alice and Bob to do is constantly alter the direction that they detect the spin in, with respect to their own local z-axis. You can think of the spin detectors like a polarization filter, like a pair of sunglasses.
And what Alice and Bob can do is randomly and independently change the tilt of their sunglasses while they do this experiment. In real implementations of this experiment, first carried out by Alain Aspect in his lab in Paris in the early 1980s, it has since been repeated probably hundreds of times by different groups all over the world, including the Chinese group using a satellite to generate the entangled pairs and measuring them a thousand or so kilometers apart. Remember, the speed of light is about a foot per nanosecond. So if you're a thousand kilometers apart, you've got a lot of nanoseconds to play with. The trick in this experiment is that Alice and Bob can change the direction of their detector within the time it takes for the entangled pair to get from the source to the detectors. So if it takes 10 nanoseconds to get from the source to the detectors, they have to be able to change their detectors within 10 nanoseconds. That's why you want detectors that are far apart. So Alice and Bob redo the experiment, randomly changing the directions that they're measuring with respect to their local z-axis, and then they get together again to compare their results. And what they find is that if Alice observes up, Bob will observe down, even when they've been randomly and independently changing the directions of their detectors. One way to put that is: if Alice decides to rotate her detector 45 degrees, then Bob's result will automatically be rotated 45 degrees, and it's symmetrical, so vice versa as well. So that's not classical correlation. That's classical correlation that's robust against manipulation. I was discussing this with Carl a while back, and I used this example: if you and I are classically correlated, then we may have the same beliefs about something. So we might both believe that the earth is round. That's classical correlation. Now suppose someone convinces me that the earth is flat.
If that happens, and you then believe that the earth is flat, that's entanglement. That's classical correlation that's robust against manipulation. So it's correlation, no kidding: correlation that survives the environment doing something to you. And so now let's go back to this question: is this an effect that only occurs in some microscopic domain? And the answer is, it's an effect that was first described by a theory that was intended to only describe a microscopic domain. Quantum theory was developed to deal with atomic and nuclear physics. And so there's this idea of the quantum world, which is very small. But the math, of course, applies across the board. And trying to think of classical physics as a limit, a mathematical limit, of quantum theory always runs into problems. The problems are always that numbers that shouldn't be infinite go to infinity. So it's not really accurate to think of classical physics as some sort of mathematical limit of quantum theory, like h-bar goes to zero, because if h-bar goes to zero, lots of relevant energies go to infinity. And this has been known for a long time. Nonetheless, it's constantly taught in this classical-limit kind of framework that isn't really correct. So one point of doing Bell/EPR experiments at larger and larger distances is to demonstrate that quantum theory is not really a microscopic theory. If you think of an entangled state, an entangled state is one object. It's not two electrons that happen to have some mysterious relationship. It's one object that has one state. Remember, what entanglement means is non-separability, non-independence across any decompositional boundary. So an entangled pair is one thing. And what the Chinese group was able to demonstrate was that you can have one quantum object that's over a thousand kilometers long. Now that's not microscopic. That's big.
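The claim that an entangled pair is one object with one state can be checked directly in a few lines of numpy. This is a standard textbook computation, not code from the paper: the joint Bell state is pure, but neither half has a pure state of its own; each reduced state is maximally mixed.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): the canonical entangled pair.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)                      # joint density matrix

# The JOINT state is pure: purity Tr(rho^2) = 1. One object, one state.
print(np.trace(rho @ rho))                    # ~1.0

# Each HALF, obtained by tracing out the other, is maximally mixed:
# rho_A[i, j] = sum_k rho[(i,k), (j,k)], i.e. a partial trace over B.
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))
print(rho_A)                                  # I/2: a fair coin, no local info

# Entanglement entropy of the reduced state is ln 2, the maximum for a
# qubit, so the pair is non-separable across this decomposition.
p = np.linalg.eigvalsh(rho_A)
S = -np.sum(p * np.log(p))
print(S, np.log(2))
```

The maximally mixed reduced state is exactly why Alice and Bob each see pure randomness locally: all the structure lives in the joint state.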
A nice way to think about entanglement was introduced by Leonard Susskind and Juan Maldacena quite a few years ago, with their idea that an entangled pair is the same thing as an Einstein-Rosen bridge in spacetime. An Einstein-Rosen bridge is a black hole and a white hole connected end to end; or you can think of it that way. It's a topological connection in spacetime that simply identifies two points in spacetime that would otherwise be considered distinct. So this emphasizes that if you have an entangled pair, there's no distance inside it. If you have two entangled electrons, for example, that are a thousand kilometers apart, as far as they're concerned they're right next to each other. In fact, as far as they're concerned, they're located at exactly the same point in spacetime. That's what this ER bridge notion emphasizes to us: there's really no separation here; the separation is an artifact of our observational capabilities, not a characteristic of the entangled state. So entanglement actually calls into question what the idea of distance even means. It forces it to be relative to us. And that's why there's all this discussion of emergent spacetime in quantum gravity and quantum cosmology. Because if you think about things from a straight quantum theory point of view, spacetime is an observable. It's just something that's relative to how an observation is made. It's not an intrinsic property of anything; it's not ontic, as they say. So let's go back to this question again, and think about entanglement, and think about observation, and think about entanglement as something that's observed by comparing observations made by different observers. Neither Alice nor Bob in that experiment can detect entanglement. They only realize that there's an entangled state when they talk to each other after they've done the experiment.
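That last point, that entanglement only shows up in the comparison, can be sketched with the textbook singlet-state statistics: each observer's marginal is a fair coin at any setting, yet the joint correlation E(a, b) = -cos(a - b) violates the classical CHSH bound of 2. These are standard quantum predictions; the angle choice below is one conventional optimum.

```python
import numpy as np

def E(a, b):
    """Singlet-state spin correlation when Alice measures along angle a
    and Bob along angle b (standard quantum prediction)."""
    return -np.cos(a - b)

# Locally, each observer sees P(up) = 1/2 at any setting, so neither
# Alice nor Bob can detect the entanglement on their own.

# Jointly, the CHSH combination exceeds the classical bound of 2.
a1, a2 = 0.0, np.pi / 2            # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))       # 2*sqrt(2) ~ 2.83: the Tsirelson bound, above 2
print(E(0.3, 0.3))  # -1.0: perfect anti-correlation at equal settings
```

Note that E(a, b) depends only on the difference a - b, which is the "rotate one detector 45 degrees and the correlation follows" behavior described above.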
Then this question of scale becomes a question about how observers interact, and what observers can know about each other and about each other's experimental apparatus. So when I described the entanglement experiment, I said Alice and Bob each have their own local z-axis. And when they get together and discuss and compare their results, they make a very important assumption, which is the assumption that their z-axes are comparable. If, during the experiment, Bob's z-axis was varying at random with respect to Alice's z-axis, then comparing their results would be meaningless, because they'd be measured with respect to completely different reference frames. So we have to make this assumption that they have the same z-axis. Now, interestingly enough, that assumption by itself tells you that Alice and Bob have to be entangled, or their z-axes have to be entangled. So the idea of entanglement kind of expands to take in all aspects of the experimental situation, as soon as one starts unearthing these assumptions that we make to talk about comparability between experiments. So let's rewind back about half an hour, and we see that quantum reference frames are what make experiments comparable for an observer. It's similarity or comparability of quantum reference frames that makes comparing experiments possible for two observers. But if the two observers exactly share their QRFs, in a way that's robust against environmental manipulation, then they're entangled. So we've kind of come full circle here. And one can make this notion of entanglement as macroscopic as one likes. Again, this is in a sense why quantum gravity and quantum cosmology are interesting. It's what drives things like the black hole information paradox, which you may have read about. So we're now getting into, in a sense, very fundamental physics. What's the relationship between entanglement and the concept of spacetime?
What's the relationship between quantum theory and general relativity? Which is the big question in physics, from one point of view. But the short answer is: you can make entanglement as big as you want. That's great. Thank you. That actually leads into a question that I had way back when we were talking about entanglement and quantum reference frames. I was just thinking: if I cannot perfectly predict my own behavior, is my quantum reference frame, like, incompletely aligned with my own quantum reference frame? I mean, maybe we've all had the experience of predicting, if I kick the ball in this way, I predict it'll go over there, but it doesn't actually go over there. So, I mean, more often than not our prediction is not accurate. So is it actually a temporal separation that makes that happen, or what do you think happens in those circumstances where my QRF appears unaligned with my own QRF? That's also a beautiful, deep question. And it points to something that I think can and should be stated as a general theorem, which is: no system can perfectly predict its own state. No system can observe its own state, its own total state. So that situation is ubiquitous and unavoidable. One could phrase it by saying a system can have a QRF, or a metaprocessor, as one often uses that vocabulary, that represents itself. So we all have a self-representation that's in a sense metacognitive. We claim to know what we believe, and things like that. But of course those beliefs are all, one, very coarse-grained; two, extremely incomplete; three, useful for making predictions, but often wrong. And you know, this point has had huge technological consequences. I mean, think back to the history of artificial intelligence, right, in the 70s.
The huge new wave was expert systems. AI people redefined themselves as knowledge engineers: we're going to go out and interview experts, find out how experts do things, code it up in computers, and soon we'd have artificial expert systems that could do anything interesting. And that whole business failed utterly, right? It was a complete disaster. It didn't work at all. Why? Well, the simple explanation for why is that experts can't tell you what they're doing. Expertise isn't verbalizable, and it's not just expert piano playing that isn't verbalizable. Expert systems engineering isn't verbalizable. Expert computer programming isn't verbalizable. And that's just an illustration of the fact that our metaprocessors don't actually have a complete model of the rest of us, much less a model of the rest of us plus the metaprocessor. So it all comes back to this general principle that a system can't represent itself, and so it can't predict its own behavior. Awesome. Actually, I should ask Carl to comment on that tirade, because he's also thought about this an extraordinary amount. Well, I'll certainly make the comment that that was marvelous. I'm glad this has been recorded, because this should certainly be transcribed, and I know it's probably unpacked in the paper, but it was so beautifully articulated and clear, taking us through the issues. I really enjoyed that. So I have loads of comments, but I won't waste people's time going through all of them. That last issue is really interesting, about the metaprocessor and the metacognitive aspects of a system being able to measure itself. I mean, there's a fundamental observation there: of course you can't measure yourself. The whole point of the particular physics paper was to say that, yes, there can be two kinds of information geometries, if you like, and the big move is that one system can measure, i.e.
infer, by possessing or having a movement on some sort of information geometry, and belief-update about something else. So at no point is there any room for it to also infer about its own inference; that is, you know, a mathematical impossibility from the random dynamical systems perspective. I'm not sure about the quantum perspective, but it sounds as though that's also true. So that begs the question: how on earth do we have the illusion that we know ourselves? And of course, it's a little bit of a colloquial question, because 99.9% of things don't actually think that they know themselves. So unless there's a particular phenotype or species that boasts philosophers, I'm pretty sure that most of the things that exist don't have any pretense to thinking that they know themselves. It's a really interesting point. To get at it anecdotally: you know, I spend a lot of time taking people through examples that probably go right back to things like ideomotor theory, that the way we develop a sense of agency, and think that we have a sense of agency, is a gift, an illusion, that is probably unique to only a small number of us, and may not even be available to people with, say, severe autism. And the example that I literally just wrote down in the past day, in a didactic way, was this: take vanilla active inference under a Markov blanket, and then look at particular kinds of sparsity structures where the active states of the Markov blanket, or the holographic screen, are hidden from the internal states.
So these are special kinds of systems that have a particular sparsity, where the active states of the Markov blanket now become hidden from the internal states. That will certainly license some entanglement, and, since I'm going to ask later about entanglement and synchronization, I would say entanglement or synchronization of the internal states, such that the internal states have beliefs about their action, but those beliefs become beliefs about action as a hidden cause. So there is absolutely no notion from the inside that you are the cause; you're just representing the cause. The example I have in mind is: you're watching a little fish swimming around, looking for food, gobbling up little particles of food, and you think, oh, that's a very artful fish, with purpose and pragmatics, that knows how to navigate its world. But from the point of view of the fish, all that is happening is that the water and the particles are moving around it in a benevolent, nurturing, and fortuitous way, such that the water is delivering food particles to the fish's mouth. So from the fish's point of view, it has absolutely no awareness that it is the author, or the agent, that's causing this synchronization with the environment. So, you know, the deeper question is: how on earth do we have the illusion that we think we know our own agency, and how and when do we develop agency? And furthermore, it's not just agency about me; it's agency about other things. At what point, in our, say, early neonatal neurodevelopment, do we have these models of the world that enable us to distinguish between mother and self? At what point do we align our QRFs, if you like, not with mother but with the world, in a way that enables us to now see mother as an independent object, something else, an agent? And the argument would then be: well, perhaps if mother is an agent, perhaps I am an agent, and I am now the author of my agency and my actions. But that is such, you know, that's such a
high-level thing, I imagine it only pertains to humans. That's what I said: I'd love to talk about the homologues of entanglement from the point of view of classical thinking. Perhaps I'll speak up at this point; a couple of things came to my mind listening to this. One is that Josh Bongard had an amazing paper, I think around 2006, where he had these robots that started out like a six-legged starfish. The cool thing about it was that they didn't have an internal model of what they were, what their shape was, or how to move. They basically had to flop around and make models of their own structure based on what happened to them given the various outputs they generated, and eventually they would build a model of what happens, and they would walk around, and so on. Two interesting things followed. One is that, unlike traditional robotics, you could rip off one of the legs and they would very soon revise the model and keep going in different ways. So they weren't tied to that particular self-model; they had to discover it from scratch, but it was still plastic throughout their whole lifetime, and they could get along. So they had that plasticity. The other thing, and Christoph Adami wrote a nice interpretation piece on this, is that they spent lots of time being completely inactive, basically, quote unquote, mentally running through all the different things that they could do and what they thought was going to happen before they actually moved. Chris Adami called that dreaming: in between their actual motions, they would basically play this out internally and try to refine the models, such that when they did move, they would make movements that maximally
distinguish between the different possible models, and so on. He said you could watch them dreaming about their shape. The other thing I wanted to briefly mention is this idea of when and how you recognize that you're an agent, and that you actually need to act for things to happen, versus just having the environment do things to you. I suspect that one of the things that drives it very early on is this notion of stress. I don't have a theory of why stress feels stressful, but on a purely functional level, I think what happens in cells, very early, long before you get to humans or anything like that, is that you get a set of mechanisms that evolved from things like stress about protein folding and so on, but then scaled up to be stress about metabolic issues, stress about morphology, stress about behavioral issues, and so on. It is basically a system-wide metric of the delta between what's going on and what you'd like to have going on: the delta between your set point and what's happening now. I think that drives the strengthening of this agency model, because as you slowly learn that you can do things that reduce the stress level, it solidifies the idea that at least some amount of making life better is actually up to you, and so you then act. And of course we know that breaks down in learned-helplessness assays and so on; that really is very traumatic for all kinds of creatures well below humans. Can I speak to that? Because there are some great points there. Please, yeah. So, three really important things are being brought to the table there, and I'm bound to forget the third one if I start on the first one. So I'm coming back to Blu's
question about QRFs and alignment. One obvious answer one could bring to the table is that learning is the alignment of the QRF, learning at all scales. We're talking here about developmental scales, for example; we're talking about robots that learn to repair themselves, or learn their new QRF if you remove a limb, or a child learning to make sense of its world. Then you can look at the slow process of alignment of the QRFs, the things that are invariant over the faster time scales at which there is classical information exchange on the holographic screen, as simply a movement from a state of disentanglement to a state of entanglement, as you become better and better at predicting. But to become better at predicting, you have to align your QRF. I think the robot example is a nice illustration of that, in the sense that these robots were equipped with the ability to learn about themselves. And of course the analogue in developmental psychology, and indeed I would imagine in developmental neurorobotics, is motor babbling: basically soliciting sensory outcomes in order to start to disambiguate, did I cause that, or did you cause that? What parts are apparently under my control? Of course, that's a very anthropomorphic interpretation; there's no homunculus implicit. But certainly the learning of world models and body models is probably the first thing that any artifact has to contend with, because you see that in children: shaking the rattle to actively solicit the conjunction of visual motion, proprioceptive feedback from the muscles causing the movement of the mobile or the rattle, and the auditory, soliciting conjunctions or correlations that are explainable and predictable in multiple modalities. So you may then ask why on earth
there is an imperative for that kind of behavior: what drives action to solicit this opportunity to learn the correlations, the coincidences, and the conjunctions? That brings us to the second point. These robots dreaming tells you immediately that, at some level, they have an internal model of the consequences of action. And that tells you that, if you wanted to write it down from the point of view of classical active inference, you've got a belief structure on the inside that covers both the external world and these hidden actions: the actions upon the world, and the consequences of those actions on the external states. That's a very sophisticated generative model to have, and you're now getting into the world of planning as inference. A thermostat doesn't have that, and yet these robots do. So it tells you that there are at least two natural kinds of particles, or Markov-blanketed systems: one of which is more like a thermostat, and one of which is more like, I was thinking, an Ashby homeostat or allostat, but certainly more like these robots that can learn about themselves. The crucial distinction, I think, is that the agent has now started to represent the consequences of its actions, but without realizing they are its own actions; we haven't yet got to the metacognitive, "I am aware" kind of level. Interestingly, it comes back to what I said before about natural kinds where one's active states are hidden from the internal states, so they have to be inferred; they are opaque to the internal states. And as soon as you say it like that, you're naturally talking about systems that can effectively engage in planning as inference. They have to have plans in their heads in order to act, which corresponds to the
dreaming. And from the point of view of active inference, you then ask: what are the objective functions, or the principles of action, that would apply to these kinds of plans? When you work it through, one important component of the imperatives for the actions in question is a drive to resolve uncertainty. So we come back to this notion of the child soliciting proprioceptive and exteroceptive sensations that enable it to resolve uncertainty, and this is exactly the same principle behind optimal Bayesian design: we design experiments that are going to resolve uncertainty to the greatest extent possible under our current internal models, or hypotheses, about how the world works. That brings me to the third great point, about stress, and the really important place of stress and uncertainty and angst, of not knowing what to do next or not knowing what's going to work, in driving behavior. Because if a large chunk of the imperative for planned action is to maximize information gain, or minimize expected surprise, or reduce uncertainty, then you can easily see why it is situations of uncertainty and stress that will necessarily cause the most epistemic responses, in order to drive the system, or in this instance the agent, to resolve that uncertainty. The final point, and this is basically paraphrasing what Mike just said: if the system sees itself behaving more in times of greater uncertainty, it may now learn that, and have the idea, oh, I am behaving as if I am the kind of system that responds more when under stress. Then you become aware of that, and you can get into the psychiatric or psychological aspects of stress. If you don't, you may just show a sort of learned-helplessness-like response. But I think the two fundamental mechanisms, or imperatives, that are being
fulfilled here, in accord with basic conservation laws or principles of least action from the classical perspective, account for exactly the same kind of behavior. I think the interesting thing about stress and uncertainty, though, is that we're not talking about the content of implicit belief structures but about the second-order statistics, the uncertainty. And that, I think, brings a whole new level of analysis to the table, because once you talk about uncertainty, and representations of uncertainty in, say, internal states, then from a psychological perspective you can start to talk about attention and consciousness. From the point of view of an engineer, we're talking about things like Kalman gain, and getting the estimates of noise levels correct. And let me ask: from a quantum-theoretic point of view, does uncertainty play into that? Can you have a stressed quantum formulation? I'll put that out there, and I'll stop talking now. Yeah, just to answer that question: if one thinks about quantum theory with a Bayesian view of probability, with a kind of subjectivist notion of probability, a point of view that was really pioneered by Chris Fuchs, and that David Mermin, for example, has now taken up and written some very good things about, then this notion of uncertainty reduction becomes the explanation for why you do experiments and build theories, and it all becomes very well aligned, I think, with the idea of active inference. In fact, we wrote into that paper, at the very end, some remarks about this QBist perspective on quantum theory becoming a result, not an interpretation, when we think of quantum theory as, in a sense, a way of reformulating the idea of active inference. So I think they're very consistent. A stressed quantum system is just a
system that gets observational outcomes that it doesn't expect. So we have to think of such systems as having generative models, and thinking about the generative model of an electron is a bit of a stretch, because the electron is only characterized by mass, charge, and angular momentum. It can't really have expectations about much. An electron can have no expectations about the external electric field, for example; it can detect it, it can respond to it, but it can't predict it. And if we think about something like an electron, we can focus in on this idea that having a generative model requires having enough memory to keep a record of at least the most recent observation. If you want to have a good generative model, then you had better have enough memory to keep a record of quite a number of observations, and it's this memory resource that really simple things like electrons don't have. So they're not great predictors, because they don't have the memory that being a great predictor requires. And as both Carl and Mike were saying, in a sense you have to have memory to be able to feel stress: you have to know that your predictions were wrong, so you have to remember what your predictions were. So memory becomes a really key component of the theory once you put it into this language, because the language, in a sense, forces you to lay out assumptions about what computational resources are being required. That wasn't very coherent, but anyway, I just wanted to stress the role of memory here. Perfect, thank you. Carl? Yeah, just to reinforce and endorse that last point, and to tell exactly the same story in a much more pragmatic way, from the point of view of a statistician. I heard Chris say, basically, that if I want to go beyond being a little electron or a thermostat, and I now want to infer, for example, how noisy my sensors are, my temperature sensors if I'm a
thermostat, that will require me to become a little statistician. And what does that mean? Well, basically, I'm going to be estimating the standard error, or the experimental variance. There's something quite important about that, which comes back exactly to computational resources and memory. If you do a simple t-test, as a statistician, on some experimental data, you necessarily have to acquire lots of data points and store them, and that is just the degrees of freedom associated with your t-test or your F-test. The degrees of freedom score the number of data points that you are able to remember, and you need a sufficient number in order to get a precise estimate of the uncertainty. So the degrees of freedom in classical statistics are literally a statistic that scores your confidence in the estimate of the standard error, which is pooled over multiple observations, which is an attempt to estimate, on average: if I observe in this kind of way, or I ask my questions, or I write to my holographic screen, or I solicit these sensory states, what, on average, would the uncertainty associated with it be? So I think it's a really important point about computational resources: the degrees of freedom, literally in the sense of the degrees of freedom associated with your F-test, are the degrees of freedom you have available, the computing power, to actually get up to these second-order inference machines, these takes on second-order inference. And I think that is particularly important when it comes back to Mike's observation about stress and uncertainty, and these higher-order ways of making sense of the world, but in this instance making sense of sensations that are accrued over time: not the content, but the reliability, or the precision, of that content. That is a really important point. Awesome, so that kind of... oh, Chris, go ahead. If I could just pick up on this and take it in yet another
highly related direction: it's just to point out that all of these resources are energetically expensive. If I'm going to devote space in my state space to recording memories, then that's only useful if I can defend those memories against entropy, if I can keep them from decaying. So I've got to consume energy to write the memories, I've got to consume energy to maintain the memories, and then I have to consume energy to read the memories, and consume energy to compare what I've read with what I see with my current event sensors. So as we start adding these computational resources to systems, the energy budget goes way up, and that's another thing to keep in mind as we think about these in terms of embodied forms, whether they're robots or organisms: these entities are extracting the free energy they need to run their computations and maintain their memories and all that from the environment. So here's another input that, in a sense, is not sensory, but is still having to flow through the Markov blanket, through the boundary between the system and its environment, and a lot of the system's actions on its environment, its expenditures of its own internal energy, are to acquire this free energy. You know, we go out and shop for groceries, make breakfast, or whatever; that's stuff we have to do to keep all these processes running. So it ties metabolism and cognition together in a way that they aren't often tied, but that they have to be, to make sense of what's going on. Great. So just on that point, something that we had written down as a question from last time, in terms of an electron having a generative model, and discussing agency: we had talked about free choice, and it was brought up in the paper several times, both in terms of any physical system having free choice, and that if one physical system has free choice, they all have free choice; they generate, like, their noise
generation; free choice generates noise in both classical and quantum systems. So I was just wondering if you could say a few words about the difference in free choice between an electron and, say, a human, or an animal, or even a cell. What is that difference, and is it also related to memory? Is that a question for me? A general question, for anyone. Well, I'll make a couple of comments. One: remember, in talking about the canonical entanglement experiment, part of the description is that Alice and Bob can alter the directions of their detectors however they want to, at random, independently, whatever you want to call it, during the experiment. That's the free choice assumption, or what is called in physics the free choice assumption, and it is effectively an assumption of independence: it means that there is not some common cause in the past that determines what they're going to do as they operate their detectors. One can reformulate quantum theory in terms of what's called superdeterminism, and superdeterminism is the idea that there is some common cause in the past that determines how experimenters are going to do their experiments, and in particular determines how Alice and Bob are going to rotate their detectors in this EPR experiment. Superdeterminism is kind of the ultimate non-local hidden variable: it allows you to predict exactly what entangled pairs are going to do, and in fact it renders entanglement a classical effect. It just says these kinds of super-classical correlations are present because there was no independence to begin with; everything was determined from the very beginning. And if you think about it, as mentioned last time, Newtonian physics as formulated by Laplace is a superdeterministic system: anything that happens anywhere in the universe instantly affects what's going on everywhere else in the universe, and
the most current formulation of that is Bohmian quantum mechanics, where the motion of any particle depends instantaneously on the motion of every other particle in the universe. That's how Newtonian gravity worked, before Einstein imposed locality by making the speed of light finite. So all of these things are interconnected: assumptions about spacetime, assumptions about the speed of communication, assumptions about superdeterminism, and assumptions about free choice. It's that cluster of assumptions that the so-called free-will theorems in physics are about; there are now a couple of them, due to Conway and Kochen. What those theorems show is that, from a formal perspective in physics, if you assume that some system has free choice, so if you assume that some system is not subject to superdeterminism, then you have to assume that all systems have free choice, in the sense that all systems are not subject to superdeterminism. You can't limit superdeterminism to some little piece of the universe and say it only applies here, it doesn't apply anywhere else, and in particular it doesn't apply to me, I have free choice even though nothing else does. That's mathematically inconsistent. So that's what the free choice assumption means in a strict physical context. Thank you, very cool. And did you have a question? I was just going to ask a little bit more about degrees of entanglement, and whether that's different in terms of the observer's degrees versus the degrees in the mechanics of quantum theory. I think you've answered that to some extent, but just the way that varying degrees of entanglement can be thought about, and whether that maybe ties into how the Hilbert space is thought about at the same time. Yeah, this question touches on why we do everything in the paper from the point of view of a bipartite decomposition. We always decompose into a
system and its environment in the paper, and there are two reasons for that: one is to keep it simple, and the other is to enforce the Markov blanket condition. It's entirely commonplace to do physics in an open-systems framework where we talk about two systems, we can call them Alice and Bob again, that are embedded in a common environment. In that case, whenever you have a tripartite or greater, some sort of multi-particle decomposition of the state space, you can talk about degrees of entanglement between different systems; you can cut up the state space any way you want to and ask about the entanglement entropy of some component of it, and you get these ideas of partial, or what is called non-monogamous, entanglement. That's all well and good. It's mathematically complicated, it's conceptually complicated, but in a sense it deeply violates the Markov blanket condition, because in point of fact each of us is an observer, and our goal is to say what the world looks like to an observer. From Alice's point of view, Bob is a decomposition of her Markov blanket: Alice has to actively disambiguate her incoming signals into signals that she attributes to Bob as an entity and signals that she attributes to the rest of the environment as an entity. That's an active cognitive process on Alice's part; that's how the Markov blanket condition, in a sense, forces us to think. So the Markov blanket condition itself makes us take this idea of a bipartite decomposition seriously. And again, you can think of this in terms of implicit assumptions about resources. If you think in open-systems terms, and so you think of Alice and Bob as ontic entities that are separate from each other by a priori assumption, and separate from their environments by a priori assumption, then you systematically underestimate the amount of cognition that Alice and Bob have to be doing, and
so you systematically underestimate their energy consumption, and so you systematically underestimate their uncontrolled effect on the environment, i.e. generation of waste heat, acquisition of free-energy resources, and all of that. So it's not just a philosophical difference; it's a difference that leads you to make genuinely different predictions about things like metabolic load. That's why we do things in this bipartite framework: to respect the Markov blanket condition and to keep our assumptions about energy flow explicit. Can we take that to the later sections of the paper about biological cognition? What are the implications of this for biological systems? Yeah, Carl? Well, just to pursue that thought experiment: you wanted to use active inference to distinguish Alice and Bob observing a pair of electrons, remembering that the pair of electrons is one thing. So you've now got a tripartite setup, with three Markov blankets in play, and many of the issues are in play too, for example the superdeterminism that allows some assumptions about the QRF alignment between Bob and Alice, and also the discussion of how Alice has to contextualize sensations from Bob under a belief, or an internal model, that Bob is indeed another Markov blanket, and possibly a Markov blanket very much like Alice.
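As a short aside on the "degrees of entanglement" and "entanglement entropy" mentioned above, the bipartite case is easy to compute numerically. The sketch below (my own illustration, not code from the paper; it assumes NumPy is available) computes the von Neumann entropy of one subsystem of a two-qubit pure state via its Schmidt coefficients: a maximally entangled Bell state carries one full bit of entanglement entropy, while a product state carries none.

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy (in bits) of subsystem A for a bipartite pure state psi."""
    # Reshape the state vector into a dim_a x dim_b matrix of amplitudes
    m = np.asarray(psi, dtype=float).reshape(dim_a, dim_b)
    # Singular values are the Schmidt coefficients; their squares are the
    # eigenvalues of the reduced density matrix rho_A
    schmidt = np.linalg.svd(m, compute_uv=False)
    p = schmidt ** 2
    p = p[p > 1e-12]  # drop numerical zeros before taking the log
    # max() clamps a possible -0.0 to 0.0
    return max(0.0, float(-np.sum(p * np.log2(p))))

# Maximally entangled Bell state (|00> + |11>)/sqrt(2): one bit of entanglement
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(round(entanglement_entropy(bell, 2, 2), 6))     # -> 1.0

# Product state |00>: no entanglement, zero entropy
product = np.array([1.0, 0.0, 0.0, 0.0])
print(round(entanglement_entropy(product, 2, 2), 6))  # -> 0.0
```

For tripartite and larger decompositions, the same partial-trace idea applies to any cut of the state space, which is exactly where the "non-monogamous" partial-entanglement bookkeeping Chris alludes to becomes complicated.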
What you are saying is that two of these particles, Bob and Alice, have aligned QRFs that could have been aligned historically under superdeterminism, which I'd never heard of before, but I like the word, and that may be a prerequisite for understanding the experiments we were taken through previously, when looking at the third Markov blanket, which would be the electron, or the two electrons, but for simplicity. So that, from a biological perspective, tells you something quite interesting, in the sense that it starts to get to the following: if it's the case that you can basically carve up a bunch of states, not in a bipartite way but in a multipartite way, and that carving is done by inserting Markov blankets to define the partition, then every subset of that space, if you like, now becomes internal to its own Markov blanket. So there are now no external states; all we have in play are a set of internal states, each equipped with their holographic screen, or Markov blanket states, that they are exchanging with other internal states. Then there are some really interesting questions about how that system will evolve. From the point of view of the discussion we've been having, it's going to evolve, under the principle of unitarity, towards entanglement. From a classical perspective, if you allow me, it's going to tend to a state of generalized synchronization, where everything collapses onto the same synchronization manifold and everything is indistinguishable: there's perfect mutual predictability, a communication of an elemental and fundamental sort. Everything is basically singing from exactly the same hymn sheet. That's the natural way of things; the free energy principle is just one way of describing that natural tendency. What would that look like cognitively? Well, it would look as if the Markov blankets were trying to learn about each other, and to the extent that they can act on each other, they're going to try to solicit the kinds of
outcomes that will enable them to learn about each other. So we're now going to get a situation that is driven by the imperative towards mutual predictability and entanglement. That's an illusion; we know all that is actually happening is that the system is becoming entangled, but it will look as if all of these separate Markov blankets, or sets of internal states, are aligning their QRFs in a mutually compatible way, so they can exchange with and predict each other. And if there's an odd man out, if you like, namely the pair of electrons, then there will be an asymmetry, and there will be graded degrees of entanglement at multiple levels. For example, if they're both observing a thermostat, then the thermostat's not going to be very good at learning how to align its QRF with Bob and Alice, but they're all going to make the best job of it. Ultimately, of course, with good cultural niche construction, they'll build better thermostats that become little robots, and then they can all live happily and communicate. So I think the cognitive thing here comes back again to learning to live in your world as an apparent expression of the inevitable progression to entanglement, whereby we try to learn about each other under the plausible hypothesis that you are like me. It's not necessarily true, but it's one hypothesis you can bring to the table; you never know whether it's true or not, but it's certainly one hypothesis that would work very nicely for Alice and Bob if they can develop a common language. So this notion of aligning QRFs between two blanketed systems, particles that are sufficiently similar, then simply translates into learning to share the same narrative, to share the same language, in order to render everything mutually predictable, so I know exactly what you're going to say next and you know exactly what I'm going to say next, and we come now to this asymptotic limit of zero prediction error, zero free energy, and complete entanglement, and that
would be the objective; we'd have no Ukraines or Brexits or anything if we could get there. But that would be the other direction of travel, from a biological perspective. Just to add: there are simulations of this from a purely classical perspective. What you normally do is start off with two systems that have strange attractors, usually a Lorenz attractor, so their dynamics are generated such that their autonomous, or active, states create stuff out there that has a chaotic aspect. I say that explicitly because the notion of free will and choice in the classical domain usually reduces to sensitivity to initial conditions, which strikes me as a homologue of the superdeterminism game: can you go right back to the beginning and explain everything? In classical dynamical systems there are certain situations where you can't, because you get sensitivity to initial conditions, so that would leave space for free will and choice, certainly at one level. But sorry, I digress. So what you do is you basically put two chaotic systems with sensitivity to initial conditions together. They're not identical in the first instance, but if they can exchange sensory and active states, or they can exchange across a shared holographic screen, so we're now back to the bipartite situation, then they will naturally, and if you think about it, they can't do anything else, they will naturally come to a state of entanglement, i.e.
generalized synchrony. And if you allow the parameters of the equations of motion that underwrite these chaotic dynamics to also maximize mutual predictability and minimize free energy, then they will actually come to show identical synchronization, because they will learn to become like each other, just like Mike's robots, but in this instance learning about the other simply to make things predictable. Then you will have a shared narrative, and in principle you should be able to evince a common language. I mention that because I know Chris wanted to talk about language; we wanted to talk about how quantum physics is basically a description of communication and language. I don't think we're going to have time to do that, but I thought I'd slip that in anyway. Chris, did you want to say a few words on quantum and language? Well, that was lovely, Carl, and I like the idea of this kind of simulation leading to generalized synchrony. I think it's interesting, from this perspective, that we always divide our environment into entities like us, and then at least one entity that's radically unlike us, which we call the shared environment, or the open environment, or the general environment, or something like that. So we have this sense of a social grouping of similar entities that are commonly exposed to this vastly dissimilar entity with which we don't share a common language, and which we're not very good at predicting, and so on. So we always have this kind of open-systems point of view, and one of the functions of this vastly different system with which we don't share a common language is that it's our free-energy source and sink: it's where we put our waste heat, and it's where we get our free energy for doing computation. So I think it's interesting to think of this situation of local synchrony, or local entanglement, or a local language community, local shared predictability, embedded in this unpredictable and, in a sense, in principle
chaotic, because it's the waste heat dump. We've assumed a priori that we can't understand it, because it's where we're putting all of the thermodynamic entropy that we're creating. We put ourselves in a conceptual bind almost automatically by being systems that have to generate this entropy and put it somewhere. So I like this classical-to-quantum correspondence very much, and I think it works very nicely. Awesome. Mike? Yeah, I want to float an idea which is literally just a few days old, so it may be complete nonsense, but I'll float it anyway because I think it's relevant. It occurs to me that there's this binary distinction: there are agents, some of them like me, some of them different, maybe competitors, whatever, but there are some agents that I can communicate with, or can receive influences or signals from; and then there's this environment, which is something that we assume (some cultures, of course, don't assume this) has very low or zero agency, meaning that it's just this sort of dumb, purposeless universe, and it's on us to figure out what it's doing. So it occurs to me that that distinction maybe shouldn't be binary, and maybe what we want to be thinking about is: as an agent, when you are receiving something from what you think is the outside, you might want to estimate what the agency is on the other end. And you might want to do this for the following reason. I started thinking about this by imagining what it would be like to be an internal partition, a chunk, of a giant neural net being trained. If that's you, you live in a very mindful universe. You could catch on to the fact that, you know what, I'm being trained for something; there are bigger patterns here, and of course it moves in inscrutable ways
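Mike's suggestion that an agent should estimate the agency on the other end of its inputs can be caricatured in a few lines. This is purely an illustrative sketch, not anything from the paper or the discussion; the setup, numbers, and the bare-correlation test are all assumptions. The idea: feedback that is contingent on your own actions is evidence of a trainer; feedback that is statistically independent of them is evidence of a "dumb" environment.

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 binary actions taken by the agent.
actions = rng.integers(0, 2, size=200)

# Scenario A: feedback is contingent on the actions -- something on the
# other end is "training" the agent by rewarding action 1.
feedback_agent = (actions == 1).astype(float) * 0.9 + rng.normal(0.0, 0.1, size=200)

# Scenario B: feedback is statistically independent of the actions --
# a purposeless environment.
feedback_world = rng.normal(0.45, 0.3, size=200)

def contingency(a, f):
    """Crude agency estimate: how strongly does feedback track my actions?"""
    return abs(np.corrcoef(a, f)[0, 1])

print(contingency(actions, feedback_agent))  # high: evidence of a trainer
print(contingency(actions, feedback_world))  # near zero: likely just noise
```

A real agent would presumably do full model comparison (agent model versus environment model) rather than a bare correlation, but the point survives: action-contingent feedback is evidence that someone is on the other end, and that their objective may not be yours.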
and everything, because I can't understand the whole pattern of what's going on, but I can definitely tell that it's rewarding me and punishing me for specific behaviors. And of course some people do feel that way about the universe at large: that there are patterns that are not just, quote unquote, physics. And this extends to supervised learning and unsupervised learning, which again we think of as kind of binary things: is there some sort of intelligence on the other end telling you what's right and wrong, or is it just on you to abstract patterns from the environment? The reason I think this matters is that if you are trying to learn from the environment, meaning there's only one agent involved, and that's you, one agent and then this sort of unthinking universe around you, then you are pretty much guaranteed that whatever you can manage to infer will be to your utility; it's on you to learn whatever you can. If there's another agent on the other side and you are being trained, then what are the odds that that agent has your best interests at heart? Maybe, but maybe not. And so if this is some sort of supervised learning, you have to ask yourself: what is it that I'm being trained for, and is it really what I want? There's a whole argument you can imagine evolutionarily: to avoid being hijacked by parasites, by competitors, by whoever, you don't really want to be too trainable. You want to learn, but you don't really want to be trained. So maybe the idea is: what if, and this is where I really haven't talked to anybody about this yet, so this could be very amateurish stuff, but what if something like backpropagation, or
whatever, what if that literally hurts? What if, as an early creature, having some sort of error function forced back through your channels, as opposed to whatever you were trying to generate yourself, what if that's evolutionarily designed to be unpleasant, and what you're trying to do is avoid that happening? You don't want to be trained; you want to learn on your own terms. And you could imagine the different layers of a feedforward artificial neural network, where the initial input layer, that sort of creature, gets to see the world, quote unquote, as it is; it gets the raw inputs. The next layer, and certainly the layers past that, the ones on the right that are abstracting, are getting all kinds of propaganda filtered to them by the early layers. They don't get to see the real inputs; they get to see whatever the prior layers think they should be seeing. And so maybe if you don't have a flat network like this but, more biologically, a kind of nested, multi-scale system, maybe there's some sort of incentive for the middle layers to try to crawl leftward, so that they have more raw access to the real world and aren't trapped being fed by these other agents. So again, this is all very qualitative stuff, but you get the idea; that's kind of what I was thinking about. Nice. Mike, if you want to chat about this: this is what I was doing for about ten years, unleashing things, getting out of the lab, and getting past the toy model, so perhaps we should chat about this for a bit. Can I just say a couple of things? Chris, first of all, thank you. I feel like I've come away with a volume's worth of coolness from the last two weeks, just listening to you explain these really complicated things to me in ways that I can actually understand. One of the things that I'm still kind of curious about is: once we let things out
of the lab, and we keep them in that variability-retaining space, and we don't want to be in conflict with the basic rules of what quantum information tells us. I want to go back for one second to what Stephen was closing the 40.1 with, and that is: what do we do when we look at things in that sort of relational realm that a lot of indigenous cultures take up and try to find themselves within? I'm leading to this idea that there's a sort of subject-matter generalist that leads to a prediction-matter expert down the road; it was my question to Carl back in June when he was talking to us in the Active Inference Lab. So without getting into any kind of conflict with the quantum aspects of this: what do we take out with us when we go on a bike ride with Chris Fields that's quantum-related, that isn't in conflict with everything we've talked about today, but gives us better confidence around the things that we might encounter and predict? What do you take on your bike, Chris? What's in your quiver? Yeah, good question. Once again, these theoretical frameworks are languages, and their role is to shape our concepts; they are ways of shaping our concepts. And I think the fundamental lesson of quantum theory for us is to not take the boundaries we see as literally as we are encouraged to do in the classical world view. I'll try to distinguish the classical world view from classical physics, because classical physics by itself, again going back to someone like Laplace, is a physics of atoms, which are sort of elemental; it's not a physics of bounded macroscopic objects. The boundaries, even in classical physics, are sort of arbitrary, and I think that's what quantum theory is trying to tell us: that these are boundaries that we draw on our blankets. And I think this
is, in a sense, what the FEP is trying to tell us: that we have to take seriously this notion that we're blanketed entities when we think about what we're interacting with, which is everything but us. Daniel? In our closing minutes, if we could just each give a thought. This has really been a special and very powerful conversation. Mike, with the back-propaganda, an amazing thought experiment there; and the no-boundary and quantum questions, touching on the work of, say, Ken Wilber. I think it's just so powerful: does someone have the whole world in their hands? Is that a good person, is it a bad person or thing, what kind of thing is that? Who's on the other side of this video call, who's on the other side of that other side, what's the bigger picture? It really brings it home that, no, it's not just about electrons; this is something that holds across scales and systems. So I really just wanted to appreciate the paper, and I hope that we continue on this line of development. I'll pass it to you, Stephen. Thanks, Daniel. Yeah, I was going to tie together some of the threads with this challenge of having a false sense of certainty in our culture. It does feel uncomfortable to be trained, but we do it in the West because we're confident that we've got this expert knowledge, and then we suddenly find out we don't know as much as we think we know. So that may be why bottom-up, organic, indigenous ways of being, or ways of sensing, can feel more holistic; I think there might be something quite powerful in that, so I like that thread. I think it ties together with the challenges in practice when we try to train up communities. I've worked in rural Africa, and for communities there, if they don't feel connected to the colonizing narrative, it feels very oppressive. So there are a lot of good points there around where there's coherence, where there's
decoherence, and also where there's certainty and where there's uncertainty. So thanks, this is really helpful, and I'll pass it over to Blu. Well, I've said enough, I think. I'd like to hear from Mike or Carl or Chris about where you're going, what you're taking away, or what your final thoughts are about the discussion we've had over the last couple of weeks. I guess the only thing I'll say is that I really like this idea that it turns on its head the notion that you have a massive amount of mindless stuff and then somehow a little bit of mind shows up at the end. If that's the case, then the scaling problem becomes really interesting: how biology manages to scale these things so that they become synergistic, bigger and bigger, as opposed to just a bigger pile of rocks than the previous pile of rocks. So those mechanisms, in particular in biology. I look at it from cells getting together to be organisms and solving problems in anatomical space, but I think there we actually have some pretty good alternative stories, although very similar, in many ways isomorphic, but alternative to what happens in neuroscience, for trying to understand how that scaling actually works. So I'm just super excited about that, and also about the role of the observer in all of this, and the idea that all of this is based on various observers observing each other. I think that makes for a lot of progress and fewer pseudo-problems when you look at it that way. Carl or Chris, any final thoughts? Well, I'll just thank you guys again for putting this together. I thought this was a fascinating conversation, and if I could throw one more thing into it, I think
conversations like this are evidence that the kind of disciplinary siloization that's been forced onto us by academic tradition is an artifact, and not necessarily all that useful. Thanks so much, Carl; we're going to leave you with the final word. Ah, good. Well, it has to be a thank you, doesn't it, for you lot for putting this together, and for the brilliant, didactic, and also challenging unpacking of some really challenging but, I think, revealing issues, and also for the brilliant questions. My final word will be future-pointing. Just taking up Chris's point that having unsiloed conversations is useful, and thinking about Mike being bold enough to come up with his brand-new hypothesis that's just two days old: I thought it was really interesting, and a nice example of putting things into this kind of discussion that we're all going to go away and think about. My immediate thought was: how on earth does the second layer in a variational autoencoder, or something like that, do that? And of course it can act, if it's a recurrent neural network and it can select which of the lowest-level neurons, the non-hidden-layer units, to listen to. And of course we come back again exactly to attention, and selecting those sources of precise information, where you've got a kind of internal action. So an interesting, possibly silly thought, but a nice example of how talking together can excite interesting and potentially useful, or possibly silly, thoughts. But again, thank you. Wonderful. I had a great time observing all of you, and I hope to get to do more of it in the future. Yep, we can have a .3 anytime; consider it 40-point-something in the inter-measurement interval, while our IGUS is digesting. Talk to you all soon. Thanks again. Thanks very much, everyone. Thank you. Bye.