 Hello, everyone. Thank you very much for joining. This is the Active Inference Lab, and you are listening to the Active Inference Livestream. Welcome to the Active Inference Lab, everyone. We are an experiment in online team communication, learning, and practice related to Active Inference. You can find us on our website, Twitter, email, YouTube, Discord, or Keybase team. This is a recorded and archived livestream, so please provide us with feedback so that we can improve on our work. All backgrounds and perspectives are welcome here, and we will follow good conversational livestream etiquette. Today, we're really excited to be discussing the neural correlates of consciousness under the free energy principle with one of the two authors, Wanja Wiese. Today, we're going to be having our first discussion, 16.1, and then next week, we'll be having a follow-up discussion on the same paper. If you have more thoughts or questions arising in the next week, that will be a great time to bring them up. Today in Active Inference Stream 16.1, our goal is to be learning, discussing, and hearing from everyone, and that is going to be facilitated by this awesome presentation that we are about to hear from Wanja. For 16.1, we're going to have a presentation, and then we're going to just pick up with the participatory discussion. So Wanja, thanks again for joining us, and please take it away. Thank you for having me. It's a great pleasure to have the opportunity to present this paper here and discuss it with you. I'm just going to share my slides. I think you can see the- Looks great. First, I'd like to make a short advertisement for my journal. Together with Sascha Fink and Jennifer Windt, I founded Philosophy and the Mind Sciences, an open access journal for work at the intersection of philosophy, neuroscience, and psychology. We recently published a special issue on the neural correlates of consciousness, which you can find at philosophymindscience.org.
So if you're interested in this topic, you might want to check this special issue out. Now, in this short presentation, I'm going to mention just some key points from the preprint, and this is the structure of the paper. These are just section titles from all sections after the introduction. We're currently revising the paper. We received some very useful reviewer comments, and one of the reviewers pointed out that if you're not already committed to or interested in the free energy principle, the results that we present in this paper may not be that interesting or may not seem that relevant. In part, I think this is a fair comment, because in the first section after the introduction, we review the standard notion of a neural correlate of consciousness, which was provided by David Chalmers in a seminal paper in 2000, and we then mention some challenges for this concept. This motivates moving from the neural correlates of consciousness to the computational correlates. In the next section, we also relate this to the free energy principle. And then it turns out that there are some problems if you want to apply the free energy principle to this kind of research, the main reason being that the free energy principle is not itself a theory of consciousness, and it's not a theory of computational correlates of consciousness. So there are first some obstacles that have to be overcome if you want to apply the free energy principle to research on consciousness, or on neural and computational correlates of consciousness in particular. And if you're not already committed to the free energy principle, or not really interested in it, you may wonder why you should even bother thinking about this if it only creates new problems that you don't get into if you're not dealing with the free energy principle in the first place. So this is, I think, a fair question.
And we're trying to highlight the genuine contributions that we're making, or that we're trying to make, in this paper and the revisions. These mainly relate to the final two sections before the conclusion, in which we talk about computational explanations of consciousness in general and what the free energy principle can contribute to this project. We then try to explain what the specific role of the free energy principle can be here, and we suggest that it may provide unification. So I will take a few minutes to talk about these things in a bit more detail. This is a more detailed overview of the paper. In the section in which we talk about the neural correlates of consciousness, we mention that according to the standard definition, a neural correlate of consciousness is a minimally sufficient neural structure. Then there are three challenges that we mention. One is that what is interesting about the neural activity associated with consciousness may not be that it happens in a particular place in the brain, but rather that it displays a certain dynamics. And this can even be a global dynamics. So it's not what's happening at a place, but what's happening over time, and maybe in the entire brain, or at least in large parts of the brain. Another question is that even if you can map certain types of neural activity to conscious experiences, this mapping can be seemingly arbitrary. So we need to understand why that particular type of activity is associated with consciousness. And a third challenge is that the notion of a neural correlate of consciousness, as defined by Chalmers and as used in much research on NCCs, only applies to usual conditions and does not apply to unusual conditions, such as after brain damage, for instance. Another very unusual condition is provided by potential cases of islands of awareness.
So this is from a recent paper by Tim Bayne, Anil Seth, and Marcello Massimini, in which they ask whether there can be systems that are causally isolated from the body and the environment and still be conscious. Can there be conscious states in systems that neither receive sensory input nor produce motor output? There are some potential candidates for such systems, which include ex cranio brains, hemispherotomy, and cerebral organoids. An ex cranio brain is a brain that has been extracted from the cranium but is kept alive and can still display some non-trivial activity. In hemispherotomy, a large part of the brain is more or less disconnected from the rest of the brain, so neuronal connections to other parts of the brain are cut. And cerebral organoids are neural structures grown in the lab. In all cases, one may wonder: could such systems give rise to activity patterns that are sufficient for consciousness? What we suggest is that computational correlates could provide at least some progress towards an answer, because computational correlates can be applied to different kinds of system. And in fact, the authors of this paper mention complexity measures. There's some evidence that neural activity associated with consciousness displays a high complexity, high algorithmic complexity, and in principle, one could measure the complexity of activity generated in these isolated systems. That could provide some evidence for inferring the presence of consciousness in such systems. Now, complexity is also an issue when it comes to applying the free energy principle; I will talk about that in a minute. One of the things we do in the next section is relate the free energy principle to work on computational correlates of consciousness and computational principles that have been associated with consciousness.
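The complexity measures alluded to here, such as the perturbational complexity index used in empirical consciousness research, typically rely on Lempel-Ziv compressibility as a practical proxy for algorithmic complexity. As a rough illustration, not anyone's actual pipeline, the sketch below uses a simple LZ78-style phrase count on made-up binarized "activity" signals: less compressible signals yield more distinct phrases.

```python
import numpy as np

def lz_phrase_count(binary_string: str) -> int:
    """Count distinct phrases in an LZ78-style parse of a binary string.

    Less compressible (more 'complex') signals yield more phrases.
    """
    phrases, count, current = set(), 0, ""
    for symbol in binary_string:
        current += symbol
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    if current:  # count the unfinished final phrase, if any
        count += 1
    return count

# Illustrative comparison on invented signals of equal length:
rng = np.random.default_rng(0)
periodic = "01" * 500                                    # highly compressible
random_sig = "".join(rng.choice(list("01"), size=1000))  # incompressible
print(lz_phrase_count(periodic), "<", lz_phrase_count(random_sig))
```

In practice one would binarize recorded activity (e.g., by thresholding) before compressing, and normalize the count against surrogate data; the point here is only that compressibility gives an operational handle on "dynamical complexity" that can be computed for any system, isolated or not.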
And in particular, we also discuss the challenge of islands of awareness. This is not just a challenge for neural correlates of consciousness, but also, and especially, a challenge for the free energy principle; it's one of the problems created by trying to apply the free energy principle. The reason is this: the free energy principle is mainly applicable to systems that are in causal interaction with the environment, so systems that are part of an action-perception loop, in which internal states interact with external states, mediated by blanket states, which contain sensory and active states. But if you have an isolated system that is cut off from sensory input and motor output, it's not clear that you can apply the free energy principle to describe what's going on in the system. However, there's some existing work on the free energy principle and dreaming by Allan Hobson and Karl Friston. In that work, they suggest that, well, I'll just go a bit forward. As is well known, I guess, you can formulate the free energy functional as involving an accuracy term and a complexity term. So you can write variational free energy as complexity minus accuracy, or complexity plus inaccuracy. As it turns out, the accuracy term contains a term denoting particular states, which involve blanket states, and the complexity term does not. So the complexity term can be minimized by changing internal states. In previous work, it was suggested that during dreaming, the causal interaction with the environment is attenuated, and so the free energy gradient is mainly driven by the complexity term. So basically, what the brain is doing during dreaming is minimizing complexity. And we suggest that the same holds a fortiori for cases of islands of awareness. Such systems should, at least for some time, minimize free energy by minimizing complexity.
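The decomposition referred to here can be written out explicitly. In a standard rendering (the symbols below are conventional, not necessarily the paper's exact notation), with q the approximate posterior encoded by internal states, eta the external (hidden) states, and s the sensory (blanket) states:

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(\eta)\,\big\|\,p(\eta)\,\big]}_{\text{complexity}}
   \;-\; \underbrace{\mathbb{E}_{q(\eta)}\big[\ln p(s \mid \eta)\big]}_{\text{accuracy}}
```

Only the accuracy term refers to sensory states; the complexity term is a divergence between the posterior and prior beliefs and depends on internal states alone. That is why a system with attenuated or absent sensory exchange can only descend the free energy gradient via the complexity term.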
Now this brings up complexity again, and the question of how this relates to empirical research according to which neural activity associated with consciousness maximizes complexity, or at least displays high complexity. This tension can be resolved by noting that the free energy functional contains a statistical notion of complexity, whereas the complexity measures in consciousness science are measures of dynamical complexity, often based on an approximation to algorithmic complexity. So there are actually two different notions of complexity in play here. According to the free energy principle, islands of awareness should minimize one type of complexity, but this leaves open the question of whether they maximize a different type of complexity. We suggest that it would be useful to develop a notion of dynamical complexity within the framework provided by the free energy principle. What we suggest is to define dynamical complexity, or a measure of dynamical complexity, in terms of the intrinsic information geometry of a system, as opposed to a definition in terms of the extrinsic information geometry. But this is not something I want to dwell on here. So let's go to the next point: computational explanations of consciousness. Well, there's a fundamental problem associated with trying to develop a computational explanation of consciousness, because computations can be performed by vastly different types of system. You can perform computations in the Game of Life, and you can compute mathematical functions by playing Magic: The Gathering. The Game of Life is Turing-complete, and as I recently learned, Magic: The Gathering is Turing-complete as well. But I wouldn't suppose that anything that happens in the Game of Life gives rise to consciousness, or that you can instantiate consciousness by playing Magic: The Gathering in a particular way. So just computing the right mathematical functions in a physical system is not sufficient for consciousness.
In other words, there's a difference between simulating consciousness in a system and actually instantiating consciousness. And the question is: how can we understand the difference between simulating and instantiating consciousness? Here we think that the free energy principle can make a perhaps tiny, but we think relevant, contribution, and the main idea is this. As I already mentioned, this is the causal flow in an action-perception loop according to the free energy principle. Here we have a physical system with internal states, and it's coupled to external states via blanket states, which comprise sensory and active states. So the main causal flow we have in this physical system is between internal states and external states: internal states act on external states via active states, and external states act on internal states via sensory states. Okay, now what happens if we simulate such a system in a digital computer? This is a caricature, or a simple visual model, of a computer with a von Neumann architecture. One of the characteristic features of such a computing device is that it has a distinct memory unit, which is separate from the CPU, or central processing unit. If you simulate a system with this causal flow in such a machine, you'd have to store the values of the variables of external, internal, and blanket states within the memory unit, and in order to compute updates of these variables, you'd have to engage the CPU. So the main causal flow would be between the memory unit and the CPU. Of course, there are also variants of this architecture in which the CPU contains the memory unit, but then the main causal flow would be within the CPU, between the memory unit within the CPU and the rest of the CPU. So it's a different kind of causal flow in this physical system.
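The screening-off property behind this picture, internal states carrying conditional probabilities of external states given blanket states, can be illustrated with a toy numerical sketch. All variable names and coefficients below are invented for illustration (a linear-Gaussian caricature, not the paper's model): external states reach internal states only via a sensory (blanket) state, so conditioning on the blanket decorrelates internal from external.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy causal chain: external -> sensory (blanket) -> internal.
external = rng.normal(size=n)
sensory  = 0.8 * external + 0.2 * rng.normal(size=n)
internal = 0.9 * sensory  + 0.1 * rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(np.corrcoef(internal, external)[0, 1])      # strongly correlated
print(partial_corr(internal, external, sensory))  # near 0: screened off by the blanket
```

The marginal correlation is large, but with the blanket state regressed out the residual correlation is essentially zero. The transcript's point is that a von Neumann simulation can compute exactly these conditional probabilities while its own physical causal flow (memory unit to CPU and back) looks nothing like this mediated loop.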
Although the computations performed by these systems can be the same, in the sense that internal states can be interpreted as conditional probabilities of external states given blanket states. So this, we suggest, is one of the differences between simulating and instantiating consciousness: there's a constraint on the causal flow between the system's internal states and its environment. And of course, this may be different in a computer with a neuromorphic architecture. Then, in the final section, we relate these ideas to our previous work on minimal unifying models of consciousness. I'll just explain this very briefly. In general, what can computational models bring to, or how can they facilitate, an understanding of consciousness? Computational models may serve as a bridge between phenomenal properties of experience and properties of neural activity. They may operationalize theories of consciousness. They may explain teleological functions, that is, cognitive capacities associated with consciousness. These are things that, for instance, active inference models could provide, and the free energy principle, together with such models, can then serve as a unifying framework. And how can it be unifying? Here's just one possibility. I'll start with what Jonathan Birch calls the facilitation hypothesis in a recent paper. There he writes: phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus. This is just a hypothesis, which can be empirically investigated. You can, for instance, look at different non-human animals and see whether non-human animals that have one of the cognitive abilities in that cluster also have the other cognitive abilities, and whether animals that lack one of these abilities also lack the others.
This would then corroborate the hypothesis that there's some mechanism that facilitates or enables the entire cluster of cognitive abilities, and this could be what consciousness does. In addition, one could find theoretical supporting evidence if one builds minimal models of these cognitive abilities and could show that a minimal model of one cognitive ability in that cluster is also a minimal model of the other abilities, or if one could show that there's a common computational mechanism that accounts for all of these cognitive abilities. A hypothetical example, just to illustrate, would be active inference models: if active inference models of such a cluster of abilities required deep temporal processing, or maybe deep counterfactual processing, or something like that. If one would have to posit a generative model with a certain architecture in order to account for these different abilities, then that could be something like a common computational mechanism, which could further corroborate the hypothesis that there's something that jointly facilitates these cognitive abilities. And this could then lead to something like a minimal unifying model. I introduced this term in a paper in Neuroscience of Consciousness. A minimal unifying model specifies only necessary conditions for consciousness. It involves determinable descriptions that can be made more specific, and it unifies existing accounts of consciousness. I would now add that it can also unify cognitive abilities associated with consciousness, not just accounts or theories of consciousness. But we can talk about this in the discussion. The general picture would be that there are these different aspects of the science of consciousness: there are theories of consciousness, and potential teleological functions of consciousness, that is, cognitive capacities associated with consciousness.
And there are mathematical functions that are computed by conscious systems. There's the notion of complexity, and then neural correlates of consciousness, and perhaps minimal unifying models could be useful here, as I've suggested. The free energy principle can provide a way of relating these different notions and different branches of research. For instance, active inference models can be used to develop operationalized theories of consciousness; there's been a recent active inference model of the global neuronal workspace theory of consciousness, and active inference can be used to model cognitive capacities, and so on. Relating these different concepts and notions and these different kinds of work could then also yield insight into why different neuronal structures are associated with consciousness, what role complexity actually plays in consciousness, and many other questions and problems. Okay, so this is the end of my short presentation. Thank you for listening, and I'm very much looking forward to the discussion. Thank you. Thank you for that really awesome presentation. What an awesome way to begin this conversation, and I also really appreciated that you're revising the paper and giving us this glimpse of not just a research field that's in evolution, but a paper itself undergoing changes. For the rest of this Active Stream 16, we'll play it by the usual rules: if anybody wants to just raise their hand, we'll get to everybody who does. Also, we do have the slides, and Blue, who's here, and I had done 16.0, so we had thought through a couple of topics. At this point, I guess we'll open it up to people who want to raise their hand, and you can also just say hello on your first time speaking, and in the absence of hand raising, we'll just continue through these slides.
So thanks again, Wanja, and let's go to Stephen first, and then anyone else who raises their hand. Yeah, I'd just like to say thanks, very interesting to hear the developments on the paper. I was just going to ask about, well, I suppose there's a lot of things that come up, but I'm Stephen, by the way; I'm based in Toronto, but I'm working on a practice-based PhD through the UK. One thing that came up on your slides was interesting: when you were looking at the potential for there to be some sort of memory that relates to those states, I was curious whether that memory itself might have to be connected to sensory states, or whether it might be something that stands alone, and if so, whereabouts you might think it might be in the brain, like is it a hippocampus-type question, a grid-cell-type question, or something like that. So thank you. Thank you, that's a very interesting question. I think it could be useful to show my slides again. Okay, so you're referring to this picture, right? The idea would be that in a system that simulates a system that is in an action-perception loop, there would be some internal states that encode the internal states of the system being simulated. So this digital computer would have some physical states that encode the internal states of the simulated system. Coming to the first part of your question, there would have to be some interaction with sensory input, but what is sensory input? In principle, one could say, well, any of the other physical states of this digital computer could serve as sensory input to these internal states, right? And the question is: would the causal relations between these physical states that encode internal and sensory states of the simulated system be similar to the causal relations we see here? And I would suggest no.
And then there's another aspect you mentioned in your question, which is whether there could be some memory unit in the brain. Perhaps you can elaborate a little on this: were you thinking about simulations that are going on in the brain? Yeah, I was kind of thinking, or curious, whether you could have maybe two parallel things: you've got the real-time, real sensory input being experienced, and then there's an update to this kind of memory, but that memory, rather than being a representation, could be something different. And I'm curious whether that memory itself then connects back to the actual sensory organs. So it's not in some purely computed form, but somehow the sense of smell, the multi-sensory integration, is still happening at some level, maybe in a different way, which then helps in dreaming and stuff like that. Yeah, so this is really interesting. I think it could be useful to distinguish different types of simulation that could occur in the brain. One would maybe be involved in imagining certain actual situations, and another would be involved in dreaming. And you're suggesting that there must be some memories somewhere, which would be states that give rise to these conscious experiences, but that are not directly connected to sensory input. Is that what you're suggesting? Yeah, maybe not directly connected, but they're connected to the sensory organs. So it's not a completely separate thing; in a way, I'm saying it needs to keep some connection to the sensory, what we call inputs, I suppose, but in a different way. So it's not completely offline, if that makes sense, but it might be online in a different way, which would then still allow you to have a dynamical system in both cases, and a plausible route for how they came about. So yeah, I'm curious about that.
So basically, whatever these priors are, they're still somehow engaged with the sensory organs, with being alive and conscious. Thanks, Stephen. Wanja, go for a response there, and then we'll continue with Adam and Marco and then anyone else. Okay, so one thing that comes to mind here is that it may be useful to talk about nested Markov blankets. You'd have the brain with sensory states and active states, and states of the brain can be regarded as internal states. But within the brain, there would then be another Markov blanket, and internal states within these internal states, which would then be directly associated with simulations, or with dreaming, or other kinds of experience. So there would be an indirect connection to sensory organs, but, as it were, the sensory states that immediately affect these internal states associated with, say, dreaming would be within the brain, and not directly connected to sensory organs. Cool, thanks a lot. Adam, and then Marco. Can you hear me? Yep. Hi, everyone. Hi, Wanja, thanks for that. This is really fascinating work, and it's disturbing me in that I've recently attempted to present a minimal unifying model using the free energy principle as an overarching framework, and I'm wondering whether I need to backpedal on some of the claims I made. I tried to bring together integrated information theory and global neuronal workspace theory, and in doing this, one of the things I pushed back against in IIT is, sorry, I've got myself a little breathless. One claim I was trying to push back on is that you needed a neuromorphic architecture to have consciousness and that you couldn't do it on a von Neumann one. I think this issue is probably largely moot. Moot? Moot. Just given what it would probably take to get enough compute, we probably need neuromorphics just for energy efficiency.
But I'm wondering, with respect to that slide and in terms of causation, whether by the time we commit to a coarse-graining that involves those entities, we might actually need to coarse-grain over the workings of the von Neumann architecture and go into the virtual machine, and the causation is in terms of, like, Pearlian causation: we have to commit to an ontology, and then once we have our directed graphical model, we're doing the operations, and that's our causation. So I'm wondering if there might be a mixing in that figure. I don't know if that makes sense. But I guess, in relation to nested Markov blankets and Stephen's question also, I'm wondering if this might end up taking biological consciousness and potentially putting it in the same kind of epiphenomenal boat, in terms of degrees of how much directness allows us to treat sensorimotor coupling as actually being real and causal. So I'm wondering, if we do that, do we end up in the same boat of not ascribing consciousness to biological systems either, even though they're using massively parallel operations and in their very dynamics have this dual-aspect character? I do not know. Cool, Wanja, go for it, and then Marco. Thank you so much. By the way, Adam, thanks for attending this session. You had some immediate comments on Twitter when I posted a link to the preprint, and I didn't have time to respond to them, so I'm really grateful to you for attending this so we can actually discuss the points that you raise. So, how real are virtual machines? I think there are two issues that have to be separated. There's one thing I'm sure about, and then there's one thing that I'm less confident about.
The thing I'm sure about is this: if you have a coarse-grained description of a system, abstracting away from certain details, that doesn't mean that you end up describing things that are not as real as the things happening at a more detailed level. If you're describing a brain in terms of what's going on in individual neurons, and describe that in a lot of detail, that's a more specific description than if you're just considering the average activity of neuronal populations. But that doesn't mean that what's going on at the population level is less real, or that this average activity cannot have causal powers, or that relations you would describe as causal relations at this more coarse-grained level are not really causal relations, or something like that. So I think we agree on this part. And then the question is, at such a more coarse-grained level, if I understand you correctly: if this instantiates a virtual machine, doesn't this mean that the virtual machine is as real as the physical machine that instantiates it? And I actually don't know. But there are some things that follow, if you accept that virtual machines can be conscious, that are really, really counterintuitive and disturbing. I mean, if you think of the series Black Mirror, there are these episodes in which a person is uploaded to a computer, and then there's a virtual version of that person in a virtual environment, and that virtual person is being tortured for, I don't know, weeks or even months. The assumption is that this virtual entity is conscious, and this happens over the course of, I don't know, minutes in the external, real world. So if a virtual machine can be conscious, then conscious experiences that we have over the course of several weeks or months could take place within minutes, or maybe even within seconds.
And that's really strange. But you could also run these computational processes backwards, and then you'd have backwards conscious experiences; or you could stop them at any moment and then continue them; or you could have physically highly distributed systems, connected by some device which enables this collection of physically scattered systems to perform some computations. If this can then give rise to consciousness, you'd have a very strange conscious entity, and other things like that. And then there are these other thought experiments, such as Ned Block's Chinese nation thought experiment, in which the Chinese nation instantiates consciousness, or at least the functional states that may be associated with consciousness. So there are these really counterintuitive consequences, and this is the main motivation for me for trying to make a distinction between simulating consciousness and instantiating consciousness. Maybe there are some virtual machines that are conscious, but my intuition is that they would have to fulfill very specific conditions. And one of these conditions may be that the interaction a virtual machine has with its physical environment must be the same type of interaction that the simulated system has with its simulated environment. That's what I try to illustrate here. But of course, you could argue: well, this is just motivated reasoning, because you don't want to accept these counterintuitive consequences, so why should one accept this requirement? Thanks, it's an interesting style of argument, the argument from consequences with consciousness: well, that can't be correct, because then this would be conscious, and we know that's not true. It's like, wait, do we know that? So Marco, and then Blue. Another question, another comment, about nested Markov blankets. So you suggested that maybe this requirement is too strong: maybe it's fine if we apply it to digital computers.
But if we then apply it to actual biological organisms that we would regard as conscious, the result would be that they are not really conscious at all, so the requirement is too strong. I guess that's the question you're raising. So one would have to flesh this out in a bit more detail, but one way to respond would be to say: look, the free energy principle makes some idealizing assumptions, and the very notion of a Markov blanket as used in the free energy principle is, as I think many know by now, slightly different from the notion proposed by Judea Pearl. Pearl's Markov blanket is just a feature of a model, of certain causal graphs, whereas the Markov blanket used in the free energy principle is an ontological or metaphysical notion, because it's meant to be actually part of a physical system. And one could question whether it really exists, or whether actual biological organisms that are conscious actually fulfill these idealizing assumptions made by the free energy principle. For this reason, if we derive a constraint on what it means to be a conscious system, as opposed to simulating a conscious system, from the free energy principle, we're actually requiring that a system fulfills these idealizing assumptions, which may be features of a model of the system but are not actually satisfied or instantiated by the system itself. Yeah, I think there's an ongoing discussion, or there will be an ongoing discussion, about the notion of a Markov blanket and whether it's justified to treat it as an ontological notion, as a feature of an actual physical system. And we are committed to this strong reading in the paper; we're using the notion of a Markov blanket in the ontological sense, not just in the epistemic sense. So this may be a problem. Cool, thank you for that addition.
So Marco, then Blue. Thanks, thanks for the presentation and the great paper, I was really excited reading it, especially the use of the FEP, and I'm biased because it matches a lot of intuitions I had. And it's also very nice, in the spirit of pluralism, this approach of really embracing the unifying quality or power of the FEP and focusing on unification rather than on an overly saturated claim or proposition. And these are going to be some scattered thoughts, but I'm trying to also touch on what Steven said. I'm not sure if I understood what was meant, but I was reminded of amortization. So Sam Gershman has some nice papers on this. He has a nice paper called Remembrance of Inferences Past. So the parameters or the components or the factors of the strategies used in inference could also be remembered. And so maybe that's what I thought you meant when you said something related to or associated with sensory states but not actually sensory states themselves: maybe in the physical substrate that enacts these sensory inferences there might be parts or parameters or factors that can be remembered, and these are the ones that can also be reactivated despite the absence of sensory input. But anyways, if that's not what you meant then let's talk about it later. About your conversation with Adam: I'm not sure if you meant virtual machines as popularized in actual computers and software.
For me, it's very important to distinguish mere embeddedness, as with normal virtual machines, from an embeddedness where the relation between the embedded subsystem and the larger system is one of active, systematic, mutualistic influence, where these are mutually contextualizing and there is a kind of cybernetic relation. That's not the case with normal virtual machines, especially in the normal architectures, because these are more sterile channels, as we'd like to say. So as for my own points and reading: I very much appreciate that you emphasize the notion of dynamical context as opposed to mere complexity, very elegantly avoiding the many conceptual traps that tend to be lurking around these points. But as a suggestion or question: you suggest that the notion of accuracy kind of falls away in the variational free energy for islands of awareness, but I would actually argue that accuracy is still relevant, only it's a reflexive kind of accuracy. Because of the embeddedness of the Markov blankets, or the embeddedness of these ecosystems, you could say that for some subsystems, their concerns of accuracy are relative to the other subsystems of the entire system. And you see this in dreaming, right? The different subsystems of the brain need to update themselves, each in relation to each other. And so for example, I think Tononi has some work, I'm not sure if it's published yet, showing that during dreaming the epicenters of activity radiating outwards seem to correlate with the sites of activation during the waking experience where something new is learned. And under the FEP, this becomes extremely clear. Well, it suggests an intuitive image for why.
Because if a local kind of improvement, a local updating, occurred in that model, but all the subsystems of the ecosystem are mutually contingent or contextualized by each other, then it would be useful to kind of extend the fruits outward, to share the fruits of an experience with the other systems. And you also then lose the necessity of a global coherence of the ecosystem, because there is no unified external world, right? All the contexts become local. It's simply a constellation of local contexts, which in my opinion is an elegant way to look at this, because it seems parallel to the phenomenology of dreaming, which argues for your paper's motivation, because this tentative computational explanation immediately has a nice pathway to the phenomenological aspects to be explained. So yes, I would just really like to hear your thoughts about whether accuracy should really be dropped, or whether we should see accuracy as still being a part of the free energy at the lower levels. More specifically, it's as if the complexity at one level is partially constituted by the accuracy improvements at the lower levels, right? And especially because you mentioned embeddedness, but I don't see it as well integrated in the paper, and I think it would bolster your claims much more, especially because of its relations to other consciousness research, especially the 4E camp and their whole emphasis on embeddedness. And this also relates a bit. Well, put another way, complexity reduction can be seen as a configurational accuracy. So if we assume that the act of choosing the configuration is itself a question of policy selection, the action of configuring yourself in relation to the context, then complexity can be seen as a configurative accuracy, right? And so now you get these interesting things: a decomposition of complexity reduction in terms of intrinsic complexity reduction.
So kind of like the Bayesian model reduction in the active inference and curiosity paper, and an extrinsic model reduction, a model reduction in relation to the contextual systems it stands in relation to. Anyways, I'll stop here. I think I'm rambling on too much. So yeah, I'd love to hear about it. Thank you so much for your comments and questions. Yeah, so this point about embedded systems and nested Markov blankets, that's definitely something that I have to think about a bit more, and it will be relevant. So when you talked about complexity, and how within an embedded system within the brain there could still be a curiosity maximization despite a lack of sensory input via the sensory organs, I was actually reminded of a discussion we had with a reviewer, not a reviewer of this paper, but of the Markovian monism paper we published last year. So one of the reviewers brought this question up: what happens in an isolated system? And they mentioned that according to IIT, I think, it could still be conscious, something like that. It reminded me of what you said, or the other way around, what you said reminded me of this discussion we had. And the intuition I had, or the initial reaction I had, was that, okay, well, in such systems we may have no sensory input from the external environment, but there's some environment within the system, within the organism, that can provide some simulated sensory signals. And I think this is what you were referring to. I don't really remember what happened; I think Karl had a slightly different opinion regarding this, and suggested that the more elegant way of treating this case would be to regard the system as one that just minimizes complexity and ignores accuracy. But now that it comes up again, I don't think that's the end of the story, and it may be relevant and interesting to make this a bit more complicated by actually considering nested systems and simulated sensory signals and so on.
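Since this exchange turns on dropping the accuracy term, it may help to write the decomposition out. For discrete hidden states, variational free energy splits as F = D_KL[q(s) || p(s)] − E_q[ln p(o|s)], that is, complexity minus accuracy; with no sensory input the accuracy term vanishes and only complexity remains to be minimized. A minimal numeric sketch follows; all the distributions here are made up purely for illustration.

```python
import numpy as np

def free_energy(q, prior, lik):
    """Variational free energy over discrete hidden states:
    F = KL[q(s) || p(s)] - E_q[ln p(o|s)]  (complexity minus accuracy)."""
    complexity = np.sum(q * np.log(q / prior))   # KL divergence from the prior
    accuracy = np.sum(q * np.log(lik))           # expected log-likelihood of o
    return complexity - accuracy, complexity, accuracy

q = np.array([0.7, 0.3])      # posterior belief over two hidden states
prior = np.array([0.5, 0.5])  # prior p(s)
lik = np.array([0.9, 0.2])    # likelihood p(o | s) for the observed outcome o
F, C, A = free_energy(q, prior, lik)
# With no sensory input there is no likelihood term: accuracy drops out,
# and minimizing F reduces to minimizing complexity alone, which is the
# "just minimize complexity" reading of islands of awareness discussed here.
```

The same F can be computed directly as E_q[ln q(s) − ln p(o, s)], which is a quick way to check that the two-term split is consistent.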
I don't think that this is something we can build into the revisions, because the paper's already getting quite long, but it's something that we should focus on in future work. And I would be grateful if you could point me to some of the literature that you mentioned, perhaps by email. And then there's another thing you mentioned: a distinction between two types of virtual machines, or two senses in which a virtual machine can be understood, two things that can be associated with that term. I don't think I was aware of this distinction, and I'm not sure I really understood it. So perhaps you can repeat what you said. So one is merely embedded and the other has some additional features. I think that could be really interesting and relevant. We can talk about that later. Yeah, go for it, Marco, to keep the thread, and then we'll go to Blue and then to Dave and Adam and another question from the chat. I'll try to keep it brief. So actually, one of the reasons why I'm so motivated about this is that when I did my master's thesis with Karl, I had the same argument about complexity. He also said, no, no, it's just complexity reduction. But the thing is, as you know, that's about redundant parameters, and I don't think that's the issue in the story that I told you. It's not about redundant parameters. They're misattuned because their contextual contingencies have been updated, but they themselves have not. Because in the real-life waking experience they weren't interacting, but in a more extended context they might; it's kind of a weird kind of counterfactual marginalization. It's like expanding the scope in which the newly acquired experience might be relevant. That's how I see it. And so it doesn't seem to me that it's valid to say it's merely about removing redundant parameters. It is improving the configurative operation or anticipation in the future, given the limited scope of the experience that was newly acquired.
As for the virtual machines, one important difference is, for example, that the mutualistic relation between, say, the transistors and the virtual machine they're in is kind of flat or low-dimensional. There's no multi-scale, multi-level thing. There's no adaptivity all the way down. The driving force behind the generation of the virtual machine itself is simply set from outside; it is not in the system itself, it's not contingent upon the system's own integration. So there's no active integration of their relation. Okay, never mind. I'll try to think about how to phrase it, but my intuition says there's a very important difference between virtual machines in the classical form and the kind of virtuality that is relevant for minds, and especially consciousness. Thanks, Marco, for these awesome points. That's the fun of the synchronous moments that we share, to surface all these new connections and ideas, and then we return to the work and the sharing on platforms with each other. So thanks again for that. Wanja, if you wanted to add anything; otherwise we can go to Blue and Dave. Okay, awesome. So we'll go Blue, Dave, and then we'll have heard from everybody once, then the chat, and then return to Adam and everybody else. Hi, so I want to go back to your discussion with Adam also, and it incorporates virtual machines really. So you were discussing simulating consciousness versus instantiating consciousness and the necessary conditions for both. And in describing the Black Mirror episode, which I've seen and is awesome, you really implied that there's this necessary arrow of time going forward for instantiating consciousness that's not present in simulating consciousness. And so is that one of the required conditions? Does the arrow of time play a role? Is it that bi-directional time can't be conscious? And maybe it would also help.
And I'm sorry if I missed this in your presentation, because I was a few minutes late, but I don't know how exactly you define consciousness. I mean, I did the dot-zero with Daniel, and it's really kind of difficult to measure or ascribe scientific properties to something that's not clearly defined. And I know that there are many definitions, but I was just wondering, Wanja, what you personally take consciousness to be, or how you understand it. Okay, thank you very much for these questions. I'll start with the second question, because in a way it's easier to answer, not because I can define what consciousness is, but because I can just tell you that I don't know what consciousness is. So, I mean, there are various ways of operationalizing consciousness that are used in research on neural correlates of consciousness in particular. And many of these measures of consciousness involve some kind of report, verbal or behavioral. For instance, the question is then just: can the subject reliably report the presence of a certain visual stimulus or not? That would just be an example. And so I don't have a strong opinion about what's the best way to measure consciousness. What we are building on here is just the existing work on neural correlates of consciousness, which uses various ways of operationalizing consciousness, various ways of measuring consciousness, whatever it is. But of course, this is a bit unsatisfying, and we would like to know: what is consciousness? Perhaps you can tell us how to measure it in different ways, but what is it that is being measured?
And here I actually think that a computational explanation of consciousness could provide an insight into the very concept or notion of consciousness, in that if there's a common computational mechanism associated with different cognitive capacities that are facilitated by what we call consciousness, then this could provide part of the puzzle that gives us a definition of consciousness. Just as an example, think of the global workspace theory, according to which consciousness is basically global availability. Well, you could use that as a definition of consciousness: consciousness is global availability to different cognitive subsystems. So information is processed consciously if it's not just available to some individual subsystems but to all or most of the cognitive subsystems within the system. And yeah, this is of course a bit unsatisfying, because it seems that it doesn't capture everything that we associate with consciousness. Consciousness has many interesting features. If we try to describe the phenomenology of consciousness, then we can say that it's typically structured. We experience spatial relations between things. So you're not just experiencing, say, a desk and a computer screen and whatever else you're consciously perceiving right now, you're not just hearing something, but you're experiencing these things in relation to each other, and there are spatial relations that you're experiencing. There are part-whole relations, and there are also temporal relations, of course. You're experiencing not just isolated events but successions of events, and it seems that what is happening right now is not just an instant but is somehow extended, a specious present, as William James called it. And there are further general structural features of consciousness, and what a definition of consciousness should provide is a means of making sense of these different properties.
So for instance, if consciousness is global availability of information in the cognitive system, why does consciousness have these phenomenological features that characterize most or maybe all types of conscious experience? That's something that a definition of consciousness should provide, in addition to just saying what's happening in the brain. And computational models, I think, could provide a means of bridging levels of description: descriptions of neural activity and neural mechanisms on the one hand, and phenomenological descriptions that describe features of consciousness as they appear from the first-person perspective on the other. Integrated information theory is of course an example of a theory that tries to achieve just that, starting with phenomenological axioms that characterize consciousness, or typical forms of conscious experience, and then trying to operationalize these axioms in a formal way such that they can be applied to neural structures or neural activity or other kinds of systems. And in principle, I think this is how one should try to define consciousness: finding computational mechanisms underpinning cognitive capacities that are associated with consciousness, and trying to relate models of neural activity, or computational models of cognitive capacities, to characteristic phenomenological features of conscious experience. So this is how I think one should approach the question: what is consciousness? Can I ask a follow-up really quick? Yep. So you're describing this global availability, and it sounds a lot like how David Krakauer described individuality in 2020, with this bi-directional information flow in an individual, right? But there was nothing about consciousness there. Still, you have to have that downward causation, which sounds a lot like the global availability that you're describing.
And so I was just wondering, I don't know if you've read the paper or not, but I got that out of what you were responding. I also tended to think that you were maybe only talking about human consciousness, but then when you went into IIT, maybe not. So do you actually draw a line as to what kind of system can be conscious? Does it have to be biological? Does it have to be higher-order vertebrates? Or do you think, especially in terms of computationally defining consciousness, which I also think is a great place to start, that maybe all systems up and down the evolutionary ladder can have some degree of consciousness? What are your thoughts on that? Thanks for the question. Where to start? Maybe first: you mentioned top-down causation. I wouldn't assume that consciousness involves top-down causation. I don't see a reason for assuming that, and that's not what I meant by global availability. I'm not familiar with this work by David Krakauer that you mentioned, but it seems to me that it may not be directly connected to what I was trying to convey. And then the question, where to draw the line between conscious and unconscious systems? That's, I think, a really interesting question. In the beginning of the presentation I mentioned one motivation for going into research on computational correlates of consciousness, and that was trying to have some correlates of consciousness, or criteria for consciousness, maybe necessary conditions for consciousness, that can be applied not just to human beings, or to human beings in ordinary states, or human beings with a normal brain as it were, but also to unusual cases.
And in the presentation I only mentioned the possibility of islands of awareness, isolated systems, but of course unusual cases also include non-human animals that are physiologically different from human beings, invertebrates, and artificial systems as well. And this is largely, I would say, an empirical question, where to draw this line. And there's recent work, for instance by Eva Jablonka and Simona Ginsburg, who have an evolutionary hypothesis about consciousness, or a hypothesis about the evolution of consciousness, and they suggest that a particular type of learning, a learning capacity, may be the evolutionary transition marker of consciousness, so that species that developed this capacity were then also conscious. The learning capacity they call unlimited associative learning, which involves particular types of learning such as cross-modal learning and others. And so their suggestion would be to look at what systems are capable of this, or have this learning capacity, and what other cognitive capacities they have. So this is basically an empirical question, trying to determine which creatures are conscious and which are not. And we of course need to have some criteria that can be applied to determine whether a creature is conscious or not. And if we don't have a definition of consciousness, or a theory of consciousness, that can be applied to non-human animals, then it's really difficult to tackle this question. But focusing on the functions of consciousness, on cognitive capacities associated with consciousness, is one, I think, particularly promising strategy. And I would also mention Jonathan Birch, who has proposed that there may be clusters of cognitive capacities associated with consciousness, such as trace conditioning or cross-modal learning and other capacities. Yeah, so I don't know where to draw the line.
I think it is an empirical question, and these more empirical projects trying to investigate cognitive capacities of certain types of non-human animals can, I think, be complemented by theoretical research using computational models. That's one of the advantages of computational models: they can be applied to different types of systems, not just systems that have a central nervous system such as ourselves, but also systems that have a different physiology, or in principle also artificial systems. I wouldn't make the assumption that artificial systems cannot be conscious, but I think it will be really relevant to determine what constraints, what conditions, have to be fulfilled by an artificial system in order to be conscious, and which types of system will never be conscious. If, for instance, this Black Mirror scenario is a physical possibility, if we could actually build such devices that simulate conscious beings and thereby create real suffering, that would be horrible. And it would also mean that we could just make hundreds of copies of these virtual systems and instantiate the same type of maybe horrible conscious experience multiple times. That would be a really disturbing scenario. And so I think it would be extremely relevant to investigate the difference between simulating consciousness and instantiating consciousness, to avoid creating unnecessary suffering or disturbing conscious experiences. And you had a very interesting question about the arrow of time going forward, and this is something that I guess I have to think about in a bit more detail: what happens to the causal structure, to the action-perception loop, if you reverse the temporal direction of the simulated system? Intuitively I would say that it's important whether the arrow of time goes forwards or backwards, that you wouldn't have conscious experience if you rewind the experience of a simulated system, but I'm not sure about that. It's a really
interesting question. Thank you. Thanks for the response. So we're going to do Dave, then a question from the chat, then Adam and Marco. It sounds like getting a handle on consciousness may be suffering because it's been deflated too much. If you assume that something you're examining is a predicate, it gets really easy to turn it into not just a predicate but a thing, and things are pretty hard to work with; mostly they seem too easy. The first over-deflation is assuming that conscious acts or conscious activities are persistent, long-term things, a predicate of an organism: the organism is conscious now. If you subscript that and say it is conscious of certain things, now and then, under some conditions, well then you can ask what conditions enable consciousness, what's achieved, what's the evolutionary advantage of being able to be conscious when you need to be. And there are people as disparate as Gerald Edelman, talking about conscious flashes in the brain, the dynamic core crystallizing now and then, and then there's conscious activity, and maybe it's not needed because the problem has been resolved earlier; Alfred North Whitehead, talking about threads of higher experience within a brain, for instance, something that happens now and then; and the transformational psychologist Gurdjieff, who actually gives an exercise to force people to be conscious, and his background assumption is that most people are non-conscious the great majority of the time. However, if I ask you, are you conscious?
The answer is always yes, because I've made you conscious by asking that specific leading question. Now, the notion that consciousness is something that comes into you from the outside, that's been around for a good while; that's what the physiologists in the 1870s and 80s assumed. So Freud, since he was busy thinking about so many things, didn't want to tackle that, and he just assumed consciousness comes in: something's going on outside my brain, my neocortex responds to that, and that's where the consciousness comes from. And if I'm not being stimulated, well, I enter into nirvana: I don't dislike and I don't like, I'm just in a great blah state, and the engine is just idling and waiting for something else to make it do something. So, and the other way conscious activity is getting deflated too much is, as I said, by being treated as a predicate. Well, it seems that if you expand it to a relation, it might be a lot easier to start dissecting. I'd say, off the top of my head, a relation among ten terms: self A is conscious with self B of object X, in that conscious relation between A and B of X, given the presuppositions of A, of B, and of the emergent relation, and the three sets of dissatisfactions or goals (whether it's an unpleasant stimulus, wanting to get away from something, or getting towards something) of A, and of B, and of that emergent relation between the two. So that's a ten-place predicate right there. And when I'm thinking hard, I don't really see any of those ten dropping out, because when I'm in dialogue with myself, I'm at least that complex, and so is my dialogue with myself. How about you?
Thanks, Dave, for the question. Wanja, and then I'll ask something from chat. Thank you. Yeah, so you made some really interesting remarks, and in fact I think it's important to see consciousness not just as a static property that an organism has, but as something dynamic that can be transiently lost and so on. And your suggestion to treat consciousness as a ten-place relation is really interesting, and I think it speaks to the idea of investigating nested Markov blankets and interactions between subsystems, within the brain or within conscious organisms. And I think you're completely right that a lot of the relations that can hold between a conscious being and other conscious beings, or between conscious beings and parts of the environment, must in some way be mirrored by relations between parts of the system, subsystems of the system. And these are really complex issues that we try to ignore in the paper, which I think we should get into at some point, especially the idea that there are nested systems, nested Markov blankets, within the brain. So I think this is really important. Thanks for the response. So I'm going to ask a question from Cambridge Breathes in the chat, and for these last 25 minutes or a little bit longer I have the slide up for 16.2, so we don't need to worry about answering consciousness in the next 20 minutes; let's actually build the energy for continuing our discussion, as we do in between the weeks. So Cambridge Breathes' question was, and this is related to this discussion: does every memory or thought episode have a Markov blanket of its own that can be mathematically characterized and measured? In other words, can we consider every thought episode, every minimal phenomenal experience, as an independent island of awareness on its own? So Wanja, I'd love for you to give a thought on that. How do we operationalize the Markov blankets? Are we a singular blanket, or is it just this constellation of little transient bubbles? And how does it relate to this islands of
awareness idea that you mentioned in the paper? Thank you. So I'm not sure how to formalize a memory or thought episode, but I'm sure it would involve some Markov blankets. But that doesn't mean that there's an isolated system or island of awareness within a conscious system, because there would be some interaction with the internal environment, as it were. Another question that comes to my mind now is: if you imagine you have a thought episode, and we assume that some part of your brain realizes this thought episode, would there be a conscious system within a conscious system, if it engages in the right causal interactions with its environment? And I think this could be built into a further constraint on what it means to simulate as opposed to instantiate consciousness; I'm not sure how exactly. But no, I wouldn't say there's an island of awareness when you have a thought episode or remember some event, though there are some really relevant points associated with this idea. Thanks. And it's just funny how we talk about instantiation, because we also talk about instances of virtual machines in the Linux sense, so it's actually nice computational language. So I have Adam and then Marco, then anyone else who wants to raise their hand. So Adam, yeah, go ahead. Hi, can you hear me? Yep, as usual, stop asking that. So you mentioned with the reviewers these places where your intuition balks, and I share a lot of those. Could a nation be conscious? Could a bunch of tin cans and ropes strung together? And there's a sense in which there should be a universality of computational realizability, and maybe we can do tin cans and string in principle, but it just might not be practically realizable, because there isn't enough aluminum in the universe. But I actually liked your proposal that you kind of dismissed as maybe being a little bit special
pleading, but I didn't find that to be the case at all. So if you actually created a rich embodiment and embedding within a virtual world, and then if there's some sort of homomorphism between this and this... I guess actually I don't even know if it needs to couple with the physical world; the importance would be that you're having this sort of functional closure with respect to active and perceptual states, in this sort of self-evidencing and self-making that you're having within the system. You're doing inference on this interaction, within the context of the virtual machine, the virtual world in which you're running this. It seems like we could say that consciousness is arising here. But I'm not sure. In terms of functions of it, I've been wondering whether we can think of consciousness as a kind of world model that we're doing divergence minimization with respect to, and it's having some causal significance of this kind, and it needs to have certain additional necessary properties for it to be a world model capable of generating experience: having basically coherence with respect to space, to the world, and to causation, causal kinds of coherence. And I'm wondering if that could potentially speak to the issue, in terms of bringing up functions, of which systems are more or less likely to possess consciousness: we just add in additional stipulations as additional necessary conditions. And I'm basically wondering if quasi-Kantian categories, or some of the things people talk about in terms of core knowledge, might just be necessary for any kind of world to appear to any being whatsoever, for any kind of coherence maybe, with the function being that you have some sort of data structure corresponding to the agent and the situation, to the world, and this is acting as a kind of top-down controller, and in ways maybe is a virtual machine of sorts. If you have any thoughts on that; otherwise, thank
you, Adam. Does the system need a world model in order to be conscious or not? So I think if it needs a world model, it doesn't have to be a very complex world model. I mean, it must have a world model in the sense that its internal states must encode probability distributions over external states given blanket states, but this is not the world model you were thinking of. I'm wondering if we need a little more to actually have composition, and the kind of axes to have things relative to things with particular distinct properties, where those properties are situated for the various things that are in relation to each other, and whether this is basically serving the function of allowing you to resolve uncertainty, allowing you to both be informed by and inform these next perception cycles. Maybe you need additional structure built in for this to actually be a world represented by the internal states; not sure though. I think perhaps you don't have to have a model that contains objects to be a conscious system, because you can have a conscious system that doesn't know what a tree is or what an apple is. But you hinted at spatial, temporal, and causal relations, and I think it must have a sense of spatial and temporal relations and of causation; that would be a candidate, I guess. So there would be a world model, which could be fairly abstract in a way, that gives the system a notion of time and space and causation. And one thing that I completely sidestep here is the issue of so-called minimal phenomenal experiences, that is, conscious experiences that are characterized by the absence of spatial relations, or even temporal relations, in which there's no subject-object structure. These are also sometimes called non-egoic states, or states of pure awareness, or consciousness as such, and are described especially in the literature from eastern traditions, because these
are states that can be experienced during meditation, but also during other states; and perhaps you don't need a world model for these types of experience, these minimal phenomenal experiences. Thanks. Adam, go for it. Do you think it's possible that in those sorts of non-dual states there could be a very minimal coherence along those lines? Sometimes it's described as just spaciousness, just the space of awareness, in these stripped-down non-dual states, but there would still be a sort of... It's not that it's necessarily a movie, not clearly identifiable, like here's a movie from your point of view and you're separate from the world and you're an agent represented; but still there's just enough of a point of view, just enough that they're in there, even though they're not reflexively modelable. I don't know if they... Yeah, that's a possibility, actually a really interesting option. With respect to so-called non-egoic states of consciousness, or states in which the ego dissolves, there's a possibility that the subject-object distinction disappears, so that those are non-dual states of awareness, not because there's no subject, but because the subject is expanded and contains everything there is. In such states, or in the transition to such states, subjects often describe a dissolution of the boundary between themselves and the environment, and one possibility is that what happens is not that the world...
that the boundary becomes blurry and then the self disappears, but actually that I become the world, and so I cannot distinguish myself from the world anymore, and that's why people say there was no I. That's at least one possibility; Sasha, I think, has argued for this point. And one could apply the same idea to space, I would say, and that would be similar to what you suggested: if there are no points in space anymore, no place where I am, but I am the world, then I'm everywhere. So there are no points in space that I could distinguish from the point at which I myself am located, because I am everywhere and have become one with the world; and so there's only space as such, perhaps only the world, but no places within the world, no points that could be differentiated. Thanks for the response; makes me think of King of Infinite Space, and a circle with circumference of everything and inside of nothing, however you want to phrase it. So we've got ten minutes left: Marco, and then anyone for closing thoughts. Again, I have this slide up where we're writing things down for next week, so Marco, and then anyone else who wants to give a closing thought; and there's one last little question in the chat, too. Go ahead, Marco. I'll do my best to keep it short. Regarding the last few comments, I'd carefully say: that which is called space is called by the Buddha no space; therefore it is called space. So your conversation was very much about the notions of emptiness or luminosity, which is a fascinating body of literature in Buddhism. But I would argue against the idea that we should take at face value the claim that we now contain the whole world, because it's exactly in that phenomenological state where language breaks down, and so any reports of that phenomenology must be taken with a heavy grain of salt. One alternative, the way I would see it in semi-FEP terms, is that it's a state of maximum potential reach: it's not that everything is contained within that state,
it's that everything is reachable from that state; it's a kind of optimal readiness. Contrasting that maximum potential reach is a minimal determinacy, which relates to the Buddhist notion of no grasping, no clutching: you're not grasping at any duality, any determination; you're just letting go of any determination whilst being ready to determine whenever it arises or is necessary, which relates to dependent arising, which again is beautifully congruent with active inference. I've tried to refrain from trying to solve too many questions, but since some things have already been raised: I think I've pinned down why I talked about the virtual model in that sense, the virtual machine. And just like Adam, as always, I also agree very much with emphasizing the cybernetic imperative, which I found surprising for you not to have included, from what I've seen of the paper so far. I think one problem with the kind of folk psychological language we're used to when talking about these things is that we say that the brain is conscious, because we don't have the ability to specify in what way the brain generates or relates to consciousness. So, following David Chalmers, I would argue that it's better to use something neutral: instead of saying consciousness, we could take seriously the phenomenological invariance across people's reports and say that there must be some integrated reality. Let's call that a local space or a local ontology; I prefer to call it the C-space, for that sensible, conscious space. So you could say that the C-space is embedded in the mind-brain, which nicely has the acronym MB, and that the C-space effectively has primarily a cybernetic imperative; but in the pursuit of that cybernetic imperative it actually finds itself doing different things, inferential things, simply by virtue of the fact that it needs to model its environment, which for the C-space would not be the world as such, but the world-modeling mind-body system. And so you get this
weird tripartite distinction, which Joscha Bach was actually the first to point to: you have, embedded in the universe, in the physical world, a particular system called the body, with a brain, with a systematicity that we call the mind; and embedded in turn we have something called the C-space. Importantly, the C-space itself, as opposed to its imperative of cybernetics or good regulation, learns about that mind-body, and it's the mutual contextualization therein. So upon instantiation or realization of this kind of C-space, they become mutually dependent on each other: the mind-brain's activity or fitness would translate transitively to the fitness of the C-space, which in turn translates transitively back to the fitness of the entire mind-body. And so it is very important to note the requisites of the problem: the initial challenge of creating enough challenge, enough free energy gradients, to necessitate the instantiation of such a space. So I don't agree at all with these unnecessary debates about whether organoids are conscious; like, what the hell, what's in it for them? They have no adaptive imperative; there's no incentive to instantiate some kind of regulatory space, a regulatory subsystem; there is no coherence instantiated to even lead to that; there's no initial ignition to scaffold that complexity. Anyway, that's my personal annoyance. Another notion I would like to share is islands of awareness: the fact that the island is embedded in this huge amorphous sea is what we should relate to that seemingly secluded experience. And crucially, and this is very tentative, but I will share it because of your paper: I would propose that the C-space is there where the intrinsic manifold and the extrinsic manifold become equivalent, or at least approximately or asymptotically. Why?
Because it's another way of phrasing that it's pure reflexivity: those causal flows, that spatial, temporal, causal activity that is purely reflexive, and therefore would not pertain directly to that which is outside. In other words, that C-space is by definition the self-referring island. And now comes one more crazy thing, because we were told we need crazy ideas, so here's a crazy idea. The point is that it's possible, perhaps, that this tentative island should be embedded in something, and therefore we can tentatively say there's a blanket, an encapsulation, which can be seen as a channel, which acts as a channel. But each channel mathematically has a bound, a limit. So what happens when that island, that C-space, generates more entropy than can actually be expressed through the interface by which it stands in relation to the greater world? I was basically trying to vindicate in part Adam's suggestions, and in turn elaborate what I meant by the difference between such machines and normal virtual machines: normal virtual machines aren't in this mutually contextualized relation, not at the level of the physics as such, because it's purely in the virtual, whereas here the physics themselves are co-opted into that mutually contextualized relation. Okay, that's enough for now. All good; thank you, Marco. So everyone, thanks so much for participating. If anyone wants to raise their hand and give a final, just tweet-length recap... What I'll say here as organizer is that our next conversation is going to be live on February 23rd, and we wrote down a bunch of questions for next time; there were also things asked in the chat, like about the difference between quantum and classical computers, and simulation. So I know that we're going to have so many fun directions to go in our next synchronous and asynchronous conversations and communications through the interface. So if anyone wants to give a final thought... Juanja, thanks so much for coming on; really, it's a special paper,
and it was a lot of fun to work through live in our lab, in our collective thought processes. So if anyone else wants to give any final thoughts: Steven, and then anyone else who raises their hand, just for the last short thoughts. Steven, unmute. Just wanted to say thank you very much. I know there's a lot of stuff that's come at you, and you managed to field a huge amount of questions, but I'll be really curious to maybe see your thoughts on how this work can pan out in the world, in terms of people's practices and ways of thinking about other research, and how this kind of way of understanding consciousness could play out, because a lot of work in psychology is using neural correlates to justify what they think people are thinking. So it'd be interesting, maybe next week, to think about how some of these ideas have implications and application, which I think they do. So thank you. Thanks, Steven. Adam, and then anyone else with their hand raised. Yes, fantastic as usual; this is the most fascinating thing there is. I also think, though, that it's extremely important we get this right. Like Marco was just discussing with organoids, and we were talking about islands of awareness: how quickly do we move forward with this research program, which could potentially help countless people, while not wanting to inadvertently create Black Mirror-ish scenarios? So if we can place a very low prior on consciousness for organoids, due to their not having the proper cybernetic grounding, contextualization, and embedding, that would be great. Similarly, with artificial intelligence we might eventually create systems where it is in question, and so then we're dealing with: are these ethical beings in their own right, are we doing mind crime against these beings? It's both avoiding negative outcomes and maybe wanting there to be positive outcomes, in terms of creating beings who are conscious and are of value in their own right, as subjects who experience,
and for whom there's something it's like to be them, so they should be part of our moral communities. We're not there yet, but we might get there eventually, and these might be the key questions for what really matters, as with the organoids here. Yep, and what's on the table, Adam, as you often highlight, is everything from who we include in our moral community; both false positives and false negatives are quite severe in this issue. So it looks like that's all the raised hands. 16.1: thanks, everyone, for coming. This was really a great discussion, and we look forward to continuing it in 16.2 and beyond. Thanks.
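[Editor's note] Earlier in the discussion it was said that a system "must have a world model in the sense that its internal states must encode probability distributions over external states given blanket states." A minimal toy sketch of that idea follows; it is not from the paper, and all names, numbers, and likelihoods are illustrative assumptions. The "internal state" is literally a probability distribution over a hidden external state, updated from sensory (blanket) states by Bayes' rule:

```python
# Toy Markov-blanket sketch: internal state = posterior over external state.
# Hypothetical example; the likelihood values are arbitrary assumptions.

def bayes_update(prior, likelihood, observation):
    """Return the posterior over two external states after one observation."""
    # likelihood[x][o] = p(observation o | external state x)
    unnorm = [prior[x] * likelihood[x][observation] for x in range(2)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Assumed sensor model: external state 0 mostly emits observation 0,
# external state 1 mostly emits observation 1.
likelihood = [[0.8, 0.2],
              [0.2, 0.8]]

internal = [0.5, 0.5]          # flat prior: the system starts uncommitted
for obs in [1, 1, 0, 1]:       # a short stream of sensory (blanket) states
    internal = bayes_update(internal, likelihood, obs)

# Having mostly observed o=1, the internal state now favours external x=1.
print(internal)
```

This is the thin sense of "world model" mentioned in the discussion: no objects, no space, no causation, just internal states that parameterize beliefs about what lies beyond the blanket. The richer, quasi-Kantian world model debated above would require far more structure than this.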