Hello and welcome to the Active Inference Livestream. Today it is Active Inference Livestream 16.2, on February 23rd, 2021. Welcome to the Active Inference Lab, everyone. We're a participatory online lab that is communicating, learning, and practicing applied Active Inference. You can find us on our website, Twitter, email, YouTube, Discord, and Keybase Teams. This is a recorded and archived livestream, so please provide us with feedback so that we can keep improving our work. All backgrounds and perspectives are welcome. As far as the netiquette, the video etiquette for the livestreams: we'll be muting, raising our hands, writing down our points, and using respectful speech behavior. Today in 16.2, at the end of February, we are having a follow-up discussion on the paper by Wanja Wiese and Karl Friston, "The Neural Correlates of Consciousness under the Free Energy Principle." Last week it was really special to have Wanja come on the stream, quite interesting stuff. And next week we're going to be having our first ever quarterly roundtable, which will provide some updates on the different projects of the Active Inference Lab. Also, as far as upcoming streams and events, we have a new series called the Guest Stream. So there is the livestream, which is the participatory group discussion; the model stream, which is the walkthrough or tutorial of a more technical piece; and the guest stream, which is going to be anything goes: any time somebody has a new paper out, or wants to raise some topic to visibility, we want to be there for them and help them with that. So tomorrow we're going to have Majid Beni speak on his recent paper, "A Critical Analysis of Markovian Monism." And then in March and in May we'll have guest visits from Dimitris Bolis and Anna Ciaunica. If you have a suggestion for any of these events, or you want to be featured on an event, or you'd like to co-organize an event with us, then all you have to do is let us know and we go from there. Today in Active Stream 16.2, the goal is to learn, discuss, and hear from everybody, and just knowing Dave and Adam, I think we'll be able to have a really awesome discussion where we bring up a lot of fun ideas, centering and strangely attracting back to this paper, "The Neural Correlates of Consciousness under the Free Energy Principle: From Computational Correlates to Computational Explanation," from Wiese and Friston in 2020. So today, with a small group and a follow-up discussion, there's a bunch of slides from last time that we can definitely go to, and there's also the paper itself, but let's just start with an introduction. We're going to go around and introduce ourselves, and seeing as just Dave, Adam, and I are here, we can each introduce ourselves and maybe provide a little context on some of these warm-up questions. Whichever of you two would like to go first: we can still utilize hand raising, or perhaps it'll be a little easier to just have a regular tripartite conversation. So we'll go with Dave and then Adam on the introduction and the context, thanks. Okay, I'm gathering up some reference material from a number of different ontologies in order to get those aligned with the topics for the EDU project. I'm also going through a great big brain dump from the most noted neuropsychoanalyst in South Africa, who has gotten two papers and a book out just in the last two weeks, and I'm struggling through those.
He has rewritten the Project for a Scientific Psychology by Sigmund Freud, from 1895, in absolutely state-of-the-art Karl Friston terminology. The only thing he doesn't hammer on a lot is specifically active inference, but all the other cool stuff is there, and it's just a brain breaker. Who is this? Thanks for the update. We'll post any links, and we'll try to keep it on the discussion that people are here to be listening to. So if you have any other thoughts, Dave; otherwise, Adam, go for it. Hi. So I guess, I'm Adam. Like an introduction, or just what I'm working on? Just say hello; we can just ask, how did we come to be here? How did we fall into a follow-up discussion on a computational consciousness paper? Okay, so I guess I have been working for some time to try to understand what the heck consciousness is, if that question even makes sense, from different perspectives, where the free energy principle and active inference have in general been the overarching view from which I'm attempting to think. And so I've been following what different people are saying, proposing some ideas of my own. And just along these lines of learning about active inference, I found the livestreams and what Daniel's doing, and the discussions slash potential movements that he's creating. And this is just one of the most interesting conversations around, as far as I'm concerned. Thanks, Adam. Also, Dave, I have a question. I know you have a lot of background in classical cybernetics, if it could be called that. So how does cybernetics deal with this question of consciousness, or what have been some of the long-running threads in the way cyberneticians, pre-active-inference, have been thinking about this question? Well, Gregory Bateson did a lot of work on this in the 40s through the 60s. Working as a therapist and as an anthropologist, he paid a lot of attention to how people get fouled up; he uses the term schismogenesis, that is, the origins of splits. Sometimes it's sort of a good thing, a way that a society can become more differentiated, but in a child it's a great danger. The thesis, which turns out not to be all that useful, is that small children are constantly misunderstanding what their parents are saying. The parents are not addressing them very effectively; they're assuming the kids either have less intense emotions than they do, or that the kids understand more about the world of shared reality. So the kid will get flustered, demoralized, alienated. In the best case there's a reconciliation, and the mother, typically, and the baby will get back in sync, and they're now happy with one another. But sometimes the split gets worse and worse, and that same combination of rage and demoralization plays out into adulthood, and too often as parents. And Bateson draws a lot on Bertrand Russell's theory of types. So he analyzes society's interactions and roles with a very elaborate treatment of hierarchies of meaning, hierarchies of reference, hierarchies of control and communication. So it loops right back to the old definition of cybernetics as the study of communication and control. Now my teacher Gordon Pask put a great deal of emphasis on two areas that I don't see addressed adequately today. On the one hand, the inevitability of dialogue or conversation in any conscious occasion: he says there's no consciousness without dialogue. Second point: there's no consciousness, in fact no concept, without emotion.
And third, there's no persistent concept without circular causation. And we know how important circular causation is in explaining how the Markov-bounded system and its environment co-create. And Stuart Kauffman does a lot of discussion of co-creation between the niche and its occupant, that is, the organism, the society, the person, the evolving language culture, whatever it might be. So it's not that these concepts are completely missing, but they're not as articulated as they were even in the 1970s, when Pask was developing the first really effective teaching-learning systems, with tiny clunky computers that were half made up of cards: the students had to be prompted to pick up a cardboard card and put it on the display, because there was no way to display text other than little lights that would be labeled A1, which means go grab card A1 and read it. Cool. And we're better off now in some ways. Sounds like playing Battleship with your computer. But what you said there about the multiple perspectives in pursuing this consciousness question: there's the anthropologist studying humans; there's the therapist, who might be interested in social systems more generally, but who is in a very specific relationship and might not be trying to do a full ethnography or an anthropology research project. The therapist relationship is more medical; it's based upon helping reduce someone's suffering. Then, zooming out further and further away from the experience, there's the biologist, who might compare human society or behavior with other animals, and then even a systems researcher, who might go beyond the biological and ask where computers fit into all this. So in the center we have our own experience and the experience of other people, and that is what is negotiated in that kernel of the therapy relationship, or in the dialogue, which can be therapeutic even if someone's not licensed; but then we're going in these concentric shells out towards broader systems thinking. So I think that is what I liked and remembered about the paper, and one of the things that motivates me to think about it, go down different rabbit holes, and hear people's perspectives on it, because there isn't, a priori, a best approach; and even within an approach, or one of those concentric spheres, it's not like there's one way to do biology, or one way to do anthropology, or one way to do therapy. So how are we going to provide adequate guidance and navigation for the bigger system, when, if we just get lost in all the possible ways it could go, we are going to be doing a lot of moving words around but maybe not a lot of useful actions? So I'm looking forward to hearing about that, and then also what you said about the different developmental phases having different cybernetic or communication needs. Cybernetic principles like requisite variety or the good regulator theorem, whether we want to use the classical cybernetic terminology or just think about communicating systems: different developmental stages are able to deal with different stresses and are able to achieve different goals. And so there might be some things that are overwhelming for a child, but then maybe later on we hope that that information could be integrated or virtualized or emulated or somehow used in a controllable way.
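To make that requisite-variety point concrete, here is a minimal, invented toy sketch (not from the discussion itself) of Ashby's idea: a regulator can only hold an outcome steady if it has at least as many distinct responses as there are disturbances.

```python
# An invented toy model of Ashby's law of requisite variety: a regulator can
# hold its essential variable steady only if it can match every disturbance
# with a distinct compensating response.

def regulate(disturbance, responses):
    # Pick the available response that best cancels the disturbance.
    best = min(responses, key=lambda r: abs(disturbance + r))
    return disturbance + best

disturbances = [-2, -1, 0, 1, 2]       # five kinds of perturbation from the niche
rich_controller = [-2, -1, 0, 1, 2]    # response variety matches the disturbances
poor_controller = [-1, 0, 1]           # response variety falls short

for name, controller in [("rich", rich_controller), ("poor", poor_controller)]:
    outcomes = sorted({regulate(d, controller) for d in disturbances})
    print(f"{name} controller -> residual outcome variety: {outcomes}")
# rich -> [0]: every disturbance absorbed; poor -> [-1, 0, 1]: variety leaks through.
```

The rich controller collapses every disturbance to the same outcome; the poor one, with less variety than the disturbances, cannot, which is the developmental point: a system with fewer regulatory responses is overwhelmed by perturbations a more differentiated one can absorb.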
Any other just-off-the-top-of-the-head dots? Because I think we'll actually have a lot of great discussion stemming from just walking through some of these slides. Is that cool? All right, nice. So just to recap, for those who have listened to the dot-zero and dot-one conversations or not: the paper that we're following up on today is called "The Neural Correlates of Consciousness under the Free Energy Principle: From Computational Correlates to Computational Explanation." The aims and claims of the paper were to integrate the neural correlates of consciousness with the computational correlates of consciousness, and implicitly also the behavioral correlates of consciousness, BCC if we could use that, and then to think about how the free energy principle and active inference are going to integrate these. We went through the abstract and the roadmap last time. So today we're going to start off by looking at some of the questions that people raised in 16.1, which we've now separated out into different slides with CC images added in the background. So if anyone in the live chat wants to post a question, or something comes to mind, we're definitely open; otherwise we'll just slowly march through these fun slides and then hopefully get to look at some of the paper as well. The first question, the one that Blue brought back to the table in 16.1 and helped integrate into the construction of 16.0, is: how do we define and measure consciousness? It's related to "what is consciousness," but we're almost approaching it instrumentally. Instead of what it is, which is the realism question, we're going to take the instrumentalist approach: how do we operationally define it, and how do we measure it, or what are we measuring? Because if you're discussing what something is or isn't without the instruments in mind, without how you're actually going to measure or observe it, what is the discussion there for? So we had a few different terms that people brought up, but either of you would be welcome to give a thought. Or Adam, I'd be curious, and I'll ask you a question too: in your approach to studying consciousness, what measurements do you think are the most important? Whether they're data sets we already have, or a kind of data that doesn't quite exist yet, what measurement would be the most informative for you? So this is... oh, can you hear me? Yep. Okay. Microsoft says my microphone is noisy, but... Sometimes there are just little background sounds, but it doesn't sound noisy to me, so I would let people know if it did, thanks. So I guess the approach I'm trying to take is to factorize consciousness as much as I can into different senses of the word, and then try to bring those into relation with each other in some coherent way. And so I might think of consciousness in terms of a set of concentric circles, which might be approached differently at each level. Maybe in the innermost core shell you might have some kind of proto-creature consciousness; we might not even want to use the word, some sort of organismic modeling. Whether or not we talk of this in terms of consciousness could be a matter of debate. And then at some point we get to the appearance of a world to a creature: subjectivity, what-it-is-likeness, the quality of phenomenality. I haven't heard a definition that really satisfies people universally, and they all sound pretty vague, but it's the lights coming on, what it is like to be us.
But then you get into this what-it-is-likeness, and there are all the particular ways of what it is like. The actual stream of experience is shaped in different ways and has certain other properties, and you might get into things like access consciousness. So this would be Ned Block's distinction between phenomenal consciousness, your experience itself, which is where a framework like integrated information theory would come in, and then another level of complexity, access consciousness: the ability to manipulate the objects of experience in your mind, or report on them, to have knowledge of them. But then we're still somewhat vague in what we mean by knowledge: how much knowledge, what kind of it. So you might have even more complex varieties, things like self-consciousness, autonoetic or reflexive consciousness. And there might not be any bound in terms of the complexification. So at each level you might have a different handling. And as you go out to the outer shells, it seems like they lend themselves best to operationalization and experimentation. You can actually do tasks: did you pass the mirror test, even though people debate what that means? Can you report? What's the nature of the report? It gets somewhat more tractable as you're dealing with the more complex varieties of conscious experience, so that more straightforwardly lends itself to empiricism. As you start to go into the phenomenality itself, to the degree that we want to hold to this distinction between phenomenal and access consciousness or different kinds, and I personally make use of it, once you get to experience itself you have the problem of first-person versus third-person ontology, the privateness of it. You can make inferences about different systems based on their properties and what you think is helpful for realizing what might allow a world to appear to a certain type of system. But you can't know, because consciousness is private; you can only get at it indirectly. And so there are things like the no-report paradigms people are working on, or, most recently, a paper just came out looking at neural dynamics and whether they are responsive in a certain way that suggests a sensitivity and a knowledge. But still, it's an indirect inference, and it's a problem. It seems like we always have to go through, usually with some degree of indirectness, reports, to get at an experience. So where the rubber hits the road, the initial emergence of a world appearing to a system, is, as far as I'm concerned, really super fraught. And so my best attempt has been to cross-reference as many perspectives as possible, to line up, to the extent I can, all three of Marr's levels, the functional, algorithmic, and implementation levels, cross-referenced with phenomenality, and then try to say: okay, what types of computational principles, what types of algorithms, might help to create a world that could appear to a system? I have more thoughts on that, but I can stop for now and keep talking about it later. Thanks for this.
This idea of cross-referencing, doing the scholarship and really being up to date with what people are actually asking and the ways they're really investigating it, and then looking for some underlying or inner generative processes, is very similar to what is being done in Wanja's previous work on minimal unifying models, which asks: could we find a cluster of attributes that seem to be co-extensive, existing together, and then could there be some minimal underlying speculated model that would lead to those clusters of traits, and would that help us design better experiments and reduce our uncertainty about how to act? And then the other part that I would ask about: we hear a lot about the first person, that's the I, and we hear a lot about the third person, the bird's-eye view, but I'm wondering about the second person, because that's actually where Dave's points about no consciousness without dialogue, or without interchange of information, come in, as well as Vygotsky, and even Dimitris Bolis's and Schilbach's work on becoming who we are through interaction. So: will we be able to localize it in the brain, or in the brain-body, or in the brain-body niche, the social niche? Because if it turns out to be something that's distributed a little bit beyond the cranium, then maybe neural measurements matter, maybe button tapping can matter, just like you could have one thermometer and make an assessment about a broader area, but we shouldn't confuse the map and the territory, or the measurement and the territory. If we think it's relational, then any kind of objective measurement is only ever going to tell a partial picture. What do you think about that, Dave, or how does the second person come into play? I don't think I can advance the discussion right now. Oh, cool, that's, yeah. All good, go ahead Adam. So in terms of degrees of, I guess, internalism versus externalism, I have tended to separate out consciousness or experience from mind or intelligence or modeling, where I think the relevant generative modeling for intelligence or mind would indeed be distributed and extended, and would spill out beyond and exist between agents. I've tended to reserve thinking of a subset of these generative processes, a subset of generative modeling, as being kind of stuck in the cranium for the generation of consciousness itself. And even more, I've tended to restrict it further: I wouldn't draw the boundary, the Markov blanket, across the whole nervous system, but actually within the scope of these metastable synchronous complexes, which through their synchrony would entail, I guess, message passing and the calculation of a joint estimate over the entire sensorium of the agent, of its embodied, embedded interaction with the world. And so for me, when these estimates are being generated, when these metastable states are forming, you could think of them as a kind of non-equilibrium steady state, and there would be a general boundary around them that wouldn't include the whole nervous system in on the act.
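As a hedged aside on what "drawing a Markov blanket" means operationally, here is a toy linear-Gaussian example, invented for illustration and not Adam's model: internal and external states are correlated overall, but become nearly independent once you condition on the blanket (sensory) states.

```python
# A toy, linear-Gaussian illustration (invented, not the paper's model) of a
# Markov blanket: internal states only see the world through the blanket, so
# internal and external states are correlated overall but become nearly
# independent once you condition on the blanket (sensory) states.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
external = rng.normal(size=n)
sensory  = external + 0.1 * rng.normal(size=n)   # blanket: senses driven by the world
internal = sensory  + 0.1 * rng.normal(size=n)   # internal states only see the senses

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print("corr(internal, external):           ", np.corrcoef(internal, external)[0, 1])
print("corr(internal, external | blanket): ", partial_corr(internal, external, sensory))
# The first number is near 1; the second collapses toward 0.
```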
But in terms of the onion of consciousness, the kinds of consciousness that we have, the exact nature of our streams of experience, our self-consciousness and our autobiographical, auto- and allo-biographical consciousness: that would definitely be relational. Even if the generative processes for the qualia might be confined to the skull, the processes whereby you would come to model yourself in the world, and whereby we would be able to coordinate and just live, would definitely be relational. But I'm wondering whether the initial generation of phenomenality itself, while practically requiring, at least for mammals and birds, most of them, this communication, this relationality, sometimes even a bootstrapping or a back-and-forth between perspectives, between a second-person point of view and a first-person one: you figure out who am I by figuring out, through interaction, who are you, who am I, who am I in relation to you? And this is how you bootstrap the whole thing. And also just practically, I think you're later going to have Anna on, and from this perspective of inherent interdependence, you're not even going to be homeostatic at all without the relationality. And so the first priors, the exact nature of your consciousness: deeply interdependent. That being said, I wonder whether you can have systems that are never in relation to any other system except the world itself, which they come to model, and whether, if they have the right architecture, this could entail an integrated world model, such that there would be something it is like to be such a system. I don't know where this begins and ends, but I have speculations; those are some thoughts. It's definitely clarifying where you're coming from with the integrated world modeling. It's almost like having a program on a computer where you could know what was getting passed into the program. So you could say: yes, when I hit run, this program is going to start from nothing, or from the kernel that I've specified. And then if it ends up doing this autopoiesis or self-reflection, and can do things like the mirror test, or respond in a way that is akin to a person, then at least from a computational perspective it's exhibiting some of these important features or attributes. Now, you're still going to have people who deny consciousness or think everything has it. So for me the question will eternally be open, because there are going to be people who say it's all just an illusion, or it's not real even for humans, and there are other people who say, well, you just don't understand the electron's consciousness, so there's no point in even trying to falsify that. So it's going to remain an open distribution, but we may be able to focus on some of these interesting differences between ourselves and the computational system. It's like a little model system; it's not the same thing as us. But let's return to this computational question, because I think it's related to the next question, with a JPEG of a hand reaching out to the sky: how do we relate experience or phenomenology to neural, computational, and behavioral dimensions? How do we connect them? It's like a marketplace of two different types: you've got the soda machine that transforms the dollars into the soda cans, but how are we going to go back and forth from a behavioral observation to phenomenology to the computational?
I think this is definitely an open question. These are just starting-point discussions, because they're not resolved questions; I would say people barely even know what the structure of a resolution would look like. But it's related to the question we just asked, which is: for a virtual machine, or for any type of system, how does it interface out to the outside world? In active inference we call those action states: whether it's typing, or your voice box, or your eye movements, it's an active state. It's part of the blanket states, with sensory and active states. So however you come to know about a system through the action and sensory states, you're not going to have access to those internal states; and even if you knew what those internal states were, it wouldn't always clarify the situation. So how do we think about this connection, or interface, between systems of different kinds, virtual machines or seemingly conscious systems? How do those link to the outside world, and how do we talk about that rigorously? And then I believe it was Cambridge Breaths last time who, just at the end, brought up this question about quantum and classical computation. It's not something I know a ton about, but it's interesting to think about whether Turing computation, the zero-one von Neumann architecture, just what people think of when they hear "computer," like this laptop in front of me, is part of a broader perspective on computation. Especially when we're thinking about computational dimensions, we're not just thinking about Windows 10; we're thinking maybe about unconventional computation. So these were just some questions that people raised. If anyone has a thought, go for it; otherwise we can just keep walking through. Yep, go for it. Well, I guess I might start with the neural and behavioral, in terms of the systems where I think I have some understanding, and then move into more general types of computational systems that could potentially realize things similarly, whether we're talking about a von Neumann architecture, or the Blockhead thought experiment, or the nation of China, or eusocial insect colonies, or any kind of system. Starting from where we have maybe the most purchase: it would seem to me that experience would correspond to, or I'd try to identify it with, a neurocomputational object, a joint marginal distribution from a probabilistic world model over, basically, the sensorium of the organism as it is coupling with the world. And so for me it would be basically an estimation of a visuospatial field, which would connect to Rudrauf's work with the projective consciousness model, modeling a point of view, but also the feeling of having a body, the lived body, and the relation between these things, potentially involving something like an attention schema, bringing in Graziano's type of work. So I basically think many different theories of consciousness are actually pointing to different aspects of what we need, with none of them being sufficient on their own, but if we bring them together we might be able to get joint sufficiency. And so, in terms of neural substrates, I'm thinking that the appearance of a visual world could correspond to, basically, maybe the precuneus, this midline structure, as the top of the visuospatial sensorium, coupling with the posterior cingulate.
And the reason I would point to these would be partially empirical, in terms of what happens if you lesion these areas, and also the types of information available to them. So it's both where they are, their high centrality, being hierarchically higher, and the empirical associations: if you lesion the precuneus, you're going to have lots of problems, and in terms of mental imagery, these are the neural correlates it tends to be associated with. I bring in the posterior cingulate because I think that area might be the most crucial one for establishing a particular point of view in the world. There's this old idea of the Papez circuit, where he was basically dropping electrodes, trying to figure out epileptiform loci, and he found this highly central inner circuit for emotional memory: you go from the mammillary bodies to, I think it's the anterior thalamus, to the posterior cingulate, into the hippocampal formation, out through the fornix, back to the mammillary bodies, this basic loop of emotional memory. I bring it up because of the types of information you would get there. Part of what the mammillary bodies are doing is carrying stretch-receptor information from the neck, and so this will actually tell you where your inner lamprey's head is pointing. And then if you go to the thalamic nuclei, they're getting that information, and they're also getting information from the vestibular apparatus. So you have, from the inner ear, this crazy conch-shell, nautilus, I don't know how to describe it, this crazy shape that has these little particles that move around and stimulate hair cells, causing different action potentials, so you can basically get the yaw, pitch, and roll of the head. So you're getting both the positional and the vector information of your head coming into this area, feeding into the hippocampus at the top of the cortical hierarchy, which basically gets your entire sensorium, all the modalities coming together, pegged according to what you would need for an egocentric perspective and reference frame. And that would be so for any human. Just one note on this, because I think it's really important: estimating a visual field isn't just what a convolutional neural network does. It's not just a fill-in-the-dots, paint-by-numbers exercise; as we heard from Rudrauf and Williford and others, it's actually a very complex, affordance-driven projective model that entails a locus of the observer. If a computer is trying to auto-complete some image, it is taking a matrix and doing transformations based upon statistical regularities in that matrix, so we can computationally sketch that out; but the locus of our awareness, even just visually, is not quite the same computational problem, because it's integrated, like you're saying, with inputs of information that we can observe. And you made a great point with the neck position and the angle of the head; it's probably super non-linear how that gets integrated into the control mechanisms, but that's like the gyroscope and accelerometer on a phone. It actually does convey the information about pitch and direction, but it's being conveyed in a very action-oriented way, and it's egocentric and action-oriented because otherwise, if you're calculating the actions for somebody else, what's the point?
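The phone-sensor analogy can be made concrete with a textbook complementary filter. The sketch below is generic and illustrative, with all parameters invented: it fuses a fast but drifting angular-rate signal (the semicircular canals, or a gyroscope) with a slow but absolute gravity cue (the otoliths, or an accelerometer) into one running estimate of head pitch.

```python
# A generic complementary filter, the textbook version of the phone-sensor
# analogy: fuse a fast but drifting angular-rate signal with a slow but
# absolute gravity cue into one estimate of head pitch. All numbers invented.
import random

def complementary_filter(gyro_rates, gravity_pitches, dt=0.01, k=0.98):
    pitch = 0.0
    for rate, gravity_pitch in zip(gyro_rates, gravity_pitches):
        integrated = pitch + rate * dt                    # dead-reckon from angular velocity
        pitch = k * integrated + (1 - k) * gravity_pitch  # pull toward the gravity cue
        yield pitch

# Simulate a head tilting steadily to 0.5 rad over one second, with noisy sensors.
true_rate = 0.5
gyro = [true_rate + random.gauss(0, 0.05) for _ in range(100)]
grav = [0.005 * (i + 1) + random.gauss(0, 0.02) for i in range(100)]
estimate = list(complementary_filter(gyro, grav))[-1]
print(f"final pitch estimate: {estimate:.3f} rad (true value: 0.5)")
```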
In some ways it might even be the lion's share of the information. Except for rare circumstances, like being in the midst of a gale-force hurricane, the most purchase you have on what is causing the most change in your sensorium is your own self-generated actions, whether we're talking about overt motions, like now I'm changing what's coming in, or just the saccades at three times a second, or even these potentially faster jiggles, the microsaccades. It's actually all coming from you, and if you're going to try to make sense of your sensors without this, it might be pretty hopeless, unless you're actually accounting for this enactive, embedded situating of what's coming in. And so, yeah, you need all this information. We're not just sitting there locked in, with videos being displayed to us with our eyes held open like in A Clockwork Orange; we're actually engaged in the world, and even there your eyes are moving around, and even there it would be hopeless without the efference copy. Here's a fun thing: in active inference we know that the imperative is to reduce surprise, to minimize the free energy of a system, and there are two ways the system can go about doing that. It can learn, which is updating its internal model, or it can act. So you can do action or inference, integrated under the same imperative. And so let's think about this super Clockwork Orange, 1984 situation where even the oculomotor muscles have been paralyzed. Someone cannot blink, they can't move their eyes, they can't act at all. Let's just say that in that seemingly hopeless situation there would still be an internal experience, and because they couldn't act, all the adaptation and free energy minimization would have to go through their internal model. Now, their internal model could be a crystallization of: I don't like what's happening to me, and I'm not listening to the propaganda, and that's not me, for example. But that's not always the case, and then we can go from that super limited situation into cases where we actually do have degrees of freedom of action and degrees of freedom of thought. And it's a really interesting point that our active experimentation in the environment is what leads along the trajectory of our experience. It's like: oh, what's on the other side of this pen? I picked it up, and now all the sensations in my arm, all of my visual field, are different, but that wasn't externally induced; it was part of an endogenous sequence of active experimentation in the niche. So, really, okay, cool, that's where we get action and inference coming together. So continue on.
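A deliberately minimal cartoon of that point follows: one surprise-like quantity, and two routes to reduce it, update the model or change the world. This is a stand-in for the variational formalism, not a rendering of it, and every number here is invented.

```python
# A cartoon of the two routes to minimizing surprise (a stand-in for the
# variational formalism, not a rendering of it): an agent predicts a scalar
# observation; the mismatch can be reduced either by perception (updating the
# belief) or by action (changing the world).

belief = 0.0   # the agent's predicted observation
world = 5.0    # the hidden cause that the observation reflects

def surprise(belief, observation):
    # Squared prediction error as a toy stand-in for surprisal.
    return (observation - belief) ** 2

for step in range(20):
    if step == 10:
        world = 2.0                  # the world changes abruptly mid-run
    error = world - belief
    if step < 10:
        belief += 0.3 * error        # route 1, perception: revise the model
    else:
        world -= 0.3 * error         # route 2, action: change the world instead
    print(f"step {step:2d}  surprise = {surprise(belief, world):.4f}")
# Either route drives the prediction error, and hence the surprise, toward zero.
```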
In AI they're trying to figure out how to get inductive biases for things like objectness and different kinds of core knowledge, and I think it's actually a real problem that this enactivist perspective is lacking, because they just want to plop in different evolutionary priors, core knowledge, as inductive biases right from the beginning. So it's: somehow, wave your hands, Bayes magic, objectness prior, causation prior. You might have things such as this, but it's actually going to help an objectness prior if the system has experience with objects. And it's still this idea of trying to solve vision completely divorced from engagement with the world. Whereas if you can actually reach up, and you have experiences of touching and moving things around, and you say: huh, I have uncertainty, I'm going to, like you were just saying, resolve my uncertainty, oh, that's what's on the other side, now I'm building up an object model, and now I'm building up causal models of what happens if I let go of this (I'm not going to do it), now you're developing a sense of the causation of the world that is allowing you to have this internal model, instead of just hammering it into the architecture somehow through some sort of architectural prior. Not that those things don't exist; they're going to be crucially important. But I think it's completely hopeless that they're going to solve that enduring problem of AI without this enactivist perspective. The hypothesis space is way too unconstrained; you'll be lost there forever. You can throw all the compute of the universe at it and it still might not be enough. I could be wrong about that, but that's a claim. Another way of thinking about it: take the space of all possible chessboards, or the space of all possible cup designs. It's a big number, but the space of all strategies for chess has to be bigger than the number of chessboards, because for any chessboard there are multiple strategies. Or take the number of strategies for determining the coffee cup, what each joint is going to do as you're seeking to infer what kind of coffee cup it is: again, like chess strategies, it's another level of strategy above the state. And so active inference sort of cuts the knot on the seemingly simpler problem of how we figure out what kind of coffee cup it is. Rather than doing an exhaustive mapping of all possible coffee cups, we step back to asking what strategy is going to help us resolve uncertainty about the coffee cup. And that strategy might be encapsulated in just a few lines of pseudocode: a good strategy for game theory might be to start nice and retaliate, but then act randomly once in a while, and forget sometimes, something like that, as sketched just below. That's quite a compression of a lot of things that are relational in the world.
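Here is one hedged rendering of that few-lines-of-pseudocode claim, a generous, occasionally noisy tit-for-tat variant; the parameter values are invented.

```python
import random

def strategy(history, p_random=0.05, p_forget=0.10):
    """'Start nice and retaliate, act randomly once in a while, forget sometimes.'
    history is the opponent's past moves: 'C' (cooperate) or 'D' (defect)."""
    if random.random() < p_random:       # once in a while, act randomly
        return random.choice(["C", "D"])
    if not history:                      # start nice
        return "C"
    if history[-1] == "D" and random.random() > p_forget:
        return "D"                       # retaliate, unless we happen to forget
    return "C"

print(strategy(["C", "C", "D"]))         # usually 'D': retaliation after a defection
```

The policy is a few lines, yet it implicitly covers a space of game histories far larger than any explicit table of board states could.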
And so sometimes the state estimate is too hard, or not even what we really want to know. You don't want every pixel in HD of your visual field; you need to know how to act. And that way, even if you have something in your eye, or one eye is injured, or there's some other visual impairment, or even no visual input, someone can still act effectively. Whereas if you just have an image recognition module and you disconnect it, now it's like you're walking on nothing. But if you have action as the core loop, and sense, as in active inference, is integrated to the extent that it shapes action, it's a very different perspective than this input-process-output control theory model. And I think this is going to come back in a second when we look at the circular, closed causality of the actual action-inference loop with blanket states intermediating. So that's pretty cool; any other thoughts there? I guess along those lines, something that's an open question for me right now, and also speaks to the illusionism you mentioned: I kind of bristle against all things illusionist, in that I think it implies things like you are not conscious, or consciousness is not an important adaptation or component of experience. But the idea that it might be different than we think, I think, is valid and useful; so in that sense I like illusionism, but I also bristle against it. The idea, though, concerns the most useful compression: how is it encoded? What is the richness of experience? We could actually be somewhat mistaken, in that it might be as if we have this HD visual field all available to us all at once, or it might just be that when we query it, we can have it when we need it. I think that's O'Regan's sensorimotor contingency theory, that's his model, and it's more extended: you just query the environment. But potentially, even internally, you might just be querying an entailed generative world model, and it might even be at the resolution of the sensory engagements themselves, which, for visual sampling, for the fovea, is basically a thumbnail held out at arm's length. That's basically all you've got. And so it might be that in any given moment that is actually what's in your visual experience, it could be, but you're just moving it around, and so you never notice; there's no place to stand to look at yourself not realizing it. It's kind of like the refrigerator light: whenever you open it, there it is, the lights are on. Though perceptually I've suggested that there is some sort of richer filling-in. So I've suggested, and Kanai has suggested, that you can think of cortex in terms of the principles of variational autoencoders, and part of what they can do is fill in something richer. For instance, they might be used, as for a 4K or an 8K television, for super-resolution, for upsampling: you have a more impoverished signal and then you fill in the missing details, or you have an occluded image and then you fill in what wasn't there. And maybe this is partially how we handle the blind spot, that's unclear. But how much filling-in exactly happens, I have no idea. It could be almost like an Etch A Sketch thing at the really shockingly low-dimensional resolution of the actual sensors themselves, or not, I don't know.
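As an illustrative stand-in for the trained decoder of a variational autoencoder, which is what the filling-in claim actually invokes, here is a toy completion of an occluded sensory patch from a prior learned over past inputs; everything in it is invented for illustration.

```python
# A toy stand-in for a trained variational autoencoder decoder: complete an
# occluded patch of a sensory array from a prior learned over past inputs.
# A real VAE would learn a generative model; here the "prior" is just the mean
# of previous frames.
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(loc=np.linspace(0, 1, 16), scale=0.05, size=(100, 16))
prior_mean = frames.mean(axis=0)          # the "learned" expectation over inputs

new_frame = frames[-1].copy()
new_frame[5:9] = np.nan                   # an occlusion, or a blind-spot region

filled = np.where(np.isnan(new_frame), prior_mean, new_frame)
print("filled-in values for the occluded region:", np.round(filled[5:9], 2))
# The percept looks complete, but the occluded region is generated, not sensed.
```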
Another thought on the richness or the paucity: those are comparatives, relative terms. Imagine another agent in the niche with different affordances, where you can only see 400 to 700 nanometers and nothing beyond: who's having the rich experience then? So we are having our experience, and then, as you're pointing out, feeling like things are not changing, or like we have good clarity outside of our main focal point: that's a story we're telling ourselves, which actually might be a very adaptive and self-protective belief. Because if being unsure about something produces, or is, anxiety, and tunnel vision is sort of our de facto sensory reality as far as the resolution capacity of the retina goes, then how do you square the circle of limited sensory capacity without having overwhelming anxiety? Well, you make rapid sampling movements, and you keep a continual story that everything is okay, until something semantically is not okay in the visual field. Dave? Yeah, did you see the YouTube address that I sent through chat several minutes ago? I don't look at any chats during the livestream, but what can you tell us about it, or how would people find it? Yeah, this is a recent presentation by Mark Solms, summarizing his book that just came out. He's addressing how it is that Thomas Nagel and David Chalmers ask these really good, really important questions, super smart people with a tremendous amount of knowledge who have been working at this for decades, and some of them are despairing: how can we ever possibly answer these questions? And he said: well, if you refuse to pay attention to the brainstem and the midbrain, and think you're going to find the answers to consciousness and what-it-is-like and feeling in the cortex, you're not going to find it, because it ain't there. You want to know what it feels like, and you've confined yourself to a part of the brain where feeling isn't happening, the part that isn't dedicated to feeling, but rather is a sort of computing machine that might pick up some consciousness when the brainstem needs the neocortex to help it do consciousness. So I'll send, I don't know, maybe I should send an email, or the other thing, leave a comment on this YouTube video, and if someone's curious they'll check back. I don't know if I can do that; I've tried to put comments on YouTube and I see no evidence that anything I put in ever goes anywhere that anybody can see. Interesting. But anyhow, yeah, you can put it in the text maybe later. The other thing, maybe you've both heard me talk about this before, but some folks were working on the memory that's used for chess, the recognition of chess positions. They took a bunch of people, from real beginners who basically know only the legal moves, to really good players, to masters, and they would give them chess positions and let them look for a few seconds, or 30 seconds, or however long, that was varied, and then say: now write it down for me, or tell me what you can remember about it, 30 or 60 seconds or 15 minutes later. And with a real position the masters were really, really good; they could sometimes reproduce all of the pieces in an endgame or a midgame, and the amateurs could barely do it at all. They'd say: well, White's kind of still in White's territory, and the right side of the board had a lot of stuff in it. But if they put up impossible positions, positions that you cannot possibly reach by legal moves, the masters would get really upset because they couldn't remember; in fact, the amateurs were better than the masters at reproducing impossible boards. And the folks analyzing this said:
well, it seems like people who know chess don't remember positions at all; they remember trajectories. That seems to be the only thing they remember. And then they drilled down on that: how can you disrupt remembering trajectories, and so on. And they said: yes, this seems to be what people remember, the meaningful moves; they don't remember positions. I think that might address some of the things Adam was saying about what's really deflated, what is really the more efficient way to store data. You've got storage in a computing machine, and you've got the knowledge and the information about my position and where I want to go from here that the living brain uses, and they may have no relation. Thanks for that, Dave, and I'm going to pass to you, Adam, with a question from the chat. Cambridge Breaths asks a question to Adam regarding the computational approach to neurophenomenology: how would the generative passages possibly help to describe consciousness in a way that generative models can't? So it sounds kind of related, I'm not sure, but the generative passage, this trajectory idea, I think, is what we're getting at. Anything you wanted to say or respond to in that? Okay, so a few things off the stack, and I'll address that to the extent I can. With relation to the different types of potential illusion, or richness, or lack thereof: there's one book that reviews a lot of these phenomena, The Mind Is Flat by Nick Chater, which is interesting. I don't know if the book should be called The Mind Is Flat or The Mind Is Surprisingly Rich and Deep; it's a matter of perspective. And I also want to address Solms and the idea of a conscious brainstem, which I'm going to push back on a little bit. But I'll go first to the generative passages versus generative model idea, and I think that actually speaks to what you were saying, Daniel, in terms of wanting this balance of information: dealing with the information you actually got, and then what the agent can handle in terms of its effective read on itself, by which it's regulating itself as a cybernetic system, not getting overwhelmed by uncertainty at a given point, and potentially a kind of additional filling-in, both for the sake of having a richer causal model of the world, but also just to not freak out the agent, to allow it to be more copacetic in handling the uncertainty it's going to have to handle in engaging with the world, potentially via some sort of integrated world model. Now, I'm still trying to understand aspects of this. There's a preprint that just came out by Maxwell Ramstead and a few others on neurophenomenology, which talks about this generative passage idea. But part of it would be whether the modeling processes we're doing are temporally deep and counterfactually rich, which would be different than, for instance, just inverting the generative model from your senses. So, for instance, take the posterior lobes, which IIT says are the physical substrate of consciousness, and which could, on another perspective, be viewed as a kind of workspace that provides global availability, or as a predictive global neuronal workspace: this could be predicting what you think your sensory states are at each point, and this could get you a good amount of temporal depth and counterfactual richness.
However, really, the unfolding is not just a sequence of sensory estimates stitched together through time; they're enactively stitched together through time. The whole process, whether you're engaging the world or imagining it, is these action-perception cycles, and so now the frontal lobes are in the mix, and now it's getting complicated: the agent is shifting its perspective around within either an actual world it's engaged with or some sort of virtual world, with some relation between these. And this is now more extended; this is a different thing, where the closure of these functional cycles, the informational closure, is more extended. Things like the inherent temporal depth of the present moment, the sort of specious present, might be part of it. For instance, it's not just, as I'm suggesting, that the actual qualia are discrete, being estimated at alpha frequencies primarily, with foci kind of encompassing the posterior lobes, where you're getting these non-equilibrium steady-state (NESS) densities, or, as I call them, self-organizing harmonic modes, and this is the stream of experience. What's in the stream, however, involves other things; it involves this cybernetic action-perception context, and where it begins and ends, the actual operations engaged with, that's a different and much more complicated story. And I actually think the hippocampal system might be at the core of that, and that it's going to be the key puzzle piece. I'm currently working on moving from this basic question of what could do phenomenality to what could do these more complex forms; I'm trying to grok the hippocampal system to figure that out. But that's one distinction: generative passages versus just the components of a generative model being inverted, which could entail basic creature consciousness, phenomenality, just the estimates of the world, the stream of experience. I don't know if that makes sense. One more thing, though. Yeah, just one thing on the passage, and then we'll go to the other topic. Think about memorizing something like the chessboard, but now think about memorizing a sequence of sounds. If there are sounds that you cannot make, like some sort of whistle or a sound that a human can't produce, it's going to be really hard to memorize; you could recognize it, but it's going to be hard to reproduce. Then there are sounds that you can make in a language you don't know: if you tried hard enough, you might be able to memorize a song in a different language, but it's going to be hard. Next on the scale of how easy it is to memorize would be random words in English, or a language that one speaks, but not grammatical; that's like the chessboards that are impossible: real pieces, but not linked in a lawful way. Now up another level of easy-to-memorize: natural language, like conversation, could be pretty easy to memorize. Another layer up would be composed natural language, like a story or a poem. And then even above the story or the poem, as we know from those who have great experience with memorizing epics, is the embodied and enactive poem, where it goes from the head of the story to the toes of the story, where it invokes your own body. And so when we think about this generative passage,
it's almost like it's increasingly natural for us to condense or distill memory in narrative, increasingly enactive, increasingly self-referential ways, whereas stimuli that can't easily be brought on board our own narrative in a relational way are kind of like non sequiturs to our memory. So that's who and what we are, the state of the system and how it responds. And then, also on the passage versus the generative model: even though we know in active inference that it's beyond internalism and externalism, a tale of two densities, so it's not just that the generative model is a matrix, thinking about a generative model makes you think you can pause the sensory input and just ask, okay, what's the state of the model? Whereas when we think about generative passages and trajectory estimates, we get past, present, and future; we get self and other; and we get planning under uncertainty. So those are some of the very key aspects of passage inference that aren't quite captured by just asking about the generative model, which makes it sound like you're doing a linear regression. Okay, then to the other aspect. Yep, that was beautifully said; just need a moment. Okay, so for the brainstem part: I really like Mark Solms's perspective, and I'm actually very sympathetic to neuropsychoanalysis. It's developmental, it's relational, and it acknowledges that, since we are, you could say, reinforcement learners, it would be shocking if there weren't things like defense mechanisms, because there are some thoughts you're having more of and some things you're having less of, and the contingencies of value and optimization are too numerous and varied, sometimes too subtle, for you to track, and so you can go down all sorts of garden paths. So while you might question particular aspects of the Freudian framework, I think we probably should be thinking something kind of psychoanalytic to get at the weirdness of mind and what it's like to be a person. That being said, while I agree about the fundamental necessity of the brainstem as part of what's contributing to consciousness, and not just as being necessary in the sense that you take it out, you're done, the claim that the brainstem itself is a locus of consciousness I find somewhat questionable, and I think we're in an even worse position there in some ways. For one, I would question the evidence that's cited for this; but the other reason is that I think you need, at the very least, something that can do temporally deep generative modeling of a kind that would be able to give rise to an integrated world model, and it seems like the brainstem doesn't have what it takes. I suspect you need the thalamocortical system, with its ability to do this hierarchical sequence memory and its ability to capture this hierarchy of timescales, all coupled to model, hierarchically, the hierarchy of dynamics unfolding in your engagement with the world. You need a system that has those properties, and I don't think the brainstem quite has what it takes. Basically, for me to have consciousness, I want to have, as I say, an integrated world modeling theory: an integrated world model that has coherence with respect to space, time, and cause, and maybe also agency, some sort of basic autonomy, as part of it. And these are all necessary properties that potentially might be jointly sufficient, but they're definitely necessary.
And so I don't think the brainstem can do all that. In terms of the data they cite: one example would be anencephalic patients, children without a cerebral cortex, or where it's mostly not there, who are showing this rich emotionality. My pushback would be that emotions go all the way back through all life, like Damasio's thinking, as in The Strange Order of Things and the 2012 book, I forget what that one's called. Emotions are these coherent changes in organismic modes that correspond to means of engaging with the world, to classes of events; it's a state of action readiness over your system that allows you to adaptively couple, and I think that's what emotions are. And so this is being shaped all the way back to the single cell, and maybe we want to push consciousness that far down; I personally wouldn't. But I think you can have very rich unconscious emotions, very rich affects, and they should be organized organismically, and so at the level of the brainstem you'll see these things: you'll see, for instance, potentially laughter, you'll see fear, you'll see rage. But I would argue you'll see, in some ways, homologues of these things going all the way back to bacteria: a bacterium going into a protected form to defend itself, that could be a kind of fear thing; or a bacterium releasing endotoxins, that could be a kind of anger response; or extending a pilus to exchange little circular bits of chromosome, that could be a kind of bacterial lust. I don't think there's anything it is like to have these affects for the bacterium. So I push back in that these reflex arcs being engaged coherently, these things that are at the core of our experience, and for Solms the most important part, could potentially be done just mechanically, from a kind of Brooksian-robotics, emergentist perspective, where the system's intelligence is in what's available, the different modes its body can go into, which allow for certain types of adaptive coupling. One more thing I would say: they also point to Panksepp's sort of work, where you'll stimulate these areas and then they'll generate the emotion. Okay, but I think that might be the same story as the generation of affects. We don't know; or rather, we know that if you stimulate these areas you'll generate the emotion, but whether that area itself is sufficient for it, that we don't know; I would suggest probably not. And then, for us to have the experience, the issue is that we have these looping effects; you haven't cut it off, it's circular causation. So you stimulate these brainstem nuclei or hypothalamic nuclei and you generate the emotion, but the cortex is still online for us. So I think the evidence is questionable for the conscious brainstem, even though I love everything else about Solms's perspective; I think we should move away from conscious brainstems. Okay, thank you. Yeah. Freud talks about qualitatively different kinds of consciousness, the consciousness that's in the brainstem and the cortical, or at least, I'm sorry, I shouldn't say it that way: the way Solms updates Freud, there's a certain consciousness that does seem to be confined to the brainstem, and, listening to his talks, he will say it's conscious
and then in the next sentence that it's unconscious, and that differentiation is really important. It may be that if we were in the habit of talking about what I'm kenning just now versus what we're witting, it wouldn't be as confusing, if we separated the body knowledge from the visual, or visual and aural, knowledge. Thank you. Yep, thanks for that, David. I'm not ready to say that consciousness isn't a helpful word, but when you're talking about subtypes, just brainstem function, or the neurocomputational components of brainstem function in this or that state, then all of a sudden it's as if it's been explained away. And then there's also this question about the lesions and the measurements, which is what's really important to us when we're thinking about this pragmatically. It reminds me a lot of genetics, where they'll take a gene, even just going with the molecular DNA concept of a gene, and they'll knock it out, remove its function. Let's say blood pressure goes up when you delete the gene; they'll say, oh, well then this gene lowers blood pressure, because it went up without it. But that's actually not true. To use your terminology, the blood pressure is a phenotype that is like a self-organized harmonic mode, and when you change one of the nodes, if it's a buffered node, it's not going to change the mode at all; that would be a phenotypically silent mutation. Or it could be a key linchpin, like in the epigenetic landscape behind me: change one of the nodes that are key, and it changes the self-organizing mode. So just like you said with a brain region: when you lesion it and that removes some trait, that doesn't mean the trait was in that region. It's actually a totally integrated network, and you change the network's, the system's, phenotype when you change the system. So I'm always extremely skeptical of "the removal of this brain region removed this trait, so the trait is in there." You could almost say it isn't in there, because the system was able to reorganize without it. It's almost the opposite of what people think is being shown with lesion experiments: what's being shown is that the system has a capacity that is modified by key elements. If you inject ketamine into your wrist, there's going to be a different phenotype than if it goes into some brain region, and there are brain regions where a different drug being dripped on will lead to different phenotypic changes. So that's not simply evidence that the brain is the locus of consciousness. It's just saying, yeah, your elbow is a pivotal spot for the movement of your arm, and there are brain regions that play critical message-passing and neurofunctional roles, but that doesn't uniquely license this sort of object-reductionist or regional perspective. This is fun stuff. Adam, yeah, go for it.
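To make that knockout point concrete, here is a minimal toy sketch, entirely invented and not from the paper: the "trait" is global synchrony in a small Kuramoto oscillator network, and lesioning a buffered peripheral node versus a hub node affects that network-level mode very differently, even though the mode was never located in either node. Exact numbers will vary run to run.

```python
# Toy model (invented, not from the paper): the "trait" is global synchrony,
# a property of the whole coupled network rather than of any single node.
import numpy as np

rng = np.random.default_rng(0)

def order_parameter(theta):
    """Kuramoto order parameter: near 1.0 means a synchronized global mode."""
    return np.abs(np.mean(np.exp(1j * theta)))

def simulate(K, omega, steps=4000, dt=0.05):
    """Euler-integrate the Kuramoto model with coupling matrix K."""
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    for _ in range(steps):
        # each node is pulled toward the phases of the nodes it couples to
        dtheta = omega + (K * np.sin(theta[None, :] - theta[:, None])).mean(axis=1)
        theta = theta + dt * dtheta
    return order_parameter(theta)

def lesion(K, omega, node):
    """Knock out one node; return the lesioned couplings and frequencies."""
    return np.delete(np.delete(K, node, 0), node, 1), np.delete(omega, node)

n = 12
omega = rng.normal(0.0, 0.15, n)   # intrinsic node frequencies
K = np.full((n, n), 0.2)           # weak diffuse coupling everywhere
K[:, 0] = K[0, :] = 3.0            # node 0 is a strongly coupled hub

print("intact synchrony: ", simulate(K, omega))
print("peripheral lesion:", simulate(*lesion(K, omega, 5)))  # mode barely changes
print("hub lesion:       ", simulate(*lesion(K, omega, 0)))  # mode can collapse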
Yeah, I guess, in terms of things like virtual machines and what types of systems: so I'm pushing back on brainstems, but I'm also in some ways broader, maybe even contradicting Carl's perspective, in that I'm actually really happy with consciousness being supported by a virtual machine. In fact, I think you might even want to think about it as a kind of virtual machine. For me, though, part of what I want to have happen is not just potentially the ability to have a hierarchy of representation with synchronous dynamics facilitating the message passing. What I'm going to want is for these estimates of the organism's, the system's, engagement with the world to be generated on a timescale that would allow them to both inform and be informed by evolving action-perception cycles. So the role, for me, of this basic form of awareness would be a series of estimates which, through their high degree of coherence, can also exert powerful control energy; they end up constituting this slow synergetic attracting manifold that can enslave the rest of the system, to degrees, and influence its dynamics, basically influencing action selection. A series of estimates that can influence action selection and be updated via your engagement. This doesn't necessarily need to be a brain, but for me, I want a rich enough estimate of the global state of the system, and I want these estimates to occur on a timescale where they can, significantly enough, with enough grip that you can actually do active inference, both inform and be informed by the engagement. And one more thing, potentially two. I would want a large enough network that you can get some sort of non-equilibrium steady state, or self-organizing harmonic mode, among some subset of it, where closure would be achieved within the system, so that you could describe such an object, such a harmonic, such a nested density. And I think what you might need is a big enough system that you can get almost an inner loop, so that when you're forming this metastable state, it's not immediately disrupted by the engagements with the world via the sensory states. I want it big enough that it can form these internal complexes. Can I tie this to a question in the chat, because I think it's great? CambridgeBreaths asks: Carl Friston was asked if hurricanes have a Markov blanket; he said, reluctantly, no. What would that say about Markov blankets in general, and is there any mathematical way to attribute a Markov blanket to a hurricane? So I'm going to put up the image where we have the blanket states, whether it's a Markov or a Friston blanket, and the internal states with a generative model. Given what you just said about the persistence through time of an inner loop, how would you relate that to this question about the hurricane, and about which systems, well, is a hurricane conscious? It has little inner loops, doesn't it, little turbulent eddies. But what is it about the sustained memory and this inner looping: where does that come into play, and is that almost a way to go about identifying which systems might be able to be measured as conscious, because they'll have internal dynamics that are complex?
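On the "is there any mathematical way" part of that question, there is at least one well-studied setting where a blanket can be read off directly. A hedged sketch, assuming a Gaussian (linear) system, where the sparsity pattern of the precision matrix is the conditional-independence graph; the partition and all numbers below are invented for illustration, and this is the undirected notion of a blanket rather than Friston's directed parents-children-coparents construction.

```python
# For a Gaussian system, a zero entry in the precision (inverse covariance)
# matrix means conditional independence, so the Markov blanket of the
# internal states is simply every other node with a nonzero entry into them.
import numpy as np

# invented sparsity pattern over 6 states; symmetric and diagonally dominant
P = np.array([
    [2.0, 0.5, 0.4, 0.0, 0.0, 0.0],
    [0.5, 2.0, 0.0, 0.3, 0.0, 0.0],
    [0.4, 0.0, 2.0, 0.2, 0.1, 0.0],
    [0.0, 0.3, 0.2, 2.0, 0.0, 0.2],
    [0.0, 0.0, 0.1, 0.0, 2.0, 0.4],
    [0.0, 0.0, 0.0, 0.2, 0.4, 2.0],
])

internal = {0, 1}
blanket = {j for i in internal for j in range(len(P))
           if j not in internal and P[i, j] != 0}
external = set(range(len(P))) - internal - blanket
print("blanket: ", blanket)   # {2, 3}
print("external:", external)  # {4, 5}

# By construction there is no direct coupling between internal and external,
# so internal and external states are independent given the blanket.
assert all(P[i, e] == 0 for i in internal for e in external)
```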
No idea, especially with the hurricane. Although a thought I've been trying to think through recently: I'm wondering whether this inner loop might actually be important. Maybe we can only define a Markov blanket when you have this sort of separability, and maybe only for these inner-loop processes, where you're not immediately being disrupted by the engagement, where the boundaries aren't just immediately effaced. You could maybe still do inference and coupling without that, but it wouldn't be this; you wouldn't have these series of estimates. The thing I'm wondering, actually, is whether in some ways the solution to the hard problem and the solution to "what is life," along the lines of Markovian monism, might be one and the same: that the boundary of life and non-life, the boundary of conscious and non-conscious, what actually makes for this adaptiveness, is to have estimates which can evolve slightly independently from the world, such that you can do things like counterfactuals in your modeling, this sussing out of different possibilities, and then use that as a means of surfing your uncertainty, your engagement. Now, I wouldn't want to go as far as to say that all life is conscious; I'm not personally of that view, although you can make a case for it, because I want the additional coherence properties, which I don't think all life has. But this idea that the boundary problem of life and non-life, and of unconscious versus conscious perception, might be exactly the same issue. Let's connect it to what the paper says, because I think this is really what we're on the brink of. Here are quotations from the paper; this is about where the free energy principle and active inference are going to step into some of these very challenging perennial questions, like what is consciousness, how do we measure it, who gets to have it. The first summary point of Wanja and Karl was: according to the free energy principle, we need to be thinking about neural dynamics, not just neural states. I think that's very related to what we've been talking about. Not just the states, like, oh, this neuron has this action potential and that one's doing that; we need to think about the trajectory through time. A short, clean point. And the third point is also short and clean; it's more of a literature claim, about how this treatment generalizes other people's models. So the first and the third points are short and sweet; let's really go into the second point, because it relates to the difference between being conscious and simulating consciousness. People can have different opinions on this, but this is where the authors take it, and then to what kinds of biological or non-biological systems might have, or be, conscious. So, hurricane; how about a pilot in the hurricane? What do they actually say here? According to the free energy principle, there's a relevant distinction to be made between the probabilities of the neural states, or trajectories, and the probabilities they encode. I had to read this a bunch of times, because the words sound so familiar, but it's basically saying: there's the probability of the neural state itself, the probability that the brain is in that exact configuration, which it's never going to return to, and there are the probabilities that the brain is encoding. For example, if the brain is involved in a decision-making task, there's the probability of the microstates of the brain, which you can reduce all the way down, and it's never going to replicate those microstates. So instead of focusing on the probability of the states, or even the probabilities of the trajectories, which are always going to be vanishingly small, we can think about the probabilities encoded by those states: the action readiness, say. Is this person in a global dynamic state where they're action-ready, like they're awake, or not action-ready, like they're asleep?
How are we going to make this subtle shift from thinking about the probabilities of states and trajectories to the probabilities encoded by states and trajectories? This is where they introduce information geometry, and they call this partitioning, between the probability of the state and the probability encoded by the state, intrinsic versus extrinsic information geometry. In line with this distinction, neural dynamics are movements on the intrinsic statistical manifold, and the computational component, what the observer is modeling them as, the shortest algorithm that generates that behavior, is movement on the extrinsic manifold. And this relates to being conscious versus simulating it. For the hurricane, let's just say those two manifolds are actually mapped one-to-one: there are no hidden states; the microstates and the macrostates are one and the same. Whereas when you have systems that occupy vanishingly rare microstates but converge on reproducible algorithmic, computational, or behavioral features, that partitioning can tell us something. If there's a system where those are one and the same, like a double pendulum, and you're curious about its behavior, it could still be chaotic, but there are no hidden interactors; whereas there are other systems that we have to approach computationally in a really different way. I'm not sure that's exactly correct, but that's what we tried to read about in this paper and explore: where are the intrinsic and extrinsic concepts coming into play, and how do we take all these interesting things that are happening here and move toward a definition, or some sort of operationalization? Adam?
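A toy sketch of that intrinsic/extrinsic distinction, with all numbers and the readout matrix invented: two neural microstates can be far apart on the intrinsic manifold (the states themselves) while encoding nearly identical beliefs, and a nearby microstate can encode a very different belief.

```python
# Intrinsic geometry: distances between neural states themselves.
# Extrinsic geometry: distances between the distributions those states encode.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# invented readout: only the difference between the first two components
# of the neural state matters for the encoded belief
W = np.array([[ 1.0, -1.0, 0.0],
              [-1.0,  1.0, 0.0]])

def encode(s):
    """Belief over two hidden alternatives encoded by neural state s."""
    return softmax(W @ s)

s_a = np.array([1.0, 0.0, 0.0])
s_b = np.array([6.0, 5.0, 9.0])   # a very different microstate...
s_c = np.array([0.0, 1.0, 0.0])   # ...versus a nearby microstate

print("intrinsic |a-b|:", np.linalg.norm(s_a - s_b))       # large
print("intrinsic |a-c|:", np.linalg.norm(s_a - s_c))       # small
print("extrinsic KL(a,b):", kl(encode(s_a), encode(s_b)))  # 0.0: same belief
print("extrinsic KL(a,c):", kl(encode(s_a), encode(s_c)))  # large: belief flipped
```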
Yep, go for it. I don't know, but that won't stop me. So, as I was saying before, I think we might actually want to think of consciousness as a kind of virtual machine, and especially once we get to these generative passages and the richer senses of consciousness, I think we're definitely dealing with a virtual machine in those cases. But in terms of this place where there's a one-to-one mapping between the extrinsic and intrinsic manifolds, this one-to-one mapping of information geometries, let me come back to brains. For instance, I'd focus on alpha frequencies, because if you look at the scope of what could be synchronized, it's these big swaths of posterior cortices. About 8 to 12 times a second, you can generate these metastable states where the neurons are all basically more aligned in their activity; I think Pascal Fries would call this communication through coherence. There, you can actually get integration and message passing. So I'm claiming that this is entailing basically some joint marginal of the sensorium, of the engagement with the world, or potentially a maximum a posteriori estimate thereof; I don't know which. And part of the relation is, I'm thinking this would be maybe the first inner-loop-type process that you might get, and in terms of its relation to the world, it might have certain special properties. I have this kind of weird idea that by the time you get to the precuneus, I mentioned these posterior medial cortices, you're going to get ensemble activity of neuronal populations that has a kind of chronotopic, or consistent time-varying, relationship between the neuronal ensembles evolving and what's happening in the world on the organismic scale. So if I'm moving my hand like this, above my eyes, there might be some mapping into the ensemble activity, which is then changing at roughly that timescale, and you can detect this. What would partially bring it together into this Cartesian-theater-esque thing is that I suspect it's not just chronotopic, in terms of a correspondence in time; there might even be quasi-topographic, Euclidean properties to it, in terms of the ability of, for instance, what's in the mind's eye to couple with the oculomotor system, which itself has a kind of coherent topography. But I guess the idea would be: thinking of consciousness as discrete updating of the generative model, 8 to 12 times a second in terms of phenomenal consciousness, and these discrete updates, if you look at them, are divorced; there's not a direct mapping by the time these functional cycles achieve closure. And I don't know whether that is consistent with Wanja and Carl's argument. So the 8-to-12 part seems like an implementation-specific detail, right? It could be totally important for us, but it may not be a general feature. Now let's look at this equation set. We're thinking about this intrinsic-extrinsic information approach, and it's saying the free energy of, well, it's a little confusing, because pi is used to mean policy in almost all situations, but here I think pi actually comprises the blanket states and internal states, so it's a slightly different pi variable; we could get that clarified. But the point is that the system is going to be acting as if it's minimizing free energy, that is, complexity minus accuracy.
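For reference, the standard decomposition being gestured at here, in generic notation rather than the paper's: q is the encoded (approximate posterior) density over causes x, p(x) the prior, and p(y | x) the likelihood of sensory data y.

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(x)\,\Vert\,p(x)\,\big]}_{\text{complexity}}
\;-\; \underbrace{\mathbb{E}_{q(x)}\big[\ln p(y \mid x)\big]}_{\text{accuracy}}
```

Minimizing F therefore maximizes accuracy while penalizing complexity, which is the trade-off being discussed.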
Now, in the case of what I'll call a one-to-one system, whether it's a double pendulum, which is simple but chaotic, or a hurricane with its turbulence and Reynolds numbers, that system is not acting as if it's co-optimizing this trade-off of accuracy and complexity. Even if you said, well, the hurricane is like a model of its environment, it's actually a perfect model: it's not minimizing the complexity at all. I'm not making an absolute claim, just saying. On the other hand, if you take the intentional stance and you're thinking about an agent, and that agent is acting as if it is minimizing free energy, that means its extrinsic information manifold is rich according to this Lempel-Ziv complexity metric. When the two manifolds are the same, there's no delta; it just is that system. It's almost like when you have a total state-space representation of the mapping, there's not much more you need to express. You say, well, the chatbot said "I'm conscious and I love the Beatles," but that's because that's what was preloaded. And that's actually why the neuro-reductionist direction is, I believe, ultimately fallacious: let's say you had every atom in the brain, then what? Are we conscious or not? But people are acting like it's just a few more measurements, a couple more EEGs. No. On the other hand, when we separate the internal and the external information geometry, if we find systems that have a big delta, they're acting really, really as if they're minimizing complexity minus accuracy, and that extrinsic information manifold, whether we measure it with Lempel-Ziv or another type of complexity, is complex. A human might get a tweet every hour and then engage in some super interesting behavior; that would be a big divergence between the intrinsic and the extrinsic manifold, and the extrinsic behavior would be difficult to compress with respect to the inputs, so it might land at a higher complexity. So there are really two things: how much does the system look like it's minimizing its free energy according to this structure, and is the extrinsic information manifold complex? And then, what are the measures of complexity? I know that's a bunch of things at once; I'm trying to trace the argument of the paper, because these might be the kinds of measurements, or comparators, even relative measurements or ratios, that help us understand. Adam?
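A hedged sketch of the kind of complexity measure just mentioned: zlib's DEFLATE is LZ77-based, so compressed size over raw size is a common rough proxy for Lempel-Ziv complexity. Both "behavior streams" below are invented stand-ins for a discretized behavioral time series.

```python
# A regular, hurricane-like stream compresses almost completely; an
# irregular stream does not, landing it a much higher complexity score.
import random
import zlib

def lz_ratio(seq: str) -> float:
    """Compressed size / raw size: higher means harder to compress."""
    raw = seq.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

periodic = "ab" * 500  # regular and repeating
random.seed(0)
irregular = "".join(random.choice("abcdefgh") for _ in range(1000))

print("periodic stream: ", lz_ratio(periodic))   # tiny: trivially compressible
print("irregular stream:", lz_ratio(irregular))  # much larger
```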
So I guess, first, to speak again to what you're saying about generalization: not getting too bogged down by the specifics of how this might be realized within, for instance, the mammalian nervous system. I think that's really important, and I don't want to do that; I think some of the principles would generalize. And so I'm wondering whether, in terms of this closure with respect to action-perception cycles, it's actually double the frequency: you're nesting two quale states within a broader envelope of a functional cycle that closes on a slower timescale. For the brain, this would be something like theta, potentially orchestrated by something like the hippocampal system, which might be doing a kind of comparator operation, like contrastive energy-based learning. But this would be a more general thing: for the organism to adaptively navigate the world and generate a causal model, it might be that you have this more extended, generative-passage-level unfolding, and within it you're forming other nested densities or harmonics on other timescales, which you can then use as objects that you compare and put in relation to each other. So in any such system, I'm thinking, you would have essentially these nested Markov blankets, where some of them would correspond to the system as a whole and its engagement with the world, where it's its own best model and it's just the whole, a perfect mapping, like the hurricane or whirlpool example. But then, as you move inwards or outwards, you're getting these other nested densities forming on different timescales, whose Markov blankets encompass the other ones. And some of these, I think, we really should be thinking of in terms of virtual machines: you can't point to one at any given moment and say, aha, there you are, and it might actually extend not just within the system but in between; it would be relational, out into the world. And I'm personally suspecting that within this onion of nested Markov blankets, some corresponding to virtual machines, some corresponding to the world as its own best model, there might be a point where it's like, aha, there you are. I'm personally thinking that for the mammalian brain, it's at the alpha frequency. And I guess one more thing, in terms of the difference between simulation and emulation. In the figure that Wanja showed, where he had the description of the von Neumann architecture and a memory register labeled with the terms from active inference, I think that mixed ontology is revealing, and it could potentially mislead. I'm wondering whether, by the time we've committed ourselves to having these kinds of active, sensory, and blanket states that you can put into the memory register of a von Neumann architecture, we actually ought to be coarse-graining differently, and we shouldn't be thinking of the von Neumann architecture anymore. I don't know if you can actually do that analysis on it; I think it might break the rules of causal inference. I totally agree with that. Whether it's lambda calculus or some other algorithmic description, the current implementations of computers are only one way, and if we mistake the map for the territory, or the specific for the general, then we're going to say, well, it's the RAM, you know, if you lesion the RAM, it doesn't have short-term memory. Doesn't that sound like these computationalist, reductionist researchers, who work without being clear whether they mean the general or the specific, the brain or the computer? And the quote that reminded me of is by John Muir, the Californian forest lover. He wrote: "I only went out for a walk, and finally concluded to stay out till sundown, for going out, I found, was really going in." So, "I went out for a walk": action planning. "And finally concluded to stay out till sundown": deep temporal inference. "For going out was really going in": because when he was in the niche, acting and inferring, he was being himself. It really captures a lot of the dimensions we're getting at. And then I'll also raise a question from CambridgeBreaths in the chat, related to what you said about nested systems, Adam. Cambridge writes: can we have a non-metacognitive conscious entity, so to speak, or are consciousness and metacognition inseparable? We've used these ideas of the self-reflexive, the autopoietic, the autonoetic; they've come up a ton of times. Are being aware, being conscious, and being recursive one and the same, or might there be some differentiating wedge somewhere? I mean, I guess it depends what we mean by metacognition. A very minimal metacognition, I think, would be necessary, in the sense that you might be able to think of precision weighting as a kind of minimal metacognition, and even ask at what levels you're doing your precision weighting. This might be essential, because you can't do good inference without it. For Bayes to be Bayes, it's this combination of precision-weighted probabilities that gives it its power: weighing things by their relative reliability, that's part of what does it, along with the ability to iteratively refine by leveraging your priors. So we're going to need at least precision weighting to have probably any kind of coherent world modeling. Whether it's exactly Bayesian or not, it's got to be Bayesian-ish enough; for there to be coherence, you need to be able to take into account the reliability of information.
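A minimal sketch of precision weighting as "minimal metacognition," with invented numbers: fusing two Gaussian estimates weighted by their reliability (inverse variance). The point is that good inference requires an estimate of the estimates' own reliability.

```python
# Precision-weighted fusion of two Gaussian beliefs about the same state.
def combine(mu1, prec1, mu2, prec2):
    """Bayes-optimal fusion: precisions add, means are precision-weighted."""
    prec = prec1 + prec2
    mu = (prec1 * mu1 + prec2 * mu2) / prec
    return mu, prec

# a reliable cue (sigma = 0.5) and an unreliable cue (sigma = 2.0)
mu, prec = combine(mu1=10.0, prec1=4.0,
                   mu2=20.0, prec2=0.25)
print(mu)  # ~10.59: pulled only slightly toward the noisy cue
```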
And these precision estimations could be entailed on many different levels. They might be just emergent and distributed, through this game of free energy minimization taking place among all these distributed, coupled elements, but there might also be more centralized control structures which give you additional levels of precision weighting, things like the cingulate cortices, potentially, or the pulvinar. Part of what might make the thalamus so essential for humans could be its precision weighting. And you might see this in a variety of systems; there might even be something like it within a cell, I don't know, but you could need some sort of centralized integration structure for this more global precision weighting to occur. Not sure. But in terms of metacognition, go on. Just to bring it to the Fristonian formalisms: what would you say that idea connects to, when we think about how it was formulated in the paper? To ground the conversation, for listeners and for ourselves. When we ground it in the formalisms, then we can say, well, I don't think the formalism is trying to describe the right thing, or, I don't think it correctly describes what we've agreed is the right thing. A lot of the time, it's easy to invoke these formal frameworks but never clarify what our mapping is. Like, okay, there's a kinetic term, a path-independent and a path-dependent term; sounds kind of relevant, but I'm not exactly sure it's going to transmute into qualia, and that's the gap we want to keep hammering out, again and again. I mean, in terms of thoroughgoingly connecting back to the formalism, you've got the wrong guy. But I think this would at least be consistent with Mark Solms's perspective: for him, consciousness is inherently metacognitive, and he relates it to precision weighting. In terms of connecting it more exactly to Markovian monism, though, I don't know how to describe that well. I guess I had one more thing on the way I'm thinking about it, which is in contrast to a higher-order-thought type of theory. There are some theories which try to collapse the different senses of consciousness; they would reject, for instance, the difference between phenomenal and access consciousness, and they would say you do need these high-level features for there to be anything at all. For me, it depends on the richness of what we mean by metacognition. If we're saying precision weighting, then I'd say yes, you need it; if we're saying things like autonoesis, or reflective self-consciousness, then I'd say probably not, you don't need that. I think there could be something it's like, and you could be generating qualia, and these could just pop up: there was something it was like as they were emerging, and then they go away as soon as they were there, and you weren't able to reflect on them, you weren't able to compare them, and even less were you able to contextualize them in relation to yourself and time in any rich sense. I think that's quite possible. So I don't think you need the richer kinds of metacognition to have consciousness. For the appearance of the world to be like our appearance of the world, then you do; but for just the appearance of any kind of world,
any kind of what-it's-like-ness or what-it-feels-like at all, I don't think you need that. I really love that connection. There's the what-it's-like-to-be, whatever it is, but then the way we actually experience any given individual experience is so cultural that it can really never be disentangled. For example, people will tell you: yeah, there's X, Y, Z, three axes, and they're orthogonal at 90 degrees. Except if you grew up studying synergetics and Buckminster Fuller, then there are four spatial dimensions and the angles are 60 degrees. So is it really a fact of the world that it's X, Y, Z, or is that just something culturally reified in the way people talk about the world, how they measure, and how they communicate? There's always this question about what the general attributes are, but you never observe the genus, the general; you always observe the species, the specific. And so we're in a bit of a triple bind, because we don't know what consciousness is, how to measure it, or what the observables are, per se. It's like a catch-33: you have to solve them all in one approach. Because if you're observing something and you decide it's all about the EEG, and you go way down the rabbit hole with a Fourier transform, well, who's to say? That might be going down the wrong path, or it might be a very valuable piece of information; it is reducing our uncertainty about something, but it may or may not be related to the general question. It might be related like: yes, it turns out this wavelength is a differentiator of whether someone is going to experience pain, or remember pain, during surgery. That's so important, because we don't want people suffering unnecessarily, but then again, is that the general answer to consciousness? Interesting questions. And then also, to show Wanja's 2020 paper, from 7-11-2020, with these different ways that consciousness, behavior, and information could be linked, and I add a little qualia: maybe consciousness is qualia. What if there's a thought without a thinker, or a thinker without a thought? There are all these different options, and it's a space with a lot of possible perspectives; the scientific and neuroscientific literature of the last, say, 200 years is only one facet of a really nuanced story of humans understanding their local world. So it's just really awesome, and we'll have time for just a couple more thoughts and questions. Yes, Adam, and then Dave, if you'd like to add anything, feel free. I can't hear you for some reason, Adam; oh, there you are. Yeah, I personally like making consciousness synonymous with qualia; that's the IIT stance. But in terms of what you were saying about different senses of space, and what you're pointing out in general, which is trying to find non-question-begging uses of terminology and claims: within my own thinking, I talk about spatial, temporal, and causal coherence, but at which level? There's actually a bit of an ambiguity there in my thinking, because I think it's showing up and entering at every scale, but in different ways. For instance, a certain amount could be entailed just in having a hierarchical predictive memory system, something like a hierarchically structured cortex. That might itself have some element of spatial and temporal coherence built in, just via things like the initial levels having a topographic mapping to the world, preserving that to some degree, and maybe reintroducing it again later. So there's this idea of some degree of space-time being preserved by some sort of coupling and some sort of homomorphism, and maybe some sort of increasing temporal depth, of nested causes that unfold on different timescales, via a hierarchy where the deep levels are things that unfold more slowly; you're getting some time just there. And for cause, you might even think of something like spike-timing-dependent plasticity, where you wire up the system based on predictive information coming in the right order, as a kind of proto-causal prior, since causes always precede effects.
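A toy sketch of that proto-causal-prior idea, using generic textbook STDP parameters rather than anything from the paper: the synapse strengthens when the presynaptic spike precedes the postsynaptic one (cause before effect) and weakens otherwise.

```python
# Classic exponential STDP window: dt_ms = t_post - t_pre.
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for pre-before-post (dt > 0) vs post-before-pre (dt < 0)."""
    if dt_ms > 0:   # pre fired first: potentiate ("cause preceded effect")
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:           # post fired first: depress
        return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(+5.0))  # ~ +0.078: strengthen
print(stdp_dw(-5.0))  # ~ -0.093: weaken
```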
However, that's a different sense than, for instance, the richer sense that might be important for the appearance of a world of space, time, and cause, where it comes in as something more constructed. Like, what type of space are we dealing with? If you're in this Buckminster Fuller world, that might be a very different sense of space, one that's not X, Y, Z, and there might be a different sense of time for you, and a different sense of cause, by the time we get to phenomenality; I think it is a different sense. And then there might be yet another sense at the intersubjective, more narrative, meaning-making level of space, time, and cause: the metaphors we use about space and time, the ways we construct these linguistically, this higher-order semantics. And these might couple as priors for each other in all sorts of interesting ways. So there's ambiguity there. And to come back to this idea: I've been thinking that, within global neuronal workspace theory, you could generalize it a bit and treat the workspace as not just this fronto-parietal network. I'd consider that one type of workspace, and potentially a type of virtual-machine workspace, involving these action-perception cycles being closed, whether fictive, imagined action-perception cycles or real ones. But you could also think of, say, just having the estimates of the sensorium: you're getting these nested densities, or self-organizing harmonic modes, if you're getting big swaths of cortex all engaging in marginal message passing, all communicating their beliefs together to create a joint estimate of the engagement of the system with its world. It may not involve the whole system, but if it's big enough, I'd still consider that consistent with a global workspace. It's big, it's allowing multiple things to do this ensemble-slash-federated learning and inference, creating a synergy, a whole that's greater than its parts, making information more globally available, creating "fame in the brain," as Dennett called it. So I'd say that's still a workspace; it's a sort of iterated Bayesian model selection. But what do we mean by workspace, and when we say workspace, at some level do we mean something spatial itself? Is the workspace itself a kind of space? And to what degree is it a constructed space, and what kind of constructed space?
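A cartoon sketch of that generalized-workspace idea, read as iterated Bayesian model selection; the modules, hypotheses, and numbers are all invented. Whichever hypothesis currently has the best evidence is "broadcast" and becomes every module's shared prior for the next cycle.

```python
# Each module reports log evidence for competing hypotheses; the posterior
# is broadcast back as the shared prior, making information globally available.
import numpy as np

def broadcast(log_evidence, priors):
    """One workspace cycle: Bayesian update, then select and share the winner."""
    posterior = np.exp(log_evidence) * priors
    posterior /= posterior.sum()
    return int(np.argmax(posterior)), posterior

priors = np.full(3, 1 / 3)               # three hypotheses about the scene
for log_ev in ([0.2, 0.1, 0.0],          # "vision" module reports
               [0.0, 0.9, 0.1],          # "touch" module reports
               [0.1, 0.8, 0.0]):         # "audition" module reports
    winner, priors = broadcast(np.array(log_ev), priors)
    print("broadcast hypothesis:", winner, "posterior:", priors.round(2))
```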
Ah, and "global": of course I don't mean the globe when I say global, I mean something else. When things are "universal," "global," "basic," there are so many adjectives that help people share memes and ideas but don't always reduce uncertainty about what's being discussed, because they're being used in an explicitly unhelpful way. It's always a challenge to parse out and translate, sometimes on the fly, which types of usage are which: okay, "attention" is actually supposed to be about the thing we're talking about here, but "global," that's a totally different usage. And a bunch of times, within the same sentence, you have words that are drawn from really different namespaces, from quite different views. Yeah, Adam, and then Dave, if you ever want to add anything, go ahead and raise your hand, but go ahead, Adam. Just a quick thought: in general, I think part of it is that we need to use more words, and not argue so much about what words mean, as if we're receiving them from Plato's philosopher-king, handed down to us from on high as ideal forms. Rather, treat words as tools, and make them useful tools. We're going to want to stick as much as we can to common or folk usage, folk psychology, as much as we can, though sometimes we won't; basically, trying to find the usage that does the least violence, that captures the senses we tend to use, while also sometimes pushing back on our common understandings. Along these lines of using more words, like phenomenal consciousness and access consciousness, maybe we want to break those out further. There's a limit to what we can do before the tools become unserviceable in our hands, but I think people are tending to expect this one-to-one mapping of word to the relevant natural kind, and I think that's a problem. I'd say at least two words, maybe as many as three; more than that, I think, is a problem, but with two words, we would do so much better than always using one. Thanks, Adam. Dave? Yeah, you mentioned Plato. On the latest episode of Sean Carroll's Mindscape, the guest says there are two scholars in China who are explicit and proud followers of Leo Strauss, and they have followed his method and read all the works of Plato, each one of which proves that Xi Jinping is the perfect leader and that a world he rules is the best of all possible worlds. And could you please send me an invitation to the Markovian monism talk, please? Thanks. Thanks for the statements and claims and questions. Yes, go ahead, go again. So, in terms of one concrete debate in consciousness studies: it seems like a lot of the dispute between global neuronal workspace theory and integrated information theory could be resolvable, with them just talking about consciousness in different senses; if we add additional words, this becomes clear. Instead of engaging in this deathmatch over the word, "there can be only one," it would be like: okay, maybe IIT is focusing on phenomenality, and maybe GNWT is focusing on access. When IIT says that the physical, and essentially computational, substrate of consciousness tends to be coarse-grained by the posterior hot zone, they might be largely correct, and when GNWT says it's actually a more broadly distributed fronto-parietal network, they might be correct also. But in one case, they're talking more about
phenomenal consciousness, and in the other case, more about access consciousness. So there's this huge adversarial collaboration happening, but I'm worried that it's all in the wrong terms, assuming that one framework will be the victor, when actually it could be that both are right, rather than it all coming out awash. Sure, let's go for a minimal unifying model, like Wanja's thing, but this minimal model might take in both parties, and they might both be happy: the IIT people might be happy, and the GNWT people might be happy, if we just use more words. Yep. No one has to be wrong. You might be, but no one has to be wrong. I'll give a take on it. Dave, though? Yep, maybe some of the folks who helpfully remind us that the map is not the territory should just go back and read Science and Sanity. Sounds good. And what I kind of got out of there, with consciousness, and this is hopefully an approach we're enacting and carrying through, is: how are we going to share the tools, maybe the statistical parametric mapping (SPM) toolkit, maybe the developing active inference toolkits? How are we going to share the tools and align on action that is within everyone's affordances, within everyone's niche, rather than zero in on the eternal, essential, and truly fundamental differences in our perspectives and contexts? You're not going to get people to agree on what a pizza is; you're not going to get people to agree on what culture is, and we're way, way past physical objects here. So how are we going to have a story, and an approach field, where we can come together and reduce our uncertainty about our physiology, and our emotional responses, which are part of our physiology, and focus on the before-the-decimal-point shared reality, rather than the after-the-decimal-point divergences in the seventh adjective and the list ordering? Adam? I think what you're saying is, in some ways, the essence of the way to proceed, in that, like I said, no one has to be wrong, because we all have to be wrong all the time. All models, all metaphors are wrong; some of them are useful, and we're going for maximal use. And it's not that this is postmodern wish-wash and anything goes. There are going to be some uses of terminology that are more or less helpful for getting different grips on the latent world, for which all of our words are mere approximations. There is a world; there are better and worse ways to describe it. But in these better and worse ways of describing it, there's always an interest, and it's our interests, so I think it's about bringing our values back into the picture, hopefully our shared values, and so we have to proceed. Yeah, I totally agree. I think the movement reflected by people calling for transparency, or interpretability, or reproducibility in machine learning, say, reflects a desire to really weave together who is being served and who is being influenced, our values, with technical details. It's been special, and really interesting, to think about how you can call out some technical component and it provides a starting place for actually fully intertwining our values. And yeah, it's going to be a tightrope to walk between "everyone's correct," "no one's correct," "everyone is neither correct nor incorrect," or "everyone's involved in a process of correction." How are we going to move from the essentialist
claims, who's right and who's not, or which subcomponents of the story are correct or not, to a broader story, whatever it ends up being in different people's perspectives? There isn't going to be any sequence of words, English or otherwise, from any speaker, that ends up being the be-all and end-all, because it's like the finger and the moon: you're pointing at the moon, not at the spot at the tip of your finger; it's something far, far away, and something aspirational. That's how the words are, and especially with consciousness. Again, even with food: I would say food could be an aspirational, asymptotic, or platonic thing, but then it's quite wrapped up in our own experience, of course. So thanks so much to the two of you. Any last thoughts? What does this excite you about, moving forward? What would be some next thoughts? Or just imagine some archetype of a listener or viewer: what would you suggest to them, or ask them? I mean, for me, it's both the fundamental questions, like what is consciousness, what is life, who am I, what's going on; there's something of an existential, meaningful character to these conversations. But I also think, basically, I'm hoping that we're getting close to bringing consciousness back into science in a meaningful way, overcoming a kind of behaviorist legacy, where it made sense to make some things somewhat taboo for a time, because we didn't feel we had purchase, but then that taboo became maybe stifling and tyrannical. I don't know if we've completely shrugged it off yet, but there's a sense in which, by trying to be as precise as we can, by contextualizing, asking what our value commitments are and what the specific way is in which we're using terminology in this case, I think we might be at a point where we can start to bring things like consciousness back in without having to be embarrassed about it. And I think this embarrassment actually did something really bad: it took your experience and removed it from scientific discourse, and basically made it unreal. I think we're going to discover that not only was this bad for us personally and interpersonally, it was bad for us scientifically, because I actually suspect that consciousness is the maximally important informational and energetic object in at least our minds, and we basically hid the main story; we made telling the main story taboo. And we made taboo the very things we needed to make purchase. Part of the reason we didn't have purchase was that we were being imprecise, sure, but part of it was that we made taboo the things we needed. We made phenomenology, we made experience, taboo, and then we wondered why it was so hard to make purchase on consciousness. No one was talking about embodiment, and then we were like, hmm, why can't I cross explanatory gaps with a giant flailing leap? And unfortunately, in a historical and cultural context, it led to divide-and-conquer in academia; it led to the bifurcation of non-academic experience from academic research; it led to unneeded disciplinary friction. I really agree with that. It was a scientific error to think that the finger was pointing at the tip of the finger, to say, well, because we don't know exactly the detail, we're not just going to, you know, write a review paper and then go back to our lab; we're going to go on stage and tell people that they're
zombies, we're going to tell people that they're robots, or we're going to tell them this is how the brain generates consciousness. And so that's why I'm always excited to hear about new research in the field, but I would hope that, like coffee and cream, it would always be blended with a recognition that it's an open question, that it's something we're experiencing, so we're not just going to cut it and dry it, leave the stage after a 15-minute talk, done. Think about how many thousands of those lectures have been delivered in the last 100 years, from, you know, the enthusiastic early days to today, and that's tens of thousands of people, or however many, who got pushed in a different direction because, using legacy credentials, they were told their experience was not concordant with science; the impact of that is really hard to estimate. One more thing: I'm also open to there being some things that might be cut and dried; I don't know. For instance, depending on which senses of consciousness we mean, there might be cut-and-dried versus more graded distinctions we should be looking for. It could or could not be the case that basic phenomenality is cut and dried; we don't know, and I have no idea. But this might have relevance now for things like animal rights, and relevance going forward for things like the future of AI. Maybe we can create artificial general intelligence, and if and when we do, if we survive long enough and if we're smart enough, will these be morally relevant agents? Will they be part of our moral communities? So whether things are graded or not could end up being crucial. There might be some cases where it's like: aha, once we nail down our use of words based on our interests, and we have this many qualifiers, we feel like, okay, this is the optimal grip on noumenal space, on the world, on latent space, and you can say it's cut and dried. Or it might not be. But these might be the most important questions we can ever ask. And here's where I think active inference might speak to it, and then Dave, if you want to add anything. Here's where it's cut and dried: the observables, sensation and action. So we can be clear about what's cut and dried and say, here's how, using this theory of information processing, we're talking about auditory processing. But to go to that next level of emotional salience and say, "and this is why you hear this when this happens," that's the part where it can be clarified that there are things that are basically cut-and-dried models. But those are cut and dried because they've been partitioned off from a broader field of discussion, not because one day the cut-and-dried will be everything. Everything isn't actually that way; we've made islands of stability within it, not the other way around. So, Dave, any last comments? Thanks so much; this was a really, really awesome conversation, as always. Appreciate the live participants, those in the live chat, and those watching later. This is it for the first quarter of 2021 for the ActiveStream, pretty fun. Next week we're going to be having our quarterly roundtable, so if anybody wants to make a three-to-five-minute video for this one, or a future roundtable, they're welcome to do
that. Then we're going to be having these upcoming events, and it's going to be just on with the show, and the learning, doing, and participating. So thanks, everyone, and see you later.